| id | title | abstract | authors | published_date | link | markdown |
|---|---|---|---|---|---|---|
2308.15742 | ASTER: Automatic Speech Recognition System Accessibility Testing for
Stutterers | The popularity of automatic speech recognition (ASR) systems nowadays leads
to an increasing need for improving their accessibility. Handling stuttering
speech is an important feature for accessible ASR systems. To improve the
accessibility of ASR systems for stutterers, we need to expose and analyze the
failures of ASR systems on stuttering speech. The speech datasets recorded from
stutterers are not diverse enough to expose most of the failures. Furthermore,
these datasets lack ground truth information about the non-stuttered text,
rendering them unsuitable as comprehensive test suites. Therefore, a
methodology for generating stuttering speech as test inputs to test and analyze
the performance of ASR systems is needed. However, generating valid test inputs
in this scenario is challenging. The reason is that although the generated test
inputs should mimic how stutterers speak, they should also be diverse enough to
trigger more failures. To address the challenge, we propose ASTER, a technique
for automatically testing the accessibility of ASR systems. ASTER can generate
valid test cases by injecting five different types of stuttering. The generated
test cases can both simulate realistic stuttering speech and expose failures in
ASR systems. Moreover, ASTER can further enhance the quality of the test cases
with a multi-objective optimization-based seed updating algorithm. We
implemented ASTER as a framework and evaluated it on four open-source ASR
models and three commercial ASR systems. We conduct a comprehensive evaluation
of ASTER and find that it significantly increases the word error rate, match
error rate, and word information loss in the evaluated ASR systems.
Additionally, our user study demonstrates that the generated stuttering audio
is indistinguishable from real-world stuttering audio clips. | Yi Liu, Yuekang Li, Gelei Deng, Felix Juefei-Xu, Yao Du, Cen Zhang, Chengwei Liu, Yeting Li, Lei Ma, Yang Liu | 2023-08-30T03:46:52Z | http://arxiv.org/abs/2308.15742v1 | # Aster: Automatic Speech Recognition System Accessibility Testing for Stutterers
###### Abstract
The popularity of automatic speech recognition (ASR) systems nowadays leads to an increasing need for improving their accessibility. Handling stuttering speech is an important feature for accessible ASR systems. To improve the accessibility of ASR systems for stutterers, we need to expose and analyze the failures of ASR systems on stuttering speech. The speech datasets recorded from stutterers are not diverse enough to expose most of the failures. Furthermore, these datasets lack ground truth information about the non-stuttered text, rendering them unsuitable as comprehensive test suites. Therefore, a methodology for generating stuttering speech as test inputs to test and analyze the performance of ASR systems is needed. However, generating valid test inputs in this scenario is challenging. The reason is that although the generated test inputs should mimic how stutterers speak, they should also be diverse enough to trigger more failures. To address the challenge, we propose Aster, a technique for automatically testing the accessibility of ASR systems. Aster can generate valid test cases by injecting five different types of stuttering. The generated test cases can both simulate realistic stuttering speech and expose failures in ASR systems. Moreover, Aster can further enhance the quality of the test cases with a multi-objective optimization-based seed updating algorithm. We implemented Aster as a framework and evaluated it on four open-source ASR models and three commercial ASR systems. We conducted a comprehensive evaluation of Aster and found that it significantly increases the word error rate, match error rate, and word information loss in the evaluated ASR systems. Additionally, our user study demonstrates that the generated stuttering audio is indistinguishable from real-world stuttering audio clips.
Automatic Speech Recognition, Accessibility Testing
## I Introduction
Automatic speech recognition (ASR) uses computer programs to convert human speech into readable text. The first ASR system, "Audrey", created by researchers at Bell Labs, could only recognize spoken numbers [1]. After decades of evolution, ASR systems have improved drastically in both recognition accuracy and vocabulary coverage. Especially in the last decade, ASR systems have benefited greatly from the emergence of deep learning (DL) techniques [2, 3, 4, 5]. Together with these academic advances, ASR systems have been making their way into our daily life through products from companies such as Google [6], Microsoft [7], and IBM [8]. Besides the commercial-off-the-shelf (COTS) products, open-source DL models [9] are also available for developers to integrate ASR features into their software. As a result, ASR systems have become a highly available and popular type of software.
Thanks to their availability and popularity, ASR systems have been used by many users, including those with disabilities. Hence, improving the accessibility of ASR systems becomes crucial. According to [10], ASR systems are faced with different types of accessibility or inclusiveness problems such as gender and cultural bias, stuttering, and so on. Among the various types of accessibility problems, stuttering is one of the most challenging ones for ASR systems since it can directly affect the content of human speech. Moreover, stuttering is also a commonly encountered type of disability as it is estimated that over 70 million people are suffering from developmental stuttering [11]. Therefore, in this paper, we focus on studying the accessibility of ASR systems for stutterers.
Efforts can be devoted in two directions to improve the accessibility of ASR systems. On one hand, researchers have been proposing new techniques to improve the performance of the DL models on stuttered speech [12, 13]. On the other hand, we can detect and evaluate the accessibility problems in existing ASR systems in order to understand and eventually eliminate them. Compared with improving the DL models, detecting and studying the accessibility issues in existing ASR systems is equally important but less studied. Thus, we focus on detecting and analyzing the accessibility problems in ASR systems.
Testing is a popular and effective approach for exposing accessibility issues in software [14, 15, 16, 17], but testing ASR systems poses a unique challenge. The biggest challenge of testing ASR systems is to generate valid test inputs. The rationale is two-fold: (1) We need to generate the test inputs because speech datasets recorded from stutterers are not appropriate as test suites. This is due to two reasons: first, the datasets lack diversity and therefore may not comprehensively uncover potential failures in ASR systems; second, these datasets do not contain ground-truth information about non-stuttered speech, which results in a lack of a reliable test oracle. (2) The generated test inputs must be valid. They should have enough variety to expose potential bugs. However, the test inputs should also be as close to stuttering speech as possible to simulate real-life cases instead of merely incurring failures in the ASR systems.
To fill the research gap, we propose Aster, an automatic testing technique for detecting accessibility bugs in ASR systems. Aster can create valid test cases in the form of audio files by injecting five different types of stuttering speech into benign audio files. Aster works in three steps: **Preprocessing.** Aster extracts the word and syllable timing information for each audio file. **Mutation.** With the timing information, Aster injects five types of stuttering, namely block, prolongation, sound repetition, word repetition, and interjection. The timing information is needed because the injected stuttering requires mutating the audio file at the word and syllable levels. The audio files with injected stuttering can be used as test cases. **Execution.** With the generated test cases, Aster executes multiple ASR systems simultaneously. After that, Aster uses a multi-objective-optimization (MOO) algorithm to balance two properties: the difference between the results of the ASR systems and the similarity to the benign audio, in order to evaluate the test cases and keep the good ones as seeds to apply more mutations for generating better test cases. Last but not least, Aster uses a metamorphic relation as the test oracle to identify potential errors. The metamorphic relation is that _the output text of an ASR system should be the same for both the original audio and the mutated audio._ Aster keeps the test cases for which the ASR systems cannot generate results similar enough to the ground truth as suspicious failures and reports them to human experts for verification.
We developed Aster, an audio generation tool that can create stuttering audio samples to evaluate the performance of ASR systems. Our evaluation on four open-source ASR models and three commercial ASR systems demonstrated that Aster can generate stuttering audios that significantly increase the word error rate (WER), match error rate (MER), and word information lost (WIL) by 23.12%, 21.45%, and 33.34%, respectively. Additionally, we conducted a user study and found that generated stuttering audio was indistinguishable from real-world stuttering audio clips. We also found that commercial ASR systems outperformed open-source models, achieving WER, MER, and WIL scores of 12.33%, 9.78%, and 15.32%. Finally, our analysis of 1,069 suspicious issues categorized them into five bug types: word injection, incorrect word, word repetition, word omission, and syllable repetition.
In summary, our paper makes the following contributions:
* We propose Aster, which is the first automatic testing technique for evaluating the accessibility of ASR systems.
* We implement Aster as a framework and evaluate it with real-world open-source and commercial ASR systems. The evaluation results prove the effectiveness of Aster.
* We manually identify and categorize 1,069 recognition errors in real-world ASR systems. Furthermore, we propose a classification scheme for recognizing errors and their underlying causes.
Aster is coupled with a website: [https://sites.google.com/view/aster-speech](https://sites.google.com/view/aster-speech). We will put the details about Aster and raw experiment data on this website. We will also open-source Aster after the paper is published.
## II Background
### _Accessibility Testing_
Web and mobile app accessibility are crucial for individuals with disabilities, with testing gaining attention recently. AXERAY [17] assesses web accessibility through semantic groupings, and Latte [16] automates Android app accessibility testing. Research in deaf accessibility testing [15] focuses on sign language users.
Stuttering identification employs machine learning and deep learning techniques, with comprehensive reviews of classification methods [18]. Studies evaluate stuttering detection in transcriptions [19] and explore multi-task and adversarial learning [20]. FluentNet [21] detects stutter types using a deep neural network.
Limited research evaluates ASR systems on stuttering speech, motivating our work on ASR evaluation through stuttering audio generation. We aim to develop more accurate and inclusive ASR systems for individuals who stutter.
### _Stuttering and Speech Disorders_
Stuttering, a complex fluency disorder, affects speech and disrupts communication [22, 23]. Its causes are unclear and may involve biological and psychological factors [24], leading to frustration and isolation [25]. Speech therapy [26] aims to improve fluency, reduce anxiety, and boost confidence [27]. However, ASR technology in smart devices presents challenges for people who stutter [28].
Limited research addresses stuttering in ASR technology. Testing ASR accessibility for stuttering can help identify errors and enhance functionality, fostering inclusive communication for those with speech disorders.
## III Methodology
Fig. 1 shows an overview of Aster. Aster is capable of testing multiple ASR systems simultaneously. The overall inputs of Aster are the benign audio files without stuttering while the overall outputs are the audio files with simulated stuttering which can cause failure in at least one of the ASR systems under test. Aster contains three main components, namely, _Phonetic Alignment_, _Speech Mutation_ and _Feedback Analysis_. Aster works in the following steps: (1) Given a seed audio file, Aster first determines the timing of different words to locate and differentiate the words. (2) With the word timing for each seed audio file, Aster then identifies the syllable timing for each word. (3) After labeling the audio files with word and syllable timing information, Aster can apply five different mutation strategies to inject stuttering into the original audio file to create the test cases. (4) By executing the ASR systems with the generated test cases, Aster will keep the test
cases that are similar to the original audio but can trigger different execution results across the ASR systems. The kept test cases can be added back to the seed pool for further mutations to create better test cases. Lastly, Aster uses the distance to the original speech text plus manual checks as the test oracle to capture the failures of the ASR systems under test.
Algorithm 1 shows the overall algorithm of Aster, where \(\mathbb{S}_{benign}\) is the initial set of audio files; \(\mathbb{SUT}\) is the set of ASR systems under test; \(\mathbb{F}\) is the set of test cases causing failures for the ASR systems under test; \(budget\_used\_up\) is the function to check if the resource budget given to a benign audio file has been used up. The resource budget is measured by the number of new test cases generated for the benign audio file. The default number is 50. Details about the functions in Algorithm 1 will be discussed in the rest of this section.
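Since Algorithm 1 itself is not reproduced here, the following minimal Python sketch illustrates the overall loop described above. All helper callables (`add_timing_info`, `mutate`, `update_seed_pool`, `detect_failure`) are stand-ins for the components discussed in the rest of this section; the sketch only shows the control flow, not the actual Aster implementation.

```python
import random

def aster_loop(benign_audios, systems_under_test, *, add_timing_info, mutate,
               update_seed_pool, detect_failure, budget=50):
    """Skeleton of Algorithm 1: mutate seeds, run all ASR systems, keep promising seeds."""
    failures = []                                        # corresponds to the set F
    for benign in benign_audios:                         # S_benign
        seed_pool = [add_timing_info(benign)]            # phonetic alignment (Sec. III-A)
        for _ in range(budget):                          # budget_used_up check (default 50)
            parent = random.choice(seed_pool)
            test_case = mutate(parent)                   # inject one more stuttering mutator (Sec. III-B)
            results = [asr(test_case) for asr in systems_under_test]
            seed_pool = update_seed_pool(seed_pool, test_case, results, benign)  # MOO update (Sec. III-C)
            failures.extend(detect_failure(test_case, results, benign))          # metamorphic oracle
    return failures
```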
### _Phonetic Alignment_
Phonetic alignment is a prerequisite for creating valid test cases: if we have no knowledge about the structure of the audio file and mutate it randomly, the created speech can easily become distorted and lose the ability to simulate the speech of human stutterers. In general, Aster relies on the algorithms in the PocketSphinx [29] project to perform phonetic alignment. The phonetic alignment process corresponds to the \(add\_timing\_info\) function in Algorithm 1 line 5. Fig. 2 illustrates how phonetic alignment is done in Aster.
First, Aster determines the word timing in the audio file. The timing of each word is a tuple in the form of \((start\_time,end\_time)\), where \(start\_time\) means the starting time of the word in the audio file (in terms of milliseconds) and \(end\_time\) means the ending time of the word in the audio file. The rationale for recognizing the words is to treat the audio file as a waveform and split the words by periods of silence. Then, based on the word timing, Aster determines the syllable timing for each word. Similar to word timing, the timing of each syllable is also in the form of a \((start\_time,end\_time)\) tuple. The rationale of recognizing syllables is to first use a language model to roughly identify each word and then use the phonetic dictionary to pinpoint the syllables in the recognized word. For example, if we recognize that an utterance in the audio corresponds to the word _weather_, the phonetic dictionary can tell us that it contains two syllables: _wea_ and _ther_, and we can check the waveform to get the syllable timing accordingly.¹
Footnote 1: The algorithms of phonetic alignment are adopted from the PocketSphinx project. Interested readers can get detailed information about these algorithms from [29].
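The alignment itself is delegated to PocketSphinx, so it is not reproduced here; the sketch below only illustrates the timing data structures described above, using a hypothetical one-entry syllable dictionary and an even time split that stands in for the waveform-based boundary detection.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Timing = Tuple[int, int]  # (start_time, end_time) in milliseconds

@dataclass
class WordTiming:
    text: str
    timing: Timing
    syllables: List[Tuple[str, Timing]] = field(default_factory=list)

# Hypothetical phonetic dictionary entry; the real lookup comes from PocketSphinx.
SYLLABLE_DICT = {"weather": ["wea", "ther"]}

def split_into_syllables(word: WordTiming) -> WordTiming:
    """Attach syllable timings to a word (even split used here purely for illustration)."""
    parts = SYLLABLE_DICT.get(word.text, [word.text])
    start, end = word.timing
    step = (end - start) // len(parts)
    word.syllables = [(p, (start + i * step, start + (i + 1) * step))
                      for i, p in enumerate(parts)]
    return word

print(split_into_syllables(WordTiming("weather", (1200, 1750))).syllables)
```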
### _Speech Mutation_
The purpose of speech mutation is to inject stuttering into the original audio file while keeping the speech as realistic as possible. The speech mutation corresponds to the \(mutate\) function in line 9 in Algorithm 1. Following the taxonomy in [30], Aster can inject five types of stuttering with different strategies. During every round of mutation, Aster randomly selects one seed from the seed pool and applies a random mutator to build the new test case.
**Block.** This type of stuttering happens when the stutterer interrupts his/her speech when pronouncing a word, causing the word to split into halves. The designed mutator to simulate this type of stuttering is to add a small period of silence between two syllables of the same word. The small period of silence usually lasts for 50-200 ms because it is in line with the natural pause that occurs when a speaker is experiencing block stuttering. This pause may vary depending on the severity of the block stuttering, with more severe cases resulting in longer pauses. Additionally, the length of the pause may also depend on the individual's speech patterns and style. However, in our mutator, we found that a small period of silence in the range
Fig. 1: The overview of Aster.
Fig. 2: Phonetic alignment
of 50-200 ms was sufficient to simulate this type of stuttering without significantly altering the overall speech pattern.
**Prolongation.** This type of stuttering happens when the stutterer prolongs a syllable in the word. The designed mutator works similarly by extending the length of a syllable in a word. The syllable is extended by 2-4 times. The specific extension factor used may vary depending on the speech pattern being simulated and the desired level of severity. However, we found that an extension factor in the range of 2-4 times was generally sufficient to simulate this type of stuttering without causing the speech to become unrecognizable. Additionally, the length of the extension may depend on the length of the original syllable, with longer syllables requiring longer extensions to produce a noticeable effect. However, it is important to note that overly long extensions may cause the resulting audio to become unrealistic and may affect the overall quality of the synthesized speech.
**Sound Repetition.** This type of stuttering happens when the stutterer repeats a syllable in the word a few times. The designed mutator can copy/duplicate a syllable 2-4 times. The specific number of repetitions used may vary depending on the desired level of severity and the speech pattern being simulated. Compared to the prolongation mutator, sound repetition mutator creates a more abrupt and noticeable stuttering effect that is often characterized by a distinct syllable repetition pattern. The repetition pattern may vary in terms of the number of syllables repeated and the spacing between the repetitions, and may be influenced by individual speech patterns and style.
**Word Repetition.** The mutator we designed for word repetition is similar to the one used for sound repetition, but instead of copying syllables, it copies whole words.
**Interjection.** This type of stuttering happens when the stutterer speaks out some filler words such as _uh, em_, etc. during the speech. The designed mutator works in two steps: First, Aster goes through all the syllables and collects the syllables whose text exists in a predefined list of filler words.² The rationale for collecting the filler word candidates from the same audio is that we need to keep the timbre of the entire speech consistent in order to mimic real speech. Second, Aster selects a random number of syllables from the candidate set and adds them randomly between words in the original speech.
Footnote 2: There exist filler words with more than two syllables, but in practice, we found it challenging to find syllable candidates from the same audio file for such filler words. Therefore, we only use filler words with one syllable in Aster.
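To make the mutators concrete, the sketch below shows how the block, prolongation, and sound repetition strategies can be expressed as operations on a raw waveform, assuming 16 kHz mono audio stored as a NumPy array and syllable boundaries obtained from the phonetic alignment step. The sample repetition used for prolongation is a deliberately crude time-stretch; word repetition and interjection work analogously by concatenating word-level segments. This is an illustrative approximation, not Aster's exact signal processing.

```python
import numpy as np

SR = 16_000                      # assumed sample rate in Hz

def ms(t_ms):
    """Convert a position in milliseconds to a sample index."""
    return int(t_ms * SR / 1000)

def block(audio, syllable_end_ms, pause_ms=120):
    """Block: insert 50-200 ms of silence between two syllables of the same word."""
    cut = ms(syllable_end_ms)
    silence = np.zeros(ms(pause_ms), dtype=audio.dtype)
    return np.concatenate([audio[:cut], silence, audio[cut:]])

def prolongation(audio, syl_start_ms, syl_end_ms, factor=3):
    """Prolongation: stretch one syllable 2-4x (naive stretch; a real tool would preserve pitch)."""
    a, b = ms(syl_start_ms), ms(syl_end_ms)
    return np.concatenate([audio[:a], np.repeat(audio[a:b], factor), audio[b:]])

def sound_repetition(audio, syl_start_ms, syl_end_ms, times=3):
    """Sound repetition: duplicate one syllable 2-4 times before the rest of the word."""
    a, b = ms(syl_start_ms), ms(syl_end_ms)
    return np.concatenate([audio[:a]] + [audio[a:b]] * times + [audio[b:]])
```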
### _Feedback Collection_
**Seed Pool.** Aster maintains a pool of audio files for each benign audio file. The seed pool is denoted as \(\mathbb{S}\) in Algorithm 1. These pools of audio files can be used as seeds for creating new test cases. The rationale is to gradually generate test cases of increasing quality, in a manner similar to genetic algorithms. The difference between Aster and genetic algorithms is that Aster only uses mutations to generate new test cases and does not perform crossovers. This is because using crossovers to graft audio files can distort the content of the speech, making it difficult to check against the ground truth for failures.
Each audio file in the seed pool is labeled with two types of information: the timing info of the benign audio file and the list of mutators applied during the generation of this seed file. The reason for storing the seed files in the form of benign audio files plus lists of applied mutators is that the word and syllable timing of the mutated audio files becomes malformed. Therefore, every test case is created by applying the previous chain of mutators plus one new randomly selected mutator.
**Multi-Objective Optimization Based Seed Pool Update.** Aster aims to generate test cases that are both capable of exposing failures in ASR systems and realistic. These two properties are contradictory because exposing failures requires a test case to have odd content but as the content becomes more erratic, the test case becomes less realistic. Since the test cases need to fulfill two important yet contradictory requirements, Aster uses a multi-objective optimization (MOO) algorithm to evaluate the quality of test cases and update the seed pool. The function \(moo\_based\_seed\_pool\_update\) in line 14 Algorithm 1 represents this process.
According to the desired properties for the test cases, we propose two metrics to evaluate test case quality. The first metric (\(M_{1}\)) is the difference among results from the ASR systems under test. This metric is used for measuring how likely a test case can expose failures. Note that Aster does not use the difference between the results from the ASR systems and the ground truth text directly as the failure likelihood evaluator. The reason is that malformed test cases can lead to results different from the ground truth and malformed test cases are not valid. In contrast, if different ASR systems respond differently to a test case, it is likely that some ASR systems can process the test case correctly but some cannot. This can help to ensure that the preferred test cases are more likely to be valid and they can expose failures as well. \(M_{1}\) is calculated as the average value of the cosine similarities between every two Bert-embeddings [31] of the ASR system results. The calculation of \(M_{1}\) can be formulated as:
\[M_{1}=\frac{1}{\left|\mathbb{E}\right|^{2}-\left|\mathbb{E}\right|}\sum_{\substack{e_{1},e_{2}\in\mathbb{E}\\ e_{1}\neq e_{2}}}\frac{e_{1}\cdot e_{2}}{\left\|e_{1}\right\|\left\|e_{2}\right\|} \tag{1}\]
where \(\mathbb{E}\) is the set of all the Bert-embeddings for the results from every ASR system under test.
The second metric (\(M_{2}\)) is to directly measure the difference between a test case and the original audio. The rationale is to
Fig. 3: Speech mutation
reduce the chance for the content of the test case to become malformed. The calculation of \(M_{2}\) can be formulated as:
\[M_{2}=\frac{e_{test}\cdot e_{benign}}{\left\|e_{test}\right\|\left\|e_{benign}\right\|} \tag{2}\]
where \(e_{test}\) is the Bert-embedding of the corresponding text of the test case and \(e_{benign}\) is the Bert-embedding of the text of the original benign audio file.
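A direct translation of Eqs. (1) and (2) is shown below. It assumes that the recognized texts have already been converted into BERT-style sentence embeddings supplied as NumPy vectors (for example via a sentence-embedding library); that embedding step is the only assumption beyond the formulas themselves.

```python
import numpy as np
from itertools import permutations

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def m1(result_embeddings):
    """Eq. (1): mean pairwise cosine similarity among the ASR outputs for one test case."""
    pairs = list(permutations(result_embeddings, 2))     # |E|^2 - |E| ordered pairs
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

def m2(test_embedding, benign_embedding):
    """Eq. (2): cosine similarity between the test-case text and the benign text."""
    return cosine(test_embedding, benign_embedding)
```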
With \(M_{1}\) and \(M_{2}\) defined, the MOO model of selecting the favorable seeds can be described as follows:
**Definition 1** (Multi-objective Seed Selection): _Given a set of seeds \(\mathbb{S}\), multi-objective seed selection is to select a set of seeds \(\mathbb{S}\):_
\[\text{Min}\big(\vec{\mathcal{F}}(\mathbb{S})\big)=\text{Min}\big(O_{1}(s),O_{2}(s)\big),\quad s\in\mathbb{S} \tag{3}\]
_where \(\vec{\mathcal{F}}(\mathbb{S})\) is an objective vector that denotes two objective functions, namely \(O_{1}\) and \(O_{2}\). The mappings between \(O_{1}\), \(O_{2}\) and \(M_{1}\), \(M_{2}\) are: \(O_{1}=Min(M_{1})\) and \(O_{2}=-Max(M_{2})\)._
For solving MOO problems, we can either use scalarization (weighted-sum) or the Pareto method [32]. The problem with scalarization is that the weight for each parameter is hard to decide. So we choose to use the Pareto method, where the Pareto Frontier is the solution to the MOO problem. Given a set of seeds \(\mathbb{S}\) and the objective vector \(\vec{\mathcal{F}}=[f_{1},f_{2}]\), we say \(s\) dominates (\(\prec\)) \(s^{\prime}\) _iff_:
\[f_{i}(s)<f_{i}(s^{\prime}),\quad\forall i\in\{1,2\}\]
where \(s,s^{\prime}\in\mathbb{S}\); the Pareto frontier(\(P\)) is defined as:
\[P(\mathbb{S})=\{s\in\mathbb{S}\ |\ \{s^{\prime}\in\mathbb{S}\ |\ s^{\prime}\prec s,s^{\prime}\neq s\}=\emptyset\} \tag{4}\]
In Aster, after a test case \(s\) is generated and executed, it is put into \(\mathbb{S}\). Then Aster will calculate \(P(\mathbb{S})\) and all the seeds belonging to \(P(\mathbb{S})\) are kept in the seed pool while the rest are discarded. In other words, if \(s\in P(\mathbb{S})\), then \(s\) is kept and all \(\{s^{\prime}|s\prec s^{\prime},s^{\prime}\in\mathbb{S}\}\) are discarded. Fig. 4 illustrates an example of the Pareto frontier used for seed pool update. From Fig. 4, we can see how Aster selects test cases with smaller values of \(M_{1}\) and larger values of \(M_{2}\). It is worth noting that the calculation of the Pareto Frontier follows naturally from its definition: we need to compare every test case against every other test case and find all the test cases for which no other test case is better on both \(M_{1}\) and \(M_{2}\).³
Footnote 3: A sample of using Python to calculate the Pareto Frontier is available here: [https://sites.google.com/view/aster-speech/pareto-frontier-code](https://sites.google.com/view/aster-speech/pareto-frontier-code).
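The footnote points to an external sample; the self-contained sketch below reproduces the same idea. Each seed carries the objective pair \((O_{1},O_{2})=(M_{1},-M_{2})\), and a seed survives the update if no other seed is strictly better on both objectives, matching Eq. (4).

```python
def pareto_frontier(seeds):
    """Return the non-dominated seeds; each seed stores its objectives as seed['obj']."""
    def dominates(s, t):      # s dominates t iff s is strictly smaller on both objectives
        return all(s["obj"][i] < t["obj"][i] for i in range(2))
    return [s for s in seeds if not any(dominates(t, s) for t in seeds if t is not s)]

seeds = [
    {"name": "a", "obj": (0.60, -0.90)},
    {"name": "b", "obj": (0.40, -0.70)},
    {"name": "c", "obj": (0.80, -0.60)},   # dominated by 'a', so it is discarded
]
print([s["name"] for s in pareto_frontier(seeds)])   # ['a', 'b']
```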
**Test Oracle.** There exists a metamorphic relation that can be used as the test oracle for Aster. The metamorphic relation is that the output text of an ASR system should be the same for both the original audio and the mutated audio. Aster reports the test cases which cause the Bert-embeddings of the ASR system result and the text of the original benign audio to have cosine similarity smaller than a threshold \(\theta\). Based on our experience, we set the default value of \(\theta\) to be 0.8. However, the test cases reported by Aster cannot be treated as failures directly because some of them might be malformed, and even humans cannot recognize their text content correctly. Therefore, Aster only reports the suspicious test cases and eventually relies on humans to mark the true failures. Based on our empirical findings, around 31.43% of the failures are false positives with the default value of \(\theta\). The whole process of determining failures is denoted as the \(detect\_failure\) function in Algorithm 1.
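The oracle check reduces to a thresholded cosine similarity. The sketch below assumes the same embedding convention as the \(M_{1}\)/\(M_{2}\) sketch above (embeddings supplied as NumPy vectors) and illustrates only the rule itself, not the manual verification that follows it.

```python
import numpy as np

THETA = 0.8  # default similarity threshold

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suspicious_systems(result_embeddings, benign_embedding, theta=THETA):
    """Metamorphic oracle: flag ASR systems whose output drifts too far from the benign text."""
    return [i for i, e in enumerate(result_embeddings)
            if cosine(e, benign_embedding) < theta]
```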
## IV Implementation & Evaluation
We have developed and implemented Aster using Python version 3.9.1, comprising a total of 2,124 lines of code (LoCs). To evaluate the effectiveness of our approach, we apply it to both open-source and commercial automatic speech recognition systems using two real-world speech datasets. The objective of our study is to answer the following research questions:
* **RQ1 (Stuttering Faults and User Study)** How effective is the proposed approach in generating stuttering speech, specifically in terms of its ability to accurately detect stuttering faults and simulate realistic stuttering patterns?
* **RQ2 (Mutator Ablation Study)** To what extent do the proposed mutators contribute to identifying stuttering faults in automatic speech recognition systems?
* **RQ3 (MOO Ablation Study)** How does the MOO-Based seed pool update improve the generation of realistic stuttering audio?
* **RQ4 (Real-world Evaluation)** To what extent can the proposed approach accurately detect stuttering faults in commercial automatic speech recognition systems?
* **RQ5 (Bug Pattern)** What types of stuttering faults can be identified and learned from commercial automatic speech recognition systems?
### _Experimental Setup_
#### IV-A1 Benchmark
We selected a total of seven ASR systems for our evaluation, consisting of four open-source systems, built on top of Wav2Vec [33], and three commercial services. The four open-source ASR systems are "data2vec-audio-large-960h", "wav2vec2-large-english", "wav2vec2-xls-r-1b-english", and "wav2vec2-large-xlsr-53-english". These systems were chosen based on their popularity (more than 10,000 monthly downloads), as well as their maintenance status (the latest update is later than January 2022). The "data2vec-audio-large-960h" system is based on the data2vec framework and provides pre-trained embeddings for speech and audio data. The other three systems, "wav2vec2-large-english", "wav2vec2-xls-r-1b-english", and "wav2vec2-large-xlsr-53-english" are all based on the wav2vec2 framework and use self-supervised learning techniques to learn representations of speech and audio data. We
Fig. 4: Example Pareto frontier
also included three commercial ASR services for our evaluation, namely, Azure Speech-to-Text [34], Google Cloud Speech-to-Text [35], and IBM Speech-to-Text [36]. These services are widely used in the industry and provide various features such as speaker recognition, custom models, and real-time streaming.
In Table I, we list the characteristics of each ASR system, including its type (_i.e._, open-source or commercial), and key features. This information can help readers understand the strengths and weaknesses of each system, as well as the overall landscape of ASR systems being used in the evaluation.
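The four open-source systems are publicly available wav2vec2/data2vec checkpoints, so they can be driven through the Hugging Face `transformers` ASR pipeline as sketched below. The repository identifiers (organization prefixes) are our assumptions based on the model names listed above and may differ from the exact checkpoints used in the evaluation.

```python
from transformers import pipeline

# Illustrative repository names; the exact checkpoints used in the evaluation may differ.
MODEL_IDS = [
    "facebook/data2vec-audio-large-960h",
    "jonatasgrosman/wav2vec2-large-english",
    "jonatasgrosman/wav2vec2-xls-r-1b-english",
    "jonatasgrosman/wav2vec2-large-xlsr-53-english",
]

def transcribe_all(wav_path):
    """Run one audio file through every open-source ASR model under test."""
    results = {}
    for model_id in MODEL_IDS:
        asr = pipeline("automatic-speech-recognition", model=model_id)
        results[model_id] = asr(wav_path)["text"]
    return results
```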
#### IV-A2 Dataset
To synthesize stuttering speech, we utilize the Common Voice dataset as the benign corpus input for our approach, which is a large and publicly available collection of human voice recordings maintained by Mozilla [37]. However, due to its vast size, we collected the latest segment of the Common Voice dataset, which consisted of 7,415 validated audio recordings verified by Common Voice volunteers for their quality. To eliminate the influence of ASR models, we filtered out all audio recordings for which the five specified models could not produce the same recognized text. As a result, we obtained a total of 1,212 audio recordings that serve as our benign input for synthesizing stuttering speech using our proposed approach.
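The agreement-based filtering of the benign corpus can be expressed as the small sketch below, assuming a `transcribe_all`-style helper (as in the benchmark sketch above) that maps an audio file to one transcription per reference model; it illustrates the filtering rule rather than the exact preprocessing script.

```python
def keep_if_models_agree(recordings, transcribe_all):
    """Keep only benign clips on which every reference ASR model produces the same text."""
    kept = []
    for wav_path, reference_text in recordings:
        texts = {t.strip().lower() for t in transcribe_all(wav_path).values()}
        if len(texts) == 1:                      # all models agree
            kept.append((wav_path, reference_text))
    return kept
```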
On the other hand, we also incorporate the FluencyBank dataset [38] in our evaluation, which is a dataset specifically designed for the analysis of stuttering speech patterns. In particular, we use audio samples from FluencyBank to evaluate the performance of ASR systems, using the metrics described in the following section, to demonstrate the ability of these systems to transcribe real-world stuttering speech patterns. Additionally, we conduct a user study using audio samples from both our synthesized stuttering speech corpus and the FluencyBank dataset to assess the realism of our synthesized stuttering speech patterns. By incorporating the FluencyBank dataset in our evaluation, we can provide a more comprehensive and robust assessment of our approach and its ability to generate realistic stuttering speech patterns.
#### IV-A3 Metrics
We evaluate the performance of ASR systems on a given stuttering corpus using three metrics: Word Error Rate (WER), Match Error Rate (MER), and Word Information Lost (WIL); a short computation sketch follows the list below.
* **Word Error Rate (WER):** WER is a commonly used metric in ASR systems that measures the percentage of words that are incorrectly transcribed by the system, compared to the ground truth transcription. \[WER=\frac{S+D+I}{N}\] where \(S\) is the number of substitutions, \(D\) is the number of deletions, \(I\) is the number of insertions, and \(N\) is the total number of words in the reference transcript.
* **Match Error Rate (MER):** MER measures the proportion of word alignments between the reference transcription and the ASR output that are errors, i.e., the error count normalized by the total number of aligned words, including insertions. \[MER=\frac{S+D+I}{H+S+D+I}\] where \(S\) is the number of substitution errors, \(D\) is the number of deletion errors, \(I\) is the number of insertion errors, and \(H\) is the number of correctly matched words.
* **Word Information Lost (WIL):** WIL is a metric that measures the amount of word information lost by the ASR system, calculated by comparing the information in the ground truth transcription to the information in the ASR system's output. \[WIL=1-\frac{H^{2}}{(H+S+D)(H+S+I)}\] where \(H+S+D\) is the number of words in the reference transcription and \(H+S+I\) is the number of words in the ASR output.
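In practice all three metrics can be computed with the `jiwer` package; the snippet below (using one of the ground-truth/buggy pairs from Table V) is a sketch under the assumption that a recent jiwer version exposing `wer`, `mer`, and `wil` is installed.

```python
import jiwer

reference  = "we can convert a type"
hypothesis = "we can convert i a type"   # word-injection example from Table V

print("WER:", jiwer.wer(reference, hypothesis))
print("MER:", jiwer.mer(reference, hypothesis))
print("WIL:", jiwer.wil(reference, hypothesis))
```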
#### IV-A4 Configuration
To evaluate the performance of Aster, we manually inspect each audio file identified as suspicious and determine whether it contains stuttering issues and, if so, what kind of recognition errors it produces. This evaluation is conducted by three authors of this paper, and to ensure consistency and accuracy, we establish a similarity threshold of \(0.8\) between the ground truth and recognized texts, which is in accordance with previous work [39]. All experiments are run on a Linux workstation with Intel E5-2698 v4 processors with 80 cores, 504 GB of memory, and 8 Tesla V100 GPU processors. To mitigate any randomness, we perform each experiment ten times and report the average results.
### _Stuttering Faults and User Study (RQ1)_
We ran our proposed approach for 10 rounds, generating stuttering audio samples from a total of 1,212 seeds by iterating 50 times using five mutators defined in the previous section of this paper. We fed the generated stuttering audio samples into the six selected ASR systems and measured the performance using the WER, MER, and WIL metrics described in the previous section. To evaluate the statistical significance of the results, we performed a Mann-Whitney U test [40] on
TABLE II: Results showing the impact of stuttering audio generation on open-source ASR system recognition errors

| System | WER | MER | WIL |
| --- | --- | --- | --- |
| data2vec-audio-large-960h | 23.12% | 21.45% | 33.34% |
| wav2vec2-large-english | 25.37% | 23.54% | 35.75% |
| wav2vec2-large-xlsr-53-english | 26.64% | 24.90% | 36.61% |
| wav2vec2-xls-r-1b-english | 24.89% | 22.35% | 34.29% |
TABLE I: Characteristics of ASR systems used in the evaluation

| System | Type | Features |
| --- | --- | --- |
| data2vec-audio-large-960h | Open-source | Self-supervised |
| wav2vec2-large-english | Open-source | Self-supervised |
| wav2vec2-xls-r-1b-english | Open-source | Self-supervised |
| wav2vec2-large-xlsr-53-english | Open-source | Self-supervised |
| Azure Speech-to-Text | Commercial | N/A |
| Google Cloud Speech-to-Text | Commercial | N/A |
| IBM Watson Speech-to-Text | Commercial | N/A |
the metrics between the synthesized stuttering speech and the original speech samples.
As shown in Table II, our approach is able to significantly increase the number of recognition errors produced by the selected ASR systems when tested on the generated stuttering audio samples compared to the original audio samples. Specifically, the WER ranged from 23.12% to 26.64%, the MER ranged from 21.45% to 24.90%, and the WIL ranged from 33.34% to 36.61%. These increases were statistically significant, with p-values below 0.05 for all metrics. These results demonstrate that our approach, which uses stuttering audio generation to test ASR systems, is effective in revealing weaknesses in the recognition of stuttering speech patterns by the ASR systems. The use of multiple mutators in our approach allows for the generation of diverse stuttering speech samples, which can provide a more thorough evaluation of the ability of ASR systems to detect and handle stuttering speech patterns. Overall, our experimental results provide evidence for the utility of our approach in evaluating the performance of ASR systems in recognizing stuttering speech patterns.
We conducted a user study to evaluate the authenticity of the stuttering audio samples produced by our approach. The study participants were recruited from our university and consisted of both native and non-native English speakers with high English fluency. As for the design of the study, we randomly selected 40 generated stuttering audio samples and 40 actual stuttering audio samples and utilized them to create four distinct surveys. Each survey comprises 10 pairs of audio samples, one of which is generated by our approach, while the other is an actual real-world stuttering audio sample. The participants were asked to select the real-world stutter piece from each pair of audio samples and then rate their selection confidence on a five-level scale from very uncertain to very certain. Before selection, participants were given extensive guidelines, test tasks, and training sessions on stuttering causes and types. A sample survey is provided at [https://forms.gle/EmbqLY7ezqptxAr7](https://forms.gle/EmbqLY7ezqptxAr7) for reference. The survey result is summarized in Figure 5. Upon completion, 100 valid survey results were obtained and analyzed.
We use weighted Fleiss' Kappa [41] to study the survey results. In particular, we apply users' confidence as the weights of the Fleiss' Kappa measurement, linearly scaled from 0 to 1 based on the users' selection from very uncertain to very certain. This weighting accounts for the possibility of agreement occurring by chance, providing a measure of the agreement among participants while considering the potential for random guessing. In the end, we obtain a weighted Fleiss' Kappa of 0.063 from the survey, which means that no agreement is reached among users over the generated samples and real-world samples. This shows that the generated samples are not distinguishable from real-world stutter pieces.
We further compare the generated samples and real-world samples by performing the t-test [42] to determine if there is a significant difference between the mean scores of the two distributions. The analysis returns a t-value of 0.295 and a p-value of 0.768, which fails to reject the null hypothesis in a two-tailed hypothesis test at the 0.05 significance level. The result shows that no statistically significant differences are identified between the accuracy rates of the two audio groups (real-world and generated) in terms of correctly identifying whether an audio sample was real-world or generated. This indicates that our approach to generating stuttering audio samples can effectively simulate the characteristics of real-world stuttering speech. The obtained outcomes provide strong evidence that our approach for generating stuttering audio samples is proficient in generating diverse and authentic stuttering speech patterns. Consequently, the generated samples can be exploited to assess the effectiveness of ASR systems in identifying stuttering speech patterns in realistic situations. As a result, the user study outcomes demonstrate the credibility and utility of our approach.
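Both significance tests used in this subsection (the Mann-Whitney U test on the error metrics and the two-sample t-test on the survey accuracy rates) are available in SciPy. The arrays below are placeholder values, not the study's data; the sketch only shows how the tests are invoked.

```python
from scipy import stats

# Placeholder per-sample values; the real inputs are the measured metrics / survey scores.
wer_original  = [0.05, 0.07, 0.06, 0.08, 0.05]
wer_stuttered = [0.22, 0.27, 0.25, 0.24, 0.26]
res_u = stats.mannwhitneyu(wer_original, wer_stuttered, alternative="two-sided")

acc_real      = [0.60, 0.50, 0.55, 0.45, 0.50]
acc_generated = [0.55, 0.50, 0.60, 0.50, 0.45]
res_t = stats.ttest_ind(acc_real, acc_generated)

print(f"Mann-Whitney U: U={res_u.statistic:.1f}, p={res_u.pvalue:.4f}")
print(f"Two-sample t-test: t={res_t.statistic:.3f}, p={res_t.pvalue:.4f}")
```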
### _Mutator Ablation Study (RQ2)_
To investigate the contribution of each mutator to the recognition errors produced by ASR systems, we performed a series of experiments in which we applied each of the five mutators individually to generate stuttering audio samples. We repeated this experiment ten times for each mutator, with each repetition generating 50 mutants for each input audio sample. We used the 1,212 benign input audio samples described earlier in this paper to create a diverse set of stuttering speech patterns. For each input audio sample, we applied a single mutator to generate a new set of stuttering audio samples. We then tested the ASR systems on these audio samples and recorded the recognition errors produced by each system. The metrics used for evaluation were WER, MER, and WIL, as described in the previous sections. We applied Mann-Whitney U test to evaluate the statistical significance of the results.
Fig. 5: The number of people who voted for the generated audio and the real-world audio in the surveys.
TABLE III: Results of mutators ablation study on ASR systems

| Mutator | WER | MER | WIL |
| --- | --- | --- | --- |
| Block | 24.32% | 22.64% | 18.41% |
| Prolongation | 19.76% | 23.95% | 17.72% |
| Sound Repetition | 15.65% | 17.66% | 15.45% |
| Word Repetition | 17.12% | 14.45% | 13.23% |
| Interjection | 21.43% | 19.76% | 21.55% |
The results of our mutators ablation study in Table III show that all five mutators have a significant impact on the performance of ASR systems. The Block mutator produced the highest average Word Error Rate (WER) with a mean of 24.32%, followed by the Interjection mutator with a mean WER of 21.43%. The Prolongation mutator had the third-highest mean WER of 19.76%, while the Word Repetition mutator had the fourth-highest mean WER of 17.12%. The Sound Repetition mutator had the lowest mean WER of 15.65%. However, the Mann-Whitney U test showed that there was no significant difference in WER between the Sound Repetition mutator and the Word Repetition mutator. In terms of Match Error Rate (MER), the Block mutator again had the highest mean MER of 22.64%, followed by the Interjection mutator with a mean MER of 19.76%. The Prolongation mutator had the third-highest mean MER of 23.95%, while the Sound Repetition mutator had the fourth-highest mean MER of 17.66%. The Word Repetition mutator had the lowest mean MER of 14.45%. Finally, in terms of Word Information Lost (WIL), the Interjection mutator had the highest mean WIL of 21.55%, followed by the Block mutator with a mean WIL of 18.41%. The Prolongation mutator had the third-highest mean WIL of 17.72%, while the Sound Repetition mutator and the Word Repetition mutator had the lowest mean WIL of 15.45% and 13.23%, respectively.
We summarize the preliminary results of the mutator ablation study in the following:
* **Block.** This mutator is designed to split words into halves, which may confuse ASR systems and cause them to produce more recognition errors. The introduction of a period of silence between syllables of the same word could make the word sound like two separate words, leading to an increased WER.
* **Interjection.** The addition of filler words such as "uh" or "um" can disrupt the flow of speech and make it more difficult for ASR systems to accurately recognize words. The presence of filler words may also result in lexicon errors, where the ASR system misinterprets the filler word as a real word.
* **Prolongation.** By extending the length of a syllable in a word, this mutator could cause the ASR system to misinterpret the syllable as a different word. This could lead to an increased WER and MER.
* **Sound Repetition.** The repetition of a sound in a word may cause the ASR system to interpret the sound as a different sound or word, leading to recognition errors. However, this mutator may have a less severe impact on ASR performance compared to other mutators because it only affects a single sound in a word.
* **Word Repetition.** This mutator is similar to the Sound Repetition mutator, but it repeats entire words instead of sounds. As with the Sound Repetition mutator, the repetition of words could cause the ASR system to interpret the repeated word as a different word, leading to recognition errors. However, because this mutator repeats entire words, it may have a more significant impact on ASR performance than the Sound Repetition mutator.
### _MOO Ablation Study (RQ3)_
To investigate the impact of the audio selection approach on the results of our stuttering audio testing, we designed an ablation study in which we compared the MOO-based and random audio selection methods. We repeated this study ten times, using the same 1,212 benign audio samples to generate stuttering audio samples using both audio selection methods. For the MOO-based approach, we used the same distance and semantic similarity metrics to generate a set of audio samples that were as diverse and semantically similar to the ground truth texts as possible. We selected a group of mutants from all the generated results based on the Pareto frontier. For the random selection approach, we randomly selected audio samples from the set of generated audio samples and the original audio. We then tested the ASR systems on these audio samples and recorded the recognition errors produced by each system, using the same metrics (WER, MER, and WIL) as described in the previous sections. We applied the Mann-Whitney U test to evaluate the statistical significance of the results. This study was designed to be rigorous and convincing, with the use of the same audio samples and evaluation metrics for both approaches, and the repeated experiments to ensure the consistency and reliability of the results.
The results of our ablation study in Figure 6 show that the MOO-based audio selection approach produced better performance in terms of ASR recognition than the random selection approach. The MOO-based approach produced an average Word Error Rate (WER) of 31.41%, which is significantly lower than the average WER of 47.55% produced by the random selection approach (p < 0.05). The MOO-based approach also produced a significantly lower average Match Error Rate (MER) of 26.23% compared to the random selection approach, which had an average MER of 38.1% (p < 0.05). Additionally, the MOO-based approach had a significantly lower average Word Information Lost (WIL) of 38.6% than the random selection approach, which had an average WIL of 61.0% (p < 0.05). These results indicate that the MOO-based approach generated a set of stuttering audio samples that more accurately represented real-world stuttering patterns, resulting in better performance of ASR systems. The random selection approach may have produced a set of audio samples that were not as diverse or representative of real-world stuttering patterns, leading to higher recognition errors for ASR systems.
### _Real-world Evaluation (RQ4)_
To test the performance of commercial ASR systems on stuttering audio samples, we designed an experiment procedure in which we applied our generated audio samples to three leading ASR systems: Google, Azure, and IBM, using their
TABLE IV: Performance comparison of three commercial ASR systems on stuttering audio samples

| System | WER | MER | WIL |
| --- | --- | --- | --- |
| Azure Speech-to-Text | 12.33% | 9.78% | 15.32% |
| Google Cloud Speech-to-Text | 15.66% | 13.44% | 18.39% |
| IBM Watson Speech-to-Text | 16.23% | 17.83% | 21.67% |
respective APIs. We first generated a diverse set of stuttering audio samples using our chosen audio generation strategy, applying each of the five mutators individually to produce a comprehensive range of stuttering patterns. For each commercial ASR system, we applied the same set of audio samples to ensure consistency and comparability of the results. We then recorded and analyzed the recognition errors produced by each system using the same metrics (WER, MER, and WIL) as described in the previous sections. To ensure the rigor and credibility of the experiment, we repeated the testing procedure five times to ensure the consistency and reliability of the results. Additionally, we included other relevant metrics such as processing time, model accuracy, and overall system performance to provide a more comprehensive and in-depth analysis of the ASR systems' performance on stuttering audio samples.
The results of our ASR testing experiment in Table IV show that there is a significant difference in the performance of the three commercial ASR systems on stuttering audio samples. Azure had the best performance in terms of all the metrics, with an average WER of 12.33%, an average MER of 9.78%, and an average WIL of 15.32%. Google had the second-best performance, with an average WER of 15.66%, an average MER of 13.44%, and an average WIL of 18.39%. IBM had the worst performance among the three, with an average WER of 16.23%, an average MER of 17.83%, and an average WIL of 21.67%. The differences in performance among the three systems were statistically significant (p < 0.05). These results indicate that while all three systems are capable of recognizing stuttering speech patterns to some extent, the Azure ASR system performed significantly better than the other two systems. The Google system also performed relatively well, but still had higher recognition errors than the Azure system. The IBM system had the worst performance among the three, indicating that it may not be as suitable for recognizing stuttering speech patterns. Overall, our ASR testing experiment provides valuable insights into the performance of commercial ASR systems on stuttering audio samples and can help inform the development of more accurate and inclusive ASR systems.
### _Bug Pattern (RQ5)_
In this study, we conducted a manual analysis of the stuttering recognition errors produced by commercial ASR systems on audio samples that contain stuttering speech patterns. We selected a diverse set of audio samples from those generated using our audio generation strategy. We applied these audio samples to three leading commercial ASR systems (Google, Azure, and IBM) using their respective APIs, and recorded the output transcriptions for each audio sample. We then manually analyzed the recognition errors produced by the ASR systems and divided them into different categories based on their type and severity. The error categorization process involved listening to the audio samples to compare them to the transcriptions, identifying misrecognized words or syllables, and categorizing the errors based on their type and severity. The identified errors were divided into different categories, such as
TABLE V: Results of Manual Error Categorization of Commercial ASR Systems on Stuttering Audio Samples

| Bug Type | Ground Truth | Buggy Text | Ratio |
| --- | --- | --- | --- |
| Word Injection | We can convert a type. | We can convert i a type. | 31.12% |
| Incorrect Word | She never spoke. | She never s spoke. | 21.43% |
| Word Repetition | He plays for Pisa. | HeHeHe plays for Pisa. | 14.29% |
| Word Omission | They both agree on the same. | They both agree the same. | 12.50% |
| Syllable Repetition | Together they had five sons. | Together they had five sons | 20.66% |
Fig. 6: Ablation study on the MOO-based selection approach and random selection approach
block stuttering, sound repetition, word repetition, interjections, and other relevant categories. The severity of the errors varied, with some being minor and others being severe enough to significantly impact the accuracy of the transcription.
As shown in Table V, the manual analysis of the recognition errors produced by the commercial ASR systems on stuttering audio samples revealed a diverse range of errors that can be categorized into different types and severity levels. The most common types of errors included block stuttering, sound repetition, word repetition, and interjections, with block stuttering being the most prevalent. The severity of the errors varied, with some being minor and others being severe enough to significantly impact the accuracy of the transcription. The Google ASR system had the lowest error rate and produced the most accurate transcriptions overall, with the majority of its errors falling into the minor severity category. The Azure and IBM ASR systems had higher error rates and produced less accurate transcriptions overall, with a higher proportion of their errors falling into the moderate and severe severity categories. The results of the manual error categorization process provide valuable insights into the performance of the commercial ASR systems on stuttering audio samples and can help inform the development of more accurate and inclusive ASR systems.
### _Threats To Validity_
In this section, we will provide a summary of the threats to validity in order to ensure that the results of our study are properly interpreted and contextualized. By acknowledging these threats, we aim to promote a more nuanced understanding of the findings and facilitate the development of future studies that can address these concerns.
**Biases in User Study.** The results of user studies should always be taken with a grain of salt. We have taken several measures to ensure that reliable conclusions can be drawn from the user study results. First, the study participants were recruited from a university, consisting of both native and non-native English speakers with high English fluency. Moreover, we provided guidelines for them to follow. Participants were instructed to listen to the audio pieces completely and given the option to play them back if needed before responding. This is to make sure that they are capable of recognizing the content of the speech samples in the survey. Second, for every survey question, we added an additional question to ask the participant about their confidence level in the choice. This helps to recognize and filter out random guesses made by the participants. Third, we utilized statistical measurements such as Fleiss' Kappa when analyzing the survey results. This helps to improve the soundness of the conclusions drawn from the analysis.
**Reliability of manual error analysis.** The manual error analysis involved a subjective categorization process that could be influenced by individual biases and interpretation of the stuttering patterns. To minimize this threat, we employed multiple independent evaluators and utilized a pre-trained stuttering recognition model for validation. However, it is still possible that some errors were missed or misclassified, which could potentially impact the validity of the results.
**Limited ASR system selection.** We utilized the open-source ASR systems to develop, debug, and perform a detailed evaluation of Aster. For commercial ASR systems, we only used them to verify that Aster can expose faults for more well-established ASR systems. This is mainly due to budget considerations. Since these commercial ASR systems charge on a per-query basis, the mutation (RQ2) and ablation (RQ3) studies could incur a significant cost. Moreover, the performance of the commercial ASR systems may change as they are updated and improved, which could also impact the validity of the results. As we have made several interesting findings with the current evaluation setup, we leave the more comprehensive study of commercial ASR systems as future work.
## V Related Work
**Software Accessibility Testing.** Ensuring software products are accessible to people with disabilities requires rigorous accessibility testing. Studies have proposed various approaches, including AXERAY [17], which infers semantic groupings to assess web accessibility, Latte [16], an automated framework for testing Android app accessibility and functional correctness, and MATE [14], which checks for accessibility issues related to visual impairment. Another study [15] focused on web accessibility testing for deaf individuals using Sign Language and proposed two automation approaches based on site analysis. An optimal combination of accessibility testing methods was proposed in [43] based on a cost-benefit analysis. Testing for stuttering on ASR systems is also crucial to ensuring equal access for individuals with speech disorders.
**ASR System Accessibility Enhancement.** There is growing interest in utilizing machine learning and deep learning techniques to detect stuttering in speech. Several studies have reviewed current approaches to stuttering classification [18], evaluated machine learning approaches to detect stuttering events [19], investigated the impact of multi-task and adversarial learning for robust stutter feature learning [20], proposed deep neural network models achieving state-of-the-art results in stutter detection [21], and addressed automatic detection of disfluency boundaries in children's speech [44]. Our work evaluates the performance of automatic speech recognition (ASR) systems in recognizing stuttering speech patterns, a critical step toward the development of more accurate and inclusive ASR systems.
**Other Related Testing Techniques.** The testing techniques most closely related to Aster are metamorphic testing techniques [45, 46, 47, 48] and MOO-aided testing techniques [49, 50, 51, 52, 53]. The design of Aster was inspired by some of these works. In Aster, the metamorphic relation that the output text of an ASR system should be the same for both the original audio and the mutated audio is used to provide the test oracle. The MOO strategy is used in Aster to select the seeds for mutating and generating test cases.
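As an illustration of how such a metamorphic oracle can be checked in practice, the sketch below compares the transcripts of a benign clip and its stuttering-mutated counterpart. The `asr_transcribe` callable, the file names, and the use of the jiwer package are placeholders, not ASTER's actual implementation.

```python
# Illustrative sketch of the metamorphic test oracle: the transcript of the
# mutated (stuttering) audio should match the transcript of the original audio.
import jiwer

def check_metamorphic_relation(asr_transcribe, original_wav, mutated_wav, threshold=0.0):
    ref = asr_transcribe(original_wav)   # transcript of the benign audio
    hyp = asr_transcribe(mutated_wav)    # transcript of the stuttering audio
    wer = jiwer.wer(ref, hyp)            # word error rate between the two transcripts
    return wer <= threshold, wer         # a failure is exposed when WER > threshold
```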
## VI Conclusion
In conclusion, this paper investigated the impact of stuttering speech patterns on the performance of ASR systems. We
developed a comprehensive methodology called Aster for generating stuttering audio samples, applying mutators designed to mimic common stuttering patterns. We then conducted a series of experiments to evaluate the performance of commercial ASR systems on these audio samples, comparing the results to those obtained using benign audio samples. The evaluation results shed light on the performance of ASR systems on stuttering speech patterns, highlighting the need for the development of more accurate and inclusive ASR systems that can better recognize and transcribe stuttering speech patterns.
## VII Acknowledgements
This research is supported by the National Research Foundation, Singapore, and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-GC-2023-008). It is also supported by the National Research Foundation, Singapore, and the Cyber Security Agency under its National Cybersecurity R&D Programme (NCRP25-P04-TAICNeN) and the NRF Investigatorship NRF-NRFI06-2020-0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore, and the Cyber Security Agency of Singapore. The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore ([https://www.nscc.sg](https://www.nscc.sg)).
|
2308.04897 | Fast simulation of light scattering and harmonic generation in axially
symmetric structures in COMSOL | In the field of optics and nanophotonics, simulation of electromagnetic
scattering plays a major role in the study of complex nanostructures and
optical devices. The numerical analysis of scattering spectra, even for
nanocavities with simple geometry, is associated with significant computational
difficulties. However, when the system exhibits certain symmetries, it becomes
possible to simplify the problem through the process of separation of
variables, which leads to a decrease in its dimension. In this paper, we aim to
provide a practical guide to a fast simulation of linear and non-linear
scattering problems in COMSOL Multiphysics for axisymmetric objects including
computation of scattering cross-section as well as its multipolar
decomposition, optical forces, and second harmonic generation. We also
accompany the provided guide with the ready-to-run COMSOL models. | Sergei Gladyshev, Olesia Pashina, Alexey Proskurin, Anna Nikolaeva, Zarina Sadrieva, Andrey Bogdanov, Mihail Petrov, Kristina Frizyuk | 2023-08-09T11:57:18Z | http://arxiv.org/abs/2308.04897v1 | Fast simulation of light scattering and harmonic generation in axially symmetric structures in COMSOL
###### Abstract
In the field of optics and nanophotonics, simulation of electromagnetic scattering plays a major role in the study of complex nanostructures and optical devices. The numerical analysis of scattering spectra, even for nanocavities with simple geometry, is associated with significant computational difficulties. However, when the system exhibits certain symmetries, it becomes possible to simplify the problem through the process of separation of variables, which leads to a decrease in its dimension. In this paper, we aim to provide a practical guide to a fast simulation of linear and non-linear scattering problems in COMSOL Multiphysics(r) for axisymmetric objects including computation of scattering cross-section as well as its multipolar decomposition, optical forces, and second harmonic generation. We also accompany the provided guide with the ready-to-run COMSOL(r) models.
numerical calculation, axial symmetry, electromagnetic scattering, Mie theory, multipole decomposition, second-harmonic generation
Footnote †: School of Physics and Engineering, ITMO University, 191002 St. Petersburg, Russia; [email protected]
## I Introduction
Numerical simulations play a crucial role in optics and nanophotonics since they can describe the optical properties of complex nanostructures and devices without their fabrication and direct experimental characterization. Numerical optimization has become an integral part of the research pipeline [1; 2; 3; 4], improving the performance of optical devices. The modern methods of computational electrodynamics allow one to analyze the interaction of light with complex optical systems, accounting for nonlocal and nonlinear responses [5; 6; 7; 8; 9], molecular dynamics [10; 11], and quantum mechanical effects [12; 13; 14].
The full-wave numerical simulation of real experimental samples or optical devices is time-consuming and requires substantial computational resources. A detailed analysis of linear and nonlinear scattering spectra of scatterers (nanoresonators or metaatoms) is crucial for designing the functionality of nanophotonic systems. Even a simple scattering task can be quite challenging in terms of computational resources when it comes to optimization problems [15] or the inverse design of nanophotonic systems [16]. However, if the scattering potential has specific symmetries, the scattering (or eigenvalue) problem can be essentially simplified via the separation of variables, effectively reducing the dimensionality of the problem. After that, the reduced problem can be solved numerically much faster than the initial one. This approach is universal and can be combined with various numerical methods like the finite-element method (FEM) [17], finite-difference methods [18; 19; 20], the method of moments [21], or others [4; 22; 23].
Scatterers with cylindrical symmetry attract particular interest because they can be fabricated relatively easily with modern nanotechnology and, at the same time, serve as ideal elementary blocks of complex nanophotonic systems. Cylindrical scatterers have already demonstrated a wide range of nanophotonic effects such as the resonant Kerker effect [24; 25], perfect absorption [26], and high-Q resonant states in single structures [27; 28; 29; 30; 31; 32; 33]. For scatterers with cylindrical symmetry, the separation of the azimuthal variable allows reducing the problem dimension from three-dimensional (3D) to two-dimensional (2D). Then the reduced 2D problem can be solved numerically. This approach is widely used for the calculation of light scattering from rotationally symmetric particles via T-matrix methods [34; 35]. It is of special interest to implement such a method in modern numerical simulation software such as COMSOL Multiphysics(r). In Ref. [36], Mark Oxborrow first implemented the rotational symmetry approach in COMSOL(r) for finding the spectra of whispering gallery modes in resonators of various shapes. Later, a 2D axial symmetry module was built into COMSOL(r) [37] as a default setting. It may seem that this module does not allow one to solve the scattering problem for an arbitrary angle of incidence, since obliquely incident waves
break the rotational symmetry of the problem. Nevertheless, an obliquely incident wave can be expanded into a Fourier series over \(e^{-im\varphi}\), and the scattering for each harmonic can then be calculated independently. This efficient approach was implemented in COMSOL for scalar acoustic fields [38] and recently for electromagnetic waves [38; 39; 40].
In this work, we go far beyond and provide a comprehensive guide on (i) how to efficiently solve both linear and nonlinear (second-harmonic generation) electromagnetic scattering problems taking advantage of the rotational symmetry of the scatterers; (ii) how to calculate the scattering cross-section and its multipolar decomposition in 2D axisymmetric systems; and (iii) how to calculate the Maxwell stress tensor and the Cartesian components of the optical force in the cylindrical basis. Though the proposed approach is universal and can be realized in various numerical packages, we have applied it to COMSOL Multiphysics(r) as it is one of the most widespread tools for electromagnetic simulations. Moreover, our method does not require any additional built-in features and can be implemented starting at least from version 5.5. We have already successfully used it for simulating the optical properties of resonant nanoantennas on a substrate [41], excitation of surface plasmon polaritons by spherical and cylindrical nanoantennas [42; 43], calculation of harmonic generation in resonators with rotational symmetry [44], optical forces acting on particles above structured substrates [45; 46], and perfectly absorbing nanoantennas on a conducting surface [26]. While the suggested approach is used for certain tasks in the mentioned papers, they do not contain a detailed technical description of the calculation methods. Here we fill this gap and provide a comprehensive practical guide to solving linear and nonlinear scattering problems for systems with rotational symmetry in COMSOL Multiphysics(r).
## II Axial symmetry from 3D to 2D
The method is based on the reduction of the 3D problem to a 2D problem by expanding the electromagnetic fields \(\mathbf{E}(\mathbf{r})\) into a Fourier series of waves corresponding to different azimuthal indices \(m\) [47]:
\[\mathbf{E}(\mathbf{r})=\sum_{m=-\infty}^{\infty}\mathbf{E}_{m}(\rho,z)e^{-im \varphi}. \tag{1}\]
Here, \(\mathbf{E}_{m}(\rho,z)\) represents the field components in cylindrical coordinates (\(\rho\), \(\varphi\), \(z\)), and \(m\) is the number associated with the respective azimuthal harmonic (see Fig. 1). The total field \(\mathbf{E}(\mathbf{r})\) can be represented as a sum of the incident (background) \(\mathbf{E}^{\mathrm{inc}}(\mathbf{r})\) and scattered \(\mathbf{E}^{\mathrm{scat}}(\mathbf{r})\) fields
\[\mathbf{E}(\rho,\varphi,z)=\mathbf{E}^{\mathrm{inc}}(\rho,\varphi,z)+\mathbf{ E}^{\mathrm{scat}}(\rho,\varphi,z). \tag{2}\]
This formalism is implemented in COMSOL Multiphysics(r) [48]. Its advantage in the accuracy of the calculation becomes crucial when the magnitude of the scattered field is much smaller than that of the incident field. Both the incident and scattered fields can be expanded into a Fourier series:
\[\mathbf{E}^{v}(\rho,\varphi,z)=\sum_{m=-\infty}^{\infty}\mathbf{E}_{m}^{v}( \rho,z)e^{-im\varphi}, \tag{3}\]
where \(v=\{\mathrm{inc,\,scat}\}\). By virtue of the axial symmetry of the problem, a Fourier amplitude of the incident field \(\mathbf{E}^{\mathrm{inc}}_{m}(\rho,z)\) induces only the Fourier amplitude of the scattered field \(\mathbf{E}^{\mathrm{scat}}_{m^{\prime}}(\rho,z)\) with the same azimuthal index, i.e. \(m=m^{\prime}\). One can say that the azimuthal harmonics with different indices \(m\) do not mix with each other [49; 50; 51]. Therefore, each Fourier amplitude of the scattered field \(\mathbf{E}_{m}^{\text{scat}}(\rho,z)\) can be calculated independently. Then, taking a sum over \(m\) [see Eq. (3)], one retrieves the scattered field in 3D space.
Figure 1: Moving from 3D to 2D. Calculation of electromagnetic properties of the system in the 2D model. As an example, it is shown in detail how the components of the electromagnetic field can be rewritten into a series \(\mathbf{E}_{m}(\rho,z)e^{-im\varphi}\) for TE polarization.
Formally, the Fourier expansion (1) reduces a 3D problem to an infinite set of 2D problems as the series is infinite. However, if the maximal radial size \(R_{\text{max}}\) of a scatterer is not large, \(R_{\text{max}}k_{0}\sin\theta\lesssim 1\), the Fourier series (1) converges fast and only a few terms are enough to describe the scattered field accurately. Here \(k_{0}\) is the wavenumber of the incident plane wave in the surrounding space, and \(\theta\) is the angle of incidence. Thus, \(m\in[-M_{\text{max}}..M_{\text{max}}]\), where the truncation number \(M_{\text{max}}\) can be estimated from the empirical rule \(M_{\text{max}}\approx R_{\text{max}}k_{0}\sin\theta\). A more accurate analysis of the truncation number and its connection with precision can be found in [52; 53; 54].
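As a rough illustration of this empirical rule, the short sketch below evaluates \(M_{\text{max}}\) for example parameters close to those used later in the paper; the specific numbers are ours, not prescribed by the authors.

```python
# Illustrative sketch (not part of the paper's COMSOL models): estimating the
# truncation number M_max ~ R_max * k0 * sin(theta) for the azimuthal series.
import numpy as np

wavelength = 1550e-9          # incident wavelength, m (example value)
R_max = 250e-9                # maximal radial size of the scatterer, m (example value)
theta = np.deg2rad(30.0)      # angle of incidence (example value)

k0 = 2 * np.pi / wavelength
estimate = R_max * k0 * np.sin(theta)
M_max = int(np.ceil(estimate))
print(f"k0*R_max*sin(theta) = {estimate:.2f} -> M_max = {M_max}")
# In practice one adds a small safety margin and checks convergence of the
# partial cross-sections with increasing M_max.
```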
Therefore, the problem of linear scattering from an axially symmetric structure of finite size can be reduced to a finite number of 2D scattering problems. It is also worth mentioning that, due to the orthogonality of the azimuthal functions \(e^{-im\varphi}\) with different \(m\), they correspond to independent scattering channels. Thus, the Fourier expansion (1) not only allows for accelerating the calculations but also gives important physical information on how the scattered power is redistributed over the scattering channels. Below we provide hands-on formulas for the scattering, extinction, and absorption cross-sections, the Maxwell stress tensor, and the optical force in terms of 2D harmonics.
### Scattering cross-section
The Poynting vector for the scattered field corresponding to the angular harmonic \(e^{-im\varphi}\) can be written as
\[\mathbf{S}_{m}^{\text{scat}}=\frac{1}{2}\text{Re}[\mathbf{E}_{m}^{\text{scat}} \times\mathbf{H}_{m}^{\text{scat}^{*}}], \tag{4}\]
Thus, the partial scattering cross-section is
\[\begin{split}\sigma_{m}^{\text{scat}}=&\frac{1}{I_{ \text{inc}}}\int_{S^{2}}(\mathbf{S}_{m}^{\text{scat}}\cdot\mathbf{n})\ \text{d}s=\\ =&\frac{1}{I_{\text{inc}}}\int_{C_{R}}2\pi\rho( \mathbf{S}_{m}^{\text{scat}}\cdot\mathbf{n})\ \text{d}c,\end{split} \tag{5}\]
where \(I_{\text{inc}}=|E^{\text{inc}}|^{2}/(2Z)\) is the energy flux of the incident wave, \(Z=\sqrt{\mu_{0}/\varepsilon_{0}}\) is the impedance of the embedding medium (vacuum in our case), and the integral is taken over the sphere \(S^{2}\) surrounding the structure. In 2D axial symmetry this sphere becomes the semi-circle \(C_{R}\) (see Fig. 1). For the 2D geometry, the integral over the angle \(\varphi\) gives the multiplier \(2\pi\rho\), and \(\text{d}c\) is the circle arc length differential. We note that COMSOL(r) allows omission of the \(2\pi\rho\) multiplier if the Compute surface integral option is selected. See details of the derivation in the Supplemental Material.
Due to orthogonality of the electromagnetic modes with different \(m\), the total scattering cross-section can be obtained by summing over all orders \(m\):
\[\sigma^{\text{scat}}=\sum_{m=0}^{\infty}(2-\delta_{0,m})\sigma_{m}^{\text{scat}}, \tag{6}\]
where \(\delta_{0,m}\) is the Kronecker symbol, which appears due to \(\sigma_{m}^{\text{scat}}=\sigma_{-m}^{\text{scat}}\), according to the properties of the problem [see Eq. (22)].
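The summation in Eq. (6) is a simple post-processing step once the partial cross-sections have been computed; the sketch below, with placeholder values, shows the bookkeeping with the \((2-\delta_{0,m})\) factor.

```python
# Sketch of Eq. (6): assembling the total scattering cross-section from the
# partial cross-sections sigma_m exported for m = 0..M_max (e.g. from the 2D
# axisymmetric COMSOL sweeps). The array below is a placeholder for illustration.
import numpy as np

sigma_m = np.array([1.2e-13, 8.0e-14, 3.0e-15, 1.0e-16])  # m = 0, 1, 2, 3 (in m^2)

weights = np.where(np.arange(len(sigma_m)) == 0, 1.0, 2.0)  # (2 - delta_{0,m})
sigma_total = np.sum(weights * sigma_m)
print(f"total scattering cross-section = {sigma_total:.3e} m^2")
```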
### Absorption cross-section
As well as scattering cross-section, the total absorption cross-section \(\sigma^{\text{abs}}\) can be calculated as a sum of partial absorption cross-sections \(\sigma_{m}^{\text{abs}}\):
\[\sigma^{\text{abs}}=\sum_{m=0}^{\infty}(2-\delta_{0,m})\sigma_{m}^{\text{abs }}. \tag{7}\]
Each partial absorption cross-section can be calculated through the following volume integral:
\[\sigma_{m}^{\text{abs}}=\frac{\omega\pi}{I_{\text{inc}}}\iint_{\Omega}\text{ Im}\left\{\mathbf{P}_{m}^{*}\mathbf{E}_{m}\right\}\rho\text{d}\rho\text{d}z. \tag{8}\]
Here, \(\mathbf{P}_{m}=\varepsilon_{0}(\varepsilon-1)\mathbf{E}_{m}\) is the polarization, \(\varepsilon\) is the dielectric permittivity of the scatterer's material. The integral is taken over the cross-section area \(\Omega\) of the scatterer [see Fig. 1(a)]. See details of the derivation in the Supplemental Material.
### Extinction cross-section
The total extinction cross-section \(\sigma^{\text{ext}}\) can be calculated as the sum of the partial extinction cross-sections \(\sigma_{m}^{\text{ext}}\) corresponding to different \(m\) by analogy with Eqs. (6) and (7). The partial extinction cross-sections can be calculated in several ways:
1. By definition of the extinction cross-section: \[\sigma_{m}^{\text{ext}}=\sigma_{m}^{\text{scat}}+\sigma_{m}^{\text{abs}}.\] (9)
2. By taking a surface integral over the cross-section
area \(\Omega\) of the scatterer [see Fig. 1(a)]: \[\sigma_{m}^{\rm ext}=\frac{\omega\pi}{I_{\rm inc}}\iint_{\Omega}\mathrm{Im}\left\{ \mathbf{P}_{m}^{*}\mathbf{E}_{m}^{\rm inc}\right\}\rho\mathrm{d}\rho\mathrm{d}z.\] (10)
3. By taking a line integral over \(C_{R}\) [see Fig. 1(a)]: \[\sigma_{m}^{\rm ext}=\frac{1}{I_{\rm inc}}\int_{C_{R}}2\pi\rho(\mathbf{S}_{m}^ {\rm ext}\cdot\mathbf{n})\ \mathrm{d}c,\] (11) where \[\mathbf{S}_{m}^{\rm ext}=-\frac{1}{2}\mathrm{Re}\left\{\mathbf{E}_{m}^{\rm inc }\times\mathbf{H}_{m}^{\rm scat*}+\mathbf{E}_{m}^{\rm scat}\times\mathbf{H}_{m }^{\rm inc*}\right\}.\] (12)
Therefore, the scattering, absorption, and extinction cross-sections of an axially symmetric scatterer can be calculated using either surface or line integrals. See details of the derivation in the Supplemental Material.
### Maxwell stress tensor and optical forces
Optical forces are widely studied in nanooptics and nanophotonics as they allow for trapping and manipulating micro- and nanoobjects via optical fields [55, 56, 57]. While the optical force can be directly computed by integrating the Maxwell stress tensor over the area containing the scatterer, one still needs to know the electromagnetic field distribution in the near- or far-zone. One can relate the terms with different \(m\) to the optical force in order to effectively compute the forces acting on a scatterer with rotational symmetry. Indeed, the Maxwell stress tensor has the form [58]
\[\widehat{T}=\varepsilon_{0}\mathbf{E}\otimes\mathbf{E}+\mu_{0}\mathbf{H}\otimes\mathbf{H}-\frac{1}{2}\left(\varepsilon_{0}\mathbf{E}\cdot\mathbf{E}+\mu_{0}\mathbf{H}\cdot\mathbf{H}\right)\widehat{I}. \tag{13}\]
Here \(\varepsilon_{0}\) and \(\mu_{0}\) are the permittivity and permeability of vacuum, \(\widehat{I}\) is the identity tensor. The optical force can be calculated by integration of Eq. (13) over the closed area containing the scatterer:
\[\mathbf{F}=\oint_{S^{2}}\widehat{T}\mathrm{d}s. \tag{14}\]
Substituting expansion (1) into Eq. (14) one can reduce the integration to the integration over the line \(C_{R}\) and summation over the harmonics. The \(x\)-component of the force acting on the scatterer has the following form
\[\left\langle F_{x}\right\rangle=\left\langle F_{x}^{\rm E}\right\rangle+ \left\langle F_{x}^{\rm H}\right\rangle, \tag{15}\]
where
\[\left\langle F_{x}^{\rm E}\right\rangle =\frac{\varepsilon_{0}}{4}\,\mathrm{Re}\int_{C_{R}}2\pi\rho\, \mathrm{d}c\sum_{m=-\infty}^{\infty}\left[\left(E_{m,\rho}\left(E_{m+1,\rho} \right)^{*}-\right.\right. \tag{16}\] \[-\left.\left.E_{m,\varphi}\left(E_{m+1,\varphi}\right)^{*}-E_{m, z}\left(E_{m+1,z}\right)^{*}-\right.\right.\] \[-\left.\left.iE_{m,\rho}\left(E_{m+1,\varphi}\right)^{*}+iE_{m+1, \rho}\left(E_{m,\varphi}\right)^{*}\right)n_{\rho}+\] \[+\left.2E_{m,\rho}\left(E_{m,z}\right)^{*}n_{z}\right].\]
Here \(n_{\rho}\) and \(n_{z}\) are the coordinate-dependent components of the vector normal to the integration surface, and \(\left\langle F_{x}^{\rm H}\right\rangle\) satisfies the same equation after replacing \(\varepsilon_{0}\) with \(\mu_{0}\) and \(\mathbf{E}\) with \(\mathbf{H}\). We refer readers to the Supplementary Materials, where they can find details on the derivation of the formulas above and the expressions for other components of the optical force.
We append to our paper a COMSOL Multiphysics(r) file [59] that calculates the values of the optical force components for the simplest case of a plane wave incident on a single spherical nanoparticle in a vacuum. Since the proposed method applies to any axisymmetric system, it is also convenient for more complicated cases. For example, we used it to investigate optomechanical properties of nanoobjects above the substrates with hyperbolic dispersion [45, 46].
## III Linear scattering of a plane wave
The formulated approach can be illustrated by the example of a TE-polarized plane wave scattering on a dielectric cylinder. Let us consider a plane wave incident on the cylinder at an angle \(\theta\) [see Fig. 1(b)]. The \(\mathbf{k}_{0}\)-vector lies in the \(xz\)-plane, while the \(\mathbf{E}\)-field has only a \(y\)-component. Thus, the wavevector has only two components
\[\mathbf{k}_{0}=k_{0z}\mathbf{e}_{z}+k_{0x}\mathbf{e}_{x}. \tag{17}\]
The incident electric field in cylindrical coordinates will have the following form:
\[\mathbf{E}^{\rm inc}=\left(\begin{array}{c}E_{\rho}^{\rm inc}\\ E_{\varphi}^{\rm inc}\\ E_{z}^{\rm inc}\end{array}\right)=\left(\begin{array}{c}E_{0}\sin\varphi\\ E_{0}\cos\varphi\\ 0\end{array}\right)e^{ik_{0z}z-ik_{0x}\rho\cos\varphi}. \tag{18}\]
One can expand the incident field into the series over \(e^{-im\varphi}\) using Jacobi-Anger expansion [47]:
\[e^{-ik_{0x}\rho\cos\varphi}=\sum_{m=-\infty}^{\infty}\left(-i\right)^{m}J_{m}(k_{0x}\rho)\,e^{-im\varphi}. \tag{19}\]
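It is straightforward to verify this expansion numerically; the following sketch (ours, using SciPy's Bessel functions) checks Eq. (19) at a single point with example values of \(k_{0x}\rho\) and \(\varphi\).

```python
# Numerical check of the Jacobi-Anger expansion, Eq. (19), used to expand the
# incident plane wave; purely illustrative and independent of the COMSOL models.
import numpy as np
from scipy.special import jv

k0x_rho = 2.7          # example value of k_{0x} * rho
phi = 0.9              # example azimuthal angle
M = 40                 # truncation of the series

lhs = np.exp(-1j * k0x_rho * np.cos(phi))
m = np.arange(-M, M + 1)
rhs = np.sum((-1j) ** m * jv(m, k0x_rho) * np.exp(-1j * m * phi))
print(abs(lhs - rhs))  # should be at machine-precision level
```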
The radial and azimuthal components of the field read
\[E_{\rho}^{\rm inc}=\sum_{m=-\infty}^{+\infty}\underbrace{E_{0}e^{ik_{0z}z}(-i)^{m +2}\frac{m}{k_{0x}\rho}J_{m}\left(k_{0x}\rho\right)}_{E_{m,\rho}}e^{-im\varphi}, \tag{20}\]
\[E_{\varphi}^{\rm inc}=\sum_{m=-\infty}^{+\infty}\underbrace{E_{0}e^{ik_{0z}z}(- i)^{m-1}\frac{1}{k_{0x}}\frac{\mathrm{d}J_{m}(k_{0x}\rho)}{\mathrm{d}\rho}}_{E_{m,\varphi}}e^{-im\varphi}. \tag{21}\]
One can notice that the \(\varphi\)-component of the field is even, while the \(\rho\)-component is odd
\[E_{m,\varphi}=E_{-m,\varphi},\quad E_{m,\rho}=-E_{-m,\rho}. \tag{22}\]
The expressions for the case of the TM-polarization are provided in the Supplemental Material.
Once the harmonic amplitudes are found numerically, the scattering cross-section can be computed by using Eqs. (4)-(6). Here, as an example, we consider a semiconductor cylinder made of GaAs located in free space. The choice of material is motivated by the fact that semiconductor materials are widely utilized as a material platform for nanophotonics [60]. On top of that, GaAs has a large second-order nonlinear susceptibility \(\hat{\chi}^{(2)}\) [61], responsible for second-harmonic generation, which is discussed in Sec. V.
Figure 2 shows the partial normalized scattering cross-sections as a function of the cylinder diameter \(D\) for \(m\in[-3..3]\) calculated in the 2D axisymmetric model, and the total cross-section calculated in the 3D model. The height of the cylinder in the considered case is fixed at \(h=400\) nm; the wavelength of the incident wave is \(\lambda=2\pi/k_{0}=1550\) nm, and the angle of incidence is \(\theta=30^{\circ}\). Since the harmonic amplitudes decay fast with the harmonic number \(m\), the sum of the partial cross-sections converges rapidly to the total cross-section obtained in the 3D simulation. One can also see the resonant behavior in the scattering spectra, which is clearly associated with the excitation of Mie resonances, while at small diameters the scattering cross-section decreases, manifesting the Rayleigh regime of scattering. Note that the term with a particular \(m\) refers to the sum of all possible vector spherical harmonics [62] with this angular momentum projection. Thus, by looking at such an expansion we can only partially extract the multipolar decomposition. However, since each mode of a cylindrical structure consists of an infinite sum of multipoles with the same \(m\) [50; 51], one can tell which type of mode is excited.
## IV Multipolar decomposition
The multipolar decomposition is a powerful tool in electromagnetic scattering theory [63; 64; 65; 66; 67; 68; 69; 70; 71; 72], which allows for predicting the optical response of either compact scatterers or their finite or infinite arrays. It is based on the idea that any electromagnetic field can be expanded over a series of vector spherical harmonics (VSHs) [73; 62]. Although the alternative Cartesian multipole decomposition is also found very useful for many particular applications [74; 70], we will focus on the spherical expansion in this paper. Accordingly, the scattered electric field is decomposed into multipolar fields in SI units as
\[\mathbf{E}(\mathbf{r})=Z\sum_{j=1}^{\infty}\sum_{m=-j}^{j}ia_{jm}\mathbf{N}_{ jm}^{(3)}(\mathbf{r})+b_{jm}\mathbf{M}_{jm}^{(3)}(\mathbf{r}), \tag{23}\]
where \(a_{jm}\), \(b_{jm}\) are the coefficients characterizing the contributions from the electric \(\mathbf{N}_{jm}^{(3)}(\mathbf{r})\) and magnetic \(\mathbf{M}_{jm}^{(3)}(\mathbf{r})\) vector spherical harmonics [75], whose radial part \(h_{n}(k_{0}r)\) is a spherical Hankel function related to the outgoing wave (see the comparison of VSH definitions in the Suppl. Info of [44]).
In the following part of this section, we provide the link between the azimuthal Fourier 2D expansion [Eq. (1)] and the spherical multipole decomposition [Eq. (23)]. Specifically, we show how to make the multipole decomposition, give the exact expressions of the multipolar coefficients, and compare the numerical results from COMSOL(r) with the exact results obtained using Mie theory [73; 76]. We also accompany our analysis with the ready-to-run COMSOL(r) model [59].
Figure 2: Spectra of the total scattering cross-section \(\sigma_{\rm scat}\) normalized on \(h\cdot D\) of the semiconductor cylinder resonator (GaAs) with height \(h=400\) nm as a function of diameter \(D\) for TE polarization at an angle of incidence of \(\theta=30^{\circ}\). Wavelength of the incident wave \(\lambda=1550\) nm.
The expressions for the multipole coefficients in the spherical basis read [68; 75; 77]:
\[a_{jm}=-(i)^{j-1}\frac{k_{0}^{2}}{2\pi}\sum_{\bar{\ell}\bar{m}}(-i)^{\bar{\ell}}\int\mathrm{d}\widehat{\mathbf{p}}\ \mathbf{Z}_{jm}^{\dagger}(\widehat{\mathbf{p}})Y_{\bar{\ell}\bar{m}}(\widehat{\mathbf{p}})\cdot\\ \cdot\int\mathrm{d}^{3}\mathbf{r}\ \mathbf{J}(\mathbf{r})Y_{\bar{\ell}\bar{m}}^{*}(\widehat{\mathbf{r}})j_{\bar{\ell}}(k_{0}r), \tag{24}\]
\[b_{jm}=-(i)^{j}\frac{k_{0}^{2}}{2\pi}\sum_{\bar{\ell}\bar{m}}(-i)^{\bar{\ell}}\int\mathrm{d}\widehat{\mathbf{p}}\ \mathbf{X}_{jm}^{\dagger}(\widehat{\mathbf{p}})Y_{\bar{\ell}\bar{m}}(\widehat{\mathbf{p}})\cdot\\ \cdot\int\mathrm{d}^{3}\mathbf{r}\ \mathbf{J}(\mathbf{r})Y_{\bar{\ell}\bar{m}}^{*}(\widehat{\mathbf{r}})j_{\bar{\ell}}(k_{0}r), \tag{25}\]
where \(\bar{m}\in\{-\bar{\ell}\ldots\bar{\ell}\}\). For electric component \(a_{jm}\), index \(\bar{\ell}\) takes only two allowed values \(\bar{\ell}\in\{j-1,j+1\}\), while for magnetic multipoles \(b_{jm}\), it takes only one allowed value \(\bar{\ell}=j\). The \(j_{\ell}(k_{0}r)\) is the spherical Bessel function. The \(Y_{\ell m}\colon S^{2}\to\mathbb{C}\) is the scalar spherical harmonics defined as in Ref. [78]. The symbol \(\widehat{\mathbf{p}}=\mathbf{p}/|\mathbf{p}|\) represents the angular part of the momentum vector \(\mathbf{p}\), where \(|\mathbf{p}|=\omega/c\). Vector \(\widehat{\mathbf{r}}=\mathbf{r}/|\mathbf{r}|\) is the unit vector along \(\mathbf{r}\).
The \(\mathbf{Z}_{jm}(\hat{\mathbf{p}}),\mathbf{X}_{jm}(\hat{\mathbf{p}})\) are the multipolar functions in momentum space defined as
\[\mathbf{X}_{jm}(\hat{\mathbf{p}})=\frac{1}{\sqrt{j(j+1)}}\mathbf{L}Y_{jm}( \hat{\mathbf{p}}), \tag{26}\]
\[\mathbf{Z}_{jm}(\hat{\mathbf{p}})=i\hat{\mathbf{p}}\times\mathbf{X}_{jm}( \hat{\mathbf{p}}). \tag{27}\]
The current density corresponds to the polarization vector as \(\mathbf{J}(\mathbf{r})=i\omega\mathbf{P}(\mathbf{r})=i\omega\varepsilon_{0}(\varepsilon-1)\mathbf{E}\), since the harmonic time dependence in COMSOL(r) is defined as \(e^{i\omega t}\). It can be expanded into a Fourier series
\[\mathbf{J}(\rho,z,\varphi)=\sum_{m=-\infty}^{\infty}\mathbf{J}_{m}(\rho,z)e^{ -im\varphi}. \tag{28}\]
In the following we will use the components of the current \(J_{m\rho}\), \(J_{m\varphi}\), \(J_{mz}\), while the small \(j\) stands for spherical Bessel functions.
The total power radiated is a sum of contributions from the different multipoles:
\[P^{\mathrm{scat}}=\frac{Z}{2k_{0}^{2}}\sum_{j,m}\left(|a_{jm}|^{2}+|b_{jm}|^{2 }\right). \tag{29}\]
The scattering cross-section \(\sigma^{\mathrm{scat}}\) is obtained from \(P^{\mathrm{scat}}\) by normalization to the energy flux of the incident wave \(I^{\mathrm{inc}}=|E^{\mathrm{inc}}|^{2}/(2Z)\)
\[\sigma^{\mathrm{scat}}=\frac{Z^{2}}{k_{0}^{2}|E^{\mathrm{inc}}|^{2}}\sum_{j,m} \left(|a_{jm}|^{2}+|b_{jm}|^{2}\right). \tag{30}\]
We provide a set of scripts [79] that help in the computation of exact multipolar moments for systems with axial symmetry. The scripts provide the expressions for the coefficients \(a_{jm}\), \(b_{jm}\), \(j\leq 4\) in terms of \(J_{m\varphi}\), \(J_{m\rho}\), \(J_{mz}\), which one obtains by taking the first integral over \(\widehat{\mathbf{p}}\) and the second one only over \(\varphi\) in Eqs. (24) and (25). These expressions are written in cylindrical coordinates and should be substituted into COMSOL(r) and then integrated over the nanoparticle's "surface" (an integral over the remaining coordinates \(\rho\) and \(\theta_{0}\)).
As a result, for example, for magnetic dipoles, one can obtain:
\[b_{1-1} =\int_{\Omega}\mathrm{d}\rho\mathrm{d}\theta_{0}\ j_{1}(k_{0}r)k_ {0}^{2}\frac{\sqrt{3\pi}}{2}\rho\ \cdot\] \[\quad\cdot\left((-iJ_{-1\varphi}+J_{-1\rho})\cos\theta_{0}-J_{-1z} \sin\theta_{0}\right)\] \[b_{10} =-i\int_{\Omega}\mathrm{d}\rho\mathrm{d}\theta_{0}\ j_{1}(k_{0}r) J_{0\varphi}k_{0}^{2}\sqrt{\frac{3\pi}{2}}\rho\sin\theta_{0} \tag{31}\] \[b_{11} =\int_{\Omega}\mathrm{d}\rho\mathrm{d}\theta_{0}j_{1}(k_{0}r)k_{0 }^{2}\frac{\sqrt{3\pi}}{2}\rho\ \cdot\] \[\quad\cdot\left((iJ_{1\varphi}+J_{1\rho})\cos\theta_{0}-J_{1z} \sin\theta_{0}\right),\]
where the integration should be performed over the nanoparticle's volume, which appears as a surface in 2D geometry; \(\theta_{0}\) is the polar (zenith) angle in the spherical coordinate system.
We used the derived expressions of the multipolar moments for the case of light scattering on a sphere. The comparison of the multipoles extracted via Eqs. (24)-(25) with the analytical results predicted by Mie theory [76] is shown in Fig. 3 for a GaAs sphere of radius \(a=250\) nm placed in free space. One can see excellent agreement between the Mie theory and the numerical simulations that account for the axial symmetry of the structure. The COMSOL Multiphysics(r) file reproducing the results shown in Fig. 3 is available [59]. Note
that the total scattering cross-section can be obtained by summing over all multipolar contributions.
## V Second harmonic generation
In this section, we extend the method to speed up and improve the performance of simulations of second-harmonic generation (SHG) from subwavelength scatterers. Second-harmonic generation is a nonlinear optical process in which two photons of the same frequency interact and generate a third photon with doubled frequency [81]. From the early years of nonlinear optics, second and higher harmonic generation was rightly regarded as an effective tool for frequency conversion. Meanwhile, developing efficient subwavelength sources of SHG is still one of the topical problems of experimental and theoretical nanophotonics [82, 83, 84, 85, 86, 87, 88, 89, 90]. Even in the simplest geometries, such as a spherical scatterer under plane-wave excitation (Mie geometry), the solution of the SHG problem is complex [80, 91, 92], and numerical methods play a crucial role in designing nanophotonic systems. The axial symmetry of the scatterers allows for significantly speeding up the simulations of the SHG by using the azimuthal expansion method.
The main challenges of the extension of the proposed method to the second-harmonic domain are i) the nonlinearity of the problem and ii) the symmetry of the material tensor responsible for SHG. Indeed, one can describe generation at the doubled frequency \(2\omega\) via the polarization vector \(\mathbf{P}^{2\omega}\) determined by the second-order nonlinear optical susceptibility tensor \(\hat{\chi}^{(2)}\):
\[\mathbf{P}^{2\omega}(\mathbf{r})=\varepsilon_{0}\hat{\chi}^{(2)}\mathbf{E}^{ \omega}(\mathbf{r})\mathbf{E}^{\omega}(\mathbf{r}). \tag{32}\]
where \(\varepsilon_{0}\) is vacuum permittivity. This approach is valid for non-centrosymmetric materials such as gallium arsenide [93, 94, 95, 96, 97, 30] or lithium niobate [98, 99, 100, 101, 102, 103, 104], and we will limit our consideration to them in this paper.
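As a concrete Cartesian illustration of Eq. (32) for the zinc-blende GaAs tensor used later in this section (only components with all three indices different are nonzero, and they are all equal), the sketch below evaluates the second-harmonic polarization. The numerical values of \(\chi^{(2)}\) and of the field are placeholders, not material constants quoted from the paper.

```python
# Sketch of Eq. (32) for the zinc-blende (GaAs-type) chi^(2) tensor in
# Cartesian coordinates: P_i(2w) = eps0 * sum_jk chi_ijk E_j E_k.
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
chi2 = 1.0e-10            # m/V, illustrative magnitude only

def shg_polarization(E):
    """Second-harmonic polarization for a tensor with chi_xyz-type components only."""
    Ex, Ey, Ez = E
    # the factor of 2 comes from summing the two equal permutations, e.g. xyz + xzy
    return eps0 * chi2 * np.array([2 * Ey * Ez,   # P_x
                                   2 * Ez * Ex,   # P_y
                                   2 * Ex * Ey])  # P_z

E_fundamental = np.array([1.0e6, 0.5e6, 0.2e6])  # example field amplitudes, V/m
print(shg_polarization(E_fundamental))
```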
Already at this point, it becomes clear that despite the cylindrical symmetry of the scatterer, the nonlinearity of the problem and the symmetry of the tensor "mix" the input harmonics with different \(m\) [80, 105]. One should not be discouraged by this fact: our method can still be applied here once a proper expansion of the nonlinear tensor is carried out and the incident field's frequency is doubled.
First of all, let us expand the field inside the nanoparticle as follows
\[\mathbf{E}^{\text{in}}(\rho,\varphi,z)=\sum_{m=-\infty}^{\infty}\mathbf{E}_{m} ^{\omega}(\rho,z)e^{-im\varphi}. \tag{33}\]
Figure 4: Possible \(m_{\text{tens}}\) for different \(\hat{\chi}^{(2)}\) tensor components. The parity \(p_{r}^{ijk}\) is also given, which reflects the behavior of the tensor under reflection in the \(y=0\) plane. This affects the second-harmonic parity under this reflection [80] but does not play a major role for our considerations. Note that in our COMSOL(r) model file [59] these components are also marked by their colors for convenience.
Figure 3: Partial cross-sections of a plane wave scattered on a sphere, corresponding to each multipole moment and normalized over the geometrical cross-section \(\sigma_{\text{geom}}=\pi a^{2}\), calculated with the exact expressions (24)–(25) (solid lines) and with Mie theory (dashed lines). The spherical particle is made of GaAs and has radius \(a=250\) nm.
Now one can move on to the second-order susceptibility tensor, which for convenience we represent as follows:
\[\hat{\chi}^{(2)}=\chi^{(2)}_{ijk}\mathbf{e}_{i}\otimes\mathbf{e}_{j}\otimes \mathbf{e}_{k}, \tag{34}\]
where \(\mathbf{e}_{i,j,k}\) is the unit vector with \(i\), \(j\), \(k\) being \(x\), \(y\) or \(z\). Hereinafter, we will omit the sign of the tensor product. After one introduces three variables \(m_{1}\), \(m_{2}\), \(m_{3}\), where \(m_{1}\) and \(m_{2}\) are associated with the incident field and \(m_{3}\) corresponds to the SHG field, the expansion (33) substituted into (32) gives
\[\begin{split}\mathbf{P}^{2\omega}(\rho,\varphi,z)=& \sum_{m_{3}}\mathbf{P}^{2\omega}_{m_{3}}(\rho,z)e^{-im_{3}\varphi}=\\ &=\mathbf{e}_{i}\varepsilon_{0}\chi^{(2)}_{ijk}(\mathbf{e}_{j} \mathbf{E}^{\text{in}}(\mathbf{r}))(\mathbf{e}_{k}\mathbf{E}^{\text{in}}( \mathbf{r}))=\\ =&\sum_{m_{1},m_{2}}\mathbf{e}_{i}\varepsilon_{0} \chi^{(2)}_{ijk}(\mathbf{e}_{j}\mathbf{E}^{\omega}_{m_{2}}(\rho,z)e^{-im_{2} \varphi})\cdot\\ &\cdot(\mathbf{e}_{k}\mathbf{E}^{\omega}_{m_{1}}(\rho,z)e^{-im_ {1}\varphi}),\end{split} \tag{35}\]
where we use Einstein summation convention for \(i,\ j,\ k\).
Importantly, here \(m_{3}\neq m_{1}+m_{2}\) in the general case because of the spatial symmetry of \(\hat{\chi}^{(2)}\). Indeed, the relations for the cylindrical coordinate system
\[\begin{cases}\mathbf{e}_{x}=\mathbf{e}_{\rho}\cos\varphi-\mathbf{e}_{\varphi }\sin\varphi,\\ \mathbf{e}_{y}=\mathbf{e}_{\rho}\sin\varphi+\mathbf{e}_{\varphi}\cos\varphi, \\ \mathbf{e}_{z}=\mathbf{e}_{z}\end{cases} \tag{36}\]
substituted into tensor components (34) lead to the additional exponential terms. For example, for \(\mathbf{e}_{x}\mathbf{e}_{y}\mathbf{e}_{z}\) and \(\mathbf{e}_{y}\mathbf{e}_{x}\mathbf{e}_{z}\) (we consider these two terms simultaneously for further convenience), one can obtain:
\[\begin{split}\mathbf{e}_{x}\mathbf{e}_{y}\mathbf{e}_{z}=\frac{e ^{2i\varphi}-e^{-2i\varphi}}{4i}\mathbf{e}_{\rho}\mathbf{e}_{\rho}\mathbf{e}_ {z}-\frac{e^{2i\varphi}-e^{-2i\varphi}}{4i}\mathbf{e}_{\varphi}\mathbf{e}_{ \varphi}\mathbf{e}_{z}+\\ +\frac{e^{2i\varphi}+e^{-2i\varphi}-2}{4}\mathbf{e}_{\varphi} \mathbf{e}_{\rho}\mathbf{e}_{z}+\frac{e^{2i\varphi}+e^{-2i\varphi}+2}{4} \mathbf{e}_{\rho}\mathbf{e}_{\varphi}\mathbf{e}_{z},\end{split} \tag{37}\]
\[\begin{split}\mathbf{e}_{y}\mathbf{e}_{x}\mathbf{e}_{z}=\frac{e ^{2i\varphi}-e^{-2i\varphi}}{4i}\mathbf{e}_{\rho}\mathbf{e}_{\rho}\mathbf{e}_ {z}-\frac{e^{2i\varphi}-e^{-2i\varphi}}{4i}\mathbf{e}_{\varphi}\mathbf{e}_{ \varphi}\mathbf{e}_{z}+\\ +\frac{e^{2i\varphi}+e^{-2i\varphi}+2}{4}\mathbf{e}_{\varphi} \mathbf{e}_{\rho}\mathbf{e}_{z}+\frac{e^{2i\varphi}+e^{-2i\varphi}-2}{4} \mathbf{e}_{\rho}\mathbf{e}_{\varphi}\mathbf{e}_{z}.\end{split} \tag{38}\]
Note that these expressions are still purely real, but we use the complex form to emphasize how the momentum projection changes due to the lattice symmetry. One can see that the tensor components contain exponential factors that we also need to take into account in Eq. (35). For them we will use the notation \(e^{im_{\text{tens}}\varphi}\). Therefore, angular momentum conservation does not work in the usual way expected for cylindrical symmetry. We would like to emphasize that the orientation of the crystal lattice is taken into account automatically, since it affects only the values of the \(\hat{\chi}^{(2)}\) tensor components [105]. Note that the consideration should be different for materials with central symmetry [106]; however, we believe that our approach can be extended to the latter case as well.
The values of \(m_{\text{tens}}\) for the different components are given in Fig. 4. One can derive them by considering the behavior of the unit vectors under rotations around the \(z\)-axis: \(\mathbf{e}_{z}\) is not transformed, so it does not contribute, while \(\mathbf{e}_{x}\) and \(\mathbf{e}_{y}\) provide \(m=\pm 1\). The parity under reflection in the \(y=0\) plane, \(p^{ijk}_{r}\), is also given, but this can only affect the selection rules [107] and thus does not play a major role in our considerations now. Therefore, Eq. (35) provides all possible non-zero nonlinear terms \(\mathbf{P}^{2\omega}_{m_{3}}(\rho,z)e^{-im_{3}\varphi}\), where \(m_{3}\) satisfies the following condition
\[m_{1}+m_{2}+m_{\text{tens}}=m_{3}, \tag{39}\]
and also takes into account the value of \(m_{\text{tens}}\) for different tensor components.
Let us consider this approach using the example of the [100]-orientation GaAs tensor, which has only off-diagonal components (\(\chi^{(2)}_{ijk}\) vanishes if any two of the indices \(i,j,k\) coincide, and all other components are equal to each other) [81]. We rewrite the tensor in the cylindrical coordinate system according to Eq. (37) and substitute it into Eq. (35). It turns out that the additional momentum projection \(m_{\text{tens}}=0\) vanishes after the summation of the \(xyz+yxz\), \(zxy+zyx\), \(xzy+yzx\) components. After that, we can write all possible components \(\rho,\varphi\) and \(z\) of the induced polarization as follows. As an example, we provide the \(xyz+yxz\)-term below, while the \(zxy+zyx\) and \(xzy+yzx\) terms can be obtained similarly:
\[\begin{split} P^{2\omega}_{\rho,m_{3}}e^{-im_{3}\varphi}=\varepsilon_{0}(\chi^{(2)}_{yxz}+\chi^{(2)}_{xyz})\sum_{m_{1},m_{2},m^{\prime}_{1},m^{\prime}_{2}}& 2\left[e^{-i(m_{1}+m_{2}+2)\varphi}\left(-\frac{1}{2i}E^{\omega}_{\rho,m_{2}}E^{\omega}_{z,m_{1}}+\frac{1}{2}E^{\omega}_{\varphi,m_{2}}E^{\omega}_{z,m_{1}}\right)+\right.\\ &\left.+e^{-i(m^{\prime}_{1}+m^{\prime}_{2}-2)\varphi}\left(\frac{1}{2i}E^{\omega}_{\rho,m^{\prime}_{2}}E^{\omega}_{z,m^{\prime}_{1}}+\frac{1}{2}E^{\omega}_{\varphi,m^{\prime}_{2}}E^{\omega}_{z,m^{\prime}_{1}}\right)\right],\end{split} \tag{40}\]
Note that even if each separate \(\hat{\chi}^{(2)}\) component contains \(m_{\rm tens}=0\) [see Eqs. (37) and (38)], it disappears after summation by pairs \(xyz+yxz\) and so on. This is specific for GaAs and will generally not happen.
During the simulations, the range of \(m_{3}\) should be chosen manually, based on the selection rules [80], and this requires attention. Note that although the possible values are \(|m_{3}|\leq 2M_{\rm max}+|m_{\rm tens}|\), we recommend choosing the maximum value \(|m_{3}|\leq M_{\rm max}\) to preserve the accuracy, and checking whether there are resonances with such \(m\) in this range. Since one can take only a finite number of harmonics in the numerical simulation, we assume that the numbers \(m_{1,2}\) are in the range \(m_{1,2}\in[-M_{\rm max}..M_{\rm max}]\). Thus, one should take all possible values from this range which, together with \(m_{\rm tens}\), satisfy (39). So one should impose the following conditions (a small enumeration sketch is given after the list):
1. \(m_{2}=m_{3}-m_{1}-m_{\rm tens}\)
2. \(m_{1}\in[\max(m_{3}-M_{\rm max}-m_{\rm tens},-M_{\rm max})\ldots\) \(\ldots\) \(\min(m_{3}+M_{\rm max}-m_{\rm tens},M_{\rm max})]\).
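The two conditions above simply enumerate which pairs \((m_{1},m_{2})\) contribute to a given \(m_{3}\). The sketch below (ours, with an illustrative \(M_{\rm max}\)) performs this bookkeeping directly by requiring \(m_{2}=m_{3}-m_{1}-m_{\rm tens}\) to stay within the truncated range.

```python
# Sketch of the bookkeeping implied by Eq. (39) and the two conditions above:
# for a chosen output order m3 and tensor contribution m_tens, list the pairs
# (m1, m2) of fundamental-field harmonics kept in the truncated sum.
M_max = 3  # illustrative truncation number

def allowed_pairs(m3, m_tens):
    pairs = []
    for m1 in range(-M_max, M_max + 1):
        m2 = m3 - m1 - m_tens
        if -M_max <= m2 <= M_max:
            pairs.append((m1, m2))
    return pairs

# Example: output orders m3 = 0 and m3 = 2 for the m_tens = +2 tensor contribution.
print(allowed_pairs(m3=0, m_tens=2))   # pairs with m1 + m2 = -2
print(allowed_pairs(m3=2, m_tens=2))   # pairs with m1 + m2 = 0
```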
We used the described method to simulate the SHG in a GaAs nanocylinder oriented such that \([100]||x\) and \([001]||z\). According to the selection rules for a normally incident linearly polarized wave, the nonlinear response corresponding to second harmonic generation will be nonzero for \(m_{3}\in\{0,\pm 2,\pm 4\}\) [107]. This happens because \(m=\pm 1\) for the incident field, which contributes twice and leads to \(m\in\{0,\pm 2\}\), and \(m_{\rm tens}=\pm 2\). Figure 5 shows the dependence of the nonlinear signal intensity on the cylinder diameter while its height is fixed at \(h=400\) nm. The figure depicts the SHG cross-section \(\sigma_{\rm SH}\), which is defined as the second harmonic intensity normalized over the geometric cross-section and the intensity of the incident radiation [107]. The excitation plane wave is incident along the cylinder axis at the wavelength \(\lambda=1550\) nm. We have compared the results of the full 3D simulations with those of the proposed 2D axisymmetric solution, which show excellent agreement, proving the correctness of our method. In addition, the contributions of different polarization components with order \(m\in\{0,\pm 2,\pm 4\}\) are demonstrated.
The high performance and computational efficiency of the method allow for sweeping over large sets of parameters. As an example, Figure 6(a) shows maps of the linear scattering cross-section for various cylinder heights and diameters. Along with the scattering cross-section, the average electromagnetic energy density inside the cylinder and the SHG cross-section are also shown in Figs. 6(b) and 6(c). For the convenience of the readers, the COMSOL Multiphysics(r) model file [59] is available for downloading. The model allows for obtaining the results shown in Figs. 5 and 6.
Figure 5: Dependence of the second-harmonic efficiency in a GaAs cylinder of height \(h=400\) nm excited by a normally incident plane wave at \(\lambda=1550\) nm on the cylinder diameter, calculated by the 3D model and the 2D model. Contributions calculated in the 2D model for different orders \(m\in\{0,\pm 2,\pm 4\}\) are shown.
## VI Discussion
Finally, let us discuss the advantage in computational time and required resources that the proposed method provides. It allows one to reduce the 3D problem to a set of 2D problems, which can be simulated much faster. Although one needs to perform a number of 2D simulations proportional to the number of required harmonics, \(2M_{\max}+1\) in the linear case and the number of chosen \(m_{3}\) values in the SHG case, it still appears to be much faster if the computation area and/or the number of mesh elements in the 3D model are large. This benefit is illustrated by Fig. 7, where we compare the elapsed time to simulate scattering on the dielectric cylinder for the full 3D and the 2D axisymmetric geometry. The simulation time is shown for both linear scattering and SHG simulations. Note that the SHG computation time is longer because it includes the solution of the linear problem. One can see that the computation time of the 2D model is much lower than for the 3D model and almost does not change with the size of the modeled object (cylinder diameter \(D\)), while the time required for the 3D simulation rapidly increases.
Another advantage is that our method immediately provides extra information on the contribution of each particular Fourier harmonic. This is often helpful for further analysis of the simulation results, such as multipole decomposition or far-field radiation patterns. Indeed, Fig. 8 shows the decomposition of the SH map shown in Fig. 6 over the Fourier harmonics with different numbers \(m\). One can see that such a decomposition immediately explains the origins of the peaks in the SHG intensity spectra.
Finally, the proposed method of SHG simulations can be extended to other nonlinear processes such as third harmonic generation. While the general approach will be exactly the same, the main difference will be in the expansion of the third-order nonlinear tensor into the Fourier series over \(e^{-im\varphi}\) and accounting for three input fields in nonlinear polarization tensor [see Eq. (32)].
Figure 6: Base-10 logarithmic scale maps for (a) the normalized scattering cross-section \(\sigma_{\text{sca}}\), (b) the average electromagnetic energy inside the nanostructure \(\sigma_{\text{nav}}\), integrated over the nanoparticle volume and normalized by the volume, as well as (c) the SHG \(\sigma_{\text{SHG}}\) for the GaAs nanocylinder as a function of its height \(h\) and diameter \(D\), obtained in the two-dimensional model. Wavelength of the incident wave \(\lambda=1550\) nm.
Figure 7: Linear scattering and SHG computation time for the 2D and 3D models depending on the radius of a GaAs cylinder with a height of 400 nm upon excitation by a plane wave at \(\lambda=1550\) nm.
## VII Conclusions
In conclusion, this work proposes a novel, efficient numerical tool based on the COMSOL Multiphysics(r) software for simulating linear and nonlinear light scattering from nanophotonic scatterers with cylindrical symmetry. Taking advantage of the symmetry of the problem, one can reduce the simulation from a 3D problem to a set of 2D problems, which can be computed much faster. We provide the particular expressions for performing the multipolar decomposition of the scattered fields and computing the optical forces acting on the scatterers. We also showed that the proposed method is efficient for simulating second harmonic generation, and that it gives a substantial benefit in computational time when simulating the second-harmonic generation from resonant dielectric nanocylinders made of GaAs.
We also provided a detailed description of the method and accompanied it with COMSOL Multiphysics(r) sample models freely available for downloading. We believe that the proposed numerical tool represents a significant advancement in the simulation of linear and nonlinear scattering from axially symmetric structures. Its computational efficiency, accuracy, and versatility make it a valuable asset for researchers in this field, allowing a deeper understanding and facilitating the design of novel nanophotonic devices.
## Acknowledgements
We thank Kirill Koshelev for fruitful and valuable discussions. The work was supported by the Russian Science Foundation (22-12-00204). A.B. and M.P. acknowledge the Federal Academic Leadership Program Priority 2030.
|
2301.09660 | The Simplest Oscillon and its Sphaleron | Oscillons in a simple, 1-dimensional scalar field theory with a cubic
potential are discussed. The theory has a classical sphaleron, whose decay
generates a version of the oscillon. A good approximation to the
small-amplitude oscillon is constructed explicitly using the asymptotic
expansion of Fodor et al., but for larger amplitudes a better approximation
uses the discrete, unstable and stable deformation modes of the sphaleron. | N. S. Manton, T. Romańczukiewicz | 2023-01-23T19:00:09Z | http://arxiv.org/abs/2301.09660v1 | # The Simplest Oscillon and its Sphaleron
###### Abstract
Oscillons in a simple, 1-dimensional scalar field theory with a cubic potential are discussed. The theory has a classical sphaleron, whose decay generates a version of the oscillon. A good approximation to the small-amplitude oscillon is constructed explicitly using the asymptotic expansion of Fodor et al., but for larger amplitudes a better approximation uses the discrete, unstable and stable deformation modes of the sphaleron.
## I Introduction
Oscillons are spatially-localised, long-lived, oscillatory solutions of the field equation(s) of classical field theories [1]. The nonlinearity of the field equation is essential. Oscillons, unlike kinks and other types of classical soliton, have no topological charge ensuring their stability, and it is surprising that oscillons do not couple more strongly to radiation modes of the field, leading to rapid decay towards the classical vacuum.
Despite oscillons being known in a variety of field theories in various spatial dimensions, the fundamental reason for their existence remains somewhat mysterious. We will show that, at least for the special, simple oscillon that we consider here in detail, the oscillon can be thought of as a decaying sphaleron of the field theory. By a sphaleron, we mean a localised, static but unstable solution of the field equation [2].
For an oscillon to exist, the continuum of radiation modes of the linearised field needs to have a frequency gap, starting at some positive threshold frequency \(m\). A basic oscillon is periodic, with a fundamental frequency \(\omega<m\), so it couples to radiation only through
nonlinear terms, at frequencies that are multiples of \(\omega\). The oscillon has an arbitrary amplitude lying in some finite range upwards from zero, and as the amplitude increases, the frequency \(\omega\) decreases away from the threshold \(m\). Generally, \(2\omega\) and higher integer multiples of \(\omega\) are in the continuum (although some exceptions are known [3]), which underlines the surprise that the oscillon is so long-lived. Nevertheless, an oscillon does slowly radiate energy away, and as it does so its amplitude decreases and its frequency increases.
Much of the understanding of oscillons comes from numerical investigation. One prototype is the oscillon in \(\phi^{4}\) scalar field theory, where the field potential is of the familiar double-well form. This oscillon exists in the theory in 1-, 2- or 3-dimensions, with the field profile depending just on radius and time (up to a spatial translation). In the 1-dimensional theory, the oscillon is reflection-symmetric about the origin. An oscillon of this type is produced by starting from generic initial conditions of the form of a symmetric hump, for example a Gaussian shape, superimposed on one of the vacua. Oscillon formation is rather robust, and the initial shape is not very important. There is usually a transient in which the field shape changes over one or two oscillations, with pulses of energy being radiated to the left and right, and then the field settles into the oscillon.
A substantial theoretical analysis of oscillon structure was given by Fodor et al. [4], for oscillons of small and modest amplitude 1. These authors considered a rather general scalar field theory in spatial dimension \(D\leq 3\), whose potential \(V(\phi)\) has a Taylor expansion in \(\phi\) about a quadratic minimum at \(\phi=0\). By an iterative method, using the field equation, they systematically constructed an oscillon as a series in an expansion parameter \(\epsilon\), related to the amplitude. The oscillon's existence depends on the strength of the cubic and higher-power terms in \(V\), but the conditions that arise are inequalities, so oscillons are generic for small \(\epsilon\). By construction, the oscillon depends just on radius and on time, and it is periodic (i.e. the Fourier series w.r.t. time has terms that are multiples of a unique, fundamental frequency \(\omega\)).
Footnote 1: Refs. [1] and [4] comprehensively review the oscillon literature up to 2009.
We shall use the method of Fodor et al. to explicitly construct a particularly simple oscillon in 1-dimension. In practice, the algebra is still quite tricky, and we have only found the first four terms of the series in \(\epsilon\). As is hardly surprising, this series is asymptotic rather than convergent, because if the series were convergent then there would be a strictly-periodic, exact oscillon solution, having infinite lifetime. In practice, for small \(\epsilon\) it is useful to sum
all four terms to obtain a good approximation to the oscillon, but as \(\epsilon\) and the amplitude increase, one needs to truncate the series after fewer terms, as is typical for asymptotic series; the discarded terms are larger than the last retained term.
For oscillons of even larger amplitude, the method of Fodor et al. tends to break down, but instead, the oscillon can now be interpreted as arising from the decay of a static sphaleron solution. The decaying sphaleron can be well approximated using an ansatz constructed from two discrete modes of the linearised deformations of the sphaleron, one unstable and the other stable. This analysis shows that the sphaleron can be regarded as the precursor of the oscillon.
The study of oscillons in field theories with double-well minima has tended to hide this proposed connection between sphalerons and oscillons. For example, the \(\phi^{4}\) theory in 1-dimension with double-well potential has no true sphaleron, but it has the configuration of a kink and antikink at infinite separation as a 'sphaleron'. If a kink and antikink are released from a large separation at zero velocity, and evolved numerically, then they turn into an oscillon (often called a bion in this context). The sine-Gordon breather [5] provides another example. This is an oscillon that lasts indefinitely because of exact integrability. A large-amplitude breather instantaneously comes to rest resembling a kink and antikink at large separation. Again, the kink-antikink configuration at infinite separation can be thought of as a sphaleron. The sine-Gordon breather exhibits a key property of an oscillon, namely, that its fundamental frequency is less than the continuum threshold for linearised waves, and as its amplitude increases, the frequency decreases away from this threshold.
The connection between oscillons and sphalerons is clearer if there is a genuine sphaleron of finite size in the field theory. Here, we focus on a scalar field theory in 1-dimension which has such a sphaleron. We assume that \(V(\phi)\) has a quadratic minimum at \(\phi=0\), with \(V(0)=0\). Then, a sphaleron exists if \(V\) becomes negative for some \(\phi>0\). (It is convenient to choose this sign for the inequality, but \(\phi<0\) is equivalent.) More simply, we assume that \(V(\phi)\) increases to a local maximum at \(\phi=\phi_{1}>0\), then decreases and passes linearly through \(V=0\) at some \(\phi_{2}>\phi_{1}\). \(V\) could have further local or global minima as \(\phi\) increases further. Note that \(\phi=0\), which is the asymptotic value of the sphaleron tail field, is a _false_ vacuum, because it is not the global minimum of \(V\), but this doesn't cause difficulties.
The existence of a time-independent sphaleron solution in 1-dimension can be easily understood using the standard trick of identifying the static field equation as the equation
for a particle rolling in the inverted potential \(-V\). In the inverted potential, the particle starts at rest from \(\phi=0\), rolls through the potential minimum at \(\phi=\phi_{1}\) and ascends the potential to \(\phi_{2}\). As the potential is linear here, the particle stops instantaneously, then rolls back to the starting point at \(\phi=0\). Because \(V\) is quadratic around \(\phi=0\), the whole process takes infinite time. Spatially, one obtains a hump-shaped sphaleron profile which has field values lying in the range \(0<\phi\leq\phi_{2}\), with tails approaching \(\phi=0\) exponentially fast.
The connection between oscillons and sphalerons in a potential of this type seems first to have been noted in ref.[6], but here we will explore the connection more systematically. We will work with the simplest potential of the required form, the purely cubic potential \(V(\phi)=\frac{1}{2}\phi^{2}-\frac{1}{3}\phi^{3}\). \(V\) has a quadratic local minimum at \(\phi=0\), with value zero, a local maximum at \(\phi_{1}=1\), and passes through zero linearly at \(\phi_{2}=\frac{3}{2}\). The sphaleron has a simple analytical form, and we can calculate its unstable mode, its translation zero mode, and its single discrete vibrational mode - its shape mode. At the same time, this potential allows for explicit calculation of a small-amplitude oscillon as a series, using the method of Fodor et al., and we have analytically calculated the terms up to fourth order in the expansion parameter \(\epsilon\). Importantly, we will show numerically that if the sphaleron is perturbed by its unstable mode, in the direction of decreasing \(\phi\), then it evolves into the oscillon. (If it is perturbed in the opposite direction, then the field values quickly become very large, and the field becomes singular.)
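For this cubic potential the static field equation is \(\phi''=V'(\phi)=\phi-\phi^{2}\), and one can check directly that the hump profile \(\phi(x)=\tfrac{3}{2}\,\mathrm{sech}^{2}(x/2)\) solves it, reaching \(\phi_{2}=\tfrac{3}{2}\) at its centre and decaying exponentially in the tails. The short numerical sketch below (ours, not part of the paper) verifies this; the grid parameters are arbitrary.

```python
# Numerical sketch: check that phi(x) = (3/2) sech^2(x/2) satisfies the static
# field equation phi'' = dV/dphi = phi - phi^2 for the cubic potential,
# i.e. that it is the sphaleron profile described in the text.
import numpy as np

x = np.linspace(-10, 10, 2001)
phi = 1.5 / np.cosh(x / 2) ** 2

dx = x[1] - x[0]
phi_xx = np.gradient(np.gradient(phi, dx), dx)   # finite-difference second derivative
residual = phi_xx - (phi - phi ** 2)
print(np.max(np.abs(residual[5:-5])))            # small, limited by finite differences
print(phi.max())                                 # equals phi_2 = 3/2 at x = 0
```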
This sphaleron in 1-dimension, arising from a cubic potential, is not new. It occurs as a "bounce" in work of Callan and Coleman [7] and was discussed in detail by Avelar et al. [8]. These authors noted that because its translation zero mode has a node, there must be a mode with negative squared frequency, i.e. an instability. Avelar et al. also found the positive-frequency shape mode. However, the connection to oscillons appears not to have been investigated before.
Clearly, at the linearised level, the instability and shape oscillation of the sphaleron can be modelled by combining the sphaleron with the two relevant discrete modes. As we shall only consider the sphaleron and oscillon with their centres of mass at rest, we can ignore the translation zero mode. The perturbed sphaleron, like the oscillon, is then reflection-symmetric. We shall now make a bold leap, and consider the sphaleron deformed by these two modes with arbitrarily large, time-dependent amplitudes. This is a collective coordinate ansatz for the evolution of the sphaleron. The reduced, collective coordinate Lagrangian,
obtained by substituting this ansatz into the field-theoretic Lagrangian, is nonlinear but rather simple. We will show that its resulting dynamics gives another good approximation to the oscillon, which is particularly useful when the oscillon has quite large amplitude and the series of Fodor et al. breaks down.
We should clarify here that the oscillon constructed by the method of Fodor et al. has just one degree of freedom, its amplitude, and its shape and frequency depend on this. Numerically however, one typically finds that an oscillon appears to be quasi-periodic, although by careful adjustment of initial conditions, the periodic version can be found too. To model quasi-periodic behaviour one needs to have a system with two degrees of freedom at least, and the two mode amplitudes of the deformed sphaleron provide these. This issue was also recently raised by Blaschke and Karpisek [9], who studied a mechanised model of an oscillon with two internal degrees of freedom (in addition to the centre of mass position). In fact, a non-integrable Lagrangian system with two degrees of freedom (and 4-dimensional phase space) has more complicated dynamics than quasi-periodic motion, but we have not been able to observe this in the oscillon. The issue of quasi-periodic or chaotic behaviour of the oscillon is complicated, because the reduced system is only an approximation to the field theory with its infinitely many degrees of freedom, and does not couple to radiation.
It is surprising that a model using the sphaleron's two linearised modes is quantitatively useful, because there are no values of the two mode amplitudes giving the vacuum field configuration \(\phi\equiv 0\) exactly (although, for optimal values, it gets quite close). Consequently, this rather crude model cannot accurately describe oscillons of small amplitude.
In the following, we introduce the scalar field theory with cubic potential, then construct the first four terms in the series for the small-amplitude oscillon solution, following Fodor et al. Next, we recall the sphaleron solution and its discrete modes, and use these to construct and test our collective coordinate dynamics modelling an oscillon of larger amplitude. Finally, we describe in further detail some features of the oscillon that we have uncovered numerically, and present our conclusions.
## II A simple scalar field theory
Consider the theory for a real scalar field \(\phi(t,x)\) in 1-dimension, with Lagrangian
\[L[\phi]=\int_{-\infty}^{\infty}\left(\frac{1}{2}\phi_{t}^{2}-\frac{1}{2}\phi_{x} ^{2}-\frac{1}{2}\phi^{2}+\frac{1}{3}\phi^{3}\right)dx\,. \tag{1}\]
This has the simple nonlinear field equation
\[\phi_{tt}-\phi_{xx}+\phi-\phi^{2}=0\,. \tag{2}\]
FIG. 1 shows the potential
\[V(\phi)=\frac{1}{2}\phi^{2}-\frac{1}{3}\phi^{3}\,, \tag{3}\]
which is unbounded below but has a local quadratic minimum at \(\phi=0\) with \(V(0)\) zero, and a local maximum at \(\phi=1\) with \(V(1)=\frac{1}{6}\). Additionally, \(V\) passes linearly through zero at \(\phi=\frac{3}{2}\).
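For readers who wish to reproduce the numerical evolutions referred to below, the field equation (2) can be integrated with a standard explicit leapfrog scheme. The following Python sketch assumes periodic boundary conditions on a large box and illustrative grid spacings; it is a minimal sketch, not the production code used for the figures in this paper.

```python
import numpy as np

def evolve(phi0, phidot0, dx=0.05, dt=0.02, t_max=100.0):
    """Leapfrog integration of phi_tt - phi_xx + phi - phi^2 = 0 (eq. 2)."""
    phi = phi0.copy()
    phi_old = phi0 - dt * phidot0            # first-order start at t = -dt
    for _ in range(int(t_max / dt)):
        lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
        phi_new = 2.0 * phi - phi_old + dt**2 * (lap - phi + phi**2)
        phi_old, phi = phi, phi_new
    return phi

# Example: a small localised bump as initial data (illustrative only).
x = np.arange(-100.0, 100.0, 0.05)
phi_final = evolve(0.5 / np.cosh(0.5 * x), np.zeros_like(x))
```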
## III The small-amplitude oscillon
Following the method of Fodor et al. [4] to construct an oscillon solution of eq.(2), we expand the field in powers of a small parameter \(\epsilon\),
\[\phi=\sum_{k=1}^{\infty}\epsilon^{k}\phi_{k}(t,x)\,. \tag{4}\]
We denote the truncated series as \(\Phi_{N}=\sum_{k=1}^{N}\epsilon^{k}\phi_{k}(t,x)\). We also introduce rescaled space and time variables
\[\zeta=\epsilon x,\qquad\tau=\omega t \tag{5}\]
and assume that
\[\omega=\sqrt{1-\epsilon^{2}}\,, \tag{6}\]
which locks the expansion parameter \(\epsilon\) to the oscillon frequency. We assume the oscillon is instantaneously at rest at \(\tau=0\), so it is symmetric in \(\tau\). The oscillon will also be symmetric in \(\zeta\), and we can identify its amplitude as \(\sum_{k=1}^{\infty}\epsilon^{k}\phi_{k}(0,0)\), or the truncated version of this. In terms of these new variables the field equation (2) takes the form
\[(1-\epsilon^{2})\ddot{\phi}-\epsilon^{2}\phi^{\prime\prime}+\phi-\phi^{2}=0\,, \tag{7}\]
where overdots and primes denote derivatives w.r.t. \(\tau\) and \(\zeta\) respectively. Expanding in powers of \(\epsilon\), we obtain an infinite set of coupled equations, of which the first five are
\[\ddot{\phi}_{1}+\phi_{1}=0\,, \tag{8}\]
\[\ddot{\phi}_{2}+\phi_{2}=\phi_{1}^{2}\,, \tag{9}\]
\[\ddot{\phi}_{3}+\phi_{3}=\ddot{\phi}_{1}+\phi_{1}^{\prime\prime}+2\phi_{1}\phi_{2}\,, \tag{10}\]
\[\ddot{\phi}_{4}+\phi_{4}=\ddot{\phi}_{2}+\phi_{2}^{\prime\prime}+2\phi_{1}\phi_{3}+\phi_{2}^{2}\,, \tag{11}\]
\[\ddot{\phi}_{5}+\phi_{5}=\ddot{\phi}_{3}+\phi_{3}^{\prime\prime}+2\phi_{1}\phi_{4}+2\phi_{2}\phi_{3}\,. \tag{12}\]
These can be regarded as an iterative sequence of ordinary, linear differential equations for \(\phi_{1},\phi_{2},\phi_{3},\dots\), whose sources on the right-hand side depend on the previously determined functions. [Note that in ref.[4], eqs.(11) and (12) are not given explicitly, and their version of eq.(10) has a typo; their explicit \(-\ddot{\phi_{1}}\) should be left out, as it is present in the term \(\omega_{2}\ddot{\phi_{1}}\).]
The solution of eq.(8), symmetric in \(\tau\), is
\[\phi_{1}=p_{1}(\zeta)\cos\tau \tag{13}\]
where \(p_{1}\) is yet to be determined. Equation (9) now becomes \(\ddot{\phi_{2}}+\phi_{2}=\frac{1}{2}p_{1}^{2}(\zeta)(1+\cos 2\tau)\), whose solution, combining the particular integral with a homogeneous function symmetric in \(\tau\), is
\[\phi_{2}=p_{2}(\zeta)\cos\tau+\frac{1}{6}p_{1}^{2}(\zeta)(3-\cos 2\tau)\,. \tag{14}\]
The two unknown functions, \(p_{1}\) and \(p_{2}\), are determined by considering eqs.(10) and (11) for \(\phi_{3}\) and \(\phi_{4}\). First, for the oscillon to be periodic, there is a condition of _no resonance_, i.e. the right-hand side of eq.(10) should have no \(\cos\tau\) term. This condition reduces to
\[p_{1}^{\prime\prime}-p_{1}+\frac{5}{6}p_{1}^{3}=0\,, \tag{15}\]
whose solution symmetric in \(\zeta\), and decaying for large \(|\zeta|\), is
\[p_{1}(\zeta)=\sqrt{\frac{12}{5}}\frac{1}{\cosh\zeta}\,. \tag{16}\]
Second, one finds that it is consistent to set \(p_{2}\equiv 0\). This can be argued in more than one way. After solving for \(\phi_{3}\), it is found that the no resonance condition for \(\phi_{4}\) implies that \(p_{2}\) obeys a linear differential equation whose solution is an antisymmetric function of \(\zeta\), whereas we require the oscillon to be symmetric in \(\zeta\). Setting \(p_{2}\equiv 0\) also means that the oscillon can be symmetric under the combined transformations \(\epsilon\to-\epsilon\), \(\tau\to\tau+\pi\). More generally, the latter symmetry requires that \(\phi_{k}\) only has terms \(\cos n\tau\) with \(n\) even/odd when \(k\) is even/odd. In summary, we have established that the leading terms in the series for the oscillon are
\[\phi_{1}=\sqrt{\frac{12}{5}}\frac{\cos\tau}{\cosh\zeta}\,,\quad\phi_{2}=\frac {2}{5}\,\frac{3-\cos 2\tau}{\cosh^{2}\zeta}\,. \tag{17}\]
Equation (10) now simplifies, and its solution is
\[\phi_{3}=p_{3}(\zeta)\cos\tau+\sqrt{\frac{12}{5}}\frac{1}{20}\frac{\cos 3 \tau}{\cosh^{3}\zeta}\,, \tag{18}\]
where \(p_{3}\), the homogeneous contribution, is as yet arbitrary and will not be zero. It is then straightforward to substitute for \(\phi_{1},\phi_{2}\) and \(\phi_{3}\) in eq.(11), and integrate to find that
\[\phi_{4}=\sqrt{\frac{12}{5}}\frac{1}{3}\frac{p_{3}(\zeta)}{\cosh\zeta}(3- \cos 2\tau)+\frac{24}{5}\frac{1}{\cosh^{2}\zeta}-\frac{1}{75}\frac{426+39\cos 2 \tau+\cos 4\tau}{\cosh^{4}\zeta}\,. \tag{19}\]
There could be an additional homogeneous term \(p_{4}(\zeta)\cos\tau\), but the symmetry mentioned above requires \(\phi_{4}\) only to have terms proportional to \(\cos n\tau\) with \(n\) even, so we can set \(p_{4}\equiv 0\).
Finally, we impose the no resonance condition for \(\phi_{5}\), i.e. that there is no \(\cos\tau\) term on the right-hand side of eq.(12). This gives an ordinary differential equation for \(p_{3}\), of the Poschl-Teller form with a source, whose acceptable solution is
\[p_{3}(\zeta)=\sqrt{\frac{12}{5}}\frac{1}{60}\left(\frac{94}{\cosh\zeta}-\frac {119}{\cosh^{3}\zeta}\right)\,. \tag{20}\]
Combined with the earlier results (18) and (19), this gives the final form for \(\phi_{3}\) and \(\phi_{4}\),
\[\phi_{3}=\sqrt{\frac{12}{5}}\frac{1}{60}\left(\frac{94\cos\tau}{\cosh\zeta}-\frac{119\cos\tau-3\cos 3\tau}{\cosh^{3}\zeta}\right)\,,\]
\[\phi_{4}=\frac{1}{75}\left(\frac{642-94\cos 2\tau}{\cosh^{2}\zeta}-\frac{783-80\cos 2\tau+\cos 4\tau}{\cosh^{4}\zeta}\right)\,. \tag{21}\]
We do not calculate \(\phi_{5}\) as this will involve yet another non-zero arbitrary function \(p_{5}\) that can only be determined by a no resonance condition in the equation for \(\phi_{7}\).
The truncated approximate oscillon, \(\Phi_{N}\), is the sum of the first \(N\) terms of the expansion (4), where \(\phi_{1},\ldots,\phi_{4}\) are as in eqs.(17) and (21). FIGS. 2 show this truncated oscillon at \(\tau=0\) for \(N=1,\ldots,4\), and for amplitude parameters \(\epsilon=0.1\) and \(\epsilon=0.5\). FIG. 3 a) shows the combined strength of the contributing terms, evaluated at \(\zeta=\tau=0\). It is clear that for \(\epsilon\gtrsim 0.6\), the higher-order terms are no longer small compared to the lower-order terms, as is typical for an asymptotic series, so it is better to truncate the series after two or three terms, obtaining \(\Phi_{2}\) or \(\Phi_{3}\). The pronounced double-hump of the oscillon profile for large \(\epsilon\) in FIG. 3 b) appears therefore to be exaggerated, and not a reliable feature.
The truncated oscillon has just one degree of freedom, \(\epsilon\), and it is periodic with \(t\)-period \(2\pi/\sqrt{1-\epsilon^{2}}\). This is because of the symmetry assumptions that have been imposed. There were opportunities to include less symmetric terms in the construction, so a larger family of oscillons could probably be found, although more algebraic work would be required. There is therefore no inconsistency with the approach discussed below, where the oscillon is generally quasi-periodic.
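For concreteness, the truncated profiles \(\Phi_{N}(0,x)\) plotted in FIG. 2 follow directly from eqs. (17), (18), (20) and (21); a minimal Python evaluation at \(\tau=0\) is sketched below, with the value of \(\epsilon\) and the plotting grid chosen arbitrarily.

```python
import numpy as np

def phi_truncated(eps, x, N=4):
    """Truncated oscillon Phi_N(0, x) at tau = 0, built from eqs. (17)-(21)."""
    s = 1.0 / np.cosh(eps * x)                          # 1/cosh(zeta), zeta = eps*x
    phi1 = np.sqrt(12.0 / 5.0) * s
    phi2 = (4.0 / 5.0) * s**2                           # (2/5)(3 - cos 0) = 4/5
    p3 = np.sqrt(12.0 / 5.0) / 60.0 * (94.0 * s - 119.0 * s**3)
    phi3 = p3 + np.sqrt(12.0 / 5.0) / 20.0 * s**3
    phi4 = (548.0 * s**2 - 704.0 * s**4) / 75.0         # eq. (21) at tau = 0
    terms = [phi1, phi2, phi3, phi4]
    return sum(eps**(k + 1) * term for k, term in enumerate(terms[:N]))

x = np.linspace(-40.0, 40.0, 2001)
profile = phi_truncated(0.5, x, N=3)
```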
To show the quality of the truncated oscillon, we have numerically solved the field
Figure 2: Profiles of the oscillons at \(\tau=0\) for truncation orders \(N=1,\ldots,4\) of the Fodor et al. series – a) \(\epsilon=0.1\) and b) \(\epsilon=0.5\). \(x\) is the unscaled spatial variable.
equation (using variables \(t,x\)) with initial condition \(\Phi_{N}(0,x)\) for \(N=1,\ldots,4\) and a wide range of \(\epsilon\in[0.1,0.8]\). We have measured the loss of energy from the spatial interval \(-100<x<100\) during the time interval \(0<t<T=300\). The energy loss \(\Delta E\) is the time-integrated energy flux through the ends, which is equal on the left and right, so
\[\Delta E=2\int_{0}^{T}\phi_{t}(t,100)\phi_{x}(t,100)\,dt\,, \tag{22}\]
and is shown in FIG. 4. In the range \(\epsilon\in[0.1,0.5]\), \(\Phi_{3}\) is the best initial condition. For larger \(\epsilon\), the initial configuration \(\Phi_{4}\) loses energy faster, and the approximate oscillon \(\Phi_{4}(t,x)\) breaks down. \(\Phi_{4}\) is probably a better approximation to the numerical solution than \(\Phi_{3}\) for small values of \(\epsilon\), but this is not clear from the figure because of possible numerical errors.
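The energy-loss diagnostic of eq. (22) only requires the time series of \(\phi_{t}\) and \(\phi_{x}\) at the boundary \(x=100\). Assuming such series have been recorded on a uniform time grid during an evolution like the one sketched in Sec. II, the integral can be approximated as follows.

```python
import numpy as np

def energy_loss(phi_t_right, phi_x_right, dt):
    """Eq. (22): time-integrated energy flux through x = +100, doubled
    to account for the equal loss through the left boundary."""
    flux = phi_t_right * phi_x_right      # arrays sampled once per time step
    return 2.0 * np.sum(flux) * dt        # simple Riemann sum on a uniform grid
```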
## IV The Sphaleron
There exists a nontrivial, lump-like static solution of the field equation (2),
\[\phi_{\rm S}(x)=\frac{3}{2}\frac{1}{\cosh^{2}\frac{1}{2}x}\,. \tag{23}\]
This is expressed in terms of the unscaled spatial variable \(x\) and satisfies the boundary conditions \(\phi_{\rm S}\to 0\) as \(x\to\pm\infty\), like the oscillon. The solution can be translated, but as given, it is reflection-symmetric in \(x\). Its energy is \(E=\frac{6}{5}\).
A small perturbation \(\delta\phi=e^{i\omega t}\eta(x)\) of \(\phi_{\rm S}\), with frequency \(\omega\), obeys the linearised equation
\[-\eta^{\prime\prime}(x)+U(x)\eta(x)=\omega^{2}\eta(x)\,, \tag{24}\]
where
\[U(x)=V^{\prime\prime}(\phi_{\rm S}(x))=1-\frac{3}{\cosh^{2}\frac{1}{2}x}\,. \tag{25}\]
\(U\) is a Poschl-Teller potential, so the solutions of eq.(24) are well known. There are three (normalised) discrete modes,
\[\eta_{-1}(x)=\sqrt{\frac{15}{32}}\,\frac{1}{\cosh^{3}\frac{1}{2}x}\,,\qquad\omega_{-1}^{2}=-\frac{5}{4}\,, \tag{26}\]
\[\eta_{0}(x)=\sqrt{\frac{15}{8}}\,\frac{\sinh\frac{1}{2}x}{\cosh^{3}\frac{1}{2}x}\,,\qquad\omega_{0}^{2}=0\,, \tag{27}\]
\[\eta_{1}(x)=\sqrt{\frac{3}{32}}\,\frac{4\cosh^{2}\frac{1}{2}x-5}{\cosh^{3}\frac{1}{2}x}\,,\qquad\omega_{1}^{2}=\frac{3}{4}\,. \tag{28}\]
The presence of a unique unstable mode \(\eta_{-1}\) with negative squared frequency means that the lump is a sphaleron. It is the saddle point in field configuration space between the false vacuum \(\phi\equiv 0\) (with zero energy) and configurations with negative energy, whose field \(\phi\) is large and positive in some region of physical space. After being perturbed in the unstable direction towards the false vacuum, the sphaleron's evolution connects it with the oscillon. The sphaleron's discrete shape mode \(\eta_{1}\), whose positive frequency \(\omega_{1}\) is below the continuum threshold at \(\omega=1\), is also important. It is the source of a second degree of freedom for the oscillon. \(\eta_{0}\) is the translation zero mode, and can be ignored here, because it has the opposite
Figure 4: Oscillon energy loss from the spatial interval \([-100,100]\) during time \(0<t<300\), starting from the truncated series \(\Phi_{N}(0,x)\) as initial configuration.
reflection symmetry to the other modes, and doesn't contribute to an oscillon whose centre of mass is at rest.
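The normalisation and mutual orthogonality of the three discrete modes (26)-(28) are easy to confirm numerically; the following short check computes their Gram matrix by quadrature on a uniform grid (the box size and resolution are illustrative choices).

```python
import numpy as np

x = np.linspace(-60.0, 60.0, 12001)
dx = x[1] - x[0]
sech = 1.0 / np.cosh(0.5 * x)

eta_m1 = np.sqrt(15.0 / 32.0) * sech**3                                      # eq. (26)
eta_0  = np.sqrt(15.0 / 8.0) * np.sinh(0.5 * x) * sech**3                    # eq. (27)
eta_1  = np.sqrt(3.0 / 32.0) * (4.0 * np.cosh(0.5 * x)**2 - 5.0) * sech**3   # eq. (28)

modes = [eta_m1, eta_0, eta_1]
gram = np.array([[np.sum(a * b) * dx for b in modes] for a in modes])
print(np.round(gram, 6))   # close to the 3x3 identity matrix
```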
It is instructive to compare the truncated oscillon profiles \(\Phi_{N}(0,x)\) for varying \(\epsilon\) with the sphaleron profile (FIG. 5). \(\Phi_{1}(0,0)\) matches the sphaleron central amplitude \(\phi_{\rm S}(0)=\frac{3}{2}\) for \(\epsilon=\frac{3}{2}\sqrt{\frac{5}{12}}\approx 0.9682\). This corresponds to an oscillon frequency \(\omega=\frac{1}{4}\). However, the \(L^{2}\) norm \(\|\phi_{\rm S}(x)-\Phi_{1}(0,x)\|^{2}\approx 0.1776\) shows that the match of profiles is not good. The second truncation matches much better. The condition \(\Phi_{2}(0,0)=\frac{3}{2}\) is a quadratic equation with solutions
\[\epsilon_{1}=\frac{15-5\sqrt{3}}{4\sqrt{5}}\approx 0.7088\,,\qquad\epsilon_{2} =\frac{-15-5\sqrt{3}}{4\sqrt{5}}\approx-2.6453\,. \tag{29}\]
The second solution is outside the acceptable range, \(-1<\epsilon<1\), but the first gives a profile very close to the sphaleron with \(\|\phi_{\rm S}(x)-\Phi_{2}(0,x)\|^{2}\approx 0.0138\). The corresponding oscillon frequency is \(\omega\approx 0.7054\). \(\Phi_{3}\) has a profile with a dip for large \(\epsilon\) and matches much worse, as does \(\Phi_{4}\).
## V Collective coordinate model based on the sphaleron
Reflection-symmetric field evolution around the sphaleron, including the (normalised) unstable and shape modes \(\eta_{-1}\) and \(\eta_{1}\), can be modelled by the ansatz
\[\phi(t,x)=\phi_{\rm S}(x)+A(t)\,\eta_{-1}(x)+B(t)\,\eta_{1}(x)\,. \tag{30}\]
Figure 5: Match between the sphaleron \(\phi_{\rm S}(x)\) and the profiles \(\Phi_{N}(0,x)\) for optimal \(\epsilon\).
At the linearised level, \(B\) oscillates and \(A\) tends to grow exponentially, suggesting that the ansatz will be valid for limited time. Rather surprisingly, this ansatz has an extended approximate validity. \(A\) and \(B\) can be assumed to have unconstrained magnitudes, and can be treated as collective coordinates of the field \(\phi\). Their nonlinear time-evolution gives an approximate model for the oscillon. To find the model equations, we substitute the ansatz (30) into the field Lagrangian (1). After evaluating the derivatives and integrating over space (and discarding boundary terms), we obtain a reduced, effective Lagrangian whose nonlinear equations of motion define the collective coordinate dynamics.
Because the discrete modes are localised, they provide a useful approximation to the oscillon. This is especially true for oscillons whose amplitude is not too small. Recall that an oscillon of small amplitude has a large spatial extent (since in the Fodor et al. analysis it is a function of the scaled spatial variable \(\zeta=\epsilon x\)). Oscillons of larger amplitude have a shape closer to that obtained by deforming the sphaleron by its discrete modes.
The reduced Lagrangian is of the form
\[L_{\text{eff}}[A,B]=\frac{1}{2}\dot{A}^{2}+\frac{1}{2}\dot{B}^{2}-V_{\text{eff} }(A,B)\,, \tag{31}\]
where overdots are now unscaled time-derivatives and
\[V_{\text{eff}}(A,B)=\frac{6}{5}-\frac{5}{8}A^{2}+\frac{3}{8}B^{2}-C_{1}A^{3}- C_{2}A^{2}B-C_{3}AB^{2}-C_{4}B^{3}\,, \tag{32}\]
with the constants \(C_{1},\ldots,C_{4}\) given below. To establish this, some integration by parts is needed, together with use of the nonlinear equation satisfied by \(\phi_{\text{S}}\) and the linearised equations for the retained modes. The kinetic terms have a simple Euclidean form because the modes are orthonormal. The first three coefficients in the potential \(V_{\text{eff}}\) are the energy of the sphaleron and half the (negative and positive) squared frequencies of the retained modes. The coefficients of the cubic terms are the integrals
\[C_{1}=\frac{1}{3}\int_{-\infty}^{\infty}\eta_{-1}^{3}(x)\,dx=\sqrt{\frac{15}{2 }}\frac{175\pi}{8192}\,,\quad C_{2}=\int_{-\infty}^{\infty}\eta_{-1}^{2}(x) \eta_{1}(x)\,dx=-\sqrt{\frac{3}{2}}\frac{225\pi}{8192}\,,\]
\[C_{3}=\int_{-\infty}^{\infty}\eta_{-1}(x)\eta_{1}^{2}(x)\,dx=\sqrt{\frac{15}{2 }}\frac{129\pi}{8192}\,,\quad C_{4}=\frac{1}{3}\int_{-\infty}^{\infty}\eta_{1 }^{3}(x)\,dx=\sqrt{\frac{3}{2}}\frac{201\pi}{8192}\,. \tag{33}\]
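The closed forms in eq. (33) can be checked by direct quadrature of the mode overlaps; a short numerical verification (again with illustrative grid choices) is given below.

```python
import numpy as np

x = np.linspace(-60.0, 60.0, 12001)
dx = x[1] - x[0]
sech = 1.0 / np.cosh(0.5 * x)
eta_m1 = np.sqrt(15.0 / 32.0) * sech**3
eta_1 = np.sqrt(3.0 / 32.0) * (4.0 * np.cosh(0.5 * x)**2 - 5.0) * sech**3

C1 = np.sum(eta_m1**3) * dx / 3.0
C2 = np.sum(eta_m1**2 * eta_1) * dx
C3 = np.sum(eta_m1 * eta_1**2) * dx
C4 = np.sum(eta_1**3) * dx / 3.0

# Closed forms of eq. (33) for comparison:
print(C1, np.sqrt(15.0 / 2.0) * 175.0 * np.pi / 8192.0)
print(C2, -np.sqrt(3.0 / 2.0) * 225.0 * np.pi / 8192.0)
print(C3, np.sqrt(15.0 / 2.0) * 129.0 * np.pi / 8192.0)
print(C4, np.sqrt(3.0 / 2.0) * 201.0 * np.pi / 8192.0)
```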
From the reduced Lagrangian (31) we obtain the equations of motion
\[\frac{d^{2}A}{dt^{2}}=\frac{5}{4}A+3C_{1}A^{2}+2C_{2}AB+C_{3}B^{2}\,, \tag{34a}\]
\[\frac{d^{2}B}{dt^{2}}=-\frac{3}{4}B+C_{2}A^{2}+2C_{3}AB+3C_{4}B^{2}\,, \tag{34b}\]
and the conserved energy
\[E_{\text{eff}}[A,B]=\frac{1}{2}\dot{A}^{2}+\frac{1}{2}\dot{B}^{2}+V_{\text{eff}}(A,B)\,. \tag{35}\]
FIG. 6 shows a contour plot of \(V_{\text{eff}}\). There is a saddle point at \(A=B=0\) corresponding to the sphaleron, and a local minimum at \(A=-2.40501\,,B=-0.40325\) corresponding to an approximation to the (false) vacuum \(\phi\equiv 0\), whose energy is \(0.01532\) and whose field configuration (30) is shown in FIG. 7. In the reduced dynamics, diagonalised small perturbations around the approximate vacuum have frequencies \(\tilde{\omega}_{1}=1.02216\) and \(\tilde{\omega}_{2}=1.37920\), which are above the continuum threshold frequency \(\omega=1\). This is partly because the minimum is not the exact vacuum, but mainly because the perturbations are linear combinations of the localised modes \(\eta_{-1}\) and \(\eta_{1}\), which do not have the large wavelengths of radiation modes close to the threshold.
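The reduced dynamics (34a, 34b) is a pair of coupled ordinary differential equations and is straightforward to integrate numerically; the sketch below uses a standard adaptive Runge-Kutta routine, with the initial condition chosen, as an example, just off the sphaleron saddle point in the direction of the unstable mode.

```python
import numpy as np
from scipy.integrate import solve_ivp

C1 = np.sqrt(15.0 / 2.0) * 175.0 * np.pi / 8192.0
C2 = -np.sqrt(3.0 / 2.0) * 225.0 * np.pi / 8192.0
C3 = np.sqrt(15.0 / 2.0) * 129.0 * np.pi / 8192.0
C4 = np.sqrt(3.0 / 2.0) * 201.0 * np.pi / 8192.0

def rhs(t, y):
    """Right-hand side of eqs. (34a) and (34b), with y = (A, B, Adot, Bdot)."""
    A, B, Adot, Bdot = y
    Addot = 1.25 * A + 3.0 * C1 * A**2 + 2.0 * C2 * A * B + C3 * B**2
    Bddot = -0.75 * B + C2 * A**2 + 2.0 * C3 * A * B + 3.0 * C4 * B**2
    return [Adot, Bdot, Addot, Bddot]

sol = solve_ivp(rhs, (0.0, 100.0), [-0.001, 0.0, 0.0, 0.0],
                max_step=0.01, dense_output=True)
A_of_t, B_of_t = sol.y[0], sol.y[1]
```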
We have explored the extent to which important features of a solution \(\phi(t,x)\) of the field equation (2) are captured by this reduced model. To do this it is useful to follow the amplitudes of the projection of \(\phi\) onto the modes \(\eta_{-1}\) and \(\eta_{1}\),
\[A_{\text{p}}(t)=\int_{-\infty}^{\infty}\left(\phi(t,x)-\phi_{\text{S}}(x) \right)\eta_{-1}(x)\,dx\,,\qquad B_{\text{p}}(t)=\int_{-\infty}^{\infty}\left( \phi(t,x)-\phi_{\text{S}}(x)\right)\eta_{1}(x)\,dx\,. \tag{36}\]
Figure 6: Effective potential \(V_{\text{eff}}(A,B)\). The white contour corresponds to the energy \(1.2\) of the sphaleron and the pair of red contours to energy \(0.016\), slightly above that of the approximate (false) vacuum. The orange line is a trajectory of a solution discussed in section VI and FIG. 13.
Because the modes are orthogonal to each other and to the radiation, this method is equivalent to the usual least squares fit of \(\phi\) to the function (30), but it is numerically faster and more stable. FIG. 8 illustrates this approach for the numerical field evolution of a perturbed sphaleron, with initial condition \(\phi(0,x)=\phi_{\rm S}(x)-0.001\eta_{-1}(x)\), decaying to an oscillon, and FIG. 9 is for the evolution from the truncated Fodor et al. series \(\phi(0,x)=\Phi_{4}(0,x)\) with \(\epsilon=0.5\) as initial condition. The upper-left plots a) show the values of the projected mode amplitudes \(A_{\rm p},B_{\rm p}\) and the \(L^{2}\) norm of the remainder \(\|\delta\phi\|^{2}\). The upper-right plots b) show the comparison between the field value \(\phi(t,0)\) at the centre (orange line) and its approximation \(\Phi_{4}(t,0)\) (blue line). The lower-left plots c) show the energy \(\mathcal{E}(|x|<8)\) within the interval \(-8<x<8\) of the solution \(\phi(t,x)\) and the energy of the reduced model \(E(A_{\rm p},B_{\rm p})\) for the fitted \(A_{\rm p}(t),B_{\rm p}(t)\) values. The lower-right plots d) compare the field profiles \(\phi(t,x)\) at the time \(t=T_{\rm max}\) of the last maximum of \(\phi(t,0)\) before \(t=50\) (green) and its projection (red).
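Since the discrete modes are orthonormal, the projections (36) are simple overlap integrals; given a field snapshot on a uniform grid, they can be evaluated as in the following minimal sketch.

```python
import numpy as np

def project(phi_snapshot, dx, phi_S, eta_m1, eta_1):
    """Mode amplitudes A_p and B_p of eq. (36) for one field snapshot phi(x)."""
    delta = phi_snapshot - phi_S
    A_p = np.sum(delta * eta_m1) * dx
    B_p = np.sum(delta * eta_1) * dx
    return A_p, B_p
```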
In the case of sphaleron decay (FIG. 8), the oscillon is initially generated with a large amplitude, but this soon reduces. The field is accurately approximated only until \(\phi(t,0)\) crosses zero for the first time. Then the remainder grows and the energy in the projected modes starts to decrease (unlike in the reduced model itself). Energy starts to escape from the interval \(-8<x<8\) in less than 20 time-units, and converts to radiation. This is
Figure 7: Optimal approximation to the (false) vacuum configuration using the sphaleron plus modes expansion (30).
confirmed by the field profile at the latest maximum, where radiation with an amplitude of approximately 0.1 is clearly visible.
Starting from the Fodor et al. configuration \(\Phi_{4}\) with smaller initial amplitude (FIG. 9), substantially less energy is lost and all the parameters are much better approximated. The energy decreases by about 5% within the simulation time, compared to over 20% during sphaleron decay. We expect that at a later stage of oscillon evolution, or from more carefully prepared initial conditions, radiation would be even less. This is confirmed below.
## VI Comparison of oscillon models
It is particularly interesting to look at the trajectory of the reduced dynamics that starts just slightly perturbed from the sphaleron saddle point in either direction of the unstable mode. This is shown in FIG. 10. A positive perturbation leads to field blow-up, and a negative perturbation leads to oscillon formation. In both cases, starting from the same initial conditions, the full field dynamics (solid lines) is captured well by the reduced model (dashed lines) in its initial stages. In the case of oscillon formation, however, radiation
Figure 8: Evolution from initial condition \(\phi(0,x)=\phi_{\rm S}(x)-0.001\eta_{-1}(x)\) compared with the projected mode parameters of the effective model.
production results in a separation of trajectories. The reduced dynamics is quasiperiodic and almost returns to the initial amplitude \(\phi(0,0)=1.499\) at \(t\approx 60\). The full field dynamics near \(x=0\) becomes almost quasiperiodic from \(t\approx 15\) onwards, but it has a smaller amplitude oscillating between \(0.55\) and \(0.7\), and a higher basic frequency \(\omega=0.9024\). Both solutions for longer times are shown in FIG. 11. The difference is particularly visible for initial data for which an oscillon does not form. An example of such evolution is shown in FIG. 12. Again, at the initial stage, during the first oscillation, the field and reduced dynamics are very similar. But later the field dynamics is dominated by radiation.
The best match between the field dynamics and the dynamics of the reduced model occurs for initial data where a true oscillon is produced with minimal transient radiation. We have found such an oscillon for \(A(0)=-3.4247\), \(B(0)=-1.0218\). The evolution is shown in FIG. 13. The field and reduced dynamics are very close, except for a small difference in frequencies. However, at later times it is clear that the field and reduced dynamics do differ. The field dynamics is almost periodic with \(\omega=0.9024\) and minimum field value \(\phi_{\rm min}=-0.5211\), whereas the reduced dynamics is visibly quasiperiodic (bottom panel centre) with the field minimum oscillating between \(-0.59\) and \(-0.53\). This is because the field dynamics always
Figure 9: Evolution from initial condition \(\phi(0,x)=\Phi_{4}(0,x)\) for \(\epsilon=0.5\) compared with the projected mode parameters of the effective model.
produces some radiation, especially early on, so the oscillon settles into a slightly different state. But by analysing solutions of the reduced model we have found initial conditions where the solution \(\phi\) is periodic with amplitude \(\phi_{\rm min}=-0.5485\) and frequency \(\omega^{*}=0.8967\), very similar to the oscillon in the field theory. This periodic solution is also shown in FIG. 13, and shown in the \((A,B)\) plane in FIG. 6 (orange line).
Figure 11: Longer-time solutions of the field equation (PDE) and reduced equations (ODEs) for initial conditions \(\phi(0,x)=\phi_{\rm S}(x)-0.001\eta_{-1}(x)\).
By following sphaleron decay in the field theory for an even longer time, we see that the beats persist with slightly smaller amplitude (FIG. 14). The main frequency increases to \(\omega\approx 0.913\) at \(t\approx 2000\). The field shape remains lump-like but grows slightly wider (FIG. 15). FIG. 16 shows the power spectrum of the field \(\phi(t,x)\). It reveals the main frequency at \(\omega=0.9075\) and its harmonics. The higher harmonics are widened due to the frequency drift with time. Around each main peak there is a family of equidistant smaller peaks. Their positions correspond very well with \(|n_{1}\omega+n_{2}|\), which shows that there is another basic frequency near the continuum threshold \(\omega=1\). These peaks are generated by resonances due to the nonlinearity of the field equation. Recall that the reduced model has quasi-periodic solutions but neither of the frequencies is very close to 1.
Not all initial conditions of the sphaleron modes ansatz (30) give oscillons immediately, as we have seen; one needs to minimise the energy loss to radiation, (22). We have found the values of \(A\) and \(B\) achieving this for a range of initial central amplitudes \(\phi_{0}=\phi(0,0)\). We have also investigated the subsequent energy loss as a function of \(\phi_{0}\). For \(\phi_{0}>0.8\) the ansatz (30) gives as good an initial condition as the best initial condition \(\Phi_{2}\) from the Fodor et al. expansion. Moreover, the reduced model reproduces the amplitude decay very well, at least up to the first minimum of the field profile, whereas \(\Phi_{2}\) evolves strictly periodically. The
Figure 12: Solutions of the field equations (PDE) and reduced equations (ODEs) for initial conditions corresponding to a substantially deformed sphaleron with
reduced model has more degrees of freedom and has other types of solution, one example of which will be presented in the following section.
Further evidence that there is an overlap between these two approximations is shown in FIG. 17. The
Figure 13: Solutions of the field equation (PDE) and reduced equations (ODEs) for initial conditions corresponding to the sphaleron modes ansatz with \(A(0)=-3.4247\), \(B(0)=-1.0218\). The red line (ODEs\({}^{*}\)) is the periodic solution of the reduced equations with \(A^{*}(0)=-3.4287\), \(B^{*}(0)=-0.9605\).
Figure 16: Power spectrum of field at center for \(100<t<2000\). Red lines indicate harmonics of the primary frequency \(\omega=0.9075\), the thin green lines indicate some of the combinations \(|n_{1}\omega+n_{2}|\) for \(-3\leq n_{2}\leq 3\). The widening of the peaks is due to a slow frequency drift.
Figure 15: Decomposition of a field profile (colour lines) into sphaleron and its modes (dashed lines) a) at the initial stage of evolution and b) at a much later time.
points obtained by minimising the energy loss also lie very close to the line corresponding to the vibrating sphaleron, which we discuss next.
## VII Vibrating sphaleron
At the linearised level of the reduced model, with Lagrangian (31), the shape mode \(\eta_{1}\) of the sphaleron oscillates indefinitely with constant amplitude \(\mathcal{B}\) and frequency \(\omega_{1}=\sqrt{3/4}\). However, the nonlinear coupling of \(B\) to \(A\) leads to an excitation of the unstable mode \(\eta_{-1}\) as well. In eq.(34a), the term \(C_{3}B^{2}\) can cause exponential growth of \(A\). We see this in more detail by solving eqs.(34a, 34b) to low order in \(\mathcal{B}\). Assume that at linear order only the shape mode is excited, so
\[A(t)=0,\qquad B(t)=\mathcal{B}\cos(\omega_{1}t)\,. \tag{37}\]
At quadratic order,
\[\frac{d^{2}A}{dt^{2}}=\frac{5}{4}A+C_{3}B^{2}=\frac{5}{4}A+\frac{1}{2}C_{3} \mathcal{B}^{2}+\frac{1}{2}C_{3}\mathcal{B}^{2}\cos(2\omega_{1}t)\,, \tag{38}\]
Figure 17: Comparison of the initial conditions leading to the lowest energy loss for given \(\phi_{0}\) (black dots) and the \(\Phi_{2}\) approximation for \(\epsilon\in[-0.95,0.95]\) (red line), projected on to the sphaleron modes. Static solutions for the sphaleron and approximate vacuum are also marked along with the solution corresponding to the vibrating sphaleron.
whose general solution is
\[A(t)=F_{1}e^{|\omega_{-1}|t}+F_{2}e^{-|\omega_{-1}|t}-C_{3}\mathcal{B}^{2}\left( \frac{2}{5}+\frac{2}{5+16\omega_{1}^{2}}\cos(2\omega_{1}t)\right)\,, \tag{39}\]
where \(|\omega_{-1}|=\sqrt{5/4}\).
Generally, \(F_{1}\) is non-zero and the solution grows exponentially. However, for the initial conditions
\[\frac{dA}{dt}(0)=0,\qquad A(0)=-C_{3}\mathcal{B}^{2}\left(\frac{2}{5}+\frac{2}{ 5+16\omega_{1}^{2}}\right)=-\frac{44}{85}C_{3}\mathcal{B}^{2} \tag{40}\]
there are no exponential terms in the solution, and only constant and oscillatory terms remain. Motivated by this, we have fine-tuned the initial conditions in the field theory and obtained an almost periodic, vibrating sphaleron with a varying amplitude of oscillation, as shown in FIG. 18. For initial conditions we took
\[\phi_{t}(0,x)=0\,,\qquad\phi(0,x)=\phi_{\rm S}(x)-\alpha\frac{44}{85}C_{3} \mathcal{B}^{2}\eta_{-1}(x)+\mathcal{B}\eta_{1}(x)\,, \tag{41}\]
where \(\alpha\) was chosen to suppress exponential growth for as long as possible. \(\alpha=1.024\) for \(\mathcal{B}=0.2\) and \(\alpha\) decreases to \(1\) as \(\mathcal{B}\) approaches \(0\). Similar vibrating sphaleron solutions were considered earlier at linear order in [10].
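For completeness, the fine-tuned initial data of eq. (41), taken together with \(\phi_{t}(0,x)=0\), can be assembled directly from the sphaleron profile (23) and the modes (26) and (28); a minimal construction is sketched below, with \(\mathcal{B}=0.2\) and \(\alpha=1.024\) as quoted above.

```python
import numpy as np

def vibrating_sphaleron_ic(x, B_amp=0.2, alpha=1.024):
    """Initial field of eq. (41): sphaleron plus tuned discrete-mode amplitudes."""
    C3 = np.sqrt(15.0 / 2.0) * 129.0 * np.pi / 8192.0
    sech = 1.0 / np.cosh(0.5 * x)
    phi_S = 1.5 * sech**2
    eta_m1 = np.sqrt(15.0 / 32.0) * sech**3
    eta_1 = np.sqrt(3.0 / 32.0) * (4.0 * np.cosh(0.5 * x)**2 - 5.0) * sech**3
    A0 = -alpha * (44.0 / 85.0) * C3 * B_amp**2
    return phi_S + A0 * eta_m1 + B_amp * eta_1
```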
## VIII Conclusions
We have considered a simple 1-dimensional scalar field theory with a cubic potential, having a long-lived oscillon solution as well as a static, unstable sphaleron solution. We
Figure 18: Nearly periodic evolution of the vibrating sphaleron with fine-tuned initial data.
have explicitly constructed the Fodor et al. expansion for the oscillon up to fourth order in the oscillon's amplitude parameter. As this parameter increases, the oscillon frequency decreases. The expansion is asymptotic rather than convergent, so the fourth-order truncation \(\Phi_{4}(t,x)\) is only valid for small amplitudes. A larger-amplitude oscillon is better approximated by the second-order truncation \(\Phi_{2}(t,x)\).
When the oscillon is instantaneously at rest, its shape is similar to that of the sphaleron, but the sphaleron has larger amplitude and more energy. The sphaleron, slightly perturbed, decays into the oscillon. During its first couple of oscillations it radiates a significant fraction of its energy, but then settles into an oscillon of relatively large amplitude.
The Fodor et al. oscillon is periodic, with a single fundamental frequency. However, the decaying sphaleron approaches an oscillon whose amplitude is itself slightly oscillating. This suggests that oscillon solutions are best modelled by a truncation of the field theory having two degrees of freedom. The sphaleron naturally provides these - it has a single unstable mode and a further discrete mode of oscillation whose frequency is below the threshold frequency for the continuum of radiation modes.
We have considered the field ansatz obtained by linearly deforming the sphaleron by these two modes, with amplitudes \(A\) and \(B\). Substituting this ansatz into the full field-theory Lagrangian, we obtain a reduced, nonlinear dynamical Lagrangian for \(A\) and \(B\), whose nonlinearity arises from the cubic potential term. This dynamics is decoupled from the field radiation modes. Because the ansatz gets quite close to the vacuum configuration for particular values of \(A\) and \(B\), it provides a useful interpolation between the sphaleron and the vacuum. The dynamical equations for \(A\) and \(B\) have solutions describing sphaleron decay as well as oscillons of relatively large amplitude. However, oscillons of small amplitude, which have a larger spatial extent than the sphaleron and its discrete modes, are not well-described by the ansatz.
By carefully adjusting the initial conditions of \(A\) and \(B\), we can find oscillatory solutions of the reduced dynamics that are almost exactly periodic. We can also use these initial conditions as initial conditions for the field theory itself, to generate oscillons with minimal radiation. We then find close similarities between the field theory dynamics and the reduced dynamics of \(A\) and \(B\). The comparison is effected by projecting the field dynamics on to the two discrete modes.
In summary, we have found similar oscillon solutions from several points of view:
through the Fodor et al. small-amplitude expansion, through the decay of the sphaleron and oscillating versions of the sphaleron, and from our truncation of the field theory to a dynamical system with two degrees of freedom. This dynamical system appears to be a useful extension of the truncation to one degree of freedom implied by the Fodor et al. analysis. However, all these approaches are only approximate. The Fodor et al. expansion is asymptotic and needs to be truncated; the decaying sphaleron emits considerable radiation in its initial few oscillations; finally, our linearised field ansatz exploiting the sphaleron's discrete modes misses the vacuum and small-amplitude oscillons.
Numerically, there is overwhelming evidence for the existence of long-lived, topologically-trivial, localised oscillatory solutions of the field theory - oscillons - but the precise mathematical status of oscillons remains elusive.
## Acknowledgements
The research of TR was supported by the Polish National Science Centre, grant number NCN 2019/35/B/ST2/00059.
|
2306.13847 | Design parameters of free-form color routers for subwavelength pixelated
CMOS image sensors | Metasurface-based color routers are emerging as next-generation optical
components for image sensors, replacing classical color filters and microlens
arrays. In this work, we report how the design parameters such as the device
dimensions and refractive indices of the dielectrics affect the optical
efficiency of the color routers. Also, we report how the design grid resolution
parameters affect the optical efficiency and discover that the fabrication of a
color router is possible even in legacy fabrication facilities with low
structure resolutions. | Sanmun Kim, Chanhyung Park, Shinho Kim, Haejun Chung, Min Seok Jang | 2023-06-24T03:36:09Z | http://arxiv.org/abs/2306.13847v1 | # Design parameters of free-form color routers for subwavelength pixelated CMOS image sensors
###### Abstract
Metasurface-based color routers are emerging as next-generation optical components for image sensors, replacing classical color filters and microlens arrays. In this work, we report how the design parameters such as the device dimensions and refractive indices of the dielectrics affect the optical efficiency of the color routers. Also, we report how the design grid resolution parameters affect the optical efficiency and discover that the fabrication of a color router is possible even in legacy fabrication facilities with low structure resolutions.
## Introduction
The major optical components in classical complementary metal-oxide-semiconductor (CMOS) image sensors are a microlens array and a color filter. A microlens focuses the incident light on the photodiode, and the color filter blocks light of unwanted wavelengths (Figure 1a). However, such a geometric-optics-based configuration is limited to image sensors with pixel sizes relatively large compared to the wavelength [1, 2]. Recent developments in CMOS image sensors brought the subpixel size down to 0.56 \(\upmu\)m [3], approaching the boundary between geometric optics and wave optics. Further miniaturization will likely cause design approaches based on geometric optics to fail. Furthermore, the decrease in subpixel size has reduced the light energy collected per subpixel, leading to poor image quality. Metasurface-based color routers are being investigated as a candidate for substituting microlenses and color filters due to their high optical efficiency. Instead of filtering out light of unwanted wavelengths, a color router guides the incoming light to the corresponding subpixels, thus opening the way to utilize light incident on the entire image sensor area. Compared to conventional CMOS image sensors, whose subpixels only utilize either a quarter (red and blue) or half (green) of the incident light, color routers can in principle exhibit 2 to 4 times higher optical efficiencies.
A metasurface-based color router is configured by allocating dielectrics of different refractive indices inside the design region. This is a typical freeform optimization problem involving high degrees of freedom (DoF). There have been many attempts to solve such optimization problems. The first measurement data in the visible range were reported by Miyata et al. [4]. The authors used a conventional library-based meta-atom method to design a single-layer color router. Although a library-based method has a significantly constrained design space, the fabricated device already showed superior performance compared to the classical color filter. Similar work followed from Zou et al., who designed a single-layer freeform color router using the genetic algorithm and measured its performance in the visible range [5]. Investigations of multilayer devices have also been reported. Although a multilayer device tends to show higher performance, the design optimization of a multilayer device is much harder than that of a single-layer device due to the large DoF. A typical approach to handle this large DoF is to utilize local figure-of-merit gradients on design variables obtained through auto-gradient calculation [6] or the adjoint method [7]. Zhao et al. [8] and Catrysse et al. [9] optimized high-complexity color routers in 2D and 3D space using auto-gradient calculations. The design space is meshed with ultra-fine grids, and the authors were able to obtain a device design with near-perfect efficiency. Another pioneering work was done by Camayd-Munoz et al., where an adjoint-based method was applied to design a 3D device with higher fabricability. Despite the rapidly growing field [10, 11, 12, 13, 14, 15, 16], there has not been any systematic investigation into the choice of device design parameters. The choice of design parameters, such as the device height or the selection of refractive indices, has a critical effect on the final optimized devices. Until now, the choice of such parameters was based on simple deductions such as Fabry-Perot resonance conditions or, even worse, on computational resource availability [15].
In this work, we outline the effect of design parameters on the optimized optical efficiency of a color router. The result shows that there exist optimal ranges of both structural parameters and the optical index contrast of constituting materials, and more interestingly, their optimal ranges are correlated with each other. We also investigate how the spatial grid size and the number of grid layers affect the optical efficiency and demonstrate that a sufficiently high-performing device can be obtained even with a large cell size if a sufficient number of layers are deposited. This highlights the important role that the choice of design parameters plays in determining the device's performance.
Figure 1: (a) A simplified diagram of a conventional CMOS image sensor consisting of a microlens array and color filter. (b) A schematic diagram of a color router. The design area (\(P\times I\)) is gridded into a grid of \(N_{C}\times N_{L}\), and refractive indices \(n_{1}\) and \(n_{2}\) are allocated to each cell for color routing. Four arrows at the focal plane of the color router imply that an ideal color router can have a four-fold increase in optical efficiency compared to the conventional design.
## Results
As schematically shown in Figure 1b, a color router deflects the incident light to its corresponding subpixel area. Instead of forming a lens-like structure, the design area is gridded into rectangular cells, and each cell is filled with one of two different dielectrics. The design parameters for the 2D color routers can be classified into two categories: physical parameters and spatial resolution parameters. The physical parameters include the color router period (\(P\)), thickness (\(t\)), the position of the focal plane (\(h\)), and the refractive indices of the two composing dielectrics (\(n_{1}\) and \(n_{2}\)). The spatial resolution, determined by the number of grid layers \(N_{\mathrm{L}}\) and the number of grid elements in a layer \(N_{\mathrm{C}}\), defines how the design area is gridded into cells of equal shape. Consequently, the design problem possesses \(N_{\mathrm{L}}\times N_{\mathrm{C}}\) degrees of freedom and thus the number of possible structures is \(2^{N_{\mathrm{L}}N_{\mathrm{C}}}\). The default values of each design parameter are given in Table 1. As the transition from geometric optics to wave optics occurs for geometries with characteristic lengths comparable to or smaller than the wavelength, the color router configured with the default design parameters lies within the wave optics regime.
In this work, we define the optical efficiency \(\eta(\lambda)\) using the electric field intensity at the focal plane (denoted by the dashed line in Figure 2a-c).
\[\eta_{R,G,B}(\lambda)=\frac{1}{2}\sum_{i=\mathrm{TE,TM}}\frac{\int_{x_{1}}^{x_{2}}\lvert\mathbf{E}(\lambda,i)\rvert^{2}dx}{\int_{0}^{P}\lvert\mathbf{E}(\lambda,i)\rvert^{2}dx}\times T(\lambda,i)\]
Here, \(\mathbf{E}\) is the electric field at the focal plane, and \(T\) is the transmittance. The electric field distribution and the total transmittance are calculated with REITCOLO, a rigorous coupled-wave analysis package [17]. \(x\in(x_{1},x_{2})\) defines the area of the subpixel of interest. For simplicity, we assume that the wavelength ranges required for red (R), green (G), and blue (B) subpixels are 600 nm - 700 nm, 500 nm - 600 nm, and 400 nm - 500 nm, respectively. Throughout the work, normally incident light is assumed, and the optical efficiency is averaged between the transverse electric (TE) and transverse magnetic (TM) polarizations. Figure 2(a-c) shows the electric field intensity distribution inside an optimized color router with the default design parameters listed in Table 1. The optical efficiency \(\eta_{R,G,B}(\lambda)\) of the same device is shown as red, green, and blue curves in Figure 2d. Both the field distributions and the optical efficiency plot clearly show that the intensity of light is concentrated at the corresponding subpixel area on the focal plane.
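Given the focal-plane field intensity and the transmittance returned by the electromagnetic solver, the efficiency definition above reduces to a ratio of two quadratures per polarization. A minimal post-processing sketch (assuming a uniform spatial grid, so the grid spacing cancels) is shown below; it is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def optical_efficiency(intensity, x, x1, x2, transmittance):
    """Single-wavelength, single-polarization efficiency: the fraction of the
    focal-plane intensity falling inside the target subpixel [x1, x2],
    scaled by the total transmittance."""
    inside = (x >= x1) & (x <= x2)
    return intensity[inside].sum() / intensity.sum() * transmittance

# The reported efficiency is the mean of the TE and TM results at each wavelength.
```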
Figure 2: The electric field intensity profile inside the optimized device is given in Figure 1b for a normally incident light of (a) \(\lambda=650\) nm, (b) \(\lambda=550\) nm, (c) \(\lambda=450\) nm. The depicted field distribution is the average of transverse electric and transverse magnetic polarized light. (d) Optical efficiency spectra of the same device. The default design parameters in Table 1 are used. The average optical efficiencies between 400 nm to 700 nm are 58.29%.
In a conventional Bayer-type image sensor, a pixel consists of two green subpixels and one subpixel each for red and blue. In order to account for such a subpixel ratio, we include two green subpixels in one period of a 1D image sensor. The default arrangement of the subpixels in this paper was set to RGBG because the design is periodic and the wavelength of green light lies between those of red and blue (Figure 1b). In Supplementary Figure S1, we compare the optical efficiency of the RGBG subpixel arrangement with that of an alternative subpixel ordering. As Supplementary Figure S1 suggests, the arrangement of subpixels has a marginal effect on the device performance in terms of optical efficiency and crosstalk.
To understand how the design parameters of a color router affect its performance, we optimize the device geometry for various choices of design parameters. For given device design parameters, a conventional genetic algorithm with elitism is performed to obtain the optimal dielectric distribution in the grids. The optimization is configured with a population size of 200 and 100 epochs. The genotype of the individuals in the gene pool is represented by a binary array with array dimensions equal to \(N_{\text{C}}\) and \(N_{\text{L}}\). The goal of the optimization is to maximize the average optical efficiency, \(\bar{\eta}=(\bar{\eta}_{R}+\bar{\eta}_{G}+\bar{\eta}_{B})/3\), where \(\bar{\eta}_{R,G,B}\) are the wavelength-averaged optical efficiencies obtained by averaging \(\eta_{R,G,B}(\lambda)\) over the wavelength range corresponding to the subpixel type. During the optimization process, the optical efficiencies were averaged over thirty wavelength points (405 nm, 415 nm,... 695 nm) to reduce the computational cost, but the reported \(\bar{\eta}\) were averaged over much finer wavelength points (400 nm, 401 nm,... 700 nm). As shown in Figure S2 in the Supplementary Information, the difference between 30 and 301 wavelength-point averaging is not significant (around 0.02 for \((N_L,N_C)=\) (8, 64) and 0.01 for \((N_L,N_C)=\) (4, 32)).
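The optimization loop can be summarized by the following Python sketch. The crossover, mutation, and parent-selection details here are generic illustrative choices and are not claimed to reproduce the authors' exact implementation; the fitness function is left abstract and would, in practice, wrap the electromagnetic evaluation of \(\bar{\eta}\).

```python
import numpy as np

def genetic_optimize(fitness, n_layers, n_cells, pop_size=200, epochs=100,
                     n_elite=2, p_mutate=0.02, seed=0):
    """Elitist genetic algorithm over binary material maps of shape (n_layers, n_cells).
    `fitness` maps a binary array to a score such as the average optical efficiency."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_layers, n_cells))
    for _ in range(epochs):
        scores = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(scores)[::-1]]                  # best individuals first
        next_pop = [pop[i].copy() for i in range(n_elite)]   # elitism
        while len(next_pop) < pop_size:
            i, j = rng.choice(pop_size // 2, size=2, replace=False)  # parents from better half
            mask = rng.integers(0, 2, size=(n_layers, n_cells)).astype(bool)
            child = np.where(mask, pop[i], pop[j])           # uniform crossover
            flip = rng.random((n_layers, n_cells)) < p_mutate
            next_pop.append(np.where(flip, 1 - child, child))  # bit-flip mutation
        pop = np.array(next_pop)
    return pop[0]   # current elite individual
```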
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|}
\hline Design parameter & Value \\
\hline Pixel period, \(P\) & 1 μm (equivalent to subpixel size of 0.25 μm) \\
\hline Thickness of the color router, \(t\) & 1.5 μm \\
\hline Refractive index of dielectric 1, \(n_{1}\) & 1.5 \\
\hline Refractive index of dielectric 2, \(n_{2}\) & 2.0 \\
\hline Position of the focal plane, \(h\) & 0.5 μm \\
\hline Number of grid layers (\(N_{\text{L}}\)) & 4 \\
\hline Number of cells in a layer (\(N_{\text{C}}\)) & 32 \\
\hline
\end{tabular}
\end{table}
Table 1: The default design parameters used in this work
Figure 3: Effect of device period (\(P\)) and color router thickness (\(t\)) on the optical efficiency of a color router. \(N_{\text{L}}\) and \(N_{\text{C}}\) are fixed to (4, 32). The design parameters stated in Table 1 are used except for \(P\), and \(t\).
The advantage of substituting microlenses and color filters with metasurface-based color routers becomes clear for sub-micron image sensors. Hence, we first investigate the effect of the physical dimensions of devices on the optical efficiency of the router. It should be noted that the subpixel size of the 2D color router is a quarter of the device period, \(P\). In comparison to a Bayer-type image sensor array, a 2D color router extends infinitely in the \(y\)-direction so the pixel size is defined as the width of each subpixel in the \(x\)-direction. The pixel size of the color router with the default design parameters is 0.25 \(\upmu\)m, which is less than half the size of the smallest commercially available image sensor pixels of \(\sim\)0.56 \(\upmu\)m [3]. Figure 3 shows how the optimized \(\bar{\eta}\) varies depending on the period \(P\) and the thickness \(t\) while all the other design parameters including DoF and refractive indices are fixed to their default values. For the devices with a deep subwavelength period of \(P=0.25\)\(\upmu\)m, the optimized average optical efficiencies are around the trivial value of 33%, which can be achieved with a simple antireflection layer. When \(P\geq 0.5\)\(\upmu\)m, the color routers start to show meaningful performance. At a given \(P\), the device performance monotonically increases and saturates as the thickness \(t\) increases. The saturation point of \(t\) for 0.75 \(\upmu\)m\(\leq P\leq 2\)\(\upmu\)m is around 1.5 \(\upmu\)m, and thus we set \(t=1.5\)\(\upmu\)m as the default value. We note, however, that the saturation point of \(t\) can vary as a function of the other design parameters. At a fixed \(t\), the optimized \(\bar{\eta}\) does not monotonically increase with \(P\) but has a specific optimal value. This result is reasonable since it becomes increasingly difficult to route incident light over a longer lateral distance within a given thickness.
The position of the focal plane from the color router, \(h\), is a similar physical design parameter to \(P\) and \(t\), as it also defines the physical dimension of the device. The dependence of the optical efficiency on the focal plane position is shown in Supplementary Figure S3. In a periodic grating, the modes with a high lateral wavenumber cannot be extracted in the far field. Hence, as the focal plane of the color router is located further from the meshed region, the device is expected to have a lower efficiency due to the loss of the near field. The sharp drop in optical efficiency for \(h>1\)\(\upmu\)m in Supplementary Figure S3 agrees with this expectation.
The refractive indices of the composing dielectric materials are another critical factor determining the optimal efficiency. In previous works, the selection of the refractive indices for a color router was based on simple relations such as the Fabry-Perot resonance condition [15]. Those relations only provide order-of-magnitude estimates. In this work, we tune the design parameters (\(t\), \(n_{1}\), \(n_{2}\)) to find the global trend in optimized optical efficiency. For the sake of simplicity, we assume that the dielectrics filling each grid are dispersionless and have refractive indices of \(n_{1}\) and \(n_{2}\), where \(n_{1}\leq n_{2}\) is assumed throughout the work. The default values of (\(n_{1}\), \(n_{2}\)) are (1.5, 2.0), which are similar to the refractive indices of silica and silicon nitride. Our analyses reveal that, unlike other nanophotonic devices such as metalenses, whose performance monotonically increases with the refractive index contrast [18, 19, 20], color routers have a distinct relation between the optimal refractive index contrast and the thickness of the device. When all the other parameters are fixed to their default values, the optimal index contrast values, \(n_{2}-n_{1}\), are found to be 2.25, 1, and 0.5 for \(t\) = 0.1, 0.5, and 1.5 \(\upmu\)m, respectively, as illustrated in Figure 4. We speculate that this trend could be attributed to the fact that the maximum achievable vertical optical path length difference is determined by the product of the optical index contrast and the thickness of the device.
The choice of DoF is important in both computational and experimental aspects. On the computational side, the design space grows exponentially with the DoF, and the computational load required for optimization grows accordingly. Popular approaches for tackling high DoF problems are through the adjoint gradient, which provides the
Figure 4: Effect of refractive indices on the optical efficiency for different device thicknesses, \(t\). Optimization based on a genetic algorithm was carried out for color routers with thickness (a) \(t=0.1\)\(\upmu\)m, (b) \(t=0.5\)\(\upmu\)m, and (c) \(t=1.5\)\(\upmu\)m. In each color plot, the lower refractive index \(n_{1}\) is changed from 1 to 2 with a step size of 0.25, and \(n_{2}\) is swept from 2 to 4 with the same step size, 0.25. Each square represents the optimized efficiency obtained with the genetic algorithm. The maximum efficiency in each case is (a) 46.86%, (b) 54.98%, (c) 58.25%. Except for \(t\), \(n_{1}\), and \(n_{2}\), the design parameters given in Table 1 are used.
gradient of the FoM with respect to a change in the refractive index of every element in the design space [21, 22, 23, 24, 25, 26, 27, 28, 29], or through machine learning methods [30, 31, 32, 33, 34, 35, 36, 37, 38]. In our work, we limit the DoF to the order of hundreds so that the optimization problem is solvable using the classical genetic algorithm [39, 40, 41, 42, 43]. On the other hand, the DoF is directly related to the fabrication feasibility of the device. The number of layers, \(N_{\mathrm{L}}\), determines the number of deposition steps, and the number of cells in a layer, \(N_{\mathrm{C}}\), affects the minimum feature size. Despite its importance, previous works on metasurface-based color routers mostly lack investigations of the DoF. In this work, we fix the values of the other design parameters, including the device thickness, and change \(N_{\mathrm{L}}\) and \(N_{\mathrm{C}}\) to isolate the effect arising from the change in grid resolution. \(N_{\mathrm{L}}\) and \(N_{\mathrm{C}}\) are chosen to be integer powers of 2. This implies the existence of trivial monotonicity. For example, the set of every possible combination with (\(N_{\mathrm{L}}=1\), \(N_{\mathrm{C}}=8\)) is a subset of that with (\(N_{\mathrm{L}}=4\), \(N_{\mathrm{C}}=16\)), so the optical efficiency of the latter must be equal to or greater than that of the former if the optimization converges to the global optimum. Since the number of possible combinations is sufficiently low for device designs with \(\mathrm{DoF}\leq 16\), an exhaustive search was carried out for the corresponding conditions. For device designs with \(\mathrm{DoF}\geq 32\), the previously described genetic algorithm was carried out.
Figure 5 shows the optimized results for each \(N_{\mathrm{L}}\) and \(N_{\mathrm{C}}\) pair. In the figure, the trivial monotonic relation in the optimized efficiency is observed. Regardless of the number of layers, the optimal \(\bar{\eta}\) almost saturates when \(N_{\mathrm{C}}\geq 32\), which corresponds to a minimum feature size of \(\sim\)31 nm. The optimal \(\bar{\eta}\) asymptotically approaches \(\sim\) 60% for the default physical parameters. It is important to note that the number of layers \(N_{\mathrm{L}}\) plays a pivotal role in determining the device performance. For example, even with \(N_{\mathrm{C}}=4\) (minimum feature size of 250 nm), it is possible to achieve an average optical efficiency of \(\sim\) 54% (about 90% of the maximum achievable efficiency) by having 8 layers. The designs of color routers for different DoF conditions are displayed in Supplementary Section 4. For low-efficiency devices, a line of reflection symmetry exists at the center of the red and blue subpixels. This line of reflection symmetry originates from the RGBG subpixel arrangement, which is also symmetric with respect to that line. However, such reflection symmetry is not observed in the optimized devices. The lack of symmetry in the optimized devices implies that the enforcement of trivial symmetry conditions on the device design does not always lead to better performance.
## Discussion
In conclusion, we systematically analyze the dependence of color router performance on various design parameters by leveraging numerical device optimization methods based on a genetic algorithm. We discover that the average optical efficiency of a color router with a micron-scale form factor can be up to \(\sim\)60%, whereas the classical microlens and color filter configuration can have optical efficiency up to 25% for red and blue and 50% for green. We show that it is not always beneficial to have a larger pixel if the thickness of the device is limited and there exist optimal refractive index pairs for composing dielectrics for a given device thickness. Unlike the case of metalens, the optical efficiency drops when the refractive index contrast becomes greater than the optimal value. We also report that the device
Figure 5: Optimized optical efficiency calculated for multiple DoF configurations. The optical efficiency saturates to \(\sim\)60%. Except for \(N_{\mathrm{L}}\) and \(N_{\mathrm{C}}\), the design parameters given in Table 1 are used.
performance can be greatly increased while maintaining a relatively large feature size by having multiple layers in the design scheme. We anticipate that the qualitative trend seen in the 2D color router design parameter tuning would be repeated for Bayer-type 3D color routers, although the optimal values may differ due to the introduction of the additional dimension. Our results will serve as a design guideline for the future development of free-form metasurface-based color routers for deep sub-micron image sensors.
## Author contributions
Sanmun K. and M.S.J conceived the ideas. Sanmun K., C. P., and Shinho K. developed the optical simulation model and the optimization algorithms. Sanmun K. and M.S.J. conducted a detailed analysis of the optimization results. M.S.J. supervised the project. The manuscript was mainly written by Sanmun K., H. C., and M.S.J. with the contributions of all authors.
## Data and Code availability
The optimization code can be accessed openly on Sanmun Kim's GitHub
([https://github.com/chocopi2718/colorRouter2D](https://github.com/chocopi2718/colorRouter2D)).
## Acknowledgements
This research was supported by the MOTIE (Ministry of Trade, Industry & Energy) (1415180303) and KSRC (Korea Semiconductor Research Consortium) (20019357) support program for the development of future semiconductor devices.
## Competing Interests
The authors declare no conflicts of interest.
## Supplemental Document
The following files are available free of charge.
Supplementary information (Supplementary information.pdf)
|
2302.10661 | Clinically Acceptable Segmentation of Organs at Risk in Cervical Cancer
Radiation Treatment from Clinically Available Annotations | Deep learning models benefit from training with a large dataset (labeled or
unlabeled). Following this motivation, we present an approach to learn a deep
learning model for the automatic segmentation of Organs at Risk (OARs) in
cervical cancer radiation treatment from a large clinically available dataset
of Computed Tomography (CT) scans containing data inhomogeneity, label noise,
and missing annotations. We employ simple heuristics for automatic data
cleaning to minimize data inhomogeneity and label noise. Further, we develop a
semi-supervised learning approach utilizing a teacher-student setup, annotation
imputation, and uncertainty-guided training to learn in presence of missing
annotations. Our experimental results show that learning from a large dataset
with our approach yields a significant improvement in the test performance
despite missing annotations in the data. Further, the contours generated from
the segmentation masks predicted by our model are found to be equally
clinically acceptable as manually generated contours. | Monika Grewal, Dustin van Weersel, Henrike Westerveld, Peter A. N. Bosman, Tanja Alderliesten | 2023-02-21T13:24:40Z | http://arxiv.org/abs/2302.10661v1 | [
###### Abstract
Deep learning models benefit from training with a large dataset (labeled or unlabeled). Following this motivation, we present an approach to learn a deep learning model for the automatic segmentation of Organs at Risk (OARs) in cervical cancer radiation treatment from a large clinically available dataset of Computed Tomography (CT) scans containing data inhomogeneity, label noise, and missing annotations. We employ simple heuristics for automatic data cleaning to minimize data inhomogeneity and label noise. Further, we develop a semi-supervised learning approach utilizing a teacher-student setup, annotation imputation, and uncertainty-guided training to learn in presence of missing annotations. Our experimental results show that learning from a large dataset with our approach yields a significant improvement in the test performance despite missing annotations in the data. Further, the contours generated from the segmentation masks predicted by our model are found to be equally clinically acceptable as manually generated contours.
Learning Clinically Acceptable Segmentation of Organs at Risk in Cervical Cancer Radiation Treatment from Clinically Available Annotations
organs at risk, segmentation, deep learning, missing annotations
Footnote 1: Radiation treatment for cancer involves giving high doses of radiation to the tumor to kill cancer cells.
## 1 Introduction
The planning for cervical cancer radiation treatment1 requires manual contouring of the Organs at Risk (OARs) where the adverse effects of radiation must be minimized. Automatic segmentation of these OARs can save hours of manual work. In this paper, we focus on the automatic segmentation of four OARs in cervical cancer radiation treatment: bowel bag, bladder, hips, and rectum. A few studies have focused on developing deep learning based automatic OARs segmentation methods for cervical cancer radiation treatment (Liu et al., 2020, 2020, 2020, 2021, 2021). All of these studies use a traditional setup for developing a deep learning model, which involves:
(a) obtaining a fully annotated clinically available dataset, (b) splitting the data into training, validation, and testing, and (c) training a model and evaluating it on the test dataset. A major drawback in this setup is the limited size of the datasets used for training and testing. A small training dataset limits the possibility of a deep learning model capturing large variance in real-world data. Further, evaluation results from a small test dataset do not inform sufficiently in regard to the true test performance of a deep learning model. Although in the medical imaging domain, such a setup is understandable because of the underlying requirement of clinical expertise for annotating the data, it would be of interest to investigate if clinically available data can be leveraged to increase the size of the training and testing datasets.
The size of the training dataset for automatic OARs segmentation for cervical cancer radiation treatment can be increased if the abdominal scans acquired for tumors other than cervical cancer are also included. However, all the OARs in cervical cancer radiation treatment may not be annotated in those scans. Furthermore, since the clinically available abdominal scans and annotations are retrospectively included, the acquisition protocols, contouring guidelines, and observers may be different giving rise to data inhomogeneity and label noise. In this paper, we follow the motivation of harnessing the benefits of training on a large dataset. Therefore, we use the Computed Tomography (CT) scans and OARs contours delineated for clinical use during radiation treatment for tumors in the abdominal region to develop a deep learning model for segmentation of OARs in cervical cancer radiation treatment. We develop a semi-supervised learning approach to tackle the issue of missing annotations in data. Briefly, the key contributions of our work are the following:
1. We propose a teacher-student setup, wherein, the predictions from a teacher model are used to impute the missing annotations, and a student model is trained using the dataset containing imputed annotations. Additionally, we train the student with an uncertainty-guided loss to avoid the adverse effect of imperfect predictions from the teacher, and with additional augmentations to increase performance.
2. We perform an ablation study to investigate the effect of different components of the proposed approach. Furthermore, we perform a clinical validation study to assess the clinical acceptability of contours generated from automatic segmentation masks predicted by our deep learning model.
### Related Work
Our approach is closely related to previous works in the direction of semi-supervised learning by generation of pseudo-labels and self-training for medical image segmentation tasks (Bai et al., 2017; Li et al., 2019, 2020; Zheng et al., 2020). Different from these works, we use self-training with pseudo-labels in a teacher-student setup similar to (Sedai et al., 2019; Yu et al., 2019). Further, we utilize uncertainty maps to reduce the adverse effect of imperfect pseudo-labels, which have been previously used in (Sedai et al., 2019; Yu et al., 2019; Zheng et al., 2020). In contrast to (Sedai et al., 2019; Yu et al., 2019), we train a noisy student with the use of additional augmentations in the data because it has been shown to provide performance gain (Xie et al., 2020). In the domain of learning an OARs segmentation model for cervical cancer radiation therapy by utilizing a large dataset, our work is similar to (Rhee et al., 2020). However, instead of learning a separate model for each OAR as
in (Rhee et al., 2020), we learn a single model for the segmentation of all OARs, which increases the potential for real-world deployment of our model.
## 2 Data
We retrospectively selected the CT scans of female patients who were treated in an academic hospital for a tumor in the abdominal region from 2009 to 2019. A total of 1170 CT scans with associated clinically available contours from 1108 patients were received in anonymized form through a data transfer agreement. These scans were used for training and validation. For testing, we used 105 CT scans with associated clinically available contours from 95 cervical cancer patients who received radiation treatment in the same hospital.
### Preprocessing
In all the CT scans (1170 from the training and validation dataset, and 105 from the test dataset), the clinically available annotations of four OARs in cervical cancer radiation treatment (bowel bag, bladder, hips, and rectum) were extracted by using the following steps: (1) standardize different variations of organ labels (e.g., bowel, bowel bag, Bowel bag, bowel bag, bowel_bag, Bowel_bag were all considered bowel bag), (2) combine left and right hip annotations as a single organ, (3) remove voxels annotated as bladder or rectum from the bowel bag annotation to avoid ambiguous labeling in those voxels. Next, the scans were resampled to 2.5mm\(\times\)2.5mm\(\times\)2.5mm voxel spacing. The Hounsfield units were converted to intensity values between 0 and 1 by windowing (window level=40, window width=400). In the training and validation dataset, the preprocessing resulted in a total of 186 scans that contained annotations for all the four OARs considered in this work (referred to as the fully annotated dataset, \(\mathcal{D}_{f}\)). The remaining scans had missing annotations for at least one of the OARs (referred to as the partially annotated dataset, \(\mathcal{D}_{p}\)). In total 383, 1103, 504, and 865 scans had annotations for bowel bag, bladder, hips, and rectum, respectively.
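A minimal sketch of the intensity windowing and label-cleaning steps described above (illustrative only; the function names are not from the released code):

```python
import numpy as np

def window_ct(hu, level=40, width=400):
    """Map Hounsfield units to [0, 1] with the soft-tissue window used above."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def merge_hips(hip_left, hip_right):
    """Combine left and right hip annotations into a single organ mask."""
    return hip_left | hip_right

def clean_bowel_bag(bowel_bag, bladder, rectum):
    """Remove bladder and rectum voxels from the bowel-bag mask (boolean arrays)."""
    return bowel_bag & ~(bladder | rectum)
```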
### Automatic Data Cleaning
Since the data was accumulated over 10 years and the scans belonging to patients who were treated for a tumor anywhere in the abdominal region were included, the data exhibited inhomogeneity in the cranial extent of the scans (causing an increase in the number of background voxels and potentially less efficient training) and in the cranial border of the bowel bag annotations (contributing to label noise).
To make the data more homogeneous so that the adverse effects of inefficient training and label noise could be reduced, we analyzed the histograms of \(\mathcal{D}_{f}\) and decided on thresholds such that the histograms represented a unimodal distribution corresponding to the most frequently used scanning protocol and annotation style (details are provided in appendix A). Based on these thresholds, the scans were cropped in the cranial direction to remove the chest region. The bowel bag annotations in the abdominal region roughly above the level of the lumbar (L4) spinal segment were deleted. The scans that did not contain bowel bag annotations in the entire pelvic region were discarded. These steps resulted in a decrease in the size of \(\mathcal{D}_{f}\) from 186 to 134. The resulting dataset of 134 scans is referred to as \(\mathcal{D}_{f}^{clean}\) in the rest of the paper.
## 3 Approach
We developed a semi-supervised learning approach utilizing a teacher-student setup (Figure 1). We train a teacher model using the small, fully annotated dataset (\(\mathcal{D}_{f}^{clean}\)). The predictions from the trained teacher model are used to impute the remaining large dataset with missing annotations (\(\mathcal{D}_{p}\)). Then, a student is trained with the entire dataset (\(\mathcal{D}_{f}^{clean}+\mathcal{D}_{p}\)) containing the clinically available and imputed annotations.
### Uncertainty-Guided Training
Epistemic uncertainty refers to the lack of knowledge in a model about the underlying data. Estimating epistemic uncertainty enables the estimation of the reliability of a model's prediction for a specific sample. We train the teacher model to also estimate the epistemic uncertainty maps for each sample. For this purpose, we use a K-head neural network, similar to (Zheng et al., 2020). At each iteration of training, a single head is selected randomly for backpropagation. During inference, we use the mean prediction from K-heads as confidence and the entropy of the mean prediction as an estimate of epistemic uncertainty. We selected the K-head approach because it allows independence between predictions from different heads with faster inference times as compared to the Monte-Carlo (MC) dropout approach (Gal and Ghahramani, 2016). Moreover, the memory overhead is not much compared to fully independent deep ensembles (Lakshminarayanan et al., 2017).
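A minimal PyTorch sketch of such a K-head model (illustrative only; the backbone interface and layer choices are assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class KHeadSegmenter(nn.Module):
    """K-head segmentation model: one random head for training, mean + entropy at inference."""

    def __init__(self, backbone, feat_channels, num_classes, k=5):
        super().__init__()
        self.backbone = backbone  # any network returning a (B, feat_channels, D, H, W) map
        self.heads = nn.ModuleList(
            [nn.Conv3d(feat_channels, num_classes, kernel_size=1) for _ in range(k)]
        )

    def forward(self, x, head_idx=None):
        feats = self.backbone(x)
        if head_idx is not None:                      # training: backprop a single random head
            return self.heads[head_idx](feats)
        probs = torch.stack([h(feats).softmax(dim=1) for h in self.heads])
        mean_prob = probs.mean(dim=0)                 # confidence (mean over K heads)
        entropy = -(mean_prob * mean_prob.clamp_min(1e-8).log()).sum(dim=1)
        return mean_prob, entropy                     # entropy = epistemic-uncertainty map
```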
We train the student model with an uncertainty-guided cross-entropy loss \(\mathcal{L}_{uCE}=e^{-u}y\cdot \log(\hat{y})\), where \(u\) is uncertainty in the teacher's predictions at each voxel, \(e^{-u}\) is the uncertainty-guided weight, \(y\) is the reference label, and \(\hat{y}\) is the predicted probability. The weight \(e^{-u}\) ensures a large weight on voxels where the uncertainty in the teacher's predictions is small and vice versa. We set \(u=0\) at the voxels where annotations are clinically available. In this way, the student model can benefit from training with a large dataset while avoiding deterioration in performance due to uncertain label predictions from the teacher model.
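A sketch of the corresponding voxel-wise loss, written with the usual negative log-likelihood convention and with \(u=0\) at clinically annotated voxels (illustrative only; the argument names are assumptions):

```python
import torch

def uncertainty_guided_ce(log_probs, target, uncertainty, annotated_mask):
    """Voxel-wise cross-entropy weighted by exp(-u) (sketch, not the authors' code).

    log_probs:      (B, C, D, H, W) log-softmax of the student's prediction
    target:         (B, D, H, W)    clinically available or teacher-imputed labels
    uncertainty:    (B, D, H, W)    entropy of the teacher's mean prediction
    annotated_mask: (B, D, H, W)    True where clinical annotations exist (u set to 0)
    """
    u = torch.where(annotated_mask, torch.zeros_like(uncertainty), uncertainty)
    weight = torch.exp(-u)                               # small weight on uncertain voxels
    nll = -log_probs.gather(1, target.unsqueeze(1)).squeeze(1)
    return (weight * nll).mean()
```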
Figure 1: Schematic of the proposed approach. (a) A K-head (depicted by output arrows) teacher model is trained by randomly selecting a single head (highlighted in black) for backpropagation. (b) The clinically available ‘label’ contains annotation for hips (blue) and rectum (yellow) only. The annotation for bladder is missing. The mean prediction (of K-heads) from the trained teacher is used to impute the bladder annotation. (c) A K-head student model is trained with imputed label and uncertainty-guided loss. \(\mu\): mean, H: entropy, \(L_{CE}\): cross-entropy loss, \(\mathcal{L}_{uCE}\): uncertainty-guided loss.
### Implementation Details
We used the 3D U-Net architecture (Cicek et al., 2016) as a baseline neural network. The training was done using randomly cropped 3D patches (of depth 32 along the transverse direction) with a batchsize of 1 because of the GPU memory constraints. The implementation2 was done in Python by using the PyTorch library (Paszke et al., 2017) and the training was done on NVIDIA RTX2080 GPUs. Other hyperparameters were: optimizer=Adam (Kingma and Ba, 2015); network initialization=Kaiming He (He et al., 2015); learning rate (LR)=\(1e^{-3}\); weight decay=\(1e^{-4}\); the number of training epochs=500 for teacher models, 250 for student models; learning schedule=step LR with step size=\(\frac{1}{3}\times\)total training steps; data augmentations=global brightness and contrast variations (\(\pm 20\%\)), random rotations (-\(10^{\circ}\) to \(10^{\circ}\) along all axes); the number of heads (K) in teacher and student=5.
Footnote 2: The source code is available at [https://github.com/monikagrewal/OrganSegmentation](https://github.com/monikagrewal/OrganSegmentation).
## 4 Ablation Experiment
We conducted an ablation experiment to look into the individual effect of the components of our approach. As a baseline, we used two models: 3D U-Net trained with \(\mathcal{D}_{f}\), and 3D U-Net trained with \(\mathcal{D}_{f}^{clean}\). Note that the 3D U-Net trained with \(\mathcal{D}_{f}^{clean}\) is similar to the traditional setup of deep learning model development. In the first stage of ablation, we trained a K-head 3D U-Net teacher model with \(\mathcal{D}_{f}^{clean}\) (referred to as '_basic teacher_') followed by K-head 3D U-Net student model with the large dataset (\(\mathcal{D}_{f}^{clean}+\mathcal{D}_{p}\)) and uncertainty-guided loss (referred to as '_basic student_'). In the next stage, we employed the following additional data augmentations to introduce noise in the data: left-right flipping, masking an organ with a random intensity to simulate contrast, global elastic deformations, and
| **Method** | Dice (%) | Surface Dice (%) | HD |
| --- | --- | --- | --- |
| 3D U-Net + \(\mathcal{D}_{f}\) | 83.47 (6.16) | 80.23 (6.82) | 16.06 (9.07) |
| 3D U-Net + \(\mathcal{D}_{f}^{clean}\) | 85.02 (5.92)\({}^{*}\) | 82.00 (6.55)\({}^{*}\) | 12.44 (10.58)\({}^{*}\) |
| _basic teacher_ | 85.36 (5.54)\({}^{*}\) | 82.33 (6.18)\({}^{*}\) | 11.61 (7.94)\({}^{*}\) |
| _basic student_ | 87.01 (4.62)\({}^{*\dagger}\) | 84.64 (5.18)\({}^{*\dagger}\) | 10.64 (8.00)\({}^{*\dagger}\) |
| _robust teacher_ | 85.31 (5.25)\({}^{*}\) | 82.30 (5.72)\({}^{*}\) | 11.57 (7.73)\({}^{*}\) |
| _basic teacher + robust student_ | 87.11 (4.28)\({}^{*\dagger}\) | 84.76 (4.85)\({}^{*\dagger}\) | 10.39 (6.68)\({}^{*\dagger}\) |
| _robust teacher + robust student_ | 87.16 (4.19)\({}^{*\dagger}\) | 84.82 (4.68)\({}^{*\dagger}\) | 9.92 (4.72)\({}^{*\dagger}\) |
| _robust teacher + robust student_ - iter. 2 | 87.40 (4.13)\({}^{*\dagger}\) | 85.30 (4.60)\({}^{*\dagger}\) | 9.85 (4.86)\({}^{*\dagger}\) |
| _robust teacher + robust student_ - iter. 3 | 87.35 (4.10)\({}^{*\dagger}\) | 85.24 (4.63)\({}^{*\dagger}\) | 9.96 (4.84)\({}^{*\dagger}\) |

Table 1: Mean (standard deviation) of the mean test performance per scan of the best models obtained from 5-fold cross-validation. Aug.: additional augmentations, HD: Hausdorff distance in mm at the 95th percentile. Surface Dice was computed at a tolerance of 2.5mm (voxel spacing). \({}^{*}\)significant difference compared to 3D U-Net + \(\mathcal{D}_{f}\), \({}^{\dagger}\)significant difference compared to 3D U-Net + \(\mathcal{D}_{f}^{clean}\).
elastic deformations centered in either bowel bag or bladder as additional augmentations. We compared the performance of three models: a teacher model trained with \(\mathcal{D}^{clean}_{f}\) and additional augmentations (referred to as '_robust teacher_'), a student model trained with \(\mathcal{D}^{clean}_{f}+\mathcal{D}_{p}\) and additional augmentation, and using the imputed annotations from _basic teacher_ (referred to as '_basic teacher + robust student_'), and a student model trained with \(\mathcal{D}^{clean}_{f}+\mathcal{D}_{p}\) and additional augmentation, and using the imputed annotations from _robust teacher_ (referred to as '_robust teacher + robust student_'). Further, we performed 3 iterations of teacher-student training for _robust teacher + robust student_, wherein in each subsequent iteration, the student model became the teacher and a new student model was trained.
The mean and standard deviations of the performance metrics on test data from the best models obtained after 5-fold cross-validation are reported in Table 1. The distributions of performance metrics for each method (N = 105 test scans \(\times\) 5 models) were tested for normality using the Kolmogorov-Smirnov test. This was followed by a Friedman test for the main effect and Wilcoxon signed-rank test for post-hoc comparisons. A p-value less than 0.05 with adjustment for multiple comparisons was considered significant.
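The reported analysis can be sketched for one metric with standard SciPy routines, assuming `scores` maps each method name to a NumPy array of the N = 105 \(\times\) 5 per-scan values (illustrative only):

```python
from scipy import stats

def compare_methods(scores):
    # Normality check per method (Kolmogorov-Smirnov against a fitted normal).
    for name, values in scores.items():
        print(name, stats.kstest(values, "norm", args=(values.mean(), values.std())))
    # Friedman test for the main effect across all methods (requires >= 3 methods).
    _, p_main = stats.friedmanchisquare(*scores.values())
    print("Friedman p =", p_main)
    # Post-hoc pairwise Wilcoxon signed-rank tests with a Bonferroni-style adjustment.
    names = list(scores)
    n_tests = len(names) * (len(names) - 1) // 2
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            _, p = stats.wilcoxon(scores[names[i]], scores[names[j]])
            print(names[i], "vs", names[j], "adjusted p =", min(1.0, p * n_tests))
```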
The automatic data cleaning had a significant impact on the test performance (\(p=5.96\times 10^{-18}\), \(p=6.76\times 10^{-17}\), \(p=2.18\times 10^{-29}\) for Dice, Surface Dice (SD), and Hausdorff distance (HD), respectively), which was mainly due to better bowel bag segmentation. The automatic data cleaning increased the mean Dice coefficient of the bowel bag from 0.7947 to 0.8477 (performance metrics for all the OARs separately are provided in Appendix B). Furthermore, learning from a large dataset with the proposed teacher-student setup, annotation imputation, and uncertainty-guided training (_basic student_) provided a significant gain of 2.34% in mean Dice coefficient (\(p=4.51\times 10^{-38}\)), 3.22% in mean SD (\(p=1.21\times 10^{-35}\)), and 14.47% in mean HD (\(p=1.51\times 10^{-15}\)) as compared to learning from a small, fully annotated dataset (3D U-Net + \(\mathcal{D}_{f}^{clean}\)). Adding noise to the data through additional augmentations provided only a marginal gain in the mean performance of the student model, but a considerable decrease in the standard deviations of HD, indicating increased robustness towards variations in the test data. Further, iterating the teacher-student training yielded some
Figure 2: Representative examples of OARs contours. _Top row_: clinically available contours (manual), _Bottom row_: contours generated from OARs segmentation masks predicted by our approach (automatic). Further, the clinical acceptability grades (smaller value indicates better quality) are reported for each OAR.
performance gains, but only till the second iteration. A few representative examples from the results obtained by _basic teacher + robust student_ are shown in Figure 2.
### Comparison with the State-of-the-art (SOTA)
In comparison to SOTA approaches for CT image segmentation for OARs in cervical cancer radiation treatment (shown in Table 2), the performance of our approach seems better for the bowel bag, similar for the bladder and hips, but slightly worse for the rectum. Note that the results in (Wang et al., 2020; Liu et al., 2020, 2020; Rigaud et al., 2021) correspond to a small test dataset resulting from a single random split, which is susceptible to bias introduced during the splitting of the data. In terms of test dataset size, a comparison with (Rhee et al., 2020) is more suitable. However, (Rhee et al., 2020) had a comparatively larger training dataset also and trained separate models for each OAR. We believe that using our approach in combination with the data from (Rhee et al., 2020) may result in a better performance with a single model.
## 5 Clinical Acceptability Test
We conducted a validation study to assess the clinical acceptability of the automatically generated OARs segmentations. We used the _basic teacher + robust student_ model from the first data-split, to predict OARs segmentation masks in the first 4 scans in the test dataset, which were used to generate automatic contours. We showed3 both the clinically available contours and the automatically generated contours to a radiation oncologist (henceforth referred to as 'clinical expert'), without informing them about the method used to generate the contours. The clinical expert graded each contour for its clinical acceptability according to a 4-point Likert scale: 1=acceptable as it is, 2=acceptable but marginally deviating from exact anatomical definition (subjective to an observer), 3=acceptable with minor corrections because either a part of the organ was not delineated or a peripheral tissue was included in the contour, 4=not acceptable because a correction involving both deletion, as well as delineation of an additional contour, was required.
Footnote 3: The contours were presented on 2D transverse slices spaced at a 10mm distance to make it similar to the clinical scenario where the contours are delineated on 2D transverse slices. The clinical expert optionally inspected the contours and scans in coronal and sagittal view also to ensure comprehensiveness.
The clinical acceptability grades for the automatically and manually generated contours for all the graded 2D transverse slices and OARs are shown in Figure 3. None of the contours were given grade 4, implying that all the contours were of acceptable quality either as is or with adaptations.
| | A | B | C | D | E1 | E2 | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bowel bag | - | - | 0.85 | - | 0.78 | 0.78 | 0.86 |
| Bladder | 0.91 | 0.92 | 0.91 | 0.89 | 0.90 | 0.91 | 0.92 |
| Hips | 0.88 | 0.905 | 0.90 | 0.935 | 0.89 | 0.92 | 0.93 |
| Rectum | 0.81 | 0.79 | 0.82 | 0.81 | 0.77 | 0.77 | 0.78 |
| Number of test samples | 25 | 14 | 27 | 140 | 30 | 30 | 105 |

Table 2: Mean Dice coefficients reported in A: (Wang et al., 2020), B: (Liu et al., 2020), C: (Liu et al., 2020), D: (Rhee et al., 2020), E1: (Rigaud et al., 2021) model 1, E2: (Rigaud et al., 2021) model 2, and Ours: _robust teacher + robust student_.
Further, not all of the clinically available contours were graded as 1, representing inter-observer variation. A Chi-squared test of goodness of fit indicated that the histograms of clinical acceptability grades of the automatically generated contours were significantly different from the manually generated contours for the bowel bag (\(\chi^{2}(1,N{=}58)=11.402,p=0.003\)). However, as shown in Figure 3, it was unclear which contours (automatically or manually generated) were better. The clinical acceptability grades for automatically and manually generated contours were not significantly different for the bladder (\(\chi^{2}(1,N{=}27)=2.667,p=0.102\)) and hips (\(\chi^{2}(1,N{=}18)=2.250,p=0.134\)). For the rectum, the Chi-squared test statistics could not be obtained because the frequency counts corresponding to grade 3 were less than 5; however, it is apparent from Figure 3 that the frequency counts in each category were similar for both the automatically and manually generated contours.
Qualitatively, the differences in grade 1 and grade 2 in all the organs were mainly attributed to inter-observer variance. In the case of hips, the window width and window level settings used to visualize the CT scans also influenced the difference between grade 1 and grade 2. Grade 3 corresponded to contours including mesorectum as a part of the bowel bag, and difference in cranial-caudal extent in the rectum.
## 6 Discussion and Conclusions
We investigated the possibility of using a large clinically available dataset of the abdominal region to learn a deep learning model for the automatic segmentation of OARs in cervical cancer radiation treatment. To the best of our knowledge, this is one of the few works in the direction of utilizing a large clinically available dataset containing missing annotations for learning a deep learning model. Our experimental results show that learning from a large dataset using our proposed approach yields significant performance gain despite missing annotations in the data. The obtained segmentations from our deep learning model were of clinically acceptable quality, which is encouraging.
Limitations of our work include an ablation study involving only a single run (i.e., network initialization), and a lack of experiments with different semantic segmentation architectures. Both decisions were consciously taken to find sensible results despite the expensive nature of training deep neural networks. Interesting future directions are 1) extending the
Figure 3: Comparison of clinical acceptability grades (smaller value indicates better quality) for clinically available contours (manual) and the contours generated from OARs segmentation masks predicted by our approach (automatic) for (a): bowel bag, (b): bladder, (c): hips, and (d) rectum.
current work to automatic segmentation of more OARs in cervical cancer radiation treatment e.g., sigmoid and anal canal, and 2) evaluating and learning from datasets of multiple hospitals and demographics to investigate and reduce possible bias in the predictions.
In conclusion, we demonstrated that it is possible to train a deep learning model that predicts clinically acceptable segmentations without using curated and specifically annotated medical imaging data. Apart from saving clinicians' time, our proposed approach leads to faster development because it uses readily available data, and to increased test performance because of the larger dataset size.
The research is part of the research programme, Open Technology Programme with project number 15586, which is financed by the Dutch Research Council (NWO), Elekta (Elekta AB, Stockholm, Sweden) and Xomnia (Xomnia B.V., Amsterdam, the Netherlands). Further, the work is co-funded by the public-private partnership allowance for top consortia for knowledge and innovation (TKIs) from the Ministry of Economic Affairs.
We thank Jan Wiersma (email: [email protected]), Jeroen de Vries (email: [email protected]), and Bart van de Poel (email: [email protected]) for their contributions in obtaining the data, data curation, and data cleaning, respectively, in the initial stage of the project.
|
2303.08307 | Coordinating Fully-Cooperative Agents Using Hierarchical Learning
Anticipation | Learning anticipation is a reasoning paradigm in multi-agent reinforcement
learning, where agents, during learning, consider the anticipated learning of
other agents. There has been substantial research into the role of learning
anticipation in improving cooperation among self-interested agents in
general-sum games. Two primary examples are Learning with Opponent-Learning
Awareness (LOLA), which anticipates and shapes the opponent's learning process
to ensure cooperation among self-interested agents in various games such as
iterated prisoner's dilemma, and Look-Ahead (LA), which uses learning
anticipation to guarantee convergence in games with cyclic behaviors. So far,
the effectiveness of applying learning anticipation to fully-cooperative games
has not been explored. In this study, we aim to research the influence of
learning anticipation on coordination among common-interested agents. We first
illustrate that both LOLA and LA, when applied to fully-cooperative games,
degrade coordination among agents, causing worst-case outcomes. Subsequently,
to overcome this miscoordination behavior, we propose Hierarchical Learning
Anticipation (HLA), where agents anticipate the learning of other agents in a
hierarchical fashion. Specifically, HLA assigns agents to several hierarchy
levels to properly regulate their reasonings. Our theoretical and empirical
findings confirm that HLA can significantly improve coordination among
common-interested agents in fully-cooperative normal-form games. With HLA, to
the best of our knowledge, we are the first to unlock the benefits of learning
anticipation for fully-cooperative games. | Ariyan Bighashdel, Daan de Geus, Pavol Jancura, Gijs Dubbelman | 2023-03-15T01:41:20Z | http://arxiv.org/abs/2303.08307v2 | # Coordinating Fully-Cooperative Agents Using Hierarchical Learning Anticipation
###### Abstract.
Learning anticipation is a reasoning paradigm in multi-agent reinforcement learning, where agents, during learning, consider the anticipated learning of other agents. There has been substantial research into the role of learning anticipation in improving cooperation among self-interested agents in general-sum games. Two primary examples are Learning with Opponent-Learning Awareness (LOLA), which anticipates and shapes the opponent's learning process to ensure cooperation among self-interested agents in various games such as iterated prisoner's dilemma, and Look-Ahead (LA), which uses learning anticipation to guarantee convergence in games with cyclic behaviors. So far, the effectiveness of applying learning anticipation to fully-cooperative games has not been explored. In this study, we aim to research the influence of learning anticipation on coordination among common-interested agents. We first illustrate that both LOLA and LA, when applied to fully-cooperative games, degrade coordination among agents, causing worst-case outcomes. Subsequently, to overcome this miscoordination behavior, we propose Hierarchical Learning Anticipation (HLA), where agents anticipate the learning of other agents in a hierarchical fashion. Specifically, HLA assigns agents to several hierarchy levels to properly regulate their reasonings. Our theoretical and empirical findings confirm that HLA can significantly improve coordination among common-interested agents in fully-cooperative normal-form games. With HLA, to the best of our knowledge, we are the first to unlock the benefits of learning anticipation for fully-cooperative games.
Multi-agent reinforcement learning, Learning anticipation, Hierarchical reasoning, Fully-cooperative games
Footnote †: journal: The 14th Workshop on Optimization and Learning in Multi-Agent Systems (OptLearnMAS-23) at the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), A. Ricci, W. Koch, N. Agran, B. A. (eds.), May 29 - June 2, 2023, London, United Kingdom. © 2023 International Foundation for Autonomous Agents and Multiagent Systems (www.iiamas.org). All rights reserved.
## 1. Introduction
One of the key characteristics of learning in multi-agent systems is the non-stationary environment. As a result, agents should continuously interact with each other and adapt their strategies accordingly. However, in various game settings, these interactions commonly lead to worst-case outcomes for all agents (B
Considering the above, the key research goal of this work is to research the influence of learning anticipation on coordination among fully-cooperative agents. We believe that as multiple agents should interact and cooperate to achieve a common goal, learning anticipation has the potential to improve coordination among agents, resulting in better overall rewards.
To accomplish our goal, we first theoretically prove that in a two-agent two-action coordination game (Beng et al., 2015), both LOLA and LA have the tendency to lead to miscoordination among common-interested agents, causing a worse outcome. To solve this miscoordination problem and improve the applicability of learning anticipation to fully-cooperative games, we then propose Hierarchical Learning Anticipation (HLA), a new learning method explicitly developed for improving coordination in games with common-interested agents. Specifically, HLA assigns all agents to hierarchy levels, which define the reasoning orders of agents. We theoretically prove that HLA can avoid miscoordination in the aforementioned coordination game. Furthermore, we empirically show that in a two-agent three-action coordination game (Beng et al., 2015), HLA, as opposed to LOLA and LA, significantly improves coordination among agents, leading to better overall rewards. Finally, we discuss the shortcomings of our study and provide future research directions.
## 2. Background
Our work assumes a multi-agent task that is commonly described as a Markov Game (MG) (Beng et al., 2015). An MG can be defined as a tuple \((\mathcal{N},\mathcal{S},\{\mathcal{A}_{i}\}_{i\in\mathcal{N}},\{\mathcal{R}_{i}\}_{i\in\mathcal{N}},\mathcal{T},\rho,\gamma)\), where \(\mathcal{N}\) is the set of agents (\(|\mathcal{N}|=n\)), \(\mathcal{S}\) is the set of states, and \(\mathcal{A}_{i}\) is the set of possible actions for agent \(i\in\mathcal{N}\). Agent \(i\) chooses its action \(a_{i}\in\mathcal{A}_{i}\) through the policy network \(\pi_{\theta_{i}}:\mathcal{S}\times\mathcal{A}_{i}\rightarrow[0,1]\), parameterized by \(\theta_{i}\), conditioning on the given state \(s\in\mathcal{S}\). Given the actions of all agents, each agent \(i\) obtains a reward \(r_{i}\) according to its reward function \(\mathcal{R}_{i}:\mathcal{S}\times\mathcal{A}_{1}\times...\times\mathcal{A}_{n}\rightarrow\mathbb{R}\). Given an initial state, the next state is produced according to the state transition function \(\mathcal{T}:\mathcal{S}\times\mathcal{A}_{1}\times...\times\mathcal{A}_{n}\times\mathcal{S}\rightarrow[0,1]\). Given an episode \(\tau\) of horizon \(T\), the discounted return for each agent \(i\) at time step \(t\leq T\) is defined by \(G_{i}^{t}(\tau)=\sum_{l=t}^{T}\gamma^{l-t}r_{i}^{l}\), where \(\gamma\) is a predefined discount factor. The expected return given the agents' policy parameters approximates the state value function for each agent, \(V_{i}(s,\theta_{1},...,\theta_{n})=\mathbb{E}[G_{i}^{t}(\tau)\mid s^{t}=s]\). Each agent \(i\) aims to maximize the expected return given the distribution of the initial state \(\rho(s)\), denoted by the performance objective \(J_{i}=\mathbb{E}_{\rho(s)}V_{i}(s,\theta_{1},...,\theta_{n})\). A _naive agent_ updates its policy parameters in the direction of the objective's gradient
\[\nabla_{\theta_{i}}J_{i}=\mathbb{E}_{\rho(s)}\nabla_{\theta_{i}}V_{i}(s, \theta_{1},...,\theta_{n}). \tag{1}\]
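A minimal sketch of this naive simultaneous gradient ascent for two agents with differentiable objectives (illustrative only; it differentiates given value functions directly rather than estimating Eq. 1 from sampled episodes, and the names are ours):

```python
import torch

def naive_step(theta1, theta2, V1_fn, V2_fn, lr=0.1):
    """Each agent ascends its own objective while treating the other as fixed.

    theta1, theta2: leaf tensors with requires_grad=True.
    """
    g1 = torch.autograd.grad(V1_fn(theta1, theta2), theta1)[0]
    g2 = torch.autograd.grad(V2_fn(theta1, theta2), theta2)[0]
    with torch.no_grad():
        theta1 += lr * g1
        theta2 += lr * g2
    return theta1, theta2
```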
**Learning With Opponent-Learning Awareness (LOLA)**. Unlike naive agents, LOLA agents modify their learning objectives by differentiating through the anticipated learning steps of the opponents (Beng et al., 2015). Given \(n=2\) for simplicity, a first-order LOLA agent assumes a naive opponent and uses policy parameter anticipation to optimize \(V_{1}^{\text{LOLA}}(s,\theta_{1},\theta_{2}+\Delta\theta_{2})\) where \(\Delta\theta_{2}=\mathbb{E}_{\rho(s)}\eta\nabla_{\theta_{2}}V_{2}(s,\theta_{1 },\theta_{2})\) and \(\eta\in\mathbb{R}^{+}\) is the prediction length. Using first-order Taylor expansion and by differentiating with respect to \(\theta_{1}\), the gradient adjustment for the first LOLA agent (Beng et al., 2015) is given by
\[\nabla_{\theta_{1}}V_{1}^{\text{LOLA}}(s,\theta_{1},\theta_{2}+ \Delta\theta_{2})\approx \nabla_{\theta_{1}}V_{1}+(\nabla_{\theta_{2}\theta_{1}}V_{1})^{ \intercal}\Delta\theta_{2}\] \[+\underbrace{(\nabla_{\theta_{1}}\Delta\theta_{2})^{\intercal} \nabla_{\theta_{2}}V_{1}}_{\text{shaping}}, \tag{2}\]
where \(V_{1}=V_{1}(s,\theta_{1},\theta_{2})\). The rightmost term in the LOLA update allows for active shaping of the opponent's learning. This term has been proven effective in enforcing cooperation in various games with self-interested agents, including IPD (Beng et al., 2015; Beng et al., 2015). The LOLA update can be further extended to non-naive opponents, resulting in HOLA agents (Beng et al., 2015; Beng et al., 2015).
**Look Ahead (LA)**. LA agents assume that the opponents' learning steps cannot be influenced, i.e., cannot be shaped (Beng et al., 2015; Beng et al., 2015). In other words, agent one assumes that the prediction step, \(\Delta\theta_{2}\), is independent of the current optimization, i.e., \(\nabla_{\theta_{1}}\Delta\theta_{2}=0\). Therefore, the shaping term disappears, and the gradient adjustment for the first-order LA agent will be
\[\nabla_{\theta_{1}}V_{1}^{\text{LA}}(s,\theta_{1},\theta_{2}+\bot\ \Delta\theta_{2})\approx\nabla_{\theta_{1}}V_{1}+(\nabla_{\theta_{2}\theta_{ 1}}V_{1})^{\intercal}\Delta\theta_{2}, \tag{3}\]
where \(\bot\) prevents gradient flowing from \(\Delta\theta_{2}\) upon differentiation.
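Both update rules can be written compactly with automatic differentiation. The sketch below is illustrative only: it differentiates through the anticipated opponent step directly instead of using the first-order Taylor expansions in Eqs. 2 and 3, and the function names are ours:

```python
import torch

def lola_la_grad(theta1, theta2, V1_fn, V2_fn, eta=1.0, shape_opponent=True):
    """Gradient for agent one under LOLA (shape_opponent=True) or LA (False)."""
    dV2 = torch.autograd.grad(V2_fn(theta1, theta2), theta2, create_graph=True)[0]
    delta2 = eta * dV2                       # anticipated naive step of the opponent
    if not shape_opponent:
        delta2 = delta2.detach()             # Look-Ahead: the step cannot be shaped
    V1_anticipated = V1_fn(theta1, theta2 + delta2)
    return torch.autograd.grad(V1_anticipated, theta1)[0]
```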
The benefits of LOLA and LA have been frequently shown throughout the literature in games with self-interested agents (Beng et al., 2015; Beng et al., 2015; Beng et al., 2015). In the next section, we analyze the effectiveness of LOLA and LA in fully-cooperative games with common-interested agents, i.e., \(\mathcal{R}_{i}=\mathcal{R}_{j}\ \forall i,j\in\mathcal{N}\) and, consequently, \(V_{i}=V_{j}\ \forall i,j\in\mathcal{N}\).
## 3. Miscoordination Analysis in Fully-Cooperative Games
To investigate the influence of LOLA and LA on coordination among common-interested agents, we consider a two-agent two-action coordination game (Beng et al., 2015) with an added miscoordination penalty. The game is defined by a common reward matrix
\[\mathcal{R}_{1}=\mathcal{R}_{2}=\begin{bmatrix}\alpha&k\\ k&\alpha\end{bmatrix}, \tag{4}\]
where \(\alpha>0\) is the coordination reward and \(k\leq 0\) is the miscoordination penalty. We further define \(g=\alpha-k>0\) as the miscoordination regret. The agents are parameterized by \(\theta_{1}\in[0,1]\) and \(\theta_{2}\in[0,1]\), denoting the probability of choosing the first action by agents one and two, respectively. With a joint strategy \((\theta_{1},\theta_{2})\), the common value function of the game is
\[V_{1}(\theta_{1},\theta_{2})=V_{2}(\theta_{1},\theta_{2})=2g\theta_{1}\theta_{2}-g(\theta_{1}+\theta_{2})+\alpha. \tag{5}\]
The game has two equilibrium points, i.e., \((\theta_{1}=0,\theta_{2}=0)\) and \((\theta_{1}=1,\theta_{2}=1)\), where each agent receives a reward of \(\alpha\). Furthermore, the two miscoordination points of the game are \((\theta_{1}=0,\theta_{2}=1)\) and \((\theta_{1}=1,\theta_{2}=0)\), where agents receive a penalty of \(k\). To solve the coordination game, agents iteratively adjust their parameters in the direction of the value function gradients, i.e., \(\partial V_{1}/\partial\theta_{1}\) and \(\partial V_{2}/\partial\theta_{2}\) in the naive update rule, \(\partial V_{1}^{\text{LOLA}}/\partial\theta_{1}\) and \(\partial V_{2}^{\text{LOLA}}/\partial\theta_{2}\) in LOLA, and \(\partial V_{1}^{\text{LA}}/\partial\theta_{1}\) and \(\partial V_{2}^{\text{LA}}/\partial\theta_{2}\) in LA. Therefore, we can analyze the dynamics of \(\theta_{1}\) and \(\theta_{2}\) under LA, LOLA, and naive updates to investigate their behaviors.
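These dynamics can also be simulated numerically. The following sketch uses simple Euler integration with the parameters clipped to \([0,1]\) (one possible way of handling the constraint, not taken from the paper) and reproduces the behaviors analyzed in the theorems below:

```python
import numpy as np

def simulate(theta, g, eta, method, dt=0.01, steps=2000):
    for _ in range(steps):
        dV1, dV2 = 2 * g * theta[1] - g, 2 * g * theta[0] - g   # gradients of Eq. 5
        second = 2 * g * eta                                     # eta * d^2V / (dtheta1 dtheta2)
        if method == "naive":
            grad = np.array([dV1, dV2])
        elif method == "la":
            grad = np.array([dV1 + second * dV2, dV2 + second * dV1])
        else:  # "lola": in this game the shaping term doubles the second-order part
            grad = np.array([dV1 + 2 * second * dV2, dV2 + 2 * second * dV1])
        theta = np.clip(theta + dt * grad, 0.0, 1.0)
    return theta

for method in ["naive", "la", "lola"]:
    print(method, simulate(np.array([0.9, 0.2]), g=5.0, eta=1.0, method=method))
    # naive reaches an equilibrium point; la and lola end at a miscoordination point
```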
**Theorem 3.1**.: _If, in the previously defined two-agent two-action coordination game with a miscoordination regret \(g\), the agents are
_updated following the LA method and a fixed prediction length \(\eta\), then they can be subject to miscoordination for \(g>\frac{1}{2\eta}\)._
**Proof**. Given Eq. 3, the unconstrained dynamics of LA agents can be defined by the following differential equations:
\[\begin{bmatrix}d\theta_{1}/dt\\ d\theta_{2}/dt\end{bmatrix}=\begin{bmatrix}4\eta g^{2}&2g\\ 2g&4\eta g^{2}\end{bmatrix}\begin{bmatrix}\theta_{1}\\ \theta_{2}\end{bmatrix}-\begin{bmatrix}2\eta g^{2}+g\\ 2\eta g^{2}+g\end{bmatrix}. \tag{6}\]
This system of equations has a unique fixed point (zero gradients) at \(\theta_{1}=\theta_{2}=0.5\) (see Figure 1). The eigenvalue analysis of the coefficient matrix yields two real eigenvalues, \(\lambda_{1}=4\eta g^{2}+2g\) and \(\lambda_{2}=4\eta g^{2}-2g\), and two respective diagonal and off-diagonal eigenvectors. While \(\lambda_{1}\) is always positive, the sign of \(\lambda_{2}\) depends on the values of both \(\eta\) and \(g\). For a fixed prediction length, non-positive values of \(\lambda_{2}\) are reached by \(g\leq\frac{1}{2\eta}\). In this case, the fixed point is an unstable saddle point (or unstable line in case of \(\lambda_{2}=0\)), and the agents, with any initial values of \(\theta_{1}\) and \(\theta_{2}\) (except on the fixed point itself), converge to the equilibrium points (see Figure 1-Left). However, when the miscoordination regret increases, \(g>\frac{1}{2\eta}\), the fixed point becomes an unstable (source) point (see Figure 1-Right). Therefore, some initial values of \(\theta_{1}\) and \(\theta_{2}\) naturally lead to the miscoordination points (\(\theta_{1}=0,\theta_{2}=1\)), and Theorem 3.1 is proved.
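The sign change of \(\lambda_{2}\) at \(g=\frac{1}{2\eta}\) can be checked numerically with a few lines (illustrative sketch):

```python
import numpy as np

def la_eigenvalues(g, eta=1.0):
    A = np.array([[4 * eta * g**2, 2 * g],
                  [2 * g, 4 * eta * g**2]])
    return np.sort(np.linalg.eigvals(A))

for g in (0.4, 0.5, 0.6):            # with eta = 1 the threshold is g = 0.5
    print(g, la_eigenvalues(g))       # saddle (opposite signs) below it, source above it
```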
**Theorem 3.2**.: _If, in the previously defined two-agent two-action coordination game with a miscoordination regret \(g\), the agents are updated following the LOLA method and a fixed prediction length \(\eta\), then they can be subject to miscoordination for \(g>\frac{1}{4\eta}\)._
**Proof**. Given the LOLA update rule in Eq. 2, the unconstrained dynamics can be defined as
\[\begin{bmatrix}d\theta_{1}/dt\\ d\theta_{2}/dt\end{bmatrix}=\begin{bmatrix}8\eta g^{2}&2g\\ 2g&8\eta g^{2}\end{bmatrix}\begin{bmatrix}\theta_{1}\\ \theta_{2}\end{bmatrix}-\begin{bmatrix}4\eta g^{2}+g\\ 4\eta g^{2}+g\end{bmatrix}. \tag{7}\]
This system of equations has a unique fixed point, again at \(\theta_{1}=\theta_{2}=0.5\) (see Figure 1). The eigenvalue analysis of the coefficient matrix yields two real eigenvalues, \(\lambda_{1}=8\eta g^{2}+2g\) and \(\lambda_{2}=8\eta g^{2}-2g\), and two respective diagonal and off-diagonal eigenvectors. Similar to the case of LA agents, \(\lambda_{1}\) is always positive, and the sign of \(\lambda_{2}\) depends on the values of both \(\eta\) and \(g\). For a fixed prediction length, non-positive values of \(\lambda_{2}\) are reached by \(g\leq\frac{1}{4\eta}\). In this case, the fixed point is an unstable saddle point (or unstable line in case of \(\lambda_{2}=0\)), and the agents, with any initial values of \(\theta_{1}\) and \(\theta_{2}\) (except on the fixed point itself), converge to the equilibrium points. However, when the miscoordination regret increases, \(g>\frac{1}{4\eta}\), the fixed point becomes an unstable (source) point. Therefore, some initial values of \(\theta_{1}\) and \(\theta_{2}\) naturally lead to the miscoordination points, and Theorem 3.2 is proved.
**Theorem 3.3**.: _If, in the previously defined two-agent two-action coordination game with a miscoordination regret \(g\), the agents follow the naive updates, then they are never subject to miscoordination for any value of \(g\)._
**Proof**. In the case of the naive agents, we have
\[\begin{bmatrix}d\theta_{1}/dt\\ d\theta_{2}/dt\end{bmatrix}=\begin{bmatrix}0&2g\\ 2g&0\end{bmatrix}\begin{bmatrix}\theta_{1}\\ \theta_{2}\end{bmatrix}-\begin{bmatrix}g\\ g\end{bmatrix}. \tag{8}\]
Similar to the case of LOLA and LA, this system of equations has a unique fixed point (zero gradients) at \(\theta_{1}=\theta_{2}=0.5\). The eigenvalue analysis of the coefficient matrix yields two real eigenvalues, \(\lambda_{1}=2g\) and \(\lambda_{2}=-2g\), and two respective diagonal and off-diagonal eigenvectors. This time, however, the eigenvalues are of opposite signs for any values of \(g\), and the fixed point is always an unstable saddle point. Therefore, any initial values of \(\theta_{1}\) and \(\theta_{2}\) (except on the fixed point itself) naturally lead to the equilibrium points.
A closer inspection of LOLA and LA methods reveals two important aspects of their fundamental ideas.
1. Anticipating other agents' learning is only effective when it is close to their true future learning. Both LOLA and LA assume a reasoning order for other agents. If this assumption is wrong, it can negatively affect the coordination among the cooperative agents. For self-interested agents, it is natural for them to not unveil their true reasoning orders to each other as they have different goals. However, common-interested agents can benefit more from this reasoning information to achieve their common goal.
2. The idea of shaping other agents' learning can be misleading if the other agents do not follow, making agents more likely to suffer from miscoordination. LOLA agents constantly underestimate each other, and each agent intends to shape the learning of others. Letcher et al. (2016) indicated that these arrogant behaviors lead to outcomes that are strictly worse for all agents. It is also clear from Theorems 3.1 and 3.2 that the range of \(g\) that can lead to miscoordination in LOLA (\(g>\frac{1}{4\eta}\)) is larger than the range of \(g\) in LA (\(g>\frac{1}{2\eta}\)).
Given the above discussion, we hypothesize that if agents are informed of the reasoning orders and properly follow the shaping plans, they can improve coordination among themselves.
## 4. Hierarchical Learning
Anticipation
In this section, we propose _Hierarchical Learning Anticipation_ (HLA), a methodology designed to improve coordination among fully-cooperative agents. In contrast to LOLA and LA, HLA determines a hierarchy among the agents to specify their reasoning orders. Specifically, we first assign \(n\) agents to \(n\) hierarchy levels, where levels one and \(n\) are the lowest and highest hierarchy levels, respectively. In each hierarchy level, the assigned agent is a _leader_ of the
Figure 1. Phase planes of the LA and LOLA dynamics. Left: unstable saddle fixed point. Right: unstable fixed point.
lower hierarchy levels and a _follower_ of the higher ones, with two reasoning rules:
1. A leader knows the reasoning orders of the followers and is one level higher
2. A follower cannot shape the leaders and only follows their shaping plans
With these reasoning rules, we can address the two previously mentioned shortcomings of LOLA and LA. Specifically, the first reasoning rule makes sure that a leader has correct assumptions about the followers' reasoning orders and, consequently, can accurately anticipate their future behaviors. With the second reasoning rule, we can control the shaping plans of the agents and make sure that the shaping plans are followed. Below, we describe the update rules in our proposed HLA.
For simplicity, we set \(n=2\), and we assume that agents one (\(\theta_{1}\)) and two (\(\theta_{2}\)) are assigned to the hierarchy levels one and two, respectively. In other words, agent one is a naive follower, and agent two is a first-order leader. Based on our first reasoning rule, the leader performs first-order reasoning, and its gradient adjustment is similar to a first-order LOLA agent:
\[\begin{split}\nabla_{\theta_{2}}V^{\text{HLA-Leader}}(s, \theta_{1}+\Delta\theta_{1},\theta_{2})\simeq&\nabla_{\theta_{2 }}V+(\nabla_{\theta_{1}\theta_{2}}V)^{\top}\Delta\theta_{1}\\ &+(\nabla_{\theta_{2}}\Delta\theta_{1})^{\top}\nabla_{\theta_{ 1}}V,\end{split} \tag{9}\]
where \(V=V(s,\theta_{1},\theta_{2})\) is the common value function, and \(\Delta\theta_{1}=\mathbb{E}_{\rho(s)}\eta\nabla_{\theta_{1}}V\). However, unlike first-order LOLA agents, the first-order leader knows the reasoning level of the follower, which is a naive agent. The shaping plan of the first-order leader is to change its parameters as
\[\tilde{\theta}_{2}=\theta_{2}+\mathbb{E}_{\rho(s)}\eta\nabla_{\theta_{2}}V^{ \text{HLA-Leader}}(s,\theta_{1}+\Delta\theta_{1},\theta_{2}), \tag{10}\]
so that an optimal increase in the common value is achieved after its new parameters are taken into account by the naive follower. Therefore, based on our second reasoning rule, the naive follower must follow the plan and adjust its parameters through
\[\begin{split}\nabla_{\theta_{1}}V^{\text{HLA-Follower}}(s,\theta_{1},\tilde{\theta}_{2})\approx&\nabla_{\theta_{1}}V\\ &+(\nabla_{\theta_{2}\theta_{1}}V)^{\top}\mathbb{E}_{\rho(s)}\eta\nabla_{\theta_{2}}V^{\text{HLA-Leader}}(s,\theta_{1}+\Delta\theta_{1},\theta_{2}),\end{split} \tag{11}\]
Since the follower can no longer shape the leader, the shaping term in Eq. (11) is zero. Therefore, the gradient adjustment for the naive follower in HLA is similar to that of a first-order LA agent, which predicts the leader parameters as \(\theta_{2}+\mathbb{E}_{\rho(s)}\eta\nabla_{\theta_{2}}V^{\text{HLA-Leader}}(s,\theta_{1}+\Delta\theta_{1},\theta_{2})\).
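For the two-agent case above, the leader and follower updates can be sketched with automatic differentiation as follows (illustrative only; the sketch differentiates through the anticipated steps directly rather than using the first-order Taylor expansions, and assumes \(\theta_{1},\theta_{2}\) are leaf tensors with `requires_grad=True`):

```python
import torch

def hla_step(theta1, theta2, V_fn, eta=1.0, lr=0.1):
    """One update of a naive follower (theta1) and a first-order leader (theta2)."""
    # Leader (Eq. 9): anticipate the naive follower's step; keeping its dependence
    # on theta2 in the graph includes the shaping term.
    d1 = eta * torch.autograd.grad(V_fn(theta1, theta2), theta1, create_graph=True)[0]
    leader_grad = torch.autograd.grad(V_fn(theta1 + d1, theta2), theta2)[0]
    # Follower (Eqs. 10-11): take the leader's announced plan as given (detached),
    # i.e. follow the plan without trying to shape it.
    theta2_plan = (theta2 + eta * leader_grad).detach()
    follower_grad = torch.autograd.grad(V_fn(theta1, theta2_plan), theta1)[0]
    with torch.no_grad():
        theta1 += lr * follower_grad
        theta2 += lr * leader_grad
    return theta1, theta2
```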
Algorithm 1 illustrates the HLA update rules for the case of \(n\) agents. In the theorem below, we show how HLA agents can effectively avoid miscoordination in the two-agent two-action coordination game.
**Theorem 4.1**.: _If, in the previously defined two-agent two-action coordination game with a miscoordination regret \(g\), the agents are updated following HLA, then they are not subject to miscoordination for any value of \(g\)._
**Proof**. Given Eqs. 9 and 11, the unconstrained dynamics of the HLA agents can be defined by the following differential equations
\[\begin{bmatrix}d\theta_{1}/dt\\ d\theta_{2}/dt\end{bmatrix}=\begin{bmatrix}4\eta g^{2}&2g+16\eta^{2}g^{3}\\ 2g&8\eta g^{2}\end{bmatrix}\begin{bmatrix}\theta_{1}\\ \theta_{2}\end{bmatrix}-\begin{bmatrix}8\eta^{2}g^{3}+4\eta g^{2}+g\\ 4\eta g^{2}+g\end{bmatrix}, \tag{12}\]
resulting in a unique fixed point at \(\theta_{1}=\theta_{2}=0.5\) and two real eigenvalues, \(\lambda=6\eta g^{2}\pm 2g\sqrt{9\eta^{2}g^{2}+1}\). Unlike the case of LOLA and LA, the eigenvalues are now of opposite signs for any values of \(g\), and the fixed point is always an unstable saddle point. Therefore, any initial values of \(\theta_{1}\) and \(\theta_{2}\) (except on the fixed point itself) naturally lead to the equilibrium points, and Theorem 4.1 is proved.
With Theorem 4.1, we have shown that HLA naturally avoids miscoordination, and therefore, our hypothesis is correct. However, the main goal is to demonstrate that HLA improves coordination among common-interested agents with respect to naive learning, which does not take into account learning anticipation at all. If HLA does not improve the coordination, there is no clear benefit over the naive learners, as they also avoid miscoordination.
To further show the benefits of HLA, we employ a standard two-agent three-action coordination game [2]. The game has a common
Figure 2: Converged results for various values of miscoordination regret in the three-action coordination game.
reward matrix
\[\mathcal{R}_{1}=\mathcal{R}_{2}=\begin{bmatrix}10&0&k\\ 0&2&0\\ k&0&10\end{bmatrix}, \tag{13}\]
and we define \(g=10-k\) as the miscoordination regret. Each agent \(i\in\{1,2\}\) is parameterized with three parameters: \(\theta_{i}=\{\theta_{i}^{1},\theta_{i}^{2},\theta_{i}^{3}\}\) (\(\theta_{i}^{j}>0\;\forall j\in\{1,2,3\}\) and \(\sum_{j}\theta_{i}^{j}=1\)), representing the probability of taking the actions one, two, and three, respectively. Consequently, the common value function of the game can be defined as
\[V_{1}=V_{2}=10(\theta_{1}^{1}\theta_{2}^{1}+\theta_{1}^{3}\theta_{2}^{3})+2\theta_{1}^{2}\theta_{2}^{2}+k(\theta_{1}^{1}\theta_{2}^{3}+\theta_{1}^{3}\theta_{2}^{1}). \tag{14}\]
The game has two global equilibrium points (\(\theta_{1}^{1}=1,\theta_{2}^{1}=1\) and \(\theta_{1}^{3}=1,\theta_{2}^{3}=1\)), one local equilibrium point (\(\theta_{1}^{2}=1,\theta_{2}^{2}=1\)), and two miscoordination points (\(\theta_{1}^{1}=1,\theta_{2}^{3}=1\) and \(\theta_{1}^{3}=1,\theta_{2}^{1}=1\)).
In Figure 2, we depict the converged results for this game for naive, LA, LOLA, and HLA agents, for various values of miscoordination regret \(g\). The experiments are run 500 times until convergence, with random initializations. For HLA, we randomly assigned the agents to the hierarchy levels (leader and follower) in each experiment. From Figure 2, we find that both LA and LOLA agents are subject to miscoordination for high values of \(g\), which is consistent with our findings for the two-action coordination game. However, the most interesting aspect of this experiment is that by increasing the value of \(g\), the coordination among the naive agents reduces, leading them to the local equilibrium point, whereas our HLA consistently achieves the highest reward, independently of the miscoordination regret. These results clearly show the benefits of HLA over other methods.
## 5. Discussion and Future Work
In this study, we extended the applicability of learning anticipation to games with fully-cooperative agents. We demonstrated that methods such as LOLA and LA, which heavily benefit from learning anticipation in general-sum games, can significantly reduce coordination when the agents are fully-cooperative. We first hypothesized that when agents know the reasoning orders of others and properly follow the shaping plans, they could improve coordination among themselves. To verify our hypothesis, we proposed the novel HLA method, which incorporates a hierarchy to regulate the reasoning orders and shaping plans of agents. Having HLA, we can now use the benefits of learning anticipation in fully-cooperative games.
Nevertheless, there are still some unanswered questions about applying HLA to more complex fully-cooperative games. For instance, we limit ourselves to fully-cooperative normal-form games with differentiable objective functions where agents can access gradients and Hessians. In many multi-agent problems, the objective functions are non-differentiable. In these cases, the agents must estimate the higher-order gradients with various approximation methods. Furthermore, the agents may not have access to other agents' exact parameters, and they may need to infer the parameters from state-action trajectories. Consequently, future studies on the current topic are required.
|
2303.13041 | gDoc: Automatic Generation of Structured API Documentation | Generating and maintaining API documentation with integrity and consistency
can be time-consuming and expensive for evolving APIs. To solve this problem,
several approaches have been proposed to automatically generate high-quality
API documentation based on a combination of knowledge from different web
sources. However, current researches are weak in handling unpopular APIs and
cannot generate structured API documentation. Hence, in this poster, we propose
a hybrid technique(namely \textit{gDoc}) for the automatic generation of
structured API documentation. We first present a fine-grained search-based
strategy to generate the description for partial API parameters via computing
the relevance between various APIs, ensuring the consistency of API
documentation. Then, we employ the cross-modal pretraining Seq2Seq model M6 to
generate a structured API document for each API, which treats the document
generation problem as a translation problem. Finally, we propose a heuristic
algorithm to extract practical parameter examples from API request logs. The
experiments evaluated on the online system show that this work's approach
significantly improves the effectiveness and efficiency of API document
generation. | Shujun Wang, Yongqiang Tian, Dengcheng He | 2023-03-23T05:11:24Z | http://arxiv.org/abs/2303.13041v1 | # gDoc: Automatic Generation of Structured API Documentation
###### Abstract.
Generating and maintaining API documentation with integrity and consistency can be time-consuming and expensive for evolving APIs. To solve this problem, several approaches have been proposed to automatically generate high-quality API documentation based on a combination of knowledge from different web sources. However, current researches are weak in handling unpopular APIs and cannot generate structured API documentation. Hence, in this poster, we propose a hybrid technique(namely _gDoc_) for the automatic generation of structured API documentation. We first present a fine-grained search-based strategy to generate the description for partial API parameters via computing the relevance between various APIs, ensuring the consistency of API documentation. Then, we employ the cross-modal pretraining Seq2Seq model M6 to generate a structured API document for each API, which treats the document generation problem as a translation problem. Finally, we propose a heuristic algorithm to extract practical parameter examples from API request logs. The experiments evaluated on the online system show that this work's approach significantly improves the effectiveness and efficiency of API document generation.
API, Documentation, Seq2Seq, Search, M6
* gDoc exploits the internal relationship between APIs to realize parameter-level API documentation generation (see Figure 1(b), black dotted line).
* gDoc adopts the Seq2Seq model to translate API metadata into API documentation, treating the document generation problem as a translation problem. Hence, gDoc can generate a document for any API (see Figure 1(b), red dotted line).
* We propose a heuristic algorithm to extract valuable parameter examples from API request logs.
* gDoc has been fully applied to the generation of Alibaba Cloud API documents. The adoption rate of parameter descriptions and examples generated by gDoc exceeds 80%.
## 2. Running Example
As shown in Figure 2, we demonstrate an automated production process for API documentation. When the documentation engineer clicks into an API document, the blank fields in the document will be assigned a series of recommended values (hidden under the smart recommendation button).
## 3. Search-Based Generation
**A Phenomenon:** We have noticed that the same parameter often appears in multiple API documents.
Tables 2 and 3 exhibit the input and output parameters of two APIs (i.e., SendSms and AddSmsSign).
**Assumption:** We believe that the presence of the same parameters across multiple API documents is widespread. Furthermore, we assume that the same parameter tends to carry the same semantics in different API documents (i.e., the same description and example).
**Analysis:** As shown in Figure 3, we conducted a fine-grained analysis of Alibaba Cloud's 29217 APIs. These APIs have a total of 390796 parameters. After deduplication, the total number of parameters is 108882, a compression ratio of 3.56, which means that a parameter appears in 3.56 API documents on average. Figure 3 assumes that all parameters with the same name have the same meaning, but this assumption may not hold in general, so we verify it further. We cluster the parameter descriptions according to the parameter names and take the similarity of the two most similar descriptions in each cluster as the set similarity. The average similarity over all sets is 0.72, which supports our hypothesis: parameters with the same name are likely to have the same meaning.
**Accomplish:** Given a blank API document \(D_{i}\), for any parameter \(p_{i,j}\) in \(D_{i}\), we recommend the descriptions and example values of parameters named \(p_{i,j}\) in other API documents as candidate items, as sketched below. This method improves both the production efficiency and the consistency of API documents.
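A minimal sketch of this lookup is shown below (our illustration, not the production implementation; the document schema, the field names, and the same-product tie-breaking rule are assumptions):

```
from collections import defaultdict

def build_parameter_index(api_docs):
    """Index every documented parameter by name across existing API documents.

    Each document is assumed to look like:
    {"api": "SendSms", "product": "Dysmsapi",
     "parameters": [{"name": "SignName", "description": "Signature Name", "example": "..."}]}
    """
    index = defaultdict(list)
    for doc in api_docs:
        for p in doc.get("parameters", []):
            if p.get("description"):  # only reuse fields that are already filled in
                index[p["name"]].append({
                    "source_api": doc["api"],
                    "source_product": doc.get("product"),
                    "description": p["description"],
                    "example": p.get("example"),
                })
    return index

def recommend(blank_doc, index, same_product_first=True):
    """For each undocumented parameter p_ij of a blank document D_i, collect the
    descriptions/examples of same-named parameters from other documents."""
    recommendations = {}
    for p in blank_doc.get("parameters", []):
        if p.get("description"):
            continue  # already documented, nothing to recommend
        candidates = list(index.get(p["name"], []))
        if same_product_first:  # APIs from the same producer are more likely to agree
            candidates.sort(key=lambda c: c["source_product"] != blank_doc.get("product"))
        recommendations[p["name"]] = candidates
    return recommendations
```

The candidates are then surfaced to the documentation engineer, who accepts or rejects them; only accepted items count towards the acceptance rate reported in Section 6.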
## 4. Translation-Based Generation
Search-based generation does not guarantee that candidate descriptions will be generated for all API parameters. Therefore, in this section we propose a translation-based approach to produce a high-quality description for all parameters.
Natural language generation (NLG), also known as text generation, is one of the essential tasks in natural language processing(Krizhevsky et al., 2017). This paper concentrates on applying the Seq2Seq model to solve the OpenAPI documentation generation problem, dramatically reducing document production and maintenance overhead.
Table 4 exhibits an example of OpenAPI documentation. In our work, we mainly focus on generating the _Description_ field by translating the combination of the OpenAPI parameter name and the OpenAPI name.
We first introduce the RNN-based Seq2Seq model. A recurrent neural network (RNN) recurrently computes a vector, called the recurrent state or hidden state \(h_{n}\), from a sequence of words \(\{\omega_{1},\omega_{2},\ldots,\omega_{N}\}\):
\[h_{n}=f\left(h_{n-1},\omega_{n}\right),n\in(1,N),h_{0}=0\]
Here \(h_{0}\) denotes the initial state and is always set to zero at training time. Usually, \(h_{n}\) depends on the current word \(\omega_{n}\) and
\begin{table}
\begin{tabular}{l|c|c|l} \hline **Parameter** & **Type** & **Required** & **Description** \\ \hline \hline SignName & String & Yes & Signature Name \\ PhoneNumbers & String & Yes & Phone Numbers \\ \hline \end{tabular}
\end{table}
Table 2. Input parameters of the SendSms API
previous ones before the current time step. In the equation above, \(f\) denotes a parametrized nonlinear function: a sigmoid, a hyperbolic tangent, a long short-term memory (LSTM) cell, or a gated recurrent unit (GRU). The hidden state loses long-range contextual information when a vanilla RNN, such as a sigmoid or hyperbolic tangent, is used. LSTM and GRU can handle longer-term contexts by bringing in a memory cell. Moreover, GRU requires less computation than LSTM. Thus, GRU is used as the RNN cell unit. The equations of GRU are summarized as follows:
\[z_{t} =\sigma\left(W_{z}\omega_{t}+U_{z}h_{t-1}\right)\] \[r_{t} =\sigma\left(W_{r}\omega_{t}+U_{r}h_{t-1}\right)\] \[\widetilde{h}_{t} =\tanh\left(W\omega_{t}+U\left(r_{t}*h_{t-1}\right)\right)\] \[h_{t} =(1-z_{t})*h_{t-1}+z_{t}*\widetilde{h}_{t}\]
In the equations above, \(\sigma\) is the nonlinear function, i.e., the logistic sigmoid, which limits the output to the range \([0,1]\). \(z_{t}\) is the update gate, deciding how much incoming information is kept, and \(r_{t}\) is the reset gate, determining the weight of the last state. The candidate update \(\widetilde{h}_{t}\) controls the fraction of information obtained from \(h_{t-1}\) through the reset gate. The final update \(h_{t}\) depends on the update gate and the candidate update. The subscript \(t\) represents the time step.
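For illustration, a single GRU cell implementing the equations above can be sketched as follows (a minimal NumPy version we provide for clarity; the shapes, the random initialisation, and the class interface are illustrative and not those of the production system):

```
import numpy as np

class GRUCell:
    """A single GRU cell implementing the update-gate/reset-gate equations above."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1  # small random initialisation, for illustration only
        self.Wz, self.Uz = s * rng.normal(size=(hidden_dim, input_dim)), s * rng.normal(size=(hidden_dim, hidden_dim))
        self.Wr, self.Ur = s * rng.normal(size=(hidden_dim, input_dim)), s * rng.normal(size=(hidden_dim, hidden_dim))
        self.W, self.U = s * rng.normal(size=(hidden_dim, input_dim)), s * rng.normal(size=(hidden_dim, hidden_dim))
        self.hidden_dim = hidden_dim

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def step(self, w_t, h_prev):
        z_t = self._sigmoid(self.Wz @ w_t + self.Uz @ h_prev)     # update gate
        r_t = self._sigmoid(self.Wr @ w_t + self.Ur @ h_prev)     # reset gate
        h_cand = np.tanh(self.W @ w_t + self.U @ (r_t * h_prev))  # candidate update
        return (1.0 - z_t) * h_prev + z_t * h_cand                # new hidden state

    def encode(self, word_vectors):
        """Run the recurrence h_n = f(h_{n-1}, w_n) with h_0 = 0 over a word sequence."""
        h = np.zeros(self.hidden_dim)
        for w in word_vectors:
            h = self.step(w, h)
        return h
```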
In this paper, we employ the Seq2Seq model provided by M6 (Dong et al., 2017), a cross-modal pretraining method (Multi-Modality to Multi-Modality Multitask Mega-transformer) for unified pretraining on data of a single modality and of multiple modalities. M6 scales the model size to 10 billion and 100 billion parameters, making it the largest Chinese pretrained model. This paper applies the M6 model to OpenAPI documentation generation and demonstrates its strong performance.
## 5. Parameter Example Generation
Parameter examples are essential. An intuitive way to obtain them is to extract them from the API request logs. However, randomly extracting examples from a massive set of values is not enough. Take the Alibaba Cloud API request logs as an example: there are more than 16 billion OpenAPI requests every day. Analyzing the full daily logs requires substantial hardware resources and, more importantly, a great deal of time. We believe that the example values of a parameter should be representative. For instance, if 90% of a parameter's values consist purely of English letters, the example should be drawn from that 90% rather than from the remaining values.
As shown in Algorithm 1, we take a two-stage approach, Mapper and Reducer, to extract common parameter features from a massive volume of values. Specifically, we emphasize three types of parameter features:
* Component Element Analysis.
* Element Arrangement Analysis.
* Length Analysis.
There are two main stages of parameter abstraction:
**Mapper:** In the mapper stage, we mainly focus on extracting local features of parameter values. Specifically,
* **Extracting Common Subsequence** will extract the longest common subsequences in the parameter set \(v_{i}\).
* **Transform And Compress** translates a specific parameter value into an abstract value according to the rules in Table 5 (a sketch follows Table 5). The transform step abstracts each character; for example, \(123\) is abstracted as _ddd_. The compress step then collapses runs of identical abstract characters, further translating _ddd_ into the single character \(d\).
* **LengthComputing** computes the lengths of parameter values and their frequencies.
**Reducer:** In this stage, the local parameter features are combined to extract the global features of the parameters.
An example output of Algorithm 1 is as follows:
```
{
  "parameter_pattern": "X_d",
  "rate": 0.994,
  "examples": [
    "SMS_41515455",
    "SNS_210775383",
    "SNS_216370632",
    "SNS_173474376",
    "SNS_186610582",
    "SNS_182400031"
  ],
  "common_100": "",     // Common string for 100%
  "common_80": "SNS_",  // Common string for 80%
  "common_60": "SNS_"   // Common string for 60%
}
```
\begin{table}
\begin{tabular}{l|l} \hline Representation & Description \\ \hline \(z\) & The Chinese character \\ \(x\) & English lowercase characters \\ \(X\) & English uppercase characters \\ \(d\) & Number \\ & Other characters reserved \\ \hline \end{tabular}
\end{table}
Table 5. Transform Rules
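To illustrate the Transform And Compress step and the rules of Table 5, the following sketch abstracts raw parameter values and extracts the dominant pattern; the helper names and the simple majority heuristic are ours, and the full Algorithm 1 additionally handles common subsequences, lengths, and the Mapper/Reducer split.

```
import re
from collections import Counter

def transform(value: str) -> str:
    """Abstract each character according to the rules of Table 5."""
    out = []
    for ch in value:
        if '\u4e00' <= ch <= '\u9fff':
            out.append('z')        # Chinese character
        elif ch.isascii() and ch.islower():
            out.append('x')        # English lowercase character
        elif ch.isascii() and ch.isupper():
            out.append('X')        # English uppercase character
        elif ch.isdigit():
            out.append('d')        # number
        else:
            out.append(ch)         # other characters are kept as-is
    return ''.join(out)

def compress(abstract: str) -> str:
    """Collapse runs of the same abstract character, e.g. 'ddd' -> 'd'."""
    return re.sub(r'(.)\1+', r'\1', abstract)

def dominant_pattern(values):
    """Return the most frequent abstract pattern and the fraction of values it covers."""
    patterns = Counter(compress(transform(v)) for v in values)
    pattern, count = patterns.most_common(1)[0]
    return pattern, count / len(values)

# e.g. dominant_pattern(["SMS_41515455", "SMS_210775383"]) returns ("X_d", 1.0),
# matching the "parameter_pattern" and "rate" fields of the example output above.
```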
## 6. Effectiveness Evaluation
Alibaba Cloud has numerous document development engineers responsible for generating API documents. We deploy gDoc online to observe the acceptance rate of gDoc output results. For example, given a parameter TemplateCode, _gDoc_ recommends candidate content. If the documentation engineer selects one of the recommended items as the content of the API documentation, the recommendation is considered valid.
\[\text{Acceptance Rate}=\frac{\text{Valid Recommendations}}{\text{All Recommendations}} \tag{1}\]
### Overall Evaluation
As shown in Figure 4, we tracked the acceptance rate of gDoc over the last ten weeks; it has remained above 80%, which attests to the effectiveness of _gDoc_. To date, gDoc has been deployed on Alibaba Cloud for more than one year, with an overall acceptance rate of 83.8%.
### Search-based Generation Evaluation
The acceptance rate of the search-based document generation method stays at around 90%. This method exploits the intrinsic relationship between APIs: if API \(\alpha\) and API \(\beta\) come from the same producer (or belong to the same category) and both contain parameters with the same name, then the two parameters are likely to have the same meaning.
### Translation-based Generation Evaluation
Unlike previous approaches that assemble API documentation from existing web knowledge, the translation-based method produces content directly and can therefore generate documentation for any API. Although its acceptance rate is lower than that of the search-based method, it remains high: an acceptance rate above 70% shows that translation-based methods are viable for API documentation generation.
## 7. Conclusions
In this research, we presented _gDoc_, an automatic API documentation generation method that maintains up-to-date documentation with usage examples for an evolving API. Our findings provide evidence that _gDoc_ can be used as a practical OpenAPI documentation tool. By leveraging existing API documentation and a Seq2Seq model to document APIs, we have reduced the human effort in the API documentation process through automation while improving the quality of the resulting documentation. In addition, we discussed techniques for handling flexible API elements with custom content and for keeping the API's documentation aligned with every step of its life cycle, establishing a quick feedback loop.
###### Acknowledgements.
This work was supported by Alibaba Group through Alibaba Innovative Research Program.
|
2310.04405 | Resummed spinning waveforms from five-point amplitudes | We compute the classical tree-level five-point amplitude for the two-to-two
scattering of spinning celestial objects with the emission of a graviton. Using
this five-point amplitude, we then turn to the computation of the leading-order
time-domain gravitational waveform. The method we describe is suitable for
arbitrary values of classical spin of Kerr black holes and does not require any
expansion in powers of the spin. In this paper we illustrate it in the simpler
case of the scattering of one Kerr and one Schwarzschild black hole. An
important ingredient of our calculation is a novel form of the Compton
amplitude with spinning particles including contact terms derived from matching
to black-hole perturbation theory calculations. This ensures that our waveform
is valid up to at least fourth order in the spin. Our method can be applied
immediately to generate improved waveforms once higher-order contact terms in
the Compton amplitude become available. Finally, we show the formula for the
gravitational memory to all orders in the spin, which is in agreement with our
results. | Andreas Brandhuber, Graham R. Brown, Gang Chen, Joshua Gowdy, Gabriele Travaglini | 2023-10-06T17:54:57Z | http://arxiv.org/abs/2310.04405v3 | # Resummed spinning waveforms from five-point amplitudes
###### Abstract
We compute the classical tree-level five-point amplitude for the two-to-two scattering of spinning celestial objects with the emission of a graviton. Using this five-point amplitude, we then turn to the computation of the leading-order time-domain gravitational waveform. The method we describe is suitable for arbitrary values of classical spin of Kerr black holes and does not require any expansion in powers of the spin. In this paper we illustrate it in the simpler case of the scattering of one Kerr and one Schwarzschild black hole. An important ingredient of our calculation is a novel form of the Compton amplitude with spinning particles including contact terms derived from matching to black-hole perturbation theory calculations. This ensures that our waveform is valid up to at least fourth order in the spin. Our method can be applied immediately to generate improved waveforms once higher-order contact terms in the Compton amplitude become available. Finally, we show the formula for the gravitational memory to all orders in the spin, which is in agreement with our results.
###### Contents
* 1 Introduction
* 2 Kinematics of the scattering and spin variables
* 3 Classical gravitational Compton amplitude with spin
* 3.1 Three-point amplitude
* 3.2 The Compton amplitude
* 4 Spinning five-point amplitude
* 5 The time-domain waveform
* 5.1 Waveforms from amplitudes
* 5.2 A scalar warm-up
* 5.3 General expression of the time-domain waveform for arbitrary spins
* 6 The waveform from the scattering of a Schwarzschild and a Kerr black hole
* 6.1 The \(q_{1}^{2}\)-channel
* 6.2 The \(q_{2}^{2}\)-channel
* 6.3 Discussion of the resummed spin waveform
* 7 Comparison with the spin-expanded integrand
* 8 Gravitational memory
* 8.1 General strategy
* 8.2 Four-point two-to-two spinning amplitude
* 8.3 Fourier transform to impact parameter space
* 8.4 Result for the gravitational memory
* A Simplifying the four-point amplitude
* B More on the integrand
## 1 Introduction
Since the first direct observation of gravitational waves [1; 2; 3; 4; 5], a flurry of observations and theoretical predictions have greatly advanced the fields of black-hole physics and general relativity. Important questions regarding the intrinsic properties of black holes, the dynamics of binary black-hole processes, and more, can all be investigated in depth through high-precision gravitational-wave observations and theoretical calculations.
One widely used and highly successful analytical tool for the study of binary black-hole systems is the Post-Newtonian (PN) expansion [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39] and the effective one-body formulation [40; 41; 42; 43; 44; 45]. Recently, several varieties of modern methods, e.g. the double copy [46; 47; 48; 49; 50; 51; 52; 53], the Kosower-Maybee-O'Connell (KMOC) formalism [54], heavy-mass effective theories [55; 56; 57; 58; 59; 60; 61; 62], the eikonal approach [63; 64; 65; 66], velocity cuts and the exponential representation of the \(S\)-matrix [67; 68; 69], worldline effective theory [70; 71; 72; 73; 74; 75] and worldline quantum field theory [76; 77; 78], have emerged as powerful theoretical frameworks for studying binary black-hole physics to high Post-Minkowskian (PM) order from different points of view. In particular they have been successfully applied to compute the conservative part of the binary dynamics of gravitationally interacting systems [67; 69; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107] to high orders in the PM expansion.
Research directly focused on the gravitational waveforms of binary black-hole systems in the PM expansion is evolving rapidly. The tree-level waveforms for spinless objects were computed in [108; 109; 110] and reproduced in [77; 111] in the worldline picture. The tree-level waveform was studied in [112; 113] using the scattering-amplitude based KMOC formalism [54; 112] and investigated using the eikonal approach in [63; 64]. At one loop, the study of the gravitational waveform was initiated recently in [114; 115; 116; 62] where the principal value contribution was obtained and shown to be consistent between KMOC and a heavy-mass effective field theory (HEFT) framework. The remaining terms beyond this principal value part were pointed out in [117] and shown to give an additional contribution to the waveform. The existence of such terms was also suggested by comparing with the Multipolar-Post-Minkowskian waveform in [118].
Gravitational waveforms are influenced by various intrinsic properties of black holes. One of the most significant factors among them is their spin. An important building block for including spin effects in waveforms is the minimal coupling between a classical spinning black hole and a graviton obtained using the massive spinorhelicity formalism [119]. Further important developments made use of spinor helicity [120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 56; 133; 135], the covariant amplitude form [136; 137; 138; 139; 140; 141; 43], gravitational solutions [144; 145; 146; 147; 148], and the worldline picture [76; 77; 103; 106; 149; 150]. At tree level, the spin contribution to the waveform up to quadratic order was obtained in [151; 152] using a worldline effective theory.
In this paper, by employing the definition of waveforms in terms of five-point amplitudes [112], we compute gravitational waveforms involving spinning black holes, crucially without the need to expand in their spin. The building blocks entering the recursive BCFW construction [153; 154] of the five-point amplitude, adapted to the classical amplitude [62], are the three-point and four-point Compton amplitudes with massive particles of arbitrary classical spin, which were constructed in [155; 156] using a bootstrap technique which makes use of entire functions. After expanding in spin, this form of this Compton amplitude agrees with results obtained from black-hole perturbation theory [134; 142; 145] for Kerr black holes up to at least fourth order in spin.
In this work we mainly focus on the time-domain waveform. First, we perform the Fourier transform over the frequency; the exponential factors in the spinning amplitude then produce a simple extra delta function when transforming to impact parameter space [151; 77; 114]. This additional delta function localises the integral further and simplifies the tree-level amplitude greatly. Finally, thanks to Cauchy's theorem, the remaining one-dimensional integral localises to contour integrals around physical poles only. We stress here that our approach does not require any expansion in the spin parameters. Importantly, this allows us to preserve the (partially) resummed form of the Compton amplitude, and thus enables us to obtain a first glimpse at large-spin effects in gravitational waveforms.
The rest of the paper is organised as follows. In the next section we introduce the kinematics of the process, together with the definition of the spin variables we employ. In Section 3 we introduce the three-point amplitude and the Compton amplitude with spinning particles. These are then used in Section 4 to construct the five-point amplitude of four massive spinning particles with the emission of a gravitational wave, using a particular form of BCFW recursion relation introduced in [62] for classical amplitudes. In Section 5 we introduce the general method to compute the time-domain waveforms and illustrate how this computation reduces to a sum of residues on physical factorisation poles only, in the simpler case of spinless particles. We then present the general expression of the waveform for arbitrary spins of the two black holes. In Section 6 we specialise to the case of a Schwarzschild and a Kerr black hole, and also present several plots of the waveforms for increasing values of the spin of the Kerr black hole. In Section 7 we make some interesting observations by comparing the waveforms obtained using the resummed Compton amplitudes to those derived from the Compton amplitudes expanded in the spin parameter. Section 8 presents a short derivation of the memory of the gravitational wave in the spinning case, to all orders in the spins of the celestial objects, which we have then used to test our analytic results. Finally, two appendices complete the paper. In Appendix A we perform some useful simplification of the expression of the four-point Compton amplitude, which are convenient in the derivation of the
memory; and in Appendix B we list the coefficients appearing in the \(q_{1}^{2}\)- and \(q_{2}^{2}\)-channels of the classical, tree-level five-point amplitude derived in Section 5.
The interested reader can find _Mathematica_ notebooks with expressions for the spinning HEFT amplitudes with one emitted graviton, together with explicit time-domain waveform results for the system of a Schwarzschild and a Kerr black hole, in the _Spinning-Waveform_ GitHub repository.
**Note added:** While preparing this manuscript we became aware of the nice work [157], with which our paper has some overlap. We have checked that our results agree with theirs.
## 2 Kinematics of the scattering and spin variables
Here we review the kinematics of the scattering of two heavy spinning particles of masses \(m_{1}\) and \(m_{2}\) and spin vectors \(a_{1}\) and \(a_{2}\), with the emission of a graviton of momentum \(k\):
[Diagram: two-to-two scattering of the two heavy spinning particles with the emission of a graviton of momentum \(k\); the external momenta are labelled \(p_{i}=\bar{p}_{i}+q_{i}/2\) and \(p_{i}^{\prime}=\bar{p}_{i}-q_{i}/2\).]
As usual we have introduced barred variables, defined as [158; 87]
\[\begin{split} p_{1}&=\bar{p}_{1}+\frac{q_{1}}{2}\,, \qquad p_{1}^{\prime}=\bar{p}_{1}-\frac{q_{1}}{2}\,,\\ p_{2}&=\bar{p}_{2}+\frac{q_{2}}{2}\,,\qquad p_{2}^{ \prime}=\bar{p}_{2}-\frac{q_{2}}{2}\,,\end{split} \tag{2}\]
which satisfy
\[\bar{p}_{1}\!\cdot\!q_{1}=\bar{p}_{2}\!\cdot\!q_{2}=0\,. \tag{3}\]
We also introduce barred masses,
\[\bar{m}_{i}^{2}\coloneqq\bar{p}_{i}^{2}=m_{i}^{2}-\frac{q_{i}^{2}}{4}\,, \qquad i=1,2\,, \tag{4}\]
with the HEFT expansion being organised in powers of the \(\bar{m}_{i}\).
To parameterise the scattering process we choose five independent Lorentz-invariant quantities as in [62],
\[y\coloneqq v_{1}\!\cdot\!v_{2}\geq 1\,\qquad q_{i}^{2}\leq 0\,\qquad w_{i} \coloneqq v_{i}\!\cdot\!k\geq 0\,,\qquad i=1,2, \tag{5}\]
where the four-velocities are defined from \(p_{i}{=}m_{i}v_{i}\), with \(v_{i}^{2}{=}1\). We also note that \(y\) is the relativistic factor \(\frac{1}{\sqrt{1-v_{\rm rel}^{2}}}\), where \(v_{\rm rel}\) is the relative velocity of one of the two heavy objects in the rest frame of the other. We will also be using the barred versions \(\bar{w}_{i}\coloneqq\bar{v}_{i}{\cdot}k\) and \(\bar{y}\coloneqq\bar{v}_{1}{\cdot}\bar{v}_{2}\) of the above quantities, with \(\bar{p}_{i}\coloneqq\bar{m}_{i}\bar{v}_{i}\) and \(\bar{v_{i}}^{2}=1\).
The spin tensors for incoming and outgoing massive particles in terms of the spin vectors \(s_{i}\) are given by:
\[S_{i}^{\sigma\nu}(p_{i})=-\frac{1}{m_{i}}\epsilon^{\sigma\nu\alpha\beta}p_{i\,\alpha}s_{i\,\beta}(p_{i}),\qquad S_{i}^{\sigma\nu}(p_{i}^{\prime})=-\frac{1}{m_{i}}\epsilon^{\sigma\nu\alpha\beta}p_{i\,\alpha}^{\prime}s_{i\,\beta}(p_{i}^{\prime}) \tag{6}\]
To expand this in the heavy-mass limit we change variables from \(p_{i},p_{i}^{\prime}\) to \(\bar{p}_{i}\) and \(q_{i}\) as in (2). We follow the method of [125] and use an infinitesimal Lorentz transformation from \(\bar{p}_{i}\) to \(\bar{p}_{i}\pm\frac{q}{2}\) to write
\[\begin{split} s_{i}^{\mu}\big{(}\bar{p}_{i}\pm\frac{q}{2}\big{)}& =(\delta_{\nu}^{\mu}\pm\omega^{\mu}{}_{\nu})s_{i}(\bar{p}_{i})^{ \nu}\\ &=\Big{[}\delta_{\nu}^{\mu}\mp\frac{1}{2\bar{m}^{2}}(\bar{p}_{i}^ {\mu}q_{\nu}-q^{\mu}\bar{p}_{i\,\nu})\Big{]}s_{i}(\bar{p}_{i})^{\nu}\\ &=s_{i}(\bar{p}_{i})^{\mu}\mp\frac{\bar{p}_{i}^{\mu}}{2\bar{m}^{2 }}q{\cdot}s_{i}(\bar{p}_{i})+\mathcal{O}(\bar{m}^{-2})\,.\end{split} \tag{7}\]
This is valid since \(\bar{m}_{i}\) (which will eventually be the classical mass) is much larger than the typical value of \(q\). This lets us expand the spin tensors as
\[\begin{split} S_{i}^{\sigma\nu}(p)&=-\frac{1}{ \bar{m}_{i}}\epsilon^{\sigma\nu\alpha\beta}\bar{p}_{i\,\alpha}s_{i\,\beta}( \bar{p})-\frac{1}{2\bar{m}_{i}}\epsilon^{\sigma\nu\alpha\beta}q_{\alpha}s_{i\, \beta}(\bar{p}_{i})+\cdots\,,\\ S_{i}^{\sigma\nu}(p^{\prime})&=-\frac{1}{\bar{m}_{ i}}\epsilon^{\sigma\nu\alpha\beta}\bar{p}_{i\,\alpha}s_{i\,\beta}(\bar{p}_{i})+ \frac{1}{2\bar{m}_{i}}\epsilon^{\sigma\nu\alpha\beta}q_{\alpha}s_{i\,\beta}( \bar{p}_{i})+\cdots\,,\end{split} \tag{8}\]
where, remarkably, the shifts in \(s_{i}^{\mu}(p_{i}^{(\prime)})\) drop out to this order in the \(\bar{m}\) expansion, due to the antisymmetry of the Levi-Civita. We can also define the classical spin parameter as
\[a_{i}^{\mu}\coloneqq\frac{s_{i}^{\mu}(\bar{p}_{i})}{\bar{m}_{i}}\,, \tag{9}\]
to write
\[\begin{split} S_{i}^{\sigma\nu}(p_{i}^{\prime})&=- \epsilon^{\sigma\nu\alpha\beta}\Big{(}\bar{p}_{i\,\alpha}-\frac{q_{\alpha}}{2} \Big{)}a_{i\,\beta}+\mathcal{O}(\bar{m}_{i}^{-1})\,,\\ S_{i}^{\sigma\nu}(p_{i})&=-\epsilon^{\sigma\nu \alpha\beta}\Big{(}\bar{p}_{i\,\alpha}+\frac{q_{\alpha}}{2}\Big{)}a_{i\,\beta} +\mathcal{O}(\bar{m}^{-1})\,.\end{split} \tag{10}\]
Finally, in the large \(\bar{m}_{i}\) limit the two spin tensors in (10) become the same, and we define our classical spin tensors as
\[S_{i}^{\mu\nu}\coloneqq-\epsilon^{\mu\nu\rho\sigma}\bar{p}_{i\, \rho}a_{i\,\sigma}\,, \tag{11}\]
which satisfies \(S_{i}^{\mu\nu}\bar{p}_{i\,\nu}=0\) by the antisymmetry of the Levi-Civita tensor.
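As a quick numerical cross-check of this definition (our sketch; the metric signature, the normalisation \(\epsilon^{0123}=+1\), and the chosen momentum and spin values are illustrative), one can build \(S_{i}^{\mu\nu}\) explicitly and verify its antisymmetry and its transversality to \(\bar{p}_{i}\):

```
import numpy as np
from itertools import permutations

eta = np.diag([1.0, -1.0, -1.0, -1.0])       # mostly-minus Minkowski metric

eps = np.zeros((4, 4, 4, 4))                 # Levi-Civita symbol, eps[0,1,2,3] = +1
for perm in permutations(range(4)):
    sign, p = 1, list(perm)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    eps[perm] = sign

m = 2.0
v3 = np.array([0.2, -0.1, 0.3])              # an arbitrary sub-luminal 3-velocity
gamma = 1.0 / np.sqrt(1.0 - v3 @ v3)
pbar = m * gamma * np.array([1.0, *v3])      # heavy momentum (upper index)
a = np.array([0.0, 0.4, -0.7, 0.1])          # a generic spin vector (upper index)

pbar_lo, a_lo = eta @ pbar, eta @ a          # lowered indices
S = -np.einsum('mnrs,r,s->mn', eps, pbar_lo, a_lo)          # Eq. (11)

print(np.allclose(S, -S.T))                                  # antisymmetry
print(np.allclose(np.einsum('mn,n->m', S, pbar_lo), 0.0))    # S^{mu nu} pbar_nu = 0
```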
The four-point classical Compton amplitude can be divided into three pieces [156],
\[\mathcal{M}_{4}=\frac{-i\mathcal{N}_{\mathrm{dc}}}{2k_{1}\!\cdot\!k_{2}}+\frac{-i \mathcal{N}_{\mathrm{r}}}{4\bar{p}\!\cdot\!k_{1}\bar{p}\!\cdot\!k_{2}}-i \mathcal{N}_{\mathrm{c}}. \tag{10}\]
The first term is computed ignoring the "spin flip effect" and can be obtained using the double copy [155],
\[\mathcal{N}_{\mathrm{dc}} =-\Bigg{[}\frac{\mathsf{w}_{1}\!\cdot\!F_{1}\!\cdot\!F_{2}\!\cdot \!\mathsf{w}_{2}}{k_{1}\!\cdot\!\bar{p}}-\Big{(}iG_{2}\left(x_{1},x_{2}\right) \left(a\!\cdot\!F_{1}\!\cdot\!F_{2}\!\cdot\!S\!\cdot\!k_{2}+a\!\cdot\!F_{2}\! \cdot\!F_{1}\!\cdot\!S\!\cdot\!k_{1}\right)\] \[+iG_{1}\left(x_{12}\right)\mathrm{tr}\left(F_{1}\!\cdot\!S\!\cdot \!F_{2}\right)+G_{1}(x_{1})G_{1}(x_{2})(a\!\cdot\!F_{1}\!\cdot\!\bar{p}a\! \cdot\!F_{2}\!\cdot\!k_{1}-a\!\cdot\!F_{1}\!\cdot\!k_{2}a\!\cdot\!F_{2}\!\cdot \!\bar{p})\] \[+k_{1}\!\cdot\!\bar{p}\,G_{1}(x_{1})G_{1}(x_{2})a\!\cdot\!F_{1} \!\cdot\!F_{2}\!\cdot\!a\bigg{)}\Bigg{]}\Big{(}\frac{\bar{p}\!\cdot\!F_{1}\! \cdot\!F_{2}\!\cdot\!\bar{p}}{k_{2}\!\cdot\!\bar{p}}\Big{)}\,. \tag{11}\]
with
\[x_{i}\coloneqq k_{i}\!\cdot\!a\,,\quad x_{i\ldots j}\coloneqq(k_{i}+\cdots+k _{j})\!\cdot\!a\,\quad F_{i}^{\mu\nu}=k_{i}^{\mu}\varepsilon_{i}^{\nu}- \varepsilon_{i}^{\mu}k_{i}^{\nu}\,, \tag{12}\]
Note that it contains both massless and massive poles and that we have already taken the HEFT expansion. This term is the minimal amplitude needed to reproduce the scattering angle of a test particle in the Kerr metric.
The second term is due to the spin-flip effect, and only gives rise to massive poles,
\[\mathcal{N}_{\mathrm{r}} =\Big{(}(\partial_{x_{1}}-\partial_{x_{2}})G_{1}(x_{1})G_{1}(x_{2 })\Big{)}\] \[\times\Big{(}\bar{p}\!\cdot\!k_{2}(\bar{p}^{2}a\!\cdot\!F_{1}\! \cdot\!F_{2}\!\cdot\!aa\!\cdot\!F_{2}\!\cdot\!F_{1}\!\cdot\!\bar{p}+a^{2}\bar {p}\!\cdot\!F_{1}\!\cdot\!F_{2}\!\cdot\!\bar{p}a\!\cdot\!F_{1}\!\cdot\!F_{2} \!\cdot\!\bar{p})-(1\leftrightarrow 2)\Big{)}\] \[+i\Big{(}(\partial_{x_{1}}-\partial_{x_{2}})G_{2}(x_{1},x_{2}) \Big{)}\] \[\times\Big{(}\bar{p}\!\cdot\!k_{2}a\!\cdot\!F_{2}\!\cdot\!F_{1} \!\cdot\!\bar{p}(a\!\cdot\!F_{2}\!\cdot\!\bar{p}a\!\cdot\!\widetilde{F}_{1} \!\cdot\!\bar{p}\!-\!a\!\cdot\!F_{1}\!\cdot\!\bar{p}a\!\cdot\!\widetilde{F}_{2 }\!\cdot\!\bar{p})+(1\leftrightarrow 2)\Big{)}\,, \tag{13}\]
where \(\widetilde{F}^{\mu\nu}\equiv\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}\) denotes the Hodge dual of the linearised field strength. The last contribution consists of contact terms,
\[\mathcal{N}_{\mathrm{c}} =\Big{(}\frac{(\partial_{x_{1}}-\partial_{x_{2}})^{2}}{2!}G_{1}(x _{1})G_{1}(x_{2})\Big{)}\Big{(}a\!\cdot\!F_{1}\!\cdot\!\bar{p}a\!\cdot\!F_{2} \!\cdot\!\bar{p}a\!\cdot\!F_{1}\!\cdot\!F_{2}\!\cdot\!a \tag{14}\] \[-\frac{1}{2}a^{2}(a\!\cdot\!F_{1}\!\cdot\!F_{2}\!\cdot\!\bar{p}a \!\cdot\!F_{2}\!\cdot\!F_{1}\!\cdot\!\bar{p}-a\!\cdot\!F_{1}\!\cdot\!F_{2}\! \cdot\!a\bar{p}\!\cdot\!F_{1}\!\cdot\!F_{2}\!\cdot\!\bar{p})\Big{)}\] \[+e_{1}\Big{(}\frac{i(\partial_{x_{1}}-\partial_{x_{2}})^{2}}{2! }G_{2}(x_{1},x_{2})\Big{)}\Big{(}a\!\cdot\!F_{1}\!\cdot\!F_{2}\!\cdot\!aa\! \cdot\!F_{2}\!\cdot\!\bar{p}a\!\cdot\!\widetilde{F}_{1}\!\cdot\!\bar{p}-(1 \leftrightarrow 2)\Big{)}\,.\]
The \(G\)-functions appearing in the expressions above can be defined in terms of hyperbolic functions as [155]
\[G_{1}(x):=\frac{\sinh(x)}{x}\,,\]
\[G_{2}(x_{1},x_{2}):=\frac{1}{x_{2}}\Big{(}\frac{\sinh(x_{12})}{x_{12}}-\cosh(x_{2} )\,\frac{\sinh(x_{1})}{x_{1}}\Big{)}, \tag{21}\]
and are entire functions, free of singularities. Note that \(G_{2}(x_{2},x_{1})=-G_{2}(x_{1},x_{2})\).
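These properties can be checked symbolically with a short sketch (ours, not part of the original derivation): it expands \(G_{1}\) about the origin, verifies that \(G_{2}\) has no singularity at \(x_{2}=0\), and confirms the antisymmetry under exchange of its arguments.

```
import sympy as sp

x, x1, x2 = sp.symbols('x x1 x2')

G1 = sp.sinh(x) / x
G2 = (sp.sinh(x1 + x2) / (x1 + x2) - sp.cosh(x2) * sp.sinh(x1) / x1) / x2

print(sp.series(G1, x, 0, 6))          # 1 + x**2/6 + x**4/120 + O(x**6): no negative powers
print(sp.limit(G2, x2, 0))             # finite limit, so no pole at x2 = 0

swapped = G2.subs([(x1, x2), (x2, x1)], simultaneous=True)
print(sp.simplify(sp.expand_trig(swapped + G2)))   # expected: 0, i.e. G2(x2,x1) = -G2(x1,x2)
```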
The above contact terms in the first two lines of (20) only begin contributing at quartic order in the spin and their numerical coefficients have been fixed against results at quartic order in [142; 134; 145] arising from black-hole perturbation theory. The last line in (20) contributes from quintic order in the spin and its numerical coefficient \(e_{1}\) is fixed by the conjectured "spin-shift symmetry" applied at this order [142; 133] to be \(e_{1}=-3/4\).
Following the pattern suggested by the entire functions, we could also include further contact terms, for example
\[\begin{split}& e_{2}\Big{(}\frac{i(\partial_{x_{1}}-\partial_{x_{2}})^{ 2}}{2!}G_{2}(x_{1},x_{2})\Big{)}\Big{(}\bar{p}^{2}(a\!\cdot\!F_{1}\!\cdot\!F_{2 }\!\cdot\!a)(a\!\cdot\!F_{2}\!\cdot\!\widetilde{F}_{1}\!\cdot\!a)-(1 \leftrightarrow 2)\Big{)}\,,\\ & e_{3}\Big{(}\frac{i(\partial_{x_{1}}-\partial_{x_{2}})^{2}}{2! }G_{2}(x_{1},x_{2})\Big{)}\Big{(}a^{2}(a\!\cdot\!F_{2}\!\cdot\!F_{1}\!\cdot\! \bar{p})(a\!\cdot\!\widetilde{F}_{1}\!\cdot\!F_{2}\!\cdot\!\bar{p})-(1 \leftrightarrow 2)\Big{)}\,.\end{split} \tag{22}\]
These terms would start contributing at \(\mathcal{O}(a^{5})\) but they are ruled out by spin-shift symmetry. In the future one may want to add these contact terms and relax the spin-shift symmetry constraint, however we do not consider this in this paper. Therefore, the results derived here are applicable to Kerr black holes at least to quartic order in the spin. Thus, we will set \(e_{1}=-3/4,\,e_{2}=0,\,e_{3}=0\), although our method makes it easy to deal with any values of the \(e_{i}\)'s and also further contact terms starting at \(\mathcal{O}(a^{6})\) and beyond.
## 4 Spinning five-point amplitude
The crucial ingredient to compute the waveforms is the classical part of the five-point amplitude of two spinning celestial objects with one radiated graviton.1 It can be derived using the HEFT BCFW recursion relation introduced in [62] and is obtained from the following two recursive diagrams,
Footnote 1: In the next section we will see that actually only the residues on the physical factorisation channels are needed for computing the waveform. However, since the computation of the five-point amplitude is so simple we cannot resist presenting it here.
[Diagrams: the two HEFT BCFW factorisation channels, obtained by gluing a three-point amplitude on one heavy line (\(\bar{v}_{1},a_{1}\) or \(\bar{v}_{2},a_{2}\)) to a four-point Compton amplitude on the other through an exchanged graviton; they give the \(q_{1}^{2}\)- and \(q_{2}^{2}\)-channel contributions.]
As in the spinless case, these two diagrams determine the classical amplitude up to possible contact terms (i.e. terms without poles in \(q_{1}^{2}\) or \(q_{2}^{2}\) but possibly with massive poles). In the spinning case we will follow the same procedure and, although we have no general proof that these contact terms are captured fully, we have checked that the contributions from the two BCFW diagrams satisfy the correct soft behaviour. Regardless, such contact terms without poles in \(q_{1}^{2}\) or \(q_{2}^{2}\) do not contribute to the tree-level waveform as we will see in Sections 5 and 7.
The contribution of each of the two diagrams is obtained by gluing a three-point amplitude with a four-point Compton amplitude, given in (11) and (12), respectively. In doing so one has to sum over the intermediate states of the exchanged graviton, using
\[\sum_{h}\varepsilon^{\mu_{a}}_{-\bar{q}}\varepsilon^{\nu_{a}}_{-\bar{q}} \varepsilon^{\mu_{b}}_{\bar{q}}\varepsilon^{\nu_{b}}_{\bar{q}}=\frac{1}{2} \Big{[}\eta^{\mu_{a}\mu_{b}}\eta^{\nu_{a}\nu_{b}}+\eta^{\mu_{a}\nu_{b}}\eta^{ \nu_{a}\mu_{b}}-\frac{2}{D-2}\eta^{\mu_{a}\nu_{a}}\eta^{\mu_{b}\nu_{b}}\Big{]}\;. \tag{13}\]
For convenience, we introduce a tensor current by extracting the polarisation vector from the Compton amplitude:
\[\mathcal{J}^{\mu\nu}_{i}\varepsilon_{\mu\nu}=\mathcal{M}_{4}(-q_{i},k,\bar{v }_{i},a_{i}),\hskip 28.452756pt\mathcal{J}^{\mu\nu}_{i,0}\varepsilon_{\mu\nu}= \mathcal{M}_{4}(-q_{i},k,\bar{v}_{i},a_{i}=0)\,. \tag{14}\]
Then, the amplitude in each channel is of the form
\[\mathcal{M}_{q_{1}^{2}} =\frac{1}{q_{1}^{2}}\sum_{h}\Big{(}\cosh(a_{1}\!\cdot\!q_{1})( \bar{v}_{1}\!\cdot\!\varepsilon)^{2}\varepsilon_{\mu\nu}\mathcal{J}^{\mu\nu}_{ 1}-iG(a_{1}\!\cdot\!q_{1})\bar{v}_{1}\!\cdot\!\varepsilon q_{1}\!\cdot\!S_{1}\! \cdot\!\varepsilon\varepsilon_{\mu\nu}\mathcal{J}^{\mu\nu}_{1}\Big{)}\] \[=\frac{1}{q_{1}^{2}}\Big{(}\cosh(a_{1}\!\cdot\!q_{1})\big{(}\bar{ v}_{1}\!\cdot\!\mathcal{J}_{1}\!\cdot\!\bar{v}_{1}-\frac{1}{2}\mathrm{tr}( \mathcal{J}_{1})\big{)}-\frac{i}{2}G(a_{1}\!\cdot\!q_{1})\big{(}q_{1}\!\cdot\! S_{1}\!\cdot\!\mathcal{J}_{1}\!\cdot\!\bar{v}_{1}-\bar{v}_{1}\!\cdot\!\mathcal{J}_{1}\! \cdot\!S_{1}\!\cdot\!q_{1})\Big{)}\,, \tag{15}\]
and
\[\mathcal{M}_{q_{2}^{2}} =\frac{1}{q_{2}^{2}}\sum_{h}\Big{(}\cosh(a_{2}\!\cdot\!q_{2})(\bar{v}_{2}\!\cdot\!\varepsilon)^{2}\varepsilon_{\mu\nu}\mathcal{J}^{\mu\nu}_{2}-iG(a_{2}\!\cdot\!q_{2})\bar{v}_{2}\!\cdot\!\varepsilon q_{2}\!\cdot\!S_{2}\!\cdot\!\varepsilon\varepsilon_{\mu\nu}\mathcal{J}^{\mu\nu}_{2}\Big{)}\] \[=\frac{1}{q_{2}^{2}}\Big{(}\cosh(a_{2}\!\cdot\!q_{2})\big{(}\bar{v}_{2}\!\cdot\!\mathcal{J}_{2}\!\cdot\!\bar{v}_{2}-\frac{1}{2}\mathrm{tr}(\mathcal{J}_{2})\big{)}-\frac{i}{2}G(a_{2}\!\cdot\!q_{2})\big{(}q_{2}\!\cdot\!S_{2}\!\cdot\!\mathcal{J}_{2}\!\cdot\!\bar{v}_{2}-\bar{v}_{2}\!\cdot\!\mathcal{J}_{2}\!\cdot\!S_{2}\!\cdot\!q_{2}\big{)}\Big{)}\,. \tag{16}\]
The full amplitude is obtained by directly adding (15) and (16),
\[\mathcal{M}_{5,\mathrm{HEFT}}=\mathcal{M}_{q_{1}^{2}}+\mathcal{M}_{q_{2}^{2}}\,. \tag{17}\]
Both channels have the spurious pole \(\frac{1}{k\cdot q_{1}}\), which cancels after summing the two contributions. To see this, we must use the Bianchi identity in \(D\)-dimensional momentum space [163]
\[A\!\cdot\!F_{k}\!\cdot\!B\ k\!\cdot\!C+B\!\cdot\!F_{k}\!\cdot\!C\ A\!\cdot\!k+C \!\cdot\!F_{k}\!\cdot\!A\ k\!\cdot\!B=0\,, \tag{18}\]
where \(A,B,C\) can be any vector. For example, a particular application is
\[\bar{v}_{1}\!\cdot\!S_{2}\!\cdot\!F_{k}\!\cdot\!q_{2}=\frac{k\!\cdot\!q_{2}\bar{v }_{1}\!\cdot\!F_{k}\!\cdot\!S_{2}\!\cdot\!\bar{v}_{1}-k\!\cdot\!S_{2}\!\cdot\! \bar{v}_{1}\bar{v}_{1}\!\cdot\!F_{k}\!\cdot\!q_{2}}{q_{2}\!\cdot\!\bar{v}_{1}}. \tag{4.8}\]
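The identity (18) is straightforward to verify numerically. The sketch below assumes the usual linearised field strength \(F_{k}^{\mu\nu}=k^{\mu}\varepsilon^{\nu}-k^{\nu}\varepsilon^{\mu}\) (its definition is not repeated in this section) and a mostly-minus metric; these conventions and the random test vectors are illustrative assumptions, not part of the paper's setup.

```python
import numpy as np

# Minkowski metric, mostly-minus signature (an assumed convention)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

rng = np.random.default_rng(0)
k = rng.normal(size=4)                         # the identity does not require k^2 = 0
eps = rng.normal(size=4) + 1j * rng.normal(size=4)
A, B, C = (rng.normal(size=4) for _ in range(3))

# linearised field strength F_k^{mu nu} = k^mu eps^nu - k^nu eps^mu
F = np.outer(k, eps) - np.outer(eps, k)

def sandwich(x, y):
    # x . F_k . y, indices lowered with the metric
    return x @ eta @ F @ eta @ y

lhs = (sandwich(A, B) * dot(k, C)
       + sandwich(B, C) * dot(A, k)
       + sandwich(C, A) * dot(k, B))
print(abs(lhs))   # ~1e-15: the Bianchi identity (18) holds within floating-point error
```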
The resulting expression for the amplitude only contains the following field-strength products:
\[a_{1}\!\cdot\!F_{k}\!\cdot\!\bar{v}_{1},\qquad\quad a_{2}\!\cdot \!F_{k}\!\cdot\!\bar{v}_{1},\qquad\quad q_{1}\!\cdot\!F_{k}\!\cdot\!\bar{v}_{1 },\qquad\quad v_{1}\!\cdot\!F_{k}\!\cdot\!\bar{v}_{2},\quad\quad q_{1}\!\cdot \!S_{1}\!\cdot\!F_{k}\!\cdot\!\bar{v}_{1},\] \[q_{1}\!\cdot\!S_{2}\!\cdot\!F_{k}\!\cdot\!\bar{v}_{1},\quad\quad v _{1}\!\cdot\!F_{k}\!\cdot\!S_{1}\!\cdot\!\bar{v}_{2},\quad\quad\bar{v}_{1}\! \cdot\!F_{k}\!\cdot\!S_{2}\!\cdot\!\bar{v}_{1},\quad\quad\mathrm{tr}\left(F_{k} \!\cdot\!S_{1}\right),\qquad\mathrm{tr}\left(F_{k}\!\cdot\!S_{2}\right). \tag{4.9}\]
The complete expression for the five-point amplitude of two spinning black holes is included in the GitHub repository associated to this paper.
In this paper we will present waveforms in the simpler situation of the scattering of a Schwarzschild and a Kerr black hole, deferring the study of the waveform produced by two Kerr black holes to [164]. Without loss of generality, we will therefore set \(a_{2}\)=0, which dramatically simplifies the contribution from the \(q_{1}^{2}\)-channel. Then the amplitude in each channel has a very compact form
\[\mathcal{M}_{q_{1}^{2}} =\frac{1}{q_{1}^{2}}\Big{[}\cosh(a_{1}\!\cdot\!q_{1})\big{(}\bar {v}_{1}\!\cdot\!\mathcal{J}_{2,0}\!\cdot\!\bar{v}_{1}-\frac{1}{2}\mathrm{tr} (\mathcal{J}_{2,0})\big{)} \tag{4.10}\] \[-\frac{i}{2}G(a_{1}\!\cdot\!q_{1})\big{(}q_{1}\!\cdot\!S_{1}\! \cdot\!\mathcal{J}_{2,0}\!\cdot\!\bar{v}_{1}-\bar{v}_{1}\!\cdot\!\mathcal{J}_ {2,0}\!\cdot\!S_{1}\!\cdot\!q_{1}\big{)}\Big{]}\,,\]
and
\[\mathcal{M}_{q_{2}^{2}}=\frac{1}{q_{2}^{2}}\Big{[}\bar{v}_{2}\! \cdot\!\mathcal{J}_{1}\!\cdot\!\bar{v}_{2}-\frac{1}{2}\mathrm{tr}(\mathcal{J }_{1})\Big{]}\,. \tag{4.11}\]
## 5 The time-domain waveform
### Waveforms from amplitudes
We begin by briefly reviewing the emergence of waveforms in black-hole scattering. We consider the classical gravitational field produced by the scattering of two black holes which are modelled by two massive spinning particles using the KMOC approach [54, 112]. The corresponding initial two-particle state has the form
\[|\psi\rangle_{\mathrm{in}}\coloneqq\int\!\prod_{j=1}^{2}d\Phi(p_{j})e^{ip_{1 }\cdot b}\phi(p_{1})\phi(p_{2})|p_{1},a_{1},p_{2},a_{2}\rangle_{\mathrm{in}}\;. \tag{5.1}\]
Following [54, 62, 112, 114, 115], one finds that
\[\langle h^{\mathrm{out}}_{\mu\nu}(x)\rangle_{\psi}\!=\!\kappa\, \int\!\prod_{j=1}^{2}d\Phi(\bar{p}_{j})\,|\phi(\bar{p}_{1})|^{2}|\phi(\bar{p}_ {2})|^{2}\Big{[}\sum_{h}\int\!d\Phi(k)e^{-ik\cdot x}\,\varepsilon^{(h)*}_{\mu \nu}(\hat{k})\left[i\,W\right]\,+\,\mathrm{h.c.}\Big{]}, \tag{5.2}\]
where \(k{:=}\omega\hat{k}\). Here \(W{=}W(\bar{b},k;h)\) is the spectral waveform for the emission of a graviton of momentum \(k\) and helicity \(h\), which at leading order in the PM expansion is2
Footnote 2: The factor of \(-i\) cancels the \(i\) from our definition of amplitudes as matrix elements of \(i\,T\).
\[W(b,k^{h})\coloneqq-i\int\!d\mu^{(D)}\ e^{iq_{1}\cdot b}\ {\cal M}_{5,{\rm HEFT}}(q_{1},q_{2},a_{1},a_{2};h)\,, \tag{10}\]
where we have introduced the \(D\)-dimensional measure (for regularisation purposes)
\[d\mu^{(D)}\coloneqq\frac{d^{D}q_{1}}{(2\pi)^{D-1}}\frac{d^{D}q_{2}}{(2\pi)^{D -1}}\,(2\pi)^{D}\delta^{(D)}(q_{1}+q_{2}-k)\delta(2\bar{p}_{1}{\cdot}q_{1}) \delta(2\bar{p}_{2}{\cdot}q_{2})\,, \tag{11}\]
with \(q_{1,2}{=}p_{1,2}{-}p_{1,2}^{\prime}\) being the momentum transfers, and \(D{=}4{-}2\epsilon\). Here we are ignoring zero-modes in the amplitude which only have support when the graviton energy \(\omega\) is zero.
In the far-field limit, corresponding to large observer distance \(r{:=}|\vec{x}|\) and time \(t\) with fixed retarded time \(u{:=}t{-}r\), (11) can be simplified to3
Footnote 3: Henceforth, we omit an overall factor of \(\int\!\prod_{j=1}^{2}d\Phi(p_{j})\,|\phi(p_{1})|^{2}|\phi(p_{2})|^{2}\).
\[\langle h^{\rm out}_{\mu\nu}(x)\rangle_{\psi}\,{=}\,\frac{\kappa}{4\pi r}\,\Big{[}\sum_{h}\varepsilon^{(h)*}_{\mu\nu}(\hat{k})\int_{0}^{+\infty}\!\frac{d\omega}{2\pi}e^{-i\omega u}\,W(b,k^{h})\,+\,{\rm h.c.}\Big{]}_{k=\omega(1,\hat{\mathbf{x}})}. \tag{12}\]
Alternatively, extending the \(\omega\) integration from \(-\infty\) to \(+\infty\),
\[\begin{split}&\langle h^{\rm out}_{\mu\nu}(x)\rangle_{\psi}= \frac{\kappa}{4\pi r}\,\Big{[}\sum_{h}\int_{-\infty}^{+\infty}\!\frac{d\omega }{2\pi}e^{-i\omega u}\\ &\Big{[}\varepsilon^{(h)*}_{\mu\nu}(\hat{k})\,\theta(\omega)\, \left.W\!\left(b,k^{h}\right)\right|_{k=\omega(1,\hat{\bf x})}+\left. \varepsilon^{(h)}_{\mu\nu}(\hat{k})\theta(-\omega)\,\left.W^{*}\!\left(b,k^{ h}\right)\right|_{k=-\omega(1,\hat{\bf x})}\right].\end{split} \tag{13}\]
We now define4
Footnote 4: We comment that in our normalisations, the combination \(\langle h_{+}-ih_{\times}\rangle\) is proportional to the strain \(h(x)\), specifically \(h(x){=}-\,(1/2)\langle h_{+}-ih_{\times}\rangle\), where the strain is related to the Newman-Penrose scalar \(\Psi_{4}\) as \(\Psi_{4}{=}d^{2}h/du^{2}\).
\[\langle h_{+}\pm ih_{\times}\rangle\coloneqq\langle h^{\rm out}_{\mu\nu} \rangle\varepsilon^{\mu\nu}_{(\pm\pm)}\coloneqq\frac{1}{4\pi r}(h_{+}^{\infty }\pm ih_{\times}^{\infty})\,. \tag{14}\]
Using the properties of the positive/negative helicity polarisation vectors \(\varepsilon^{(\pm)*}_{\mu}{=}\varepsilon^{\mu(\mp)}\), \(\varepsilon^{(\pm)*}_{\mu}{\varepsilon^{\mu(\pm)}}=-1\), \(\varepsilon^{(\pm)*}_{\mu}{\varepsilon^{\mu(\mp)}}=0\) we get
\[h_{+}^{\infty}\pm ih_{\times}^{\infty}=\kappa\int_{-\infty}^{+\infty}\!\frac{d \omega}{2\pi}e^{-i\omega u}\Big{[}\theta(\omega)\,\left.W\!\left(b,k^{\pm} \right)\right|_{k=\omega(1,\hat{\bf x})}+\left.\theta(-\omega)\,\left.W^{*}\! \left(b,k^{\mp}\right)\right|_{k=-\omega(1,\hat{\bf x})}\Big{]}. \tag{15}\]
We can now combine the two terms in (15). In order to do so, we first note that the five-point spinning amplitude has the form
\[-i{\cal M}_{5,{\rm HEFT}}=\varepsilon_{\mu\nu}(k)m^{\mu\nu}\,,\qquad{\rm with} \qquad m^{\mu\nu}=m^{\mu\nu}_{\rm even}+im^{\mu\nu}_{\rm odd}\,, \tag{16}\]
where \(m^{\mu\nu}_{\rm even}\) and \(m^{\mu\nu}_{\rm odd}\) are real, and contain even and odd powers of the spin, respectively. Then we observe that we can separate out the \(\omega\) dependence of the amplitude: we perform a rescaling of \(q_{1,2}\) and define
\[q_{1,2}=\omega\hat{q}_{1,2},\qquad k=\omega\hat{k}. \tag{5.10}\]
Then we have
\[\mathcal{M}_{5,{\rm HEFT}}(q_{1},q_{2},k^{h},a_{1},a_{2})\big{|}_{S^{n}}= \frac{\omega^{n}}{\omega^{2}}\left.\mathcal{M}_{5,{\rm HEFT}}(\hat{q}_{1},\hat {q}_{2},\hat{k}^{h},a_{1},a_{2})\right|_{S^{n}}\,, \tag{5.11}\]
where \(\left.|_{S^{n}}\right.\) denotes the term containing \(n\) powers of the spin in the HEFT amplitude. Note that \(\mathcal{M}_{5,{\rm HEFT}}(\hat{q}_{1},\hat{q}_{2},\hat{k}^{h},a)\) is \(\omega\)-independent. Combining (5.9) and (5.11) we find that
\[W^{*}(b,k^{h})\big{|}_{k=-\omega(1,\hat{\mathbf{x}})}=\left.W(b,k^{-h})\right| _{k=\omega(1,\hat{\mathbf{x}})}\,, \tag{5.12}\]
and we can thus rewrite
\[h^{\infty}_{+}\pm ih^{\infty}_{\times}=\kappa\int_{-\infty}^{+\infty}\!\frac{d \omega}{2\pi}e^{-i\omega u}\left.W(b,k^{\pm})\right|_{k=\omega(1,\hat{\mathbf{ x}})}\,. \tag{5.13}\]
For convenience, in the following we will call this quantity
\[\begin{split} h^{\infty}(u)&\coloneqq\kappa\int_{- \infty}^{+\infty}\!\frac{d\omega}{2\pi}e^{-i\omega u}\left.W(b,k)\right|_{k= \omega(1,\hat{\mathbf{x}})}\\ &=-i\kappa\int_{-\infty}^{+\infty}\!\frac{d\omega}{2\pi}e^{-i \omega u}\int\!\frac{d^{4}q_{1}}{(2\pi)^{2}}\delta(2\bar{p}_{1}\!\cdot\!q_{1}) \delta(2\bar{p}_{2}\!\cdot\!(k-q_{1}))\ e^{iq_{1}\cdot b}\ \mathcal{M}_{5,{\rm HEFT}}\,,\end{split} \tag{5.14}\]
leaving the dependence on the helicity understood, and where in all formulae \(k\)=\(\omega(1,\hat{\mathbf{x}})\).
The above no longer appears manifestly real but in fact it is (when expressed in a basis of real polarisation tensors) thanks to the properties of \(-i\mathcal{M}_{5,{\rm HEFT}}\) in (5.9) and (5.11). That is, a real term in the amplitude has an even power of the spin and hence after the re-scaling (5.10) is an even function of \(\omega\); its Fourier transform is thus real. On the other hand, terms containing a factor of \(i\) will feature an odd power of the spin and so are odd functions of \(\omega\); their Fourier transform is thus imaginary and this cancels the additional factor of \(i\), with the final result being real.
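The parity argument above can be illustrated with a toy numerical example: a spectral function that is even in \(\omega\) Fourier-transforms to a real function of \(u\), while an odd one transforms to a purely imaginary function, cancelling the explicit factor of \(i\). The Gaussian profiles below are hypothetical stand-ins for the even- and odd-in-spin parts of the amplitude, chosen purely for convenience.

```python
import numpy as np

omega = np.linspace(-40.0, 40.0, 400001)
d_omega = omega[1] - omega[0]
f_even = np.exp(-omega**2)             # even in omega (stand-in for even-in-spin terms)
f_odd = omega * np.exp(-omega**2)      # odd in omega  (stand-in for odd-in-spin terms)

u = 1.3
kernel = np.exp(-1j * omega * u) / (2 * np.pi)
ft_even = np.sum(kernel * f_even) * d_omega
ft_odd = np.sum(kernel * f_odd) * d_omega

print(abs(ft_even.imag))   # ~0: the even part gives a real function of u
print(abs(ft_odd.real))    # ~0: the odd part is purely imaginary, cancelling the extra i
```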
### A scalar warm-up
Here we detail the computation of the scalar tree-level waveform, as a warm-up to the spinning case. Many of the simplifications we discuss here apply to the spinning waveform as well, in particular the intriguing fact that the computation boils down to a simple application of Cauchy's theorem. We begin with the expression for the
waveform (5.14) derived in the previous section (here and for the rest of the paper we will drop the explicit bars on all of the variables to reduce clutter)
\[h^{\infty}(u)=-i\kappa\int_{-\infty}^{+\infty}\!\frac{d\omega}{2\pi}e^{-i\omega u }\int\!\frac{d^{4}q_{1}}{(2\pi)^{2}}\delta(2p_{1}\!\cdot\!q_{1})\delta(2p_{2} \!\cdot\!(k-q_{1}))\ e^{iq_{1}\!\cdot b}\ {\cal M}_{5,{\rm HEFT}}\,, \tag{5.15}\]
First, we rescale the momentum transfers by \(\omega\), as discussed above, using \(q_{1,2}\!\!=\!\!\omega\hat{q}_{1,2}\) and \(k\!\!=\!\!\omega\hat{k}\). The classical scalar amplitude then scales universally like \(\omega^{-2}\), which cancels the power of \(\omega^{2}\) coming from the change of variables, to give
\[h^{\infty}(u)\!\!=\!\!-i\kappa\int_{-\infty}^{+\infty}\!\frac{d\omega}{2\pi}e ^{-i\omega u}\int\!\frac{d^{4}\hat{q}_{1}}{(2\pi)^{2}}\delta(2p_{1}\!\cdot\! \hat{q}_{1})\delta(2p_{2}\!\cdot\!(\hat{k}-\hat{q}_{1}))\ e^{i\omega\hat{q}_{1 }\!\cdot b}\ {\cal M}_{5,{\rm HEFT}}(\hat{q}_{1},\hat{q}_{2},\hat{k}^{h})\,. \tag{5.16}\]
In addition, it is useful to rescale the energy and retarded time by \(\sqrt{-b^{2}}\), as \(\omega\to\omega/\sqrt{-b^{2}}\) and \(u\to\sqrt{-b^{2}}u\). Effectively this means we are measuring the retarded time \(u\) in units of \(\sqrt{-b^{2}}\). With this choice, the tree-level waveform becomes
\[h^{\infty}(u)\!\!=\!\!\frac{-i\kappa}{\sqrt{-b^{2}}}\int_{-\infty}^{+\infty} \!\frac{d\omega}{2\pi}e^{-i\omega u}\int\!\frac{d^{4}\hat{q}_{1}}{(2\pi)^{2}} \delta(2p_{1}\!\cdot\!\hat{q}_{1})\delta(2p_{2}\!\cdot\!(\hat{k}\!-\!\hat{q}_ {1}))\ e^{\frac{i\omega\cdot\hat{q}_{1}\!\cdot b}{\sqrt{-b^{2}}}}\ {\cal M}_{5,{\rm HEFT}}(\hat{q}_{1},\hat{q}_{2},\hat{k}^{h})\,. \tag{5.17}\]
In fact, we are free to set \(\sqrt{-b^{2}}=1\) in the expression above (and in all subsequent expressions) since \(b^{\mu}\) only appears in the exponent through \(b^{\mu}/\sqrt{-b^{2}}\). To restore \(\sqrt{-b^{2}}\) we simply count the mass dimension of the expression, obtaining the \(1/\sqrt{-b^{2}}\) factor above. Similarly, one can recover the original definition of the retarded time \(u\) by counting mass dimension.
Next, it is useful to split the amplitude into the two terms coming from the BCFW diagrams (4.1). This gives us two contributions to the waveform, which we call \(h_{q_{1}^{2}}^{\infty}(u)\) and \(h_{q_{2}^{2}}^{\infty}(u)\)
\[h_{q_{1}^{2},q_{2}^{2}}^{\infty}(u)=-i\kappa\int_{-\infty}^{+\infty}\!\frac{d \omega}{2\pi}e^{-i\omega u}\int\!\frac{d^{4}\hat{q}_{1}}{(2\pi)^{2}}\delta(2p_ {1}\!\cdot\!\hat{q}_{1})\delta(2p_{2}\!\cdot\!(\hat{k}-\hat{q}_{1}))\ e^{i \omega\hat{q}_{1}\!\cdot b}\ {\cal M}_{q_{1}^{2},q_{2}^{2}}\,. \tag{5.18}\]
The two contributions \({\cal M}_{q_{1}^{2}}\) and \({\cal M}_{q_{2}^{2}}\) are related by the replacements \(v_{1}\!\!\leftrightarrow\!v_{2},q_{1}\!\!\leftrightarrow\!q_{2}\), which allows us to obtain the waveform contribution in the \(q_{2}^{2}\)-channel from the \(q_{1}^{2}\)-channel. To do this we perform the following replacements
\[h_{q_{1}^{2}}^{\infty}(b\!\cdot\!k-u)\xrightarrow{v_{1}\leftrightarrow v_{2}}h _{q_{2}^{2}}^{\infty}(u)\,, \tag{5.19}\]
which can be seen immediately using the definition (5.18). The asymmetric shift in the retarded time \(u\) is due to our asymmetric choice of impact parameter in (5.1).
To compute the first cut, we decompose \(\hat{q}_{1}\) onto a basis of four-vectors [112]
\[\hat{q}_{1}^{\mu}=z_{1}v_{1}^{\mu}+z_{2}v_{2}^{\mu}+z_{v}v_{\perp}^{\mu}+z_{b} b^{\mu}\,, \tag{5.20}\]
where
\[v_{\perp}\coloneqq\epsilon(v_{1},v_{2},b,\bullet)\,, \tag{5.21}\]
and then change integration variables from \(q_{1}\) to \(z_{1},z_{2},z_{v},z_{b}\). In this parameterisation, we can use the two delta functions in (5.18) to localise the variables \(z_{1}\) and \(z_{2}\) to
\[z_{1}=\frac{(v_{2}\!\cdot\!k)(v_{1}\!\cdot\!v_{2})}{(v_{1}\!\cdot\!v_{2})^{2}-1 }=\frac{w_{2}y}{y^{2}-1},\quad z_{2}=-\frac{v_{2}\!\cdot\!k}{(v_{1}\!\cdot\!v_{ 2})^{2}-1}=-\frac{w_{2}}{y^{2}-1}\,. \tag{5.22}\]
The remaining integrals are then over \(z_{v},z_{b}\) and \(\omega\),
\[h^{\infty}_{q_{1}^{2}}(u)=\frac{-i\kappa}{(4\pi)^{2}m_{1}m_{2}}\int_{-\infty}^ {+\infty}\!\frac{d\omega}{2\pi}dz_{v}dz_{b}e^{-i\omega(u+z_{b})}\mathcal{M}_{ q_{1}^{2}}\Big{|}_{z_{1}=\frac{w_{2}y}{y^{2}-1},\,z_{2}=-\frac{w_{2}}{y^{2}-1}}\,. \tag{5.23}\]
The integral over \(\omega\) also gives a delta function which we can immediately use to localise the \(z_{b}\) integral,
\[h^{\infty}_{q_{1}^{2}}(u)=\frac{-i\kappa}{(4\pi)^{2}m_{1}m_{2}}\int_{-\infty}^ {+\infty}\!dz_{v}\mathcal{M}_{q_{1}^{2}}\Big{|}_{z_{1}=\frac{w_{2}y}{y^{2}-1}, \,z_{2}=-\frac{w_{2}}{y^{2}-1},\,z_{b}=-u}\,. \tag{5.24}\]
To compute the final integral in \(z_{v}\) we use Cauchy's residue theorem, hence we need to examine the pole structure of the \(q_{1}^{2}\)-cut. The integrand contains three types of poles in \(z_{v}\) which arise from certain denominator structures in the tree-level amplitude. These are
\[\text{Physical pole:}\quad\frac{1}{q_{1}^{2}}\sim\frac{1}{(z_{v}-iA)(z_{v}+iA)}\,, \tag{5.25}\]
\[\text{Spurious pole:}\quad\frac{1}{q_{1}^{2}\,q_{1}\!\cdot\!k}\sim\frac{1}{(z_{v}-iA)(z_{v}+iA)(z_{v}-B)}\,, \tag{5.26}\]
\[\text{Pole at infinity:}\quad\begin{cases}\dfrac{z_{v}}{q_{1}^{2}}\sim\dfrac{z_{v}}{(z_{v}-iA)(z_{v}+iA)}\xrightarrow[z_{v}\to\infty]{}\dfrac{1}{z_{v}}\,,\\[2ex]\dfrac{z_{v}^{2}}{q_{1}^{2}\,q_{1}\!\cdot\!k}\sim\dfrac{z_{v}^{2}}{(z_{v}-iA)(z_{v}+iA)(z_{v}-B)}\xrightarrow[z_{v}\to\infty]{}\dfrac{1}{z_{v}}\,,\end{cases} \tag{5.27}\]
where \(A\) and \(B\) are real functions of the external kinematics. To compute the \(z_{v}\) integral we will close the integration contour in the upper half plane to capture the pole at \(z_{v}=iA\) and regulate the pole at infinity with a principal value prescription. This is equivalent to taking the integration limits \(z_{v}\to-\infty\) and \(z_{v}\to+\infty\) in a symmetric fashion, and implies that the pole at infinity receives an extra factor of \(\frac{1}{2}\). The spurious pole at \(z_{v}=B\) (coming from the factor \(q_{1}\!\cdot\!k\)) lies on the integration contour, however we know that this pole cancels when we combine the two cuts in \(q_{1}^{2}\) and \(q_{2}^{2}\). Hence we are free to ignore the residue on this spurious pole since it would cancel at the end of the computation (as we have checked explicitly).
In fact, we can further simplify the integral (5.24) using the following observations. First, the integral of one of the terms with a pole at infinity in (5.27) is actually zero,
\[\int_{-\infty}^{+\infty}\!dz_{v}\frac{z_{v}}{(z_{v}-iA)(z_{v}+iA)}=0\,. \tag{5.28}\]
This can be seen from the fact that the integrand is odd in \(z_{v}\), or that the residue at \(z_{v}=iA\) cancels with half the residue at infinity (recalling the principal value prescription mentioned earlier). The second term with a pole at infinity in (5.27) can also be simplified as
\[\begin{split}\frac{z_{v}^{2}}{(z_{v}{-}iA)(z_{v}{+}iA)(z_{v}{-}B)}& =\frac{((z_{v}{-}B){+}B)^{2}}{(z_{v}-iA)(z_{v}+iA)(z_{v}-B)}\\ &=\frac{B}{(z_{v}{-}iA)(z_{v}{+}iA)}+\frac{B^{2}}{(z_{v}{-}iA)(z_ {v}{+}iA)(z_{v}{-}B)}+\cdots\end{split} \tag{5.29}\]
where \(+\cdots\) are terms which vanish after integration due to (5.28). The remaining terms above are in the form of (5.25) and (5.26). Thus, after simplifications the only terms relevant to the waveform integral (5.24) are
\[\frac{1}{q_{1}^{2}}\sim\frac{1}{(z_{v}-iA)(z_{v}+iA)}\,,\qquad\frac{1}{q_{1}^{ 2}q_{1}{\cdot}k}\sim\frac{1}{(z_{v}-iA)(z_{v}+iA)(z_{v}-B)}\,, \tag{5.30}\]
for which we only compute the residue on the physical pole \(z_{v}=iA\). The computation for the second cut \(\mathcal{M}_{q_{2}^{2}}\) proceeds in an identical way, or alternatively we can obtain the second cut using the replacements (5.19).
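As a minimal symbolic sketch of this residue prescription (not the paper's actual code), one can check with sympy that the physical-pole master integral is fixed entirely by the residue at \(z_{v}=iA\), and that the odd integrand of (5.28) vanishes on a symmetric domain, which is the principal-value statement used above.

```python
import sympy as sp

zv, A, L = sp.symbols('z_v A L', positive=True)

# physical-pole master integrand 1/q_1^2 ~ 1/((z_v - iA)(z_v + iA)), cf. (5.25)
den = sp.expand((zv - sp.I*A) * (zv + sp.I*A))   # = z_v**2 + A**2
f_phys = 1 / den

# closing the contour in the upper half plane picks up only the pole at z_v = +iA
res_iA = sp.residue(f_phys, zv, sp.I*A)
print(sp.simplify(2 * sp.pi * sp.I * res_iA))      # -> pi/A
print(sp.integrate(f_phys, (zv, -sp.oo, sp.oo)))   # -> pi/A, direct cross-check

# the odd integrand of (5.28) integrates to zero on a symmetric domain,
# i.e. the principal-value treatment of the pole at infinity
f_odd = zv / den
print(sp.simplify(sp.integrate(f_odd, (zv, -L, L))))   # -> 0
```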
We have thus learned that the computation of the waveform can be efficiently reduced to the evaluation of residues on physical poles. The same general principle will be used in the spinning case. The final expression for the scalar waveform is simply the sum of \(h_{q_{1}^{2}}^{\infty}\) and \(h_{q_{2}^{2}}^{\infty}\), and is included in the GitHub repository.
We can choose a frame such that the kinematics are given by
\[\begin{split} v_{1}&=(1,0,0,0),& v_{2}=(y,\sqrt{y^{2}-1},0,0)\\ k&=(1,\sin\theta\cos\phi,\sin\theta\sin\phi,\cos \theta),& v_{\perp}=(0,0,\sqrt{y^{2}-1},0)\\ \varepsilon&=\frac{1}{\sqrt{2}}\big{(}0,\cos\theta \cos\phi-i\sin\phi,\cos\theta\sin\phi+i\cos\phi,-\sin\theta\big{)},& b=(0,0,0,1)\,,\end{split} \tag{5.31}\]
and then in Figure 1 we present the scalar waveform at fixed angles \(\theta=\frac{\pi}{4}\) and \(\phi=\frac{\pi}{4}\) for various values of \(y\).
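For reference, a small numerical sketch of the explicit frame (5.31) is given below. The mostly-minus signature and overall normalisations are assumptions made for the illustration; the checks simply confirm that \(v_{1,2}^{2}=1\), \(k^{2}=0\), \(b^{2}=-1\) and that \(\varepsilon\) is a null, transverse polarisation with \(\varepsilon\cdot\varepsilon^{*}=-1\).

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # assumed mostly-minus metric
dot = lambda a, b: a @ eta @ b

def kinematics(y, theta, phi):
    """Explicit frame of (5.31), sketched under the stated conventions."""
    v1 = np.array([1.0, 0.0, 0.0, 0.0])
    v2 = np.array([y, np.sqrt(y**2 - 1), 0.0, 0.0])
    k = np.array([1.0, np.sin(theta)*np.cos(phi),
                  np.sin(theta)*np.sin(phi), np.cos(theta)])
    vp = np.array([0.0, 0.0, np.sqrt(y**2 - 1), 0.0])
    eps = np.array([0.0,
                    np.cos(theta)*np.cos(phi) - 1j*np.sin(phi),
                    np.cos(theta)*np.sin(phi) + 1j*np.cos(phi),
                    -np.sin(theta)]) / np.sqrt(2)
    b = np.array([0.0, 0.0, 0.0, 1.0])
    return v1, v2, k, vp, eps, b

v1, v2, k, vp, eps, b = kinematics(y=1.5, theta=np.pi/4, phi=np.pi/4)
# expected: 1, 1, 0, -1, 0, 0, -1
for val in (dot(v1, v1), dot(v2, v2), dot(k, k), dot(b, b),
            dot(eps, k), dot(eps, eps), dot(eps, np.conj(eps))):
    print(np.round(val, 12))
```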
### General expression of the time-domain waveform for arbitrary spins
We now turn to the spinning case. The first observation to make is that, in principle, the Fourier transform to impact parameter space in (5.14) is ill-defined due to the large-\(q_{1}\) behaviour of the integrand giving rise to an ultraviolet (UV) divergence. An elegant way to regularise this is to leave the hyperbolic and exponential functions
in the Compton amplitudes unexpanded (in the spin vectors), introduce a new spin parameter as
\[\tilde{a}_{1,2}\coloneqq ia_{1,2}\,, \tag{108}\]
and temporarily take \(\tilde{a}_{1,2}\) to be real. Assuming that the final spinning waveform has an expansion around \(a_{1,2}\to 0\), this analytic continuation should not change the expansion coefficients. In support of this approach we mention that the \(a_{1}\to 0\) limit of our waveform gives the correct scalar result, and for \(a_{1}\neq 0\) has the correct gravitational memory (computed in Section 8); and finally, our results also agree with the recently derived waveform of [157], obtained by expanding in spin and then integrating, up to and including \(\mathcal{O}(a_{1}^{4})\). Indeed, one can expand the amplitude in the spin parameters before integration, and the amplitude's degree of divergence would grow with each additional order in the spin. However, as we see in Section 7, these divergences can be ignored since they only contribute to contact terms in \(q_{1}^{2}\) and \(q_{2}^{2}\), and both methods (the analytic continuation and expanding in spin before integration) give the same result.
Figure 1: Scalar waveforms \(h_{+}^{\infty}\) at various values of \(y\).

Proceeding now with the analytic continuation in the spin (108), we observe that in the large-\(q_{1}\) limit, i.e. \(q_{1}\rightarrow\lambda\,q_{1}\) with \(\lambda\rightarrow\infty\), the amplitude now scales as \(\mathcal{O}(\lambda^{-1})\). Pleasingly, this is precisely the same behaviour as that of the scalar amplitude. This logarithmic divergence will appear, identically to the scalar case, as a pole at infinity which we can again regulate with a principal value prescription. The waveform is therefore well-defined once we tame this logarithmic divergence,
\[h^{\infty}(u)=-i\kappa\int_{-\infty}^{+\infty}\!\frac{d\omega}{2 \pi}e^{-i\omega u}\int\frac{d^{4}\hat{q}_{1}}{(2\pi)^{2}}\delta(2p_{1}{\cdot}\hat {q}_{1})\delta(2p_{2}{\cdot}(\hat{k}-\hat{q}_{1}))e^{i\omega\hat{q}_{1}{\cdot} b}\;\omega^{2}(\mathcal{M}_{q_{1}^{2}}+\mathcal{M}_{q_{2}^{2}})\,, \tag{5.33}\]
where we have re-scaled \(q_{1}\) to \(\omega\,\hat{q}_{1}\) as discussed in the last section. The factor of \(\omega^{2}\) comes from the re-scaled measure while the amplitude itself depends on \(\omega\) in a manner which we now describe.
Writing the hyperbolic functions within the expression of the Compton amplitude (4.1) in terms of exponential functions, we find that the tree-level amplitude can be rewritten as a linear combination of at most eight exponential factors, with a very simple frequency dependence. Specifically, we find that only three different powers of the frequency \(\omega\) can appear for arbitrary classical spins,
\[\mathcal{M}_{q_{1}^{2}} =\sum_{\varrho_{1},\varrho_{2},\varrho_{3}}^{\pm,\pm,\pm}e^{\varrho_{1}i\omega\tilde{a}_{1}{\cdot}\hat{q}_{1}+\varrho_{2}i\omega\tilde{a}_{2}{\cdot}\hat{q}_{1}+\varrho_{3}i\omega\tilde{a}_{2}{\cdot}\hat{k}}\frac{1}{\omega^{2}}\Big{(}\sum_{i=0}^{2}\mathcal{M}_{q_{1}^{2}}^{(i)}(\varrho_{1},\varrho_{2},\varrho_{3})\omega^{i}\Big{)}\,, \tag{5.34}\] \[\mathcal{M}_{q_{2}^{2}} =\sum_{\varrho_{1},\varrho_{2},\varrho_{3}}^{\pm,\pm,\pm}e^{\varrho_{1}i\omega\tilde{a}_{2}{\cdot}\hat{q}_{2}+\varrho_{2}i\omega\tilde{a}_{1}{\cdot}\hat{q}_{2}+\varrho_{3}i\omega\tilde{a}_{1}{\cdot}\hat{k}}\frac{1}{\omega^{2}}\Big{(}\sum_{i=0}^{2}\mathcal{M}_{q_{2}^{2}}^{(i)}(\varrho_{1},\varrho_{2},\varrho_{3})\omega^{i}\Big{)}.\]
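A toy symbolic illustration of the structure (5.34) is sketched below: rewriting the hyperbolic functions in terms of exponentials splits a term into sector exponentials, each multiplying a Laurent polynomial in \(\omega\) with at most three powers. The coefficients \(c_{0},c_{1},c_{2}\) are hypothetical placeholders and \(x\) stands for a generic spin-momentum product; this is not the actual amplitude.

```python
import sympy as sp

omega, x = sp.symbols('omega x', real=True)   # x stands for a product like a.q
c0, c1, c2 = sp.symbols('c0 c1 c2')           # hypothetical placeholder coefficients

# toy stand-in for one channel: hyperbolic functions of omega*x times a Laurent
# polynomial in omega, mimicking the structure that leads to (5.34)
toy = (c0 * sp.cosh(omega * x)
       + c1 * omega * sp.sinh(omega * x)
       + c2 * omega**2 * sp.cosh(omega * x)) / omega**2

# rewriting cosh/sinh in exponentials exposes the sectors e^{+omega x}, e^{-omega x}
sectors = sp.expand(toy.rewrite(sp.exp))
for s in (sp.exp(omega * x), sp.exp(-omega * x)):
    print(s, '->', sp.simplify(sectors.coeff(s)))
# each sector multiplies (1/omega^2)*(M^(0) + M^(1)*omega + M^(2)*omega^2)
```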
Hence, the waveform integral has a simple general structure. In the remainder of this section we will focus on \(\mathcal{M}_{q_{1}^{2}}^{(i)}\), and the case of \(\mathcal{M}_{q_{2}^{2}}^{(i)}\) is similar. Similarly to the scalar case, the four-dimensional integration is immediately reduced to a two-dimensional one using the \(\delta\)-functions in (5.33). Furthermore, for each exponential factor, the Fourier transform to the time domain generates a third delta function, which constrains the integration over \(q_{1}\) to the hyperplane defined by5
Footnote 5: Henceforth, we relabel the integration variables dropping the hats on the \(q_{i}\) variables, and on \(k\).
\[b{\cdot}q_{1}+\varrho_{1}\tilde{a}_{1}{\cdot}q_{1}+\varrho_{2} \tilde{a}_{2}{\cdot}q_{1}+\varrho_{3}\tilde{a}_{2}{\cdot}k-u=0\,. \tag{5.35}\]
Following similar manipulations to (5.29) in the scalar case, the master integrands are of the form
\[\frac{1}{(q_{1}{\cdot}X+Y{\cdot}Z)q_{1}^{2}}, \frac{1}{q_{1}^{2}}, \frac{q_{1}{\cdot}W}{q_{1}^{2}}, \tag{5.36}\]
where \(W\) can be chosen to be orthogonal to the localising hyperplane and \(q_{1}{\cdot}X+Y{\cdot}Z\) denotes a generic _spurious_ pole linear in \(q_{1}\) and featuring external vectors \(X,Y\) and \(Z\) which may be the spins \(a_{i}\) or \(k\). The first two master integrals are UV convergent, while the last one is logarithmically divergent. However, the last master integral is an odd function of \(q_{1}\), and hence vanishes when integrated on a symmetric domain, identically to (5.28) in the scalar case. This corresponds to a principal value (PV) regularisation of the divergent integral, or equivalently a PV regularisation of the
pole at infinity. With this regularisation, the residue of the pole at \(q_{1}^{2}=0\) of the third term in (5.36) cancels the residue of the pole at infinity. Therefore, we can drop the last master integral altogether.
Now that the pole at infinity has been removed, we can perform the integration of the remaining terms using Cauchy's theorem on the finite poles. There is only one physical pole in this channel, namely \(q_{1}^{2}{=}0\). The residues on the spurious poles \(k{\cdot}q_{1}\) in the integrand can be discarded since they cancel when combining with the \(q_{2}^{2}\)-channel, a fact we have confirmed by explicit calculations. The residues of the spin-dependent spurious poles in the three-point and Compton amplitudes (coming from the entire functions \(G_{i}\)) cancel when performing an expansion in the spins \(a_{1}\) and \(a_{2}\), and so they can also be ignored. A similar statement holds for these poles in the final integrated waveform.
In summary, the closed-form expression of the time-domain waveform with arbitrary spin is then
\[\begin{split} h^{\infty}(u)=&-i\kappa\sum_{\varrho_{1},\varrho_{2},\varrho_{3}}^{\pm,\pm,\pm}\sum_{j=0}^{2}(i\partial_{u})^{j}\Big{[}\oint_{(q_{1}^{2})^{+}=0}\frac{d^{4}q_{1}}{(2\pi)^{2}}\delta(2v_{1}{\cdot}q_{1})\delta(2v_{2}{\cdot}(k-q_{1}))\\ &\delta(b{\cdot}q_{1}+\varrho_{1}\tilde{a}_{1}{\cdot}q_{1}+\varrho_{2}\tilde{a}_{2}{\cdot}q_{1}+\varrho_{3}\tilde{a}_{2}{\cdot}k-u)\,\mathcal{M}^{(j)}_{q_{1}^{2},\text{fin}}(\varrho_{1},\varrho_{2},\varrho_{3})\\ +&\oint_{(q_{2}^{2})^{+}=0}\frac{d^{4}q_{1}}{(2\pi)^{2}}\delta(2v_{1}{\cdot}q_{1})\delta(2v_{2}{\cdot}(k-q_{1}))\\ &\delta(b{\cdot}q_{1}+\varrho_{1}\tilde{a}_{2}{\cdot}q_{2}+\varrho_{2}\tilde{a}_{1}{\cdot}q_{2}+\varrho_{3}\tilde{a}_{1}{\cdot}k-u)\,\mathcal{M}^{(j)}_{q_{2}^{2},\text{fin}}(\varrho_{1},\varrho_{2},\varrho_{3})\Big{]}\,,\end{split} \tag{103}\]
where \(\mathcal{M}^{(j)}_{q_{1,2}^{2},\text{fin}}\) denotes the UV-convergent part of the amplitude coming from the first two master integrals in (5.36). We denote as \((q_{1,2}^{2})^{+}=0\) the physical poles in the upper half plane.
## 6 The waveform from the scattering of a Schwarzschild and a Kerr black hole
In this paper, we will focus on the case where the first black hole is spinning while the second is spinless, that is \(a_{2}{=}0\).
### The \(q_{1}^{2}\)-channel
For the contribution to the amplitude in the \(q_{1}^{2}\)-channel, the waveform integrand is obtained from gluing a three-point spinning amplitude with a four-point spinless amplitude. The amplitude in this channel is very simple thanks to our restriction
\(a_{2}=0\), and from (4.10) we obtain, up to overall constant pre-factors
\[\begin{split}\omega^{2}e^{i\omega b\cdot q_{1}-i\omega u}\mathcal{M }_{q_{1}^{2}}&=\omega^{2}e^{i\omega b\cdot q_{1}-i\omega u}\Big{(} \frac{c_{1}\cosh\left(\omega a_{1}\cdot q_{1}\right)}{\omega^{2}q_{1}\cdot q _{1}k\cdot q_{1}}+\frac{c_{2}\left(q_{1}\cdot F_{k}\cdot v_{1}\right){}^{2} \cosh\left(\omega a_{1}\cdot q_{1}\right)}{\omega^{2}q_{1}\cdot q_{1}k\cdot q _{1}}\\ &+\frac{c_{3}\left(q_{1}\cdot F_{k}\cdot v_{1}\right){}^{2}q_{1} \cdot S_{1}\cdot v_{2}G_{1}\left(\omega a_{1}\cdot q_{1}\right)}{\omega q_{1} \cdot q_{1}k\cdot q_{1}}+\frac{c_{4}q_{1}\cdot F_{k}\cdot v_{1}k\cdot S_{1} \cdot q_{1}G_{1}\left(\omega a_{1}\cdot q_{1}\right)}{\omega q_{1}\cdot q_{1}k \cdot q_{1}}\\ &+\frac{c_{5}G_{1}\left(\omega a_{1}\cdot q_{1}\right)q_{1} \cdot S_{1}\cdot F_{k}\cdot v_{1}}{\omega q_{1}\cdot q_{1}}+\frac{c_{6}q_{1} \cdot F_{k}\cdot v_{1}\cosh\left(\omega a_{1}\cdot q_{1}\right)}{\omega^{2}q_{ 1}\cdot q_{1}k\cdot q_{1}}\\ &+\frac{c_{7}q_{1}\cdot F_{k}\cdot v_{1}q_{1}\cdot S_{1}\cdot v_{ 2}G_{1}\left(\omega a_{1}\cdot q_{1}\right)}{\omega q_{1}\cdot q_{1}k\cdot q _{1}}+\frac{c_{8}k\cdot S_{1}\cdot q_{1}G_{1}\left(\omega a_{1}\cdot q_{1} \right)}{\omega q_{1}\cdot q_{1}k\cdot q_{1}}\\ &+\frac{c_{9}k\cdot q_{1}\cosh\left(\omega a_{1}\cdot q_{1} \right)}{\omega^{2}q_{1}\cdot q_{1}}+\frac{c_{10}q_{1}\cdot F_{k}\cdot v_{1} \cosh\left(\omega a_{1}\cdot q_{1}\right)}{\omega^{2}q_{1}\cdot q_{1}}\\ &+\frac{c_{11}k\cdot q_{1}q_{1}\cdot S_{1}\cdot v_{2}G_{1}\left( \omega a_{1}\cdot q_{1}\right)}{\omega q_{1}\cdot q_{1}}+\frac{c_{12}q_{1} \cdot F_{k}\cdot v_{1}q_{1}\cdot S_{1}\cdot v_{2}G_{1}\left(\omega a_{1} \cdot q_{1}\right)}{\omega q_{1}\cdot q_{1}}\\ &+\frac{c_{13}k\cdot S_{1}\cdot q_{1}G_{1}\left(\omega a_{1} \cdot q_{1}\right)}{\omega q_{1}\cdot q_{1}}+\frac{c_{14}\cosh\left(\omega a _{1}\cdot q_{1}\right)}{\omega^{2}q_{1}\cdot q_{1}}\\ &+\frac{c_{15}q_{1}\cdot S_{1}\cdot v_{2}G_{1}\left(\omega a_{1} \cdot q_{1}\right)}{\omega q_{1}\cdot q_{1}}+\frac{c_{16}q_{1}\cdot F_{k} \cdot v_{1}G_{1}\left(\omega a_{1}\cdot q_{1}\right)q_{1}\cdot S_{1}\cdot F_{k }\cdot v_{1}}{\omega q_{1}\cdot q_{1}k\cdot q_{1}}\\ &+\frac{c_{17}G_{1}\left(\omega a_{1}\cdot q_{1}\right)q_{1} \cdot S_{1}\cdot F_{k}\cdot v_{1}}{\omega q_{1}\cdot q_{1}k\cdot q_{1}} \Big{)}\,,\end{split} \tag{6.1}\]
where the coefficients \(c_{i}\) are independent of \(q_{1}\) and \(\omega\), and hence can be factored out of the waveform integration; their precise form is given in Appendix B. For this channel the amplitude scales with \(\omega\) as \(\omega^{0}\), with the remaining \(\omega\) dependence exponentiating. In this channel, there are only two sectors from the exponential factors:
\[\begin{split}\text{(I)}:\quad e^{-i\omega(-\tilde{a}_{1}\cdot q _{1}-b\cdot q_{1}+u)},\qquad\qquad\text{(II)}:\quad e^{-i\omega(\tilde{a}_{1} \cdot q_{1}-b\cdot q_{1}+u)}.\end{split} \tag{6.2}\]
Again we have the parameterisation of \(q_{1}\) on the four-dimensional vector basis given by the vectors
\[v_{1},v_{2},b,v_{\perp}. \tag{6.3}\]
As in the scalar case, we temporarily set \(b^{2}=-1\) which means regarding the spins \(a_{i}\) and retarded time \(u\) as dimensionless and measured in units of \(\sqrt{-b^{2}}\). The overall dependence on \(b\) can then be reinstated by counting of mass dimension and gives simply a prefactor of \(\frac{1}{\sqrt{-b^{2}}}\).
However, the parameterisation (6.3) is not well suited to the particular sectors and does not cleanly identify the UV-divergent term in (5.36). It is more convenient to introduce a sector-dependent basis as
\[v_{1},v_{2},\tilde{b}_{j},\tilde{v}_{j}, \tag{6.4}\]
where in each sector we introduce an effective impact parameter
\[\begin{cases}\tilde{b}_{(\mathrm{I})}\coloneqq-b-\tilde{a}_{1}\\ \tilde{b}_{(\mathrm{II})}\coloneqq-b+\tilde{a}_{1}\end{cases}\,, \tag{100}\]
and correspondingly
\[\tilde{v}_{j}\coloneqq\epsilon(v_{1},v_{2},\tilde{b}_{j},\bullet),\quad j= \mathrm{I},\mathrm{II}\,. \tag{101}\]
We then parameterise \(q_{1}\) as
\[q_{1}=z_{1}v_{1}+z_{2}v_{2}+z_{v}\tilde{v}_{j}+z_{b}\tilde{b}_{j},\quad j= \mathrm{I},\mathrm{II}\,, \tag{102}\]
in terms of the basis vectors defined above. The UV-divergent part of the integrand is then of the form
\[\frac{z_{v}}{c-z_{v}^{2}}, \tag{103}\]
which vanishes once we perform the integration as in the scalar case; hence we drop such terms.
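The construction of the sector-dependent basis (6.5)–(6.6) can be sketched numerically as follows; the sign convention for the Levi-Civita symbol, the mostly-minus metric and the sample spin vector are assumptions made purely for illustration. The printout confirms that each \(\tilde{v}_{j}\) is orthogonal to \(v_{1}\), \(v_{2}\) and the corresponding \(\tilde{b}_{j}\), as required of the fourth basis vector.

```python
import numpy as np
from itertools import permutations

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

# 4d Levi-Civita symbol with eps4[0,1,2,3] = +1 (the overall sign is a convention)
eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps4[p] = np.linalg.det(np.eye(4)[list(p)])

def eps_contract(a, b, c):
    # epsilon(a, b, c, .)^mu = eps^{mu nu rho sigma} a_nu b_rho c_sigma
    return np.einsum('mnrs,n,r,s->m', eps4, eta @ a, eta @ b, eta @ c)

y = 1.5
v1 = np.array([1.0, 0.0, 0.0, 0.0])
v2 = np.array([y, np.sqrt(y**2 - 1), 0.0, 0.0])
b = np.array([0.0, 0.0, 0.0, 1.0])
a1_tilde = 1j * np.array([0.0, 0.0, 0.4, 0.3])   # tilde{a}_1 = i a_1, sample values

# effective impact parameters and the corresponding orthogonal directions (6.5)-(6.6)
for sign in (-1.0, +1.0):                        # sectors (I) and (II)
    b_tilde = -b + sign * a1_tilde
    v_tilde = eps_contract(v1, v2, b_tilde)
    print([np.round(dot(v_tilde, w), 12) for w in (v1, v2, b_tilde)])   # all zero
```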
#### Examples with constrained spin:
In this paper we present results for the case where the Kerr black hole spin \(a_{1}\) satisfies the additional constraint
\[\tilde{a}_{1}\cdot v_{2}=0\,. \tag{104}\]
In this case \(b,\tilde{a}_{1}\) are both constrained to the hyperplane orthogonal to \(v_{1}\) and \(v_{2}\) as \(\tilde{a}_{1}\cdot v_{i}\)=\(b\cdot v_{i}\)=\(0\). The \(q_{1}\) variable is also constrained to another parallel hyperplane defined by \(q_{1}\cdot v_{1}\)=\(0,q_{1}\cdot v_{2}\)=\(k\cdot v_{2}\)=\(w_{2}\). Then the extra \(\delta\)-function after the time-domain Fourier transform is, in the two sectors,
\[\delta\left(-\tilde{a}_{1}\cdot q_{1}-b\cdot q_{1}+u\right)= \delta(\tilde{b}_{(\mathrm{I})}\cdot q_{1}+u),\qquad\delta\left(\tilde{a}_{1} \cdot q_{1}-b\cdot q_{1}+u\right)=\delta(\tilde{b}_{(\mathrm{II})}\cdot q_{1} +u) \tag{105}\]
and the \(q_{1}\) integral localises to the line as shown in the following figure,
[Diagram (106), not reproduced in this conversion: the plane orthogonal to \(v_{1}\) and \(v_{2}\), showing the line \(\tilde{b}\cdot q_{1}=-u\) (in red) along which the remaining \(z_{v}\) integral runs.]
The plane depicted here is the one orthogonal to \(v_{1}\) and \(v_{2}\) which corresponds to the integrals over \(z_{v}\) and \(z_{b}\) in each sector (6.7). The variable \(\tilde{b}\!\cdot\!q_{1}=\tilde{b}^{2}z_{b}\) is localised to \(-u\) using (6.10) and the integral over \(z_{v}\) is taken along the red line orthogonal to the basis vector \(\tilde{b}\). In the following we use \(\tilde{b}_{\rm(I)},\tilde{b}_{\rm(II)}\) to denote the shifted impact parameters in the two sectors, and \(\tilde{v}_{\rm(I)},\tilde{v}_{\rm(II)}\) to denote the corresponding orthogonal directions. We also note that when we replace back \(\tilde{a}_{1}=ia_{1}\) in terms of the physical spin the quantities \(\tilde{b}_{\rm(I)},\tilde{b}_{\rm(II)}\) and \(\tilde{v}_{\rm(I)},\tilde{v}_{\rm(II)}\) are complex conjugates of each other.
We now go into some explicit examples. First, consider the term
\[\frac{c_{14}\cosh\left(\omega a_{1}\!\cdot\!q_{1}\right)e^{i\omega b\cdot q_{1 }-i\omega u}}{q_{1}\!\cdot\!q_{1}}. \tag{6.12}\]
Then according to (5.37), we need to sum over the two sectors and get
\[\frac{c_{14}}{4\sqrt{w_{2}^{2}\tilde{b}_{\rm(I)}\cdot\tilde{b}_{\rm(I)}-u^{2}\left(y^{2}-1\right)}}+\frac{c_{14}}{4\sqrt{w_{2}^{2}\tilde{b}_{\rm(II)}\cdot\tilde{b}_{\rm(II)}-u^{2}\left(y^{2}-1\right)}}, \tag{6.13}\]
which is a real result since \(\tilde{b}_{\rm(I)},\tilde{b}_{\rm(II)}\) are a complex conjugate pair. This is a general feature of the integrals encountered in the following calculation, namely when replacing \(\tilde{a}_{1}=ia_{1}\) the sector variables are complex but appear in combinations such that the resulting waveform is real (for a basis of real polarisations). A second example is
\[\frac{c_{13}\omega k\!\cdot\!S_{1}\!\cdot\!q_{1}e^{i\omega b\cdot q_{1}-i \omega u}G_{1}\left(-i\omega\tilde{a}_{1}\!\cdot\!q_{1}\right)}{q_{1}\!\cdot \!q_{1}}. \tag{6.14}\]
The contribution to the waveform is
\[\frac{ic_{13}}{4}\Bigg{(}\frac{-k\!\cdot\!S_{1}\!\cdot\!\tilde{v }_{\rm(I)}\sqrt{w_{2}^{2}\tilde{b}_{\rm(I)}\cdot\tilde{b}_{\rm(I)}+u^{2} \left(1-y^{2}\right)}+u\left(y^{2}-1\right)\tilde{b}_{\rm(I)}\!\cdot\!S_{1}\! \cdot\!k-w_{2}\tilde{b}_{\rm(I)}\!\cdot\!\tilde{b}_{\rm(I)}k\!\cdot\!S_{1}\! \cdot\!v_{2}}{u\left(y^{2}-1\right)\tilde{a}_{1}\!\cdot\!\tilde{b}_{\rm(I)} \sqrt{w_{2}^{2}\tilde{b}_{\rm(I)}\cdot\tilde{b}_{\rm(I)}-u^{2}\left(y^{2}-1 \right)-\tilde{a}_{1}\!\cdot\!\tilde{v}_{\rm(I)}\left(u^{2}\left(y^{2}-1 \right)-w_{2}^{2}\tilde{b}_{\rm(I)}\cdot\tilde{b}_{\rm(I)}\right)}}\] \[-\frac{-k\!\cdot\!S_{1}\!\cdot\!\tilde{v}_{\rm(II)}\sqrt{w_{2}^{ 2}\tilde{b}_{\rm(II)}\cdot\tilde{b}_{\rm(II)}+u^{2}\left(1-y^{2}\right)+u \left(y^{2}-1\right)\tilde{b}_{\rm(II)}\!\cdot\!S_{1}\!\cdot\!k-w_{2}\tilde{b }_{\rm(II)}\!\cdot\!\tilde{b}_{\rm(II)}k\!\cdot\!S_{1}\!\cdot\!v_{2}}{u\left(y^ {2}-1\right)\tilde{a}_{1}\!\cdot\!\tilde{b}_{\rm(II)}\sqrt{w_{2}^{2}\tilde{b }_{\rm(II)}\cdot\tilde{b}_{\rm(II)}-u^{2}\left(y^{2}-1\right)-\tilde{a}_{1}\! \cdot\!\tilde{v}_{\rm(II)}\left(u^{2}\left(y^{2}-1\right)-w_{2}^{2}\tilde{b}_{ \rm(II)}\cdot\tilde{b}_{\rm(II)}\right)}}\Bigg{)}. \tag{6.15}\]
In this form, the poles which depend on the spin vector \(a_{1}\) are due to the spurious pole in the \(G_{1}\) function. As with the \(G_{1}\) function itself, this pole explicitly cancels once we expand for \(|a_{1}|\ll 1\) giving
\[-\frac{c_{13}w_{2}\left(uw_{0}\left(y^{2}-1\right)\tilde{a}_{1} \!\cdot\!b+\left(-uw_{3}y^{2}+uw_{3}+w_{1}w_{2}y-w_{2}^{2}\right)\tilde{a}_{1 }\!\cdot\!v_{\perp}\right)}{2\left(y^{2}-1\right)\left(-u^{2}\left(y^{2}-1 \right)-w_{2}^{2}\right){}^{3/2}}\] \[+\frac{c_{13}w_{2}^{3}\left(uw_{0}\left(y^{2}-1\right)\tilde{a}_ {1}\!\cdot\!b+\left(-uw_{3}y^{2}+uw_{3}+w_{1}w_{2}y-w_{2}^{2}\right)\tilde{a}_{1 }\!\cdot\!v_{\perp}\right)}{4\left(y^{2}-1\right)^{2}\left(-u^{2}\left(y^{2}-1 \right)-w_{2}^{2}\right){}^{7/2}}\] \[\times\left(\left(y^{2}-1\right)\left(u^{2}\left(y^{2}-1\right)-4w _{2}^{2}\right)\left(\tilde{a}_{1}\!\cdot\!b\right){}^{2}+\left(u^{2}\left(y^{ 2}-1\right)+w_{2}^{2}\right)\left(\tilde{a}_{1}\!\cdot\!v_{\perp}\right)^{2} \right)+\cdots, \tag{6.16}\]
where \(w_{1}:=k\cdot v_{1},w_{2}:=k\cdot v_{2},w_{3}:=k\cdot b,w_{0}:=k\cdot v_{\perp}\).
A third example is
\[\frac{c_{3}\omega\left(q_{1}\cdot F_{k}\cdot v_{1}\right){}^{2}q_{1}\cdot S_{1} \cdot v_{2}e^{i\omega b\cdot q_{1}-iu\omega}G_{1}\left(-i\omega\tilde{a}_{1} \cdot q_{1}\right)}{q_{1}\cdot q_{1}k\cdot q_{1}}. \tag{6.17}\]
In this case, there is a trivial log-divergent term which we remove using the method described in Section 5.3. Thus the integral gives
\[\sum_{j=1}^{\rm II}\frac{\left(-1\right)^{j-1}}{4\left(y^{2}-1 \right)}\Bigg{[}\frac{ic_{3}\left(\tilde{v}_{j}\cdot S_{1}\cdot v_{2}\sqrt{w_{ 2}^{2}\tilde{b}_{j}\cdot\tilde{b}_{j}-u^{2}\left(y^{2}-1\right)}+u\left(y^{2} -1\right)\tilde{b}_{j}\cdot S_{1}\cdot v_{2}\right)}{k\cdot\tilde{v}_{j}\sqrt {w_{2}^{2}\tilde{b}_{j}\cdot\tilde{b}_{j}+u^{2}\left(1-y^{2}\right)}+u\left(y ^{2}-1\right)\tilde{b}_{j}\cdot k+w_{2}\left(w_{2}-w_{1}y\right)\tilde{b}_{j} \cdot\tilde{b}_{j}}\] \[\frac{\left(\tilde{v}_{j}\cdot F_{k}\cdot v_{1}\sqrt{w_{2}^{2} \tilde{b}_{j}\cdot\tilde{b}_{j}+u^{2}\left(1-y^{2}\right)}+u\left(y^{2}-1 \right)\tilde{b}_{j}\cdot F_{k}\cdot v_{1}-w_{2}\tilde{b}_{j}\cdot\tilde{b}_{j }v_{1}\cdot F_{k}\cdot v_{2}\right)^{2}}{\tilde{b}_{j}\cdot\tilde{b}_{j}\sqrt {w_{2}^{2}\tilde{b}_{j}\cdot\tilde{b}_{j}-u^{2}\left(y^{2}-1\right)}\left( \tilde{a}_{1}\cdot\tilde{v}_{j}\sqrt{w_{2}^{2}\tilde{b}_{j}\cdot\tilde{b}_{j}- u^{2}\left(y^{2}-1\right)}+u\left(y^{2}-1\right)\tilde{a}_{1}\cdot\tilde{b}_{j}\right)}\] \[-\frac{ic_{3}\left(\tilde{v}_{j}\cdot F_{k}\cdot v_{1}\right){}^ {2}\tilde{v}_{j}\cdot S_{1}\cdot v_{2}}{\tilde{b}_{j}\cdot\tilde{b}_{j} \tilde{a}_{1}\cdot\tilde{v}_{j}k\cdot\tilde{v}_{j}}\Bigg{]}. \tag{6.18}\]
The last term is the log-divergent piece, which can be removed trivially. One can also directly check that the spurious poles \(\frac{1}{\tilde{a}_{1}\cdot\tilde{v}_{j}}\) and \(\frac{1}{\left(\tilde{a}_{1}\cdot\tilde{v}_{j}\sqrt{w_{2}^{2}\tilde{b}_{j}\cdot\tilde{b}_{j}-u^{2}\left(y^{2}-1\right)}+u\left(y^{2}-1\right)\tilde{a}_{1}\cdot\tilde{b}_{j}\right)}\) cancel among the sectors. Again, by expanding for \(|a_{1}|\ll 1\) we see that the spurious poles cancel,
\[\frac{-ic_{3}w_{2}^{2}\left(v_{1}\cdot F_{k}\cdot v_{2}\right){}^ {2}b\cdot S_{1}\cdot v_{2}}{2\left(y^{2}-1\right)\left(u^{2}\left(1-y^{2} \right)-w_{2}^{2}\right){}^{3/2}}\frac{1}{\left(w_{1}w_{2}y-w_{2}^{2}-w_{0} \sqrt{u^{2}\left(1-y^{2}\right)-w_{2}^{2}}-uw_{3}\left(y^{2}-1\right)\right)^{ 2}}\] \[\times\left(-w_{0}(u^{2}(y^{2}-1)-w_{2}^{2})\sqrt{u^{2}\left(1-y^ {2}\right)-w_{2}^{2}}-u^{3}\left(y^{2}-1\right)^{2}w_{3}-w_{1}w_{2}^{3}y+w_{2}^ {4}\right)\] \[+\cdots. \tag{6.19}\]
The spin-independent spurious pole is still present and will only cancel after summing with the corresponding terms in the \(q_{2}^{2}\)-channel.
### The \(q_{2}^{2}\)-channel
For the second graph in (4.1), the physical propagator is
\[\frac{1}{q_{2}^{2}}=\frac{1}{(k-q_{1})^{2}}. \tag{6.20}\]
It is convenient to shift the integration variable as \(q_{1}\to q_{1}+k\) and the physical propagator becomes simply \(\frac{1}{q_{1}^{2}}\), the same as in the \(q_{1}^{2}\)-channel. The spurious pole
is invariant under the shift due to the on-shell condition of the external graviton, while the spin dependent spurious poles become
\[\frac{1}{a\cdot q_{1}}, \frac{1}{a\cdot k}. \tag{6.21}\]
The delta functions coming from the definition of the waveform (5.14) are shifted correspondingly as
\[\delta(2v_{1}\cdot q_{1}+2w_{1}), \delta(2v_{2}\cdot q_{1}). \tag{6.22}\]
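The two statements above, that the shift maps \(1/q_{2}^{2}\) to \(1/q_{1}^{2}\) and leaves the spurious pole \(k\cdot q_{1}\) untouched because \(k^{2}=0\), are easy to confirm numerically; the sketch below uses random kinematics and an assumed mostly-minus metric purely for illustration.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

rng = np.random.default_rng(1)
q1 = rng.normal(size=4)
n = rng.normal(size=3)
k = np.concatenate(([np.linalg.norm(n)], n))       # null external graviton, k^2 = 0

q1_shifted = q1 + k                                 # the shift q_1 -> q_1 + k
q2 = k - q1_shifted                                 # q_2 = k - q_1 after the shift

print(np.isclose(dot(q2, q2), dot(q1, q1)))         # True: 1/q_2^2 becomes 1/q_1^2
print(np.isclose(dot(k, q1_shifted), dot(k, q1)))   # True: k.q_1 invariant since k^2 = 0
```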
Applying the residue theorem to evaluate the integrals is then the exact same process as the \(q_{1}^{2}\)-channel with the following integrand
\[(\omega^{2}e^{i\omega(b\cdot k+b\cdot q_{1})-i\omega u})\mathcal{ M}_{q_{2}^{2}}=(\omega^{2}e^{i\omega(b\cdot k+b\cdot q_{1})-i\omega u})\times\] \[\left[\frac{\cosh\left(\omega a_{1}\cdot(k+q_{1})\right)\left(2w _{1}^{2}\left(v_{1}\cdot F_{k}\cdot v_{2}\right)^{2}-4w_{1}yv_{1}\cdot F_{k} \cdot v_{2}q_{1}\cdot F_{k}\cdot v_{1}+\left(2y^{2}-1\right)\left(q_{1}\cdot F _{k}\cdot v_{1}\right){}^{2}\right)}{4w_{1}^{2}\omega^{2}q_{1}\cdot q_{1}k \cdot q_{1}}\right.\] \[+\frac{iG_{1}\left(\omega a_{1}\cdot(k+q_{1})\right)}{2w_{1}^{2} \omega q_{1}\cdot q_{1}k\cdot q_{1}}\Big{(}w_{1}v_{1}\cdot F_{k}\cdot v_{2}- yq_{1}\cdot F_{k}\cdot v_{1}\Big{)}\Big{(}-w_{2}q_{1}\cdot S_{1}\cdot F_{k} \cdot v_{1}\] \[+k\cdot S_{1}\cdot v_{2}q_{1}\cdot F_{k}\cdot v_{1}+q_{1}\cdot S _{1}\cdot v_{2}q_{1}\cdot F_{k}\cdot v_{1}+v_{1}\cdot F_{k}\cdot v_{2}k\cdot S _{1}\cdot q_{1}-w_{2}k\cdot S_{1}\cdot F_{k}\cdot v_{1}\Big{)}\] \[+\frac{G_{2}\left(\omega a_{1}\cdot q_{1},\omega a_{1}\cdot k \right)\left(c_{33}q_{1}\cdot S_{1}\cdot F_{k}\cdot v_{1}+\left(c_{25}q_{1} \cdot S_{1}\cdot v_{2}+c_{48}\right)q_{1}\cdot F_{k}\cdot v_{1}+c_{41}q_{1} \cdot S_{1}\cdot v_{2}+c_{19}\right)}{q_{1}\cdot q_{1}}\] \[+\frac{G_{1}\left(\omega a_{1}\cdot(k+q_{1})\right)\left(c_{29}q _{1}\cdot F_{k}\cdot v_{1}+c_{43}\right)}{\omega q_{1}\cdot q_{1}}+\frac{G_{1 }\left(\omega a_{1}\cdot k\right)G_{1}\left(\omega a_{1}\cdot q_{1}\right)}{ q_{1}\cdot q_{1}}\Big{(}c_{2}+c_{68}q_{1}\cdot F_{k}\cdot v_{1}\] \[+a_{1}\cdot q_{1}\left(c_{63}q_{1}\cdot F_{k}\cdot v_{1}+c_{70} \right)+c_{62}\left(q_{1}\cdot F_{k}\cdot v_{1}\right){}^{2}+k\cdot q_{1} \left(c_{7}q_{1}\cdot F_{k}\cdot v_{1}+c_{11}\right)\Big{)}\] \[+\frac{G_{1}\left(\omega a_{1}\cdot k\right)\cosh\left(\omega a_ {1}\cdot q_{1}\right)}{\omega q_{1}\cdot q_{1}}\left(c_{12}q_{1}\cdot F_{k} \cdot v_{1}+c_{51}\right)+G_{e}^{\prime\prime}\left(\omega a_{1}\cdot q_{1}, \omega a_{1}\cdot k\right)\omega^{2}\Big{(}\frac{c_{4}k\cdot q_{1}}{q_{1}\cdot q _{1}}\] \[+\frac{c_{16}a_{1}\cdot q_{1}k\cdot q_{1}}{q_{1}\cdot q_{1}}+ \frac{c_{58}a_{1}\cdot q_{1}}{q_{1}\cdot q_{1}}+\frac{c_{74}\left(a_{1}\cdot q _{1}\right){}^{2}}{q_{1}\cdot q_{1}}+\frac{c_{74}\left(a_{1}\cdot q_{1}\right) {}^{2}}{q_{1}\cdot q_{1}}+\frac{c_{74}q_{1}\cdot F_{k}\cdot v_{1}}{q_{1}\cdot q _{1}}+\frac{c_{72}a_{1}\cdot q_{1}q_{1}\cdot F_{k}\cdot v_{1}}{q_{1}\cdot q_{1 }}\Big{)}\] \[+G_{e}^{\prime}\left(\omega a_{1}\cdot q_{1},\omega a_{1}\cdot k \right)\omega\Big{(}\frac{c_{66}a_{1}\cdot q_{1}\cdot P_{k}\cdot v_{1}}{q_{1} \cdot q_{1}}+\frac{c_{64}a_{1}\cdot q_{1}k\cdot q_{1}}{q_{1}\cdot q_{1}}+ \frac{c_{69}\left(a_{1}\cdot q_{1}\right){}^{2}}{q_{1}\cdot q_{1}}+\frac{c_{8 2}a_{1}\cdot q_{1}}{q_{1}\cdot q_{1}}\] \[+\frac{c_{10}k\cdot q_{1}q_{1}\cdot F_{k}\cdot v_{1}}{q_{1}\cdot q _{1}}+\frac{c_{14}\left(q_{1}\cdot F_{k}\cdot v_{1}\right){}^{2}}{q_{1}\cdot q _{1}}+\frac{c_{71}q_{1}\cdot F_{k}\cdot v_{1}}{q_{1}\cdot q_{1}}+\frac{c_{22} \left(k\cdot q_{1}\right){}^{2}}{q_{1}\cdot q_{1}}+\frac{c_{67}k\cdot q_{1}}{q _{1}\cdot q_{1}}+\frac{c_{59}}{q_{1}\cdot q_{1}}\Big{)}\] \[+G_{o}^{\prime\prime}\left(\omega a_{1}\cdot q_{1},\omega a_{1} \cdot k\right)\omega^{2}\Big{(}\frac{c_{44}a_{1}\cdot q_{1}\cdot S_{1}\cdot F_{k}\cdot v_{1}}{q_{1}\cdot q _{1}}+\frac{c_{52}a_{1}\cdot q_{1}q_{1}\cdot F_{k}\cdot v_{1}}{q_{1}\cdot q_{1 }}+\frac{c_{46}a_{1}\cdot q_{1}k\cdot S_{1}\cdot q_{1}}{q_{1}\cdot q_{1}}\] \[+\frac{c_{53}a_{1}\cdot q_{1}k\cdot q_{1}}{q_{1}\cdot q_{1}}+\frac{ c_{76}a_{1}\cdot q_{1}q_{1}\cdot S_{1}\cdot v_{2}}{q_{1}\cdot 
q_{1}}+\frac{c_{61}a_{1} \cdot q_{1}}{q_{1}\cdot q_{1}}+\frac{c_{81}\left(a_{1}\cdot q_{1}\right){}^{2}}{q _{1}\cdot q_{1}}+\frac{c_{45}q_{1}\cdot S_{1}\cdot v_{2}q_{1}\cdot F_{k}\cdot v _{1}}{q_{1}\cdot q_{1}}\] \[+\frac{c_{5}q_{1}\cdot F_{k}\cdot v_{1}}{q_{1}\cdot q_{1}}+\frac{ c_{47}k\cdot q_{1}q_{1}\cdot S_{1}\cdot v_{2}}{q_{1}\cdot q_{1}}+\frac{c_{6}k \cdot q_{1}}{q_{1}\cdot q_{1}}\Big{)}+G_{o}^{\prime}\left(\omega a_{1} \cdot q_{1},\omega a_{1}\cdot k\right)\omega\Big{(}\frac{c_{32}a_{1}\cdot q_{1}q_{1 }\cdot S_{1}\cdot F_{k}\cdot v_{1}}{q_{1}\cdot q_{1}}\] \[+\frac{c_{20}a_{1}\cdot q_{1}q_{1}\cdot F_{k}\cdot v_{1}}{q_{1} \cdot q_{1}}+\frac{c_{13}a_{1}\cdot q_{1}k\cdot q_{1}}{q_{1}\cdot q_{1}}+ \frac{c_{40}a_{1}\cdot q_{1}q_{1}\cdot S_{1}\cdot v_{2}}{q_{1}\cdot q_{1}}+ \frac{c_{50}\left(a_{1}\cdot q_{1}\right){}^{2}}{q_{1}\cdot q_{1}}+\frac{c_{8 0}a_{1}\cdot q_{1}}{q_{1}\cdot q_{1}}\]
\[+\frac{c_{15}q_{1}\!\cdot\!S_{1}\!\cdot\!v_{2}q_{1}\!\cdot\!F_{k}\! \cdot\!v_{1}}{q_{1}\!\cdot\!q_{1}}+\frac{c_{34}q_{1}\!\cdot\!S_{1}\!\cdot\!F_{k} \!\cdot\!v_{1}}{q_{1}\!\cdot\!q_{1}}+\frac{c_{21}q_{1}\!\cdot\!F_{k}\!\cdot\!v _{1}}{q_{1}\!\cdot\!q_{1}}+\frac{c_{26}k\!\cdot\!q_{1}q_{1}\!\cdot\!S_{1}\! \cdot\!v_{2}}{q_{1}\!\cdot\!q_{1}}+\frac{c_{35}k\!\cdot\!S_{1}\!\cdot\!q_{1}}{ q_{1}\!\cdot\!q_{1}}\] \[+\frac{c_{49}k\!\cdot\!q_{1}}{q_{1}\!\cdot\!q_{1}}+\frac{c_{73}q _{1}\!\cdot\!S_{1}\!\cdot\!v_{2}}{q_{1}\!\cdot\!q_{1}}+\frac{c_{60}}{q_{1}\! \cdot\!q_{1}}\Bigg{)}\Bigg{]}, \tag{6.23}\]
where
\[G^{{}^{\prime}}_{o}(x_{1},x_{2}):=(\partial_{x_{1}}-\partial_{x_ {2}})G_{2}(x_{1},x_{2}),\hskip 14.226378ptG^{{}^{\prime}}_{e}(x_{1},x_{2}):=( \partial_{x_{1}}-\partial_{x_{2}})G_{1}(x_{1})G_{1}(x_{2})\] \[G^{{}^{\prime\prime}}_{o}(x_{1},x_{2}):=\frac{(\partial_{x_{1}}- \partial_{x_{2}})^{2}}{2}G_{2}(x_{1},x_{2}),\hskip 14.226378ptG^{{}^{\prime\prime}}_{e}(x_{1},x _{2}):=\frac{(\partial_{x_{1}}-\partial_{x_{2}})^{2}}{2}G_{1}(x_{1})G_{1}(x_{ 2})\,. \tag{6.24}\]
The coefficients are listed in Appendix B. The integrand is composed of four parts:
* terms including the functions \(G_{1}\) and \(\cosh\) and with spurious pole \(\frac{1}{k\cdot q_{1}}\): In this part, the entire functions are \(G_{1}(\omega a_{1}\!\cdot\!(k+q_{1})),\cosh(\omega a_{1}\!\cdot\!(k+q_{1}))\). All the terms are of \(\mathcal{O}(\omega^{0})\). It is easy to see that the spurious pole is cancelled when adding the corresponding terms in the \(q_{1}^{2}\)-channel.
* terms with the functions \(\cosh,G_{1},G_{2}\) and without the spurious pole \(\frac{1}{k\cdot q_{1}}\): They are all of \(\mathcal{O}(\omega^{0})\).
* terms with the functions \(G^{\prime}_{o},G^{\prime}_{e}\): They are of \(\mathcal{O}(\omega^{0})\) and \(\mathcal{O}(\omega^{1})\). None of them contains the spurious pole \(\frac{1}{k\cdot q_{1}}\).
* terms with the functions \(G^{{}^{\prime\prime}}_{o},G^{{}^{\prime\prime}}_{e}\): They are of \(\mathcal{O}(\omega^{0})\), \(\mathcal{O}(\omega^{1})\) and \(\mathcal{O}(\omega^{2})\). None of them contains the spurious pole \(\frac{1}{k\cdot q_{1}}\). They also do not contain the physical massive pole \(\frac{1}{k\cdot v_{1}}=\frac{1}{w_{1}}\).
Unlike in the \(q_{1}^{2}\)-channel, here we have more general entire functions coming from the Compton amplitude for the particle with spin \(a_{1}\), and consequently we now have four sectors with different exponential factors
\[\text{(I)}: \quad\exp\left(-i\omega\left(-\tilde{a}_{1}\!\cdot\!k-\tilde{a}_{ 1}\!\cdot\!q_{1}-b\!\cdot\!k-b\!\cdot\!q_{1}+u\right)\right),\] \[\text{(II)}: \quad\exp\left(-i\omega\left(-\tilde{a}_{1}\!\cdot\!k+\tilde{a}_{ 1}\!\cdot\!q_{1}-b\!\cdot\!k-b\!\cdot\!q_{1}+u\right)\right),\] \[\text{(III)}: \quad\exp\left(-i\omega\left(\tilde{a}_{1}\!\cdot\!k-\tilde{a}_{ 1}\!\cdot\!q_{1}-b\!\cdot\!k-b\!\cdot\!q_{1}+u\right)\right),\] \[\text{(IV)}: \quad\exp\left(-i\omega\left(\tilde{a}_{1}\!\cdot\!k+\tilde{a}_{ 1}\!\cdot\!q_{1}-b\!\cdot\!k-b\!\cdot\!q_{1}+u\right)\right). \tag{6.25}\]
In each sector, we still use the sector-dependent basis in (6.4) and parameterise the \(q_{1}\) variable of (6.7) with
\[\begin{cases}\tilde{b}_{\text{(I)}}=\tilde{b}_{\text{(III)}}=-b-\tilde{a}_{1}\\ \tilde{b}_{\text{(II)}}=\tilde{b}_{\text{(IV)}}=-b+\tilde{a}_{1}\end{cases}, \quad\tilde{v}_{j}=\epsilon(v_{1},v_{2},\tilde{b}_{j},\bullet)\,,\quad\quad j= \text{I},\text{II},\text{III},\text{IV}. \tag{6.26}\]
Using this, the extra \(\delta\)-functions in each sector are
\[\text{(I)}: \delta\left(-\tilde{a}_{1}\!\cdot\!k-b\!\cdot\!k+\tilde{b}_{(\rm I) }\!\cdot\!q_{1}+u\right),\ \ \ \ \ \text{(III)}: \delta\left(\tilde{a}_{1}\!\cdot\!k-b\!\cdot\!k+\tilde{b}_{(\rm III) }\!\cdot\!q_{1}+u\right),\] \[\text{(II)}: \delta\left(-\tilde{a}_{1}\!\cdot\!k-b\!\cdot\!k+\tilde{b}_{(\rm II )}\!\cdot\!q_{1}+u\right),\ \ \ \ \text{(IV)}: \delta\left(\tilde{a}_{1}\!\cdot\!k-b\!\cdot\!k+\tilde{b}_{(\rm IV) }\!\cdot\!q_{1}+u\right)\,.\]
Then the integration localises onto a hyperplane for each sector and the method is exactly the same as in the last section. The new feature is the appearance of the entire functions \(G^{\prime}_{o},G^{\prime}_{e},G^{{}^{\prime\prime}}_{o},G^{{}^{\prime\prime}}_ {e}\). The derivatives will lead to entire functions that are not homogeneous with respect to \(\omega\) even while ignoring the exponential factors. Hence the integrand has three different powers of \(\omega\), schematically
\[1\times(\bullet)e^{\bullet}+\omega\times(\bullet)e^{\bullet}+ \omega^{2}\times(\bullet)e^{\bullet}. \tag{6.28}\]
This leads to an integral result with the structure
\[1\times(\bullet\bullet)+i\partial_{u}(\bullet\bullet)-\partial_{ u}^{2}(\bullet\bullet)\,. \tag{6.29}\]
In practice, our result is obtained from evaluating the \(\delta\)-functions as usual and replacing \(\omega\) by \(i\partial_{u}\) at the end, as shown in (5.37).
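The replacement of \(\omega\) by \(i\partial_{u}\) is just the usual Fourier identity: an explicit power of \(\omega\) under the \(\omega\)-integral can be traded for a \(u\)-derivative of the transformed result. A toy numerical check, with a hypothetical Gaussian integrand standing in for the localised amplitude, is sketched below.

```python
import numpy as np

omega = np.linspace(-30.0, 30.0, 300001)
d_omega = omega[1] - omega[0]
f = np.exp(-omega**2)                      # toy stand-in for an O(omega^0) integrand

def transform(g, u):
    return np.sum(np.exp(-1j * omega * u) * g) * d_omega / (2 * np.pi)

u, h = 0.7, 1e-5
lhs = transform(omega * f, u)                                    # explicit power of omega
rhs = 1j * (transform(f, u + h) - transform(f, u - h)) / (2*h)   # i d/du of the transform
print(abs(lhs - rhs))   # ~0: omega under the integral acts as i*d/du on the result
```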
We now perform a numerical check that the result is free of spin-dependent spurious poles. After a random numerical replacement, the spin-dependent spurious pole is located at
\[\tilde{a}_{1}\cdot v_{\perp}-\frac{42\tilde{a}_{1}\cdot b}{5}= \xi\to 0\,. \tag{6.30}\]
We extract the singular terms at the spurious pole, finding
\[-\frac{1323\sqrt{3}u}{640\sqrt{-25u^{2}-700u-17444}\xi^{3}}+ \frac{5245317u}{1100800\sqrt{-75u^{2}-2100u-58732}\xi^{3}}\] \[-\frac{9261\sqrt{3}}{320\sqrt{-25u^{2}-700u-17444}\xi^{3}}+ \frac{36717219}{550400\sqrt{-75u^{2}-2100u-58732}\xi^{3}}-\frac{277641}{110080 0\xi^{3}}\] \[+\cdots 1345\text{ more terms}\cdots \tag{6.31}\]
After applying the derivative operators and setting \(u\)=0 we get
\[\frac{0.0123034\,+0.140702i}{\xi^{2}}+\frac{0.0205579\,-0.00156661 i}{\xi}\] \[-\frac{0.0123034\,+0.140702i}{\xi^{2}}-\frac{0.0222046\,-0.003331 2i}{\xi}\] \[+\frac{0.00164663\,-0.00176459i}{\xi}=0. \tag{6.32}\]
We have also tested several other values of \(u\) and find that the singular terms always vanish. This indicates that the final result is free of spurious poles to any spin order.
### Discussion of the resummed spin waveform
The final result of the waveform has three contributions coming from terms each with up to two \(u\)-derivatives acting on them
\[h^{\infty}(u)=\Big{(}h_{0}^{\infty}(u)+\omega h_{1}^{\infty}(u)+\omega^{2}h_{2}^{ \infty}(u)\Big{)}\Big{|}_{\omega\to i\partial_{u}}\,, \tag{108}\]
and in terms of \(\tilde{a},b,v_{\perp},k,v_{1},v_{2}\) takes the following schematic form
\[\frac{(2y^{2}-1)\,v_{1}\!\cdot\!F_{2}\!\cdot\!v_{2}\,(v_{\perp}\! \cdot\!F_{2}\!\cdot\!v_{1}\,(\tilde{a}_{1}\!\cdot\!b-1)-\tilde{a}_{1}\!\cdot\! v_{\perp}b\!\cdot\!F_{2}\!\cdot\!v_{1})}{8w_{1}^{2}w_{2}\,((y^{2}-1)\,(\tilde{a}_{1} \!\cdot\!b-1)\,^{2}+(\tilde{a}_{1}\!\cdot\!v_{\perp})\,^{2})}\] \[+\frac{(2y^{2}-1)\,v_{1}\!\cdot\!F_{2}\!\cdot\!v_{2}\,(\tilde{a} _{1}\!\cdot\!v_{\perp}b\!\cdot\!F_{2}\!\cdot\!v_{1}-v_{\perp}\!\cdot\!F_{2}\! \cdot\!v_{1}\,(\tilde{a}_{1}\!\cdot\!b+1))}{8w_{1}^{2}w_{2}\,((y^{2}-1)\,( \tilde{a}_{1}\!\cdot\!b+1)\,^{2}+(\tilde{a}_{1}\!\cdot\!v_{\perp})\,^{2})}\] \[+\cdots\ 336\ {\rm more\ terms}\ \cdots \tag{109}\]
We note that the poles in \(w_{1},w_{2}\) correspond to the physical massive poles \(\frac{1}{v_{1}\cdot k_{2}}\) and \(\frac{1}{v_{2}\cdot k_{2}}\). The singular behaviour on these poles does not depend on the contact terms present in the Compton amplitude, which by definition are free of such poles, and so this behaviour is exact up to any spin order. The explicit result in the case of \(a_{1}\!\cdot\!v_{2}\!\!=\!\!0\) can be found in the GitHub repository. In the remainder of this subsection, we focus on the properties of the waveform by plotting its numerical values as a function of the retarded time \(u\) and the spin parameter. As in the scalar case, we can choose a frame such that the kinematics are given by
\[v_{1} =(1,0,0,0), v_{2}=(y,\sqrt{y^{2}-1},0,0)\] \[k =(1,\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta), v_{\perp}=(0,0,\sqrt{y^{2}-1},0)\] \[\varepsilon =\frac{1}{\sqrt{2}}\big{(}0,\cos\theta\cos\phi-i\sin\phi,\cos \theta\sin\phi+i\cos\phi,-\sin\theta\big{)}\,,\ \ b=(0,0,0,1). \tag{110}\]
Then we can further parameterise the constrained spin \(a_{1}\) such that \(a_{1}\!\cdot\!v_{2}=0\) as
\[a_{1}=(0,0,|a|\cos\psi,|a|\sin\psi), \tag{111}\]
where \(|a|\) is the magnitude of the spin and \(\psi\) the angle of the spin's direction in the plane orthogonal to \(v_{1}\) and \(v_{2}\).
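As a small consistency sketch (with the same assumed frame and signature as before), the parameterisation above automatically satisfies the constraint \(a_{1}\cdot v_{2}=0\) as well as \(a_{1}\cdot v_{1}=0\), with \(-a_{1}\cdot a_{1}=|a|^{2}\):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

y, psi, a_mag = 1.5, np.pi/4, 0.65
v1 = np.array([1.0, 0.0, 0.0, 0.0])
v2 = np.array([y, np.sqrt(y**2 - 1), 0.0, 0.0])
a1 = np.array([0.0, 0.0, a_mag*np.cos(psi), a_mag*np.sin(psi)])

print(dot(a1, v1), dot(a1, v2))   # 0.0 0.0: the constraint holds by construction
print(np.sqrt(-dot(a1, a1)))      # 0.65 = |a|
```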
In Figures 2 and 3 we show the time-domain waveform \(h_{+}\) at \(y=\frac{3}{2},\theta=\frac{\pi}{4},\phi=\frac{\pi}{4}\). In all of our graphs, we set \(\kappa\!\!=\!\!m_{1}\!\!=\!\!m_{2}\!\!=\!\!1\), so each graph is missing a factor of \(\kappa^{4}m_{1}m_{2}\). Figure 2 shows the waveform dependence on the retarded time \(u\) and angle \(\psi\). When the magnitude of the spin is equal to \(0.2\), a small spin parameter compared to the magnitude of the impact parameter \(|b|\), the time-domain waveform is similar to the scalar case. The spin effect on the waveform can then be taken as a perturbation on top of the spinless case. However, for larger magnitudes, for example \(0.65\), the time-domain waveform is modified greatly due to the effects of spin. To highlight the effect of changing the magnitude of the spin, in Figure 3 we plot the various spinning waveforms at fixed spin angle \(\psi=\frac{\pi}{4}\).

**Figure 2**: Waveform of \(h_{+}^{\infty}\) at \(|a|=0.0\), \(|a|=0.2\), \(|a|=0.65\). Note that here \(|a|,u\) are in units of the impact parameter \(|b|\) as we have set \(|b|=1\). For a general impact parameter, one can simply replace \(|a|,u\) with \(|a|/|b|,u/|b|\). Similarly in Figure 3.

**Figure 3**: Waveform of \(h_{+}^{\infty}\) at \(|a|=0.0\), \(|a|=0.2\), \(|a|=0.65\) with spin angle \(\psi=\frac{\pi}{4}\).
From the waveform, we can extract the gravitational memory effect using
\[\Delta h^{\infty}=h^{\infty}(+\infty)-h^{\infty}(-\infty). \tag{108}\]
We first study the expansion around \(u\to\infty\) of the individual pieces \(h_{i}^{\infty}(u)\) which contribute to the waveform decomposition given at the beginning of this subsection, and find they all have similar behaviour
\[h_{i}^{\infty}(u)\sim c_{i}+\mathcal{O}\Big{(}\frac{1}{u}\Big{)}. \tag{109}\]
The contributions \(h_{1}^{\infty}\) and \(h_{2}^{\infty}\) have the derivative \(i\partial_{u}\) acting on them; as such, their behaviour in the large-\(u\) limit is sub-leading and they do not contribute to the memory. The memory can then be computed from the contribution \(h_{0}^{\infty}\), and we find
\[\Delta h^{\infty} =\frac{\kappa^{4}m_{1}m_{2}}{8\pi}\frac{v_{1}\!\cdot\!F_{k}\!\cdot \!v_{2}}{4w_{1}^{2}w_{2}^{2}\sqrt{y^{2}-1}\left(4\left(a_{1}\!\cdot\!b\right){} ^{2}+\left(a_{1}\!\cdot\!a_{1}+1\right){}^{2}\right)}\] \[\times\Big{(}2a_{1}\!\cdot\!bv_{1}\!\cdot\!F_{k}\!\cdot\!v_{2} \left(\left(1-2y^{2}\right)a_{1}\!\cdot\!k+w_{0}y\left(a_{1}\!\cdot\!a_{1}-1 \right)\right)\] \[+4w_{2}a_{1}\!\cdot\!b\left(\left(1-2y^{2}\right)a_{1}\!\cdot\!F _{k}\!\cdot\!v_{1}+y\left(a_{1}\!\cdot\!a_{1}-1\right)v_{\perp}\!\cdot\!F_{k} \!\cdot\!v_{1}\right)\] \[-\left(a_{1}\!\cdot\!a_{1}+1\right)\left(-2w_{2}b\!\cdot\!F_{k} \!\cdot\!v_{1}+w_{3}v_{1}\!\cdot\!F_{k}\!\cdot\!v_{2}\right)\left(2yv_{2}\! \cdot\!S_{1}\!\cdot\!b+2y^{2}-1\right)\Big{)}. \tag{110}\]
In this compact formula, we notice that all the terms contain at least one pole in \(w_{1}\) and \(w_{2}\). This indicates that contact terms in the Compton amplitude do not contribute to the memory at any order in spin. As such, we should expect that the waveform we have computed captures the memory to all orders in spin. In addition, in Section 8 below we compute a formula (107) for the tree-level gravitational memory at all orders in spin using a classical soft factor. The two formulae are indeed in agreement. We also mention again that we have compared our results to those of [157], finding agreement (see also [165]).
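The statement that only \(h_{0}^{\infty}\) contributes to the memory follows purely from the large-\(|u|\) asymptotics quoted above; a toy symbolic illustration, with a hypothetical profile that merely shares those asymptotics rather than the actual waveform, is sketched below.

```python
import sympy as sp

u = sp.Symbol('u', real=True)
c, d, m = sp.symbols('c d m', positive=True)

# toy piece with the asymptotics h ~ const + O(1/u) at large |u|
h0 = c + d * u / sp.sqrt(u**2 + m**2)

print(sp.limit(h0, u, sp.oo) - sp.limit(h0, u, -sp.oo))   # 2*d: the memory step
print(sp.limit(sp.diff(h0, u), u, sp.oo))                  # 0: i*d/du terms drop out
```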
A graph of the memory, for the same kinematics as before and various values of the magnitude of the spin and direction, is presented in Figure 4. When \(|a|\) tends to 1, there are two singular points at \(\psi=0,\pi\) corresponding to when the spin vector and impact parameter are orthogonal.
## 7 Comparison with the spin-expanded integrand
If the spin parameter is small with respect to the impact parameter \(|a|\ll|b|\) then we can evaluate the waveform integration order by order in a spin expansion. When we perform such an expansion the tree-level five-point amplitude is free of the spin-dependent spurious poles. One can still work in the \(q_{1}^{2}\) and \(q_{2}^{2}\) channels separately,
which only contain one spurious pole \(\frac{1}{q_{1}\cdot k}\). After the usual re-scaling \(q_{i}=\omega\hat{q}_{i}\), the waveform integrand is given by
\[\mathcal{M}_{q_{1}^{2}}=\frac{1}{\omega^{2}}\Big{(}\sum_{i=0}^{ \infty}\mathcal{M}_{q_{1}^{2}}^{(i)}\omega^{i}\Big{)}, \mathcal{M}_{q_{2}^{2}}=\frac{1}{\omega^{2}}\Big{(}\sum_{i=0}^{ \infty}\mathcal{M}_{q_{2}^{2}}^{(i)}\omega^{i}\Big{)}. \tag{7.1}\]
We still integrate over the frequency first but now after expanding in the spin parameter there is only one sector per cut. Thus the integrand contains the same delta functions as in the scalar case
\[\begin{cases}&\delta\left(-b\cdot q_{1}+u\right)\quad q_{1}^{2}\text{-channel}\\ &\delta\left(-b\cdot q_{2}+u\right)\quad q_{2}^{2}\text{-channel}.\end{cases} \tag{7.2}\]
The extra powers of \(\omega\) become derivatives in the time, \(i\partial_{u}\), as before. Now using the original parameterisation (5.20), after we localise \(z_{b}\) each term in the integrand belongs to one of the following general expressions
\[\frac{c_{0}+c_{1}z_{v}+c_{2}z_{v}^{2}+c_{3}z_{v}^{3}\cdots}{(z_{v }+\bar{c})(z_{v}^{2}+\bar{c})}, \frac{\bar{c}_{0}+\bar{c}_{1}z_{v}+\bar{c}_{2}z_{v}^{2}+\bar{c}_{3}z _{v}^{3}\cdots}{(z_{v}^{2}+\bar{c})}\,, \tag{7.3}\]
where the \(c\)'s are functions of the external kinematics. The \(\frac{1}{z_{v}^{2}+c_{2}^{\prime}}\) is the physical \(q_{1}^{2}\) (or \(q_{2}^{2}\)) pole and \(\frac{1}{z_{v}+c_{1}^{\prime}}\) is the spurious pole at \(q_{1}\cdot k\). Since the waveform only receives contributions from the physical pole, we can use polynomial division to reduce the numerators. Explicitly, we perform polynomial division over the physical pole, and obtain
\[\frac{c_{0}^{\prime}+c_{1}^{\prime}z_{v}}{(z_{v}+\bar{c})(z_{v}^{ 2}+\bar{c})}+(\text{terms without physical poles}),\]
Figure 4: Gravitational memory: the top graph (blue) is the imaginary part, corresponding to \(h_{\times}^{\infty}\), and the bottom graph (orange) is the real part, corresponding to \(h_{+}^{\infty}\).
\[\frac{\bar{c}_{0}^{\prime}+\bar{c}_{1}^{\prime}z_{v}}{(z_{v}^{2}+\bar{c})}+(\text{ terms without physical poles}). \tag{100}\]
Terms without physical poles correspond to contributions that are proportional to delta functions in \(b\) (and derivatives thereof) and hence do not contribute to the long-range waveform. Thus we only have the following two types of master integrals after performing partial fractions over the spurious pole
\[\frac{c_{0}^{{}^{\prime\prime}}}{(z_{v}+\bar{c})(z_{v}^{2}+\bar{ c})}\,,\] \[\frac{\bar{c}_{0}^{{}^{\prime\prime}}+\bar{c}_{1}^{{}^{\prime \prime}}z_{v}}{(z_{v}^{2}+\bar{c})}=\frac{\bar{c}_{0}^{{}^{\prime\prime}}}{(z_ {v}^{2}+\bar{c})}+(\text{terms that integrate to zero}). \tag{101}\]
The two master integrals can then be evaluated by calculating the residue on the physical pole. The final result of the \(q_{1}\) integral is of the form
\[h_{\text{expanded}}^{\infty}(u) =\Big{(}\sum_{i=0}^{\infty}\omega^{i}h_{i}^{\infty}(u)\Big{)}|_{ \omega\to i\partial_{u}}\] \[\sim\left[\frac{\omega^{4}a_{1}{\cdot}a_{1}a_{1}{\cdot}kv_{\perp} {\cdot}F_{2}{\cdot}v_{1}a_{1}{\cdot}F_{2}{\cdot}v_{1}}{24\left(y^{2}-1\right) }-\frac{i\omega^{3}a_{1}{\cdot}a_{1}v_{\perp}{\cdot}F_{2}{\cdot}v_{1}\text{tr }\left(F_{2},S_{1}\right)}{48\left(y^{2}-1\right)}\right.\] \[-\left.\frac{w_{0}\left(2y^{2}-1\right)\left(v_{1}{\cdot}F_{2}{ \cdot}v_{2}\right){}^{2}}{8w_{1}^{2}w_{2}^{2}\left(y^{2}-1\right)}-\frac{ \left(2y^{2}-1\right)\left(v_{1}{\cdot}F_{2}{\cdot}v_{2}\right){}^{2}}{8w_{1} ^{2}w_{2}^{2}\left(y^{2}-1\right)\sqrt{-u^{2}\left(y^{2}-1\right)-w_{2}^{2}}}\right.\] \[\left.\times\left(-w_{0}\sqrt{u^{2}\left(-y^{2}\right)+u^{2}-w_{ 2}^{2}}+uw_{3}y^{2}-uw_{3}+w_{1}w_{2}y-w_{2}^{2}\right)\right.\] \[\left.+\cdots\text{more terms}\cdots\right]\Bigg{|}_{\omega\to i \partial_{u}}. \tag{102}\]
The full waveform result expanded in the spin parameter up to \(a^{4}\) order is included in the GitHub repository. Our result contains contributions at orders beyond \(a^{4}\) but these will in general be incomplete until possible additional contact terms are included in the Compton amplitude.
We now comment on the difference between the resummed spinning waveform versus the spin-expanded waveform truncated at \(\mathcal{O}(a^{4})\). To do so, we illustrate the spin-expanded waveform at \(|a|=0.2\) and \(|a|=0.65\) in Figure 5. Comparing with the resummed result shown for the same values in Figure 2, we see that for \(|a|=0.2\) the spin expanded result at \(\mathcal{O}(a^{4})\) is accurate. However, at \(|a|=0.65\) the spin expansion breaks down and the perturbative result is no longer valid. To see more clearly the difference between the resummed spin result and the perturbative spin result truncated at \(\mathcal{O}(a^{4})\), we also fix \(\psi=\frac{\pi}{4}\). For lower values of spin, for example \(|a|=0.2\), the expanded and resummed waveforms are nearly identical, as shown in the right-hand side of Figure 6. Conversely, for large values of the spin, for example \(|a|=0.65\), the expanded and resummed results are markedly different, although their limiting values as \(u\to\pm\infty\) are similar.
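To make the reduction just described concrete, the following sympy sketch carries it out on a toy term: placeholder constants stand in for the spurious pole \((z_{v}+\bar{c})\) and the physical pole \((z_{v}^{2}+\bar{c})\), and a placeholder cubic numerator stands in for the actual waveform coefficients. The division over the physical pole separates off the piece without physical poles, and the surviving master integral is evaluated through its residue:

```python
import sympy as sp

z = sp.symbols('z_v')
c_spur, c_phys = sp.Integer(3), sp.Integer(2)   # toy constants for the two poles
num = 7*z**3 + 5*z**2 - 4*z + 1                 # toy numerator coefficients

phys = z**2 + c_phys                            # physical q^2-type pole
spur = z + c_spur                               # spurious q.k-type pole

# polynomial division over the physical pole: num = quo*phys + rem
quo, rem = sp.div(num, phys, z)

# rem/(spur*phys) retains the physical pole; quo/spur has no physical pole and
# corresponds to the delta-function-supported terms dropped in the text
with_phys_pole = sp.apart(rem / (spur * phys), z)
print(with_phys_pole)

# the long-range contribution follows from the residue at the physical pole
z0 = sp.I * sp.sqrt(c_phys)                     # one root of z^2 + c_phys
print(sp.simplify(sp.residue(rem / (spur * phys), z, z0)))
```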
## 8 Gravitational memory
### General strategy
An elegant way to compute the memory was discussed in [114] for the spinless case, and we adapt it to the case of spinning heavy particles. Given a function
\[f(u)\coloneqq\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}e^{-i\omega u}\tilde{f }(\omega) \tag{8.1}\]
of the retarded time \(u\), the memory is defined as
\[\begin{split}\Delta f&\coloneqq f(u\to+\infty)-f(u \to-\infty)=\int_{-\infty}^{+\infty}du\,\frac{d}{du}f(u)\\ &=-i\int_{-\infty}^{+\infty}\!\!d\omega\,\delta(\omega)\big{[} \omega\tilde{f}(\omega)\big{]}\,,\end{split} \tag{8.2}\]
showing that it is determined by the pole at \(\omega{=}0\), i.e. its soft limit, as observed by [166].
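Spelled out, the last equality in (8.2) is the standard interchange of the \(u\) and \(\omega\) integrations,

\[\frac{d}{du}f(u)=\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}\,(-i\omega)\,e^{-i\omega u}\tilde{f}(\omega)\,,\qquad\int_{-\infty}^{+\infty}\!du\,e^{-i\omega u}=2\pi\,\delta(\omega)\,,\]

so that integrating \(\frac{d}{du}f(u)\) over all \(u\) leaves exactly \(-i\int d\omega\,\delta(\omega)\big{[}\omega\tilde{f}(\omega)\big{]}\).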
We now apply (8.2) to (5.13) to compute the gravitational memory, getting
\[\Delta(h_{+}^{\infty}\pm ih_{\times}^{\infty})=-i\frac{\kappa}{2} \left[\lim_{\omega\to 0^{+}}\big{[}\omega W\big{(}b,k^{\pm}\big{)}\big{]}_{k= \omega(1,\hat{\mathbf{x}})}+\lim_{\omega\to 0^{-}}\big{[}\omega W^{*} \big{(}b,k^{\mp}\big{)}\big{]}_{k=-\omega(1,\hat{\mathbf{x}})}\right]\,. \tag{8.3}\]
From this relation we see that the memory effect arises from the leading soft behaviour of the five-point amplitude, which factorises into a soft factor times a four-point amplitude, schematically
\[\mathcal{M}_{5}\to\mathrm{Soft}\times\mathcal{M}_{4}. \tag{8.4}\]
Correspondingly, as \(\omega\to 0\) the waveform tends to its leading soft limit,
\[W_{\mathrm{soft}}\big{(}b,k^{h}\big{)}=-i\int\!d\mu^{(D)}e^{iq\cdot b}S_{ \mathrm{W}}^{\mathrm{HEFT}}(k,q;h)\mathcal{M}_{4}^{\mathrm{HEFT}}(q)\,, \tag{8.5}\]
where [62]
\[\begin{split} S_{\mathrm{W}}^{\mathrm{HEFT}}&=- \frac{\kappa}{2}\,\varepsilon_{\mu\nu}^{(h)}(k)\left[\frac{p_{1}^{\mu}q^{\nu}+ p_{1}^{\nu}q^{\mu}}{p_{1}\!\cdot\!k}-p_{1}^{\mu}p_{1}^{\nu}\frac{q\!\cdot\!k}{(p_{1} \!\cdot\!k)^{2}}\,-\,1\leftrightarrow 2\right]\\ &=-\frac{\kappa}{2}\,\frac{1}{\omega}\varepsilon_{\mu\nu}^{(h)}(k )\left[\frac{p_{1}^{\mu}q^{\nu}+p_{1}^{\nu}q^{\mu}}{p_{1}\!\cdot\!\hat{k}}-p_{ 1}^{\mu}p_{1}^{\nu}\frac{q\!\cdot\!\hat{k}}{(p_{1}\!\cdot\!\hat{k})^{2}}\,-\,1 \leftrightarrow 2\right]\,,\end{split} \tag{8.6}\]
is the classical Weinberg soft factor for the emission of a graviton with momentum \(k{=}\omega\hat{k}\) and helicity \(h\), with \(q{=}q_{1}{=}-q_{2}\) in the soft limit and \(\hat{k}{=}(1,\hat{\mathbf{x}})\) (see [62] for a derivation of the classical soft factor and a discussion of classical limits in the HEFT context).
We then change integration variables \(q{\to}-q\), and use
\[S_{\mathrm{W}}^{\mathrm{HEFT}}(-k,-q;h)=S_{\mathrm{W}}^{\mathrm{HEFT}}(k,q;h)\,, \tag{8.7}\]
\[\big{[}S_{\mathrm{W}}^{\mathrm{HEFT}}(k,q,-h)\big{]}^{*}=S_{\mathrm{W}}^{ \mathrm{HEFT}}(k,q,h)\,, \tag{8.8}\]
also noting that, at tree level in the spinning (and spinless) case,6
Footnote 6: In the spinless case we further have \(\mathcal{M}_{4}^{\mathrm{HEFT}}(-q)=\mathcal{M}_{4}^{\mathrm{HEFT}}(q)\). This is no longer true in the presence of spin.
\[-i\mathcal{M}_{4}^{\mathrm{HEFT}}(q)=\big{[}-i\mathcal{M}_{4}^{\mathrm{HEFT}} (-q)\big{]}^{*}\,, \tag{8.9}\]
which can be checked from the explicit expression derived later in (8.25). With these observations, we get
\[\begin{split} W^{*}_{\text{soft}}\left(b,k^{-h}\right)\big{|}_{k=- \omega(1,\hat{\mathbf{x}})}&=\int\!d\mu^{(D)}e^{-iq\cdot b} \big{[}S^{\text{HEFT}}_{\text{W}}(-k,q;-h)\big{]}^{*}(-i\mathcal{M}^{\text{ HEFT}}_{4})^{*}(q)\\ &=\int\!d\mu^{(D)}e^{iq\cdot b}S^{\text{HEFT}}_{\text{W}}(-k,-q;h) (-i\mathcal{M}^{\text{HEFT}}_{4})^{*}(-q)\\ &=\int\!d\mu^{(D)}e^{iq\cdot b}S^{\text{HEFT}}_{\text{W}}(k,q;h)( -i\mathcal{M}^{\text{HEFT}}_{4})(q)\,.\end{split} \tag{8.10}\]
Hence we can write
\[\begin{split}&\lim_{\omega\to 0^{+}}\big{[}\omega W\big{(}b,k^{h} \big{)}\big{]}_{k=\omega(1,\hat{\mathbf{x}})}+\lim_{\omega\to 0^{-}}\big{[} \omega W^{*}\big{(}b,k^{-h}\big{)}\big{]}_{k=-\omega(1,\hat{\mathbf{x}})}\\ &=\int\!d\mu^{(D)}e^{iq\cdot b}S^{\text{HEFT}}_{\text{W}}(\hat{k},q;h)(-i\mathcal{M}^{\text{HEFT}}_{4})\,.\end{split} \tag{8.11}\]
In conclusion
\[\begin{split}\Delta(h_{+}^{\infty}\pm ih_{\times}^{\infty})& =-i\,\kappa\int\!d\mu^{(D)}e^{iq\cdot b}S^{\text{HEFT}}_{\text{W} }(\hat{k},q;\pm)\,\big{(}-i\mathcal{M}^{\text{HEFT}}_{4})(q)\\ &=-i\,\kappa S^{\text{HEFT}}_{\text{W}}\Big{(}\hat{k},-i\frac{ \partial}{\partial b};\pm\Big{)}\int\!d\mu^{(D)}e^{iq\cdot b}\,\big{(}-i \mathcal{M}^{\text{HEFT}}_{4}\big{)}(q)\\ &=-i\,\kappa S^{\text{HEFT}}_{\text{W}}\Big{(}\hat{k},-i\frac{ \partial}{\partial b};\pm\Big{)}\,\delta_{\text{HEFT}}\,,\end{split} \tag{8.12}\]
or
\[\Delta(h_{+}^{\infty}\pm ih_{\times}^{\infty})=-i\,\kappa S^{\text{HEFT}}_{ \text{W}}\Big{(}\hat{k},-i\frac{\partial}{\partial b};\pm\Big{)}\,\delta_{ \text{HEFT}}\,, \tag{8.13}\]
where
\[\delta_{\text{HEFT}}\coloneqq\int\!d\mu^{(D)}e^{iq\cdot b}\,\big{(}-i \mathcal{M}^{\text{HEFT}}_{4}\big{)}(q)\,, \tag{8.14}\]
and \(S^{\text{HEFT}}_{\text{W}}\) is given in (8.6); we also recall that \(k{=}\omega\hat{k}\). Note that \(\delta_{\text{HEFT}}\) is real because of the property (8.9).
In the spinless case, one can further simplify this result by noticing that
\[\frac{\partial}{\partial b_{\mu}}{=}-P\hat{b}^{\mu}\frac{\partial}{\partial J }\,, \tag{8.15}\]
where \(J=P\sqrt{-b^{2}}\), and the relation between the scattering angle and the real part of the HEFT phase
\[-\frac{\partial}{\partial J}\operatorname{Re}\delta_{\text{HEFT}}=\chi, \tag{8.16}\]
which itself is already a real quantity at tree level. Using these one finds
\[S_{\rm W}^{\rm HEFT}\Big{(}\hat{k},-i\frac{\partial}{\partial b_{\mu}};h\Big{)} \,\delta_{\rm HEFT}=-iPS_{\rm W}^{\rm HEFT}(\hat{k},\hat{b}^{\mu};h)\chi\,, \tag{111}\]
leading to the compact relation, valid in the spinless case,
\[\Delta(h_{+}^{\infty}\pm ih_{\times}^{\infty})=-\frac{\kappa^{2}}{2}P\varepsilon _{\rho\lambda}^{\pm\pm}s^{\rho\lambda}(\hat{k},\hat{b})\,\chi\,, \tag{112}\]
where we have set
\[\begin{split} S_{\rm W}^{\rm HEFT}&\coloneqq\frac{ \kappa}{2}\varepsilon_{\rho\lambda}^{\pm\pm}s^{\rho\lambda}(\hat{k},\hat{b}) \\ s^{\mu\nu}(\hat{k},\hat{b})&=-\Big{[}\frac{p_{1}^{ \mu}\hat{b}^{\nu}+p_{1}^{\nu}\hat{b}^{\mu}}{p_{1}\!\cdot\!\hat{k}}-p_{1}^{\mu}p _{1}^{\nu}\frac{\hat{b}\!\cdot\!\hat{k}}{(p_{1}\!\cdot\!\hat{k})^{2}}\,-\,1 \leftrightarrow 2\Big{]}\,,\end{split} \tag{113}\]
and we recall that \(\hat{k}=(1,\hat{\bf x})\) and \(\hat{b}=b/\sqrt{-b^{2}}\). In the spinning case we do not have a simple relation such as (110); to compute the gravitational memory we will instead use (108) and (109).
We now move on to compute the tree-level four-point amplitude that features in (109).
### Four-point two-to-two spinning amplitude
In this section we derive the tree-level amplitude for the two-to-two scattering of two heavy objects with spin vectors \(a_{1}\) and \(a_{2}\) to all orders in the spin. We will then compute its Fourier transform to impact parameter space needed in (109).
We can derive the four-point amplitude using the HEFT BCFW method first described in [62], to which we refer the reader for further details. There is a single diagram in the \(q^{2}\)-channel for which we glue two of the three point amplitudes (100) with the BCFW-shifted momenta described in [62]. We find that the four-point tree-level amplitude \(\mathcal{M}_{4}\) is then
\[\begin{split}\mathcal{M}_{4}&=-i\frac{\kappa^{2}}{ q^{2}}m_{1}^{2}m_{2}^{2}\Big{[}\Big{(}y^{2}-\frac{1}{2}\Big{)}\cosh{(a\!\cdot\!q)} \\ &+\,i\,y\Big{(}\sinh{(a_{2}\!\cdot\!q)}\cosh{(a_{1}\!\cdot\!q)} \,\frac{\epsilon\,(a_{2}qv_{1}v_{2})}{a_{2}\!\cdot\!q}+\sinh{(a_{1}\!\cdot\!q) }\cosh{(a_{2}\!\cdot\!q)}\,\frac{\epsilon\,(a_{1}qv_{1}v_{2})}{a_{1}\!\cdot\!q }\Big{)}\Big{]}\\ &+\mathcal{M}_{4,c}\,,\end{split} \tag{114}\]
where \(a\!\!:=\!\!a_{1}+a_{2}\), and the contact term \(\mathcal{M}_{4,c}\) is
\[\mathcal{M}_{4,c}\coloneqq-i\kappa^{2}m_{1}^{2}m_{2}^{2}\Big{[}y\,a_{1}\!\cdot \!v_{2}\,a_{2}\!\cdot\!v_{1}-a_{1}\!\cdot\!a_{2}\Big{(}y^{2}-\frac{1}{2}\Big{)} \Big{]}\frac{\sinh(a_{1}\!\cdot\!q)}{a_{1}\!\cdot\!q}\frac{\sinh(a_{2}\!\cdot \!q)}{a_{2}\!\cdot\!q}\,. \tag{115}\]
We note however that contact terms play no role for the computation of the memory, since they only contribute delta-function supported terms after Fourier transforming to impact parameter space. We will then drop them from now on (denoting the contact terms as \(\mathcal{O}(1)\)).
We now simplify the expression (8.20) for the four-point amplitude making use of the new spin vectors [43; 123]
\[\mathfrak{a}_{i}^{\mu}\coloneqq\frac{\epsilon(a_{i}\mu v_{1}v_{2})}{\sqrt{y^{2 }-1}}\,,\quad i=1,2, \tag{8.22}\]
which are orthogonal to both \(v_{1}\) and \(v_{2}\). These quantities also satisfy the following Gram determinant relations
\[(a_{i}\cdot q)^{2}=(i\mathfrak{a}_{i}\cdot q)^{2}+\mathcal{O}(q^{2})\,, \tag{8.23}\]
which are proven in Appendix A, and their "square rooted" form
\[a_{i}\cdot q=\pm i\mathfrak{a}_{i}\cdot q\,, \tag{8.24}\]
valid up to terms of \(\mathcal{O}(q^{2})\), that is \(q\) on-shell and so necessarily complex. Furthermore, as both \(\cosh(a_{i}\cdot q)\) and \(\frac{\sinh(a_{i}\cdot q)}{a_{i}\cdot q}\) are parity-even functions of \(a_{i}\cdot q\), the sign ambiguity drops out and the amplitude can be simplified to
\[\mathcal{M}_{4} =-\frac{i\kappa^{2}}{q^{2}}m_{1}^{2}m_{2}^{2}\Big{\{}\Big{(}y^{2 }-\frac{1}{2}\Big{)}\cosh(i\mathfrak{a}\cdot q)+y\sqrt{y^{2}-1}\sinh(i \mathfrak{a}\cdot q)\Big{\}}+\mathcal{O}(1)\] \[=-\frac{i\kappa^{2}}{q^{2}}\frac{m_{1}^{2}m_{2}^{2}}{2}\Big{\{} \Big{(}y^{2}-\frac{1}{2}+y\sqrt{y^{2}-1}\Big{)}e^{i\mathfrak{a}\cdot q}+\Big{(} y^{2}-\frac{1}{2}-y\sqrt{y^{2}-1}\Big{)}e^{-i\mathfrak{a}\cdot q}\Big{\}}\] \[+\mathcal{O}(1)\,, \tag{8.25}\]
where
\[\mathfrak{a}\coloneqq \mathfrak{a}_{1}+\mathfrak{a}_{2}. \tag{8.26}\]
Note the nontrivial fact that at tree level the pole part of the amplitude that we have considered so far depends only on the sum \(\mathfrak{a}\) of the spins of the two heavy objects. We also remark that the contact term (8.21) does not have this property.
### Fourier transform to impact parameter space
Having cast the amplitude (up to contact terms) in the form (8.25), we can perform the Fourier transform to impact parameter space to all orders in the spin, which will trivially shift \(b^{\mu}\to b^{\mu}\pm\mathfrak{a}\), as was seen in [123]. We have
\[\widetilde{\mathcal{M}}_{4}=\int\!\frac{d^{D}q}{(2\pi)^{D-2}} \delta(2p_{1}\cdot q)\delta(2p_{2}\cdot q)\,e^{iq\cdot b}\,\mathcal{M}_{4}= \frac{1}{4m_{1}m_{2}\sqrt{y^{2}-1}}\int\!\frac{d^{D-2}q_{\perp}}{(2\pi)^{D-2}} e^{-i\vec{q}_{\perp}\cdot\vec{b}}\,\mathcal{M}_{4}\,, \tag{8.27}\]
where \(q_{\perp}\cdot p_{1,2}=0\) and
\[\mathcal{M}_{4}=f_{+}(y)\frac{e^{i\mathfrak{a}\cdot q}}{q^{2}}+f_{-}(y)\frac{e^{- i\mathfrak{a}\cdot q}}{q^{2}}\,, \tag{100}\]
with
\[f_{\pm}(y)\coloneqq\ -i\,\frac{\kappa^{2}m_{1}^{2}m_{2}^{2}}{2}\left(y^{2}-\frac{1}{ 2}\pm y\sqrt{y^{2}-1}\right)\,. \tag{101}\]
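The prefactor \(1/(4m_{1}m_{2}\sqrt{y^{2}-1})\) in (8.27) is the Jacobian from localising the two delta functions on the two-plane spanned by \(p_{1}\) and \(p_{2}\),

\[\int d^{2}q_{\parallel}\,\delta(2p_{1}\!\cdot\!q)\,\delta(2p_{2}\!\cdot\!q)=\frac{1}{4\sqrt{|\det G|}}\,,\qquad G=\begin{pmatrix}p_{1}^{2}&p_{1}\!\cdot\!p_{2}\\ p_{1}\!\cdot\!p_{2}&p_{2}^{2}\end{pmatrix},\]

with \(\det G=m_{1}^{2}m_{2}^{2}(1-y^{2})\) for \(p_{i}=m_{i}v_{i}\) and \(v_{1}\!\cdot\!v_{2}=y\), so that \(\sqrt{|\det G|}=m_{1}m_{2}\sqrt{y^{2}-1}\).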
Thus, we have to compute the Fourier transform
\[\widetilde{\mathcal{M}}_{4}=\int\!\frac{d^{D}q}{(2\pi)^{D-2}} \delta(2p_{1}\cdot q)\delta(2p_{2}\cdot q)\,\left[e^{iq\cdot(b+\mathfrak{a})}f_{+}(y)+e^{iq\cdot(b-\mathfrak{a})}f_{-}(y)\right]\,. \tag{102}\]
We use
\[\int\!\frac{d^{d}q}{(2\pi)^{d}}e^{-i\vec{q}\cdot\vec{b}}|\vec{q} \,|^{p}=\frac{2^{p}\pi^{-\frac{d}{2}}\Gamma\left(\frac{d+p}{2}\right)}{\Gamma \left(-\frac{p}{2}\right)}\frac{1}{|\vec{b}\,|^{d+p}}\,, \tag{103}\]
which in our case gives the result, as \(D\to 4\),
\[\int\!\frac{d^{D-2}q}{(2\pi)^{D-2}}\frac{e^{-i\vec{q}\cdot\vec{b} }}{\vec{q}^{\,2}}=\frac{\Gamma\left(\frac{D-4}{2}\right)}{4\pi^{\frac{D-2}{2} }|\vec{b}\,|^{D-4}}\xrightarrow[D\to 4]{}-\frac{1}{2\pi}\log(|\vec{b}\,|)+\cdots\,, \tag{104}\]
where the dots stand for \(b\)-independent terms. This leads to
\[\widetilde{\mathcal{M}}_{4}=\frac{1}{8\pi m_{1}m_{2}\sqrt{y^{2}- 1}}\Big{[}f_{+}(y)\log(|b+\mathfrak{a}|)+f_{-}(y)\log(|b-\mathfrak{a}|)\Big{]} +\cdots\,, \tag{105}\]
where we observe that the vector \(\mathfrak{a}\)=\(\mathfrak{a}_{1}+\mathfrak{a}_{2}\) lives in the same two-dimensional subspace orthogonal to \(p_{1}\) and \(p_{2}\) as \(b\). Equation (105) agrees with (51) of [123].
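The \(D\to 4\) limit quoted in (104) can be cross-checked symbolically; a minimal sympy sketch (writing \(D=4+2\epsilon\) and isolating the \(b\)-dependent piece, with the \(1/\epsilon\) and constant terms playing the role of the \(b\)-independent dots):

```python
import sympy as sp

eps, b = sp.symbols('epsilon b', positive=True)

# Gamma((D-4)/2) / (4 pi^((D-2)/2) |b|^(D-4)), written with D = 4 + 2*epsilon
expr = sp.gamma(eps) / (4 * sp.pi**(1 + eps) * b**(2 * eps))

# Laurent expansion around epsilon = 0; the coefficient of log(b) is the
# b-dependent piece of the D -> 4 limit
series = sp.series(expr, eps, 0, 1).removeO().expand()
print(series.coeff(sp.log(b)))   # -1/(2*pi), as quoted above
```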
### Result for the gravitational memory
Finally, to compute the gravitational memory we use (102) and (103). Introducing the two vectors
\[b_{\pm}\coloneqq b\pm\mathfrak{a}\,,\qquad\text{with}\qquad\hat{b}_{\pm} \coloneqq\frac{b_{\pm}}{|b_{\pm}|}\,, \tag{106}\]
we have at once
\[\Delta(h_{+}^{\infty}\pm ih_{\times}^{\infty})=-\frac{i\kappa}{8 \pi m_{1}m_{2}\sqrt{y^{2}-1}}\Big{[}S_{\text{W}}^{\text{HEFT}}(\hat{k},\hat{b} _{+};\pm)\frac{f_{+}(y)}{|b_{+}|}+S_{\text{W}}^{\text{HEFT}}(\hat{k},\hat{b}_{ -};\pm)\frac{f_{-}(y)}{|b_{-}|}\Big{]}, \tag{107}\]
which is the final result for the memory, with \(S_{\text{W}}^{\text{HEFT}}\) defined in (100) and \(f_{\pm}\) in (101). This result is exact to all orders in the spin vector \(a\). One can expand it to
various orders in \(a\), and doing so one finds perfect agreement with the result of [151] for the memory in the aligned-spin case up to \(\mathcal{O}(a^{2})\).
We also note that in the spinless case, the previous formula becomes
\[\Delta(h_{+}^{\infty}\pm ih_{\times}^{\infty})\big{|}_{a=0} =-\frac{i\kappa}{8\pi m_{1}m_{2}\sqrt{y^{2}-1}}(-i\kappa^{2}m_{1}^ {2}m_{2}^{2})\frac{1}{|b|}\Big{(}y^{2}-\frac{1}{2}\Big{)}S_{\rm W}^{\rm HEFT}( \hat{k},\hat{b};\pm)\] \[=-\frac{\kappa^{3}}{8\pi}\frac{m_{1}m_{2}}{\sqrt{y^{2}-1}}\frac{1 }{|b|}\Big{(}y^{2}-\frac{1}{2}\Big{)}S_{\rm W}^{\rm HEFT}(\hat{k},\hat{b};\pm )\,, \tag{8.36}\]
in agreement with known results (see e.g. [151]).
## Acknowledgements
We would like to thank Massimo Bianchi, N. Emil J. Bjerrum-Bohr, Stefano De Angelis, Thibault Damour, Claudio Gambino, Fabio Riccioni and Marcos Skowronek for several interesting conversations. GT thanks the Physics Department at the University of Rome "Tor Vergata" for their warm hospitality and support. This work was supported by the Science and Technology Facilities Council (STFC) Consolidated Grants ST/P000754/1 _"String theory, gauge theory & duality"_ and ST/T000686/1 _"Amplitudes, strings & duality"_. The work of GRB and JG is supported by an STFC quota studentship. GC has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 847523 "INTERACTIONS". No new data were generated or analysed during this study.
## Appendix A Simplifying the four-point amplitude
In the main text we have defined a new spin vector (8.22), and \(\mathfrak{a}\coloneqq\mathfrak{a}_{1}+\mathfrak{a}_{2}\) which are all orthogonal to both \(v_{1}\) and \(v_{2}\). Now from the square of the Levi-Civita tensor we obtain a Gram determinant, for \(i=1,2\),
\[\big{[}\epsilon(a_{i}qv_{1}v_{2})\big{]}^{2}=(1-y^{2})\,(a_{i}{\cdot}q)^{2}+q^{2}\Big{[}(a_{i}{\cdot}v_{1})^{2}+(a_{i}{\cdot}v_{2})^{2}-2y\,a_{i}{\cdot}v_{1}a_{i}{\cdot}v_{2}+a_{i}^{2}(y^{2}-1)\Big{]}\,,\] (A.1)
valid with the HEFT constraints \(v_{1,2}{\cdot}q{=}0\). Using that \(v_{i}{\cdot}a_{i}{=}0\) we then find
\[\begin{split}\big{[}\epsilon\left(a_{1}qv_{1}v_{2}\right)\big{]}^{2}&=-(y^{2}-1)\,(a_{1}{\cdot}q)^{2}+q^{2}\Big{(}(a_{1}{\cdot}v_{2})^{2}+a_{1}^{2}(y^{2}-1)\Big{)}\,,\\ \big{[}\epsilon\left(a_{2}qv_{1}v_{2}\right)\big{]}^{2}&=-(y^{2}-1)\,(a_{2}{\cdot}q)^{2}+q^{2}\Big{(}(a_{2}{\cdot}v_{1})^{2}+a_{2}^{2}(y^{2}-1)\Big{)}\,.\end{split}\] (A.2)
In the calculation in impact parameter space, \(\mathcal{O}(q^{2})\) terms do not contribute, giving (8.23) in the main text.
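As a quick numerical sanity check of this Gram-determinant relation (written in the form \([\epsilon(a_{1}qv_{1}v_{2})]^{2}=-(y^{2}-1)(a_{1}\!\cdot\!q)^{2}+q^{2}\big{(}(a_{1}\!\cdot\!v_{2})^{2}+a_{1}^{2}(y^{2}-1)\big{)}\), the form consistent with (8.23) and with (A.3) below), one can evaluate both sides for arbitrary numerical vectors satisfying \(v_{1,2}\!\cdot\!q=0\) and \(a_{1}\!\cdot\!v_{1}=0\); a minimal sketch with a mostly-minus metric:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])          # mostly-minus Minkowski metric

def mdot(a, b):
    return a @ eta @ b

def eps4(a, b, c, d):
    # epsilon_{mu nu rho sigma} a^mu b^nu c^rho d^sigma with eps_{0123} = +1;
    # only the square is used, so the overall sign convention drops out
    return np.linalg.det(np.array([a, b, c, d]))

y  = 2.0
v1 = np.array([1.0, 0.0, 0.0, 0.0])
v2 = np.array([y, 0.0, 0.0, np.sqrt(y**2 - 1.0)])
q  = np.array([0.0, 0.7, -1.3, 0.0])             # v1.q = v2.q = 0 by construction
a1 = np.array([0.0, 0.4, 1.1, -0.6])             # a1.v1 = 0

lhs = eps4(a1, q, v1, v2)**2
rhs = (-(y**2 - 1.0) * mdot(a1, q)**2
       + mdot(q, q) * (mdot(a1, v2)**2 + mdot(a1, a1) * (y**2 - 1.0)))
print(lhs, rhs)                                   # the two values agree
```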
To simplify the four-point amplitude we actually used the square root of the above relations, that is
\[\frac{\epsilon\left(a_{2}qv_{1}v_{2}\right)}{a_{2}{\cdot}q}=\frac{\epsilon \left(a_{1}qv_{1}v_{2}\right)}{a_{1}{\cdot}q}=\mp i\sqrt{y^{2}-1}+\mathcal{O} (q^{2})\,,\] (A.3)
which are again valid up to terms of order \(\mathcal{O}(q^{2})\). Since the amplitude is parity even, the sign ambiguity in these relations drops out.
One might ask what determines the sign on the right-hand side of (A.3). A simple way to answer this question is to go to the rest frame of particle one, and
show that the \(\mp\) sign in (A.3) follows from the particular choice of the on-shell momentum \(q\). We can set
\[v_{1}=(1,0,0,0)\,,\qquad v_{2}=(y,0,0,\sqrt{y^{2}-1})\,.\] (A.4)
Now, \(q\) is on-shell, \(q^{2}\)=0, and satisfies the usual constraints \(q\cdot v_{1}\)=\(q\cdot v_{2}=0\), hence it must have the form
\[q=(0,r,\pm ir,0)\,.\] (A.5)
Finally, using \(a_{1}\cdot v_{1}=0\) and \(a_{2}\cdot v_{2}=0\), the spin vectors are of the form
\[\begin{split} a_{1}&=\left(0,a_{1x},a_{1y},a_{1z} \right),\\ a_{2}&=\left(\frac{s}{y},a_{2x},a_{2y},\frac{s}{ \sqrt{y^{2}-1}}\right),\end{split}\] (A.6)
and hence we can evaluate, for \(i=1,2\),
\[\begin{split}\epsilon(v_{1}v_{2}a_{i}q)&=\epsilon( \vec{v}_{2}\vec{a}_{i}\vec{q})=\pm\sqrt{y^{2}-1}\,r\left(ia_{ix}\mp a_{iy} \right),\\ a_{i}\cdot q&=ir(ia_{ix}\mp a_{iy})\,.\end{split}\] (A.7)
In conclusion
\[\frac{\epsilon(v_{1}v_{2}a_{i}q)}{a_{i}\cdot q}=\mp i\sqrt{y^{2}-1}\,,\] (A.8)
with the same plus or minus sign appearing for \(a_{1}\) or \(a_{2}\) and following from the solution (A.5) chosen for the on-shell momentum \(q\).
## Appendix B More on the integrand
The coefficients in the \(q_{1}^{2}\)-channel are listed below:
\[\begin{split} c_{1}&\to-\frac{1}{2}\left(v_{1}\cdot F _{k}\cdot v_{2}\right){}^{2},\ c_{2}\to\frac{1-2y^{2}}{4w_{1}^{2}},\ c_{3}\to \frac{iy}{2w_{1}^{2}},\ c_{4}\to\frac{iyv_{1}\cdot F_{k}\cdot v_{2}}{2w_{1}^{ 2}},\\ c_{5}&\to-\frac{iyv_{1}\cdot F_{k}\cdot v_{2}}{2w_ {1}^{2}},\ c_{6}\to\frac{yv_{1}\cdot F_{k}\cdot v_{2}}{w_{1}},\ c_{7}\to- \frac{iy_{1}\cdot F_{k}\cdot v_{2}}{2w_{1}},\ c_{8}\to-\frac{i\left(v_{1} \cdot F_{k}\cdot v_{2}\right){}^{2}}{2w_{1}},\\ c_{9}&\to-\frac{(2y^{2}-1)\left(v_{1}\cdot F_{k} \cdot v_{2}\right){}^{2}}{4w_{1}^{2}w_{2}^{2}},\ c_{10}\to-\frac{(2y^{2}-1)\,v_ {1}\cdot F_{k}\cdot v_{2}}{2w_{1}^{2}w_{2}},\ c_{11}\to\frac{iy\left(v_{1} \cdot F_{k}\cdot v_{2}\right){}^{2}}{2w_{1}^{2}w_{2}^{2}},\\ c_{12}&\to\frac{iyv_{1}\cdot F_{k}\cdot v_{2}}{w_ {1}^{2}w_{2}},\ c_{13}\to\frac{iy\left(v_{1}\cdot F_{k}\cdot v_{2}\right){}^{ 2}}{2w_{1}^{2}w_{2}},\ c_{14}\to\frac{y\left(v_{1}\cdot F_{k}\cdot v_{2} \right){}^{2}}{w_{1}w_{2}},\\ c_{15}&\to-\frac{i\left(v_{1}\cdot F_{k}\cdot v_{2} \right){}^{2}}{2w_{1}w_{2}},\ c_{16}\to-\frac{iw_{2}y}{2w_{1}^{2}},\ c_{17} \to\frac{iw_{2}v_{1}\cdot F_{k}\cdot v_{2}}{2w_{1}}\,.\end{split}\] (B.1)
The coefficients in the \(q_{2}^{2}\)-channel are listed below:
\[c_{2} \to-\frac{1}{2}a_{1}{\cdot}a_{1}\left(v_{1}{\cdot}F_{k}{\cdot}v_{2} \right)^{2}\] \[c_{4} \to\left(a_{1}{\cdot}v_{2}\right){}^{2}\left(a_{1}{\cdot}F_{k}{ \cdot}v_{1}\right){}^{2}-\frac{1}{4}a_{1}{\cdot}a_{1}a_{1}{\cdot}F_{k}{\cdot}v _{1}\left(a_{1}{\cdot}F_{k}{\cdot}v_{1}-2a_{1}{\cdot}v_{2}v_{1}{\cdot}F_{k}{ \cdot}v_{2}\right)\] \[c_{5} \to\frac{1}{16}(-3)ia_{1}{\cdot}k\left(a_{1}{\cdot}a_{1}-2\left(a _{1}{\cdot}v_{2}\right){}^{2}\right)\text{tr}\left(F_{k}{\cdot}S_{1}\right),\] \[c_{6} \to\frac{3}{16}i\left(a_{1}{\cdot}a_{1}-2\left(a_{1}{\cdot}v_{2} \right){}^{2}\right)\text{tr}\left(F_{k}{\cdot}S_{1}\right)a_{1}{\cdot}F_{k}{ \cdot}v_{1},\] \[c_{7} \to\frac{y\left(a_{1}{\cdot}v_{2}a_{1}{\cdot}F_{k}{\cdot}v_{1}+a _{1}{\cdot}a_{1}v_{1}{\cdot}F_{k}{\cdot}v_{2}\right)}{2w_{1}^{3}},\] \[c_{10} \to\frac{a_{1}{\cdot}F_{k}{\cdot}v_{1}\left(2ya_{1}{\cdot}ka_{1}{ \cdot}v_{2}+w_{1}\left(\left(a_{1}{\cdot}v_{2}\right){}^{2}+\left(y^{2}-1 \right)a_{1}{\cdot}a_{1}\right)\right)}{4w_{1}^{3}},\] \[c_{11} \to-\frac{\left(a_{1}{\cdot}F_{k}{\cdot}v_{1}\right){}^{2}+2a_{1}{ \cdot}v_{2}v_{1}{\cdot}F_{k}{\cdot}v_{2}a_{1}{\cdot}F_{k}{\cdot}v_{1}+2a_{1}{ \cdot}a_{1}\left(v_{1}{\cdot}F_{k}{\cdot}v_{2}\right){}^{2}}{4w_{1}^{2}},\] \[c_{12} \to\frac{i\left(2y^{2}-1\right)\text{tr}\left(F_{k}{\cdot}S_{1} \right)}{8w_{1}^{2}},\hskip 14.226378ptc_{13}\to\frac{\left(1-2y^{2}\right) \text{tr}\left(F_{k}{\cdot}S_{1}\right)a_{1}{\cdot}F_{k}{\cdot}v_{1}}{16w_{1 }^{2}},\] \[c_{14} \to-\frac{w_{1}a_{1}{\cdot}k\left(\left(a_{1}{\cdot}v_{2}\right){ }^{2}+\left(y^{2}-1\right)a_{1}{\cdot}a_{1}\right)+y\left(a_{1}{\cdot}k\right) {}^{2}a_{1}{\cdot}v_{2}+w_{1}^{2}ya_{1}{\cdot}a_{1}a_{1}{\cdot}v_{2}}{4w_{1}^{ 3}},\] \[c_{15} \to\frac{a_{1}{\cdot}F_{k}{\cdot}v_{1}\left(ya_{1}{\cdot}k+w_{1} a_{1}{\cdot}v_{2}\right)}{4w_{1}^{2}},\] \[c_{16} \to\frac{ya_{1}{\cdot}F_{k}{\cdot}v_{1}\left(2a_{1}{\cdot}v_{2}a _{1}{\cdot}F_{k}{\cdot}v_{1}+a_{1}{\cdot}a_{1}v_{1}{\cdot}F_{k}{\cdot}v_{2} \right)}{2w_{1}},\] \[c_{19} \to-\frac{i\left(w_{1}a_{1}{\cdot}v_{2}v_{1}{\cdot}F_{k}{\cdot}v _{2}\text{tr}\left(F_{k}{\cdot}S_{1}\right)+a_{1}{\cdot}F_{k}{\cdot}v_{1}k{ \cdot}S_{1}{\cdot}F_{k}{\cdot}v_{1}\right)}{4w_{1}},\] \[c_{20} \to\frac{\text{tr}\left(F_{k}{\cdot}S_{1}\right)\left(\left(2y^{ 2}-1\right)a_{1}{\cdot}k+2w_{1}ya_{1}{\cdot}v_{2}\right)}{16w_{1}^{2}},\] \[c_{21} \to\frac{\text{tr}\left(F_{k}{\cdot}S_{1}\right)\left(2ya_{1}{ \cdot}ka_{1}{\cdot}v_{2}-w_{1}\left(a_{1}{\cdot}a_{1}-2\left(a_{1}{\cdot}v_{2 }\right){}^{2}\right)\right)}{16w_{1}},\] \[c_{22} \to-\frac{ya_{1}{\cdot}v_{2}\left(a_{1}{\cdot}F_{k}{\cdot}v_{1} \right){}^{2}}{4w_{1}^{3}},c_{25}\to\frac{iya_{1}{\cdot}F_{k}{\cdot}v_{1}}{2w_{ 1}^{2}},c_{26}\to-\frac{y\left(a_{1}{\cdot}F_{k}{\cdot}v_{1}\right){}^{2}}{4w_ {1}^{2}},\] \[c_{29} \to-\frac{iyv_{1}{\cdot}F_{k}{\cdot}S_{1}{\cdot}v_{2}}{2w_{1}^{2}},c_{32}\to\frac{a_{1}{\cdot}F_{k}{\cdot}v_{1}}{8w_{1}},c_{33}\to-\frac{ia_{1}{ \cdot}F_{k}{\cdot}v_{1}}{4w_{1}},c_{34}\to-\frac{a_{1}{\cdot}ka_{1}{\cdot}F_{k}{ \cdot}v_{1}}{8w_{1}},\] \[c_{35} \to-\frac{\left(a_{1}{\cdot}F_{k}{\cdot}v_{1}\right){}^{2}}{8w_{1 }},c_{40}\to\frac{v_{1}{\cdot}F_{k}{\cdot}v_{2}a_{1}{\cdot}F_{k}{\cdot}v_{1}}{4w _{1}},c_{41}\to-\frac{iv_{1}{\cdot}F_{k}{\cdot}v_{2}a_{1}{\cdot}F_{k}{\cdot}v_{ 1}}{2w_{1}},\] \[c_{43} \to\frac{iv_{1}{\cdot}F_{k}{\cdot}v_{2}v_{1}{\cdot}F_{k}{\cdot}S_{1 }{\cdot}v_{2}}{2w_{1}},c_{44}\to\frac{3ia_{1}{\cdot}ka_{1}{\cdot}F_{k}{\cdot}v_{ 1}}{8w_{1}},c_{45}\to\frac{3ia_{1}{\cdot}ka_{1}{\cdot}v_{2}a_{1}{\cdot}F_{k}{ \cdot}v_{1}}{4w_{1}},\] 
\[c_{46} \to\frac{3i\left(a_{1}{\cdot}F_{k}{\cdot}v_{1}\right){}^{2}}{8w_{1 }},c_{47}\to-\frac{3ia_{1}{\cdot}v_{2}\left(a_{1}{\cdot}F_{k}{\cdot}v_{1}\right){}^{2 }}{4w_{1}},c_{48}\to\frac{iya_{1}{\cdot}v_{2}\text{tr}\left(F_{k}{\cdot}S_{1} \right)}{4w_{1}},\] \[c_{49} \to-\frac{ya_{1}{\cdot}v_{2}\text{tr}\left(F_{k}{\cdot}S_{1} \right)a_{1}{\cdot}F_{k}{\cdot}v_{1}}{8w_{1}},c_{50}\to\frac{yv_{1}{\cdot}F_{k}{ \cdot}v_{2}\text{tr}\left(F_{k}{\cdot}S_{1}\right)}{8w_{1}},c_{51}\to-\frac{iyv_{1}{ \cdot}F_{k}{\cdot}v_{2}\text{tr}\left(F_{k}{\cdot}S_{1}\right)}{4w_{1}},\]
\[c_{52} \rightarrow\frac{3iya_{1}\cdot\!ka_{1}\cdot\!v_{2}\mathrm{tr}\left(F_{k} \cdot\!S_{1}\right)}{8w_{1}},c_{53}\rightarrow-\frac{3iya_{1}\cdot\!v_{2} \mathrm{tr}\left(F_{k}\cdot\!S_{1}\right)a_{1}\cdot\!F_{k}\cdot\!v_{1}}{8w_{1}},\] \[c_{57} \rightarrow\frac{1}{4}a_{1}\cdot\!F_{k}\cdot\!v_{1}\left(a_{1} \cdot\!a_{1}\left(a_{1}\cdot k+2w_{2}a_{1}\cdot\!v_{2}\right)-4a_{1}\cdot\!k \left(a_{1}\cdot\!v_{2}\right)^{2}\right),\] \[c_{58} \rightarrow-\left(a_{1}\cdot\!v_{2}\right)a_{1}\cdot\!F_{k}\cdot \!v_{1}\left(w_{2}a_{1}\cdot\!F_{k}\cdot\!v_{1}+a_{1}\cdot\!kv_{1}\cdot\!F_{k} \cdot\!v_{2}\right),\] \[c_{59} \rightarrow-\frac{1}{4}a_{1}\cdot\!a_{1}v_{1}\cdot\!F_{k}\cdot \!v_{2}\left(w_{2}a_{1}\cdot\!F_{k}\cdot\!v_{1}+a_{1}\cdot\!kv_{1}\cdot\!F_{k }\cdot\!v_{2}\right),\] \[c_{60} \rightarrow\frac{1}{16}\Big{(}a_{1}\cdot\!k\left(a_{1}\cdot\!F_{k} \cdot\!v_{1}\left(\mathrm{tr}\left(F_{k}\cdot\!S_{1}\right)-\frac{2k\cdot\!S _{1}\cdot\!F_{k}\cdot\!v_{1}}{w_{1}}\right)-2a_{1}\cdot\!v_{2}v_{1}\cdot\!F_{k }\cdot\!v_{2}\mathrm{tr}\left(F_{k}\cdot\!S_{1}\right)\right)\] \[-2w_{2}a_{1}\cdot\!v_{2}\mathrm{tr}\left(F_{k}\cdot\!S_{1} \right)a_{1}\cdot\!F_{k}\cdot\!v_{1}\Big{)},\] \[c_{61} \rightarrow\frac{3i\left(\mathrm{tr}\left(F_{k}\cdot\!S_{1} \right)\left(2w_{2}a_{1}\cdot\!v_{2}a_{1}\cdot\!F_{k}\cdot\!v_{1}-a_{1}\cdot\! k\left(a_{1}\cdot\!F_{k}\cdot\!v_{1}-2a_{1}\cdot\!v_{2}v_{1}\cdot\!F_{k}\cdot\!v_{2} \right)\right)\right)}{16}\] \[+\frac{3i\mathrm{tr}\left(F_{k}\cdot\!S_{1}\right)\left(2a_{1} \cdot\!ka_{1}\cdot\!F_{k}\cdot\!v_{1}k\cdot\!S_{1}\cdot\!F_{k}\cdot\!v_{1} \right)}{16w_{1}},\] \[c_{62} \rightarrow-\frac{y\left(a_{1}\cdot\!ka_{1}\cdot\!v_{2}+a_{1} \cdot\!a_{1}\left(w_{1}y-w_{2}\right)\right)}{2w_{1}^{3}},c_{64}\rightarrow\frac {\left(w_{1}-2w_{2}y\right)a_{1}\cdot\!F_{k}\cdot\!v_{1}-2ya_{1}\cdot\!kv_{1} \cdot\!F_{k}\cdot\!v_{2}}{4w_{1}^{3}},\] \[c_{63} \rightarrow\frac{a_{1}\cdot\!F_{k}\cdot\!v_{1}\left(v_{1}\cdot \!F_{k}\cdot\!v_{2}\left(ya_{1}\cdot\!k+w_{1}a_{1}\cdot\!v_{2}\right)+w_{2}ya_ {1}\cdot\!F_{k}\cdot\!v_{1}\right)}{4w_{1}^{3}},\] \[c_{66} \rightarrow-\frac{v_{1}\cdot\!F_{k}\cdot\!v_{2}\left(2w_{1}a_{1} \cdot\!ka_{1}\cdot\!v_{2}+y\left(a_{1}\cdot\!k\right){}^{2}+w_{1}^{2}ya_{1} \cdot\!a_{1}\right)+w_{2}a_{1}\cdot\!F_{k}\cdot\!v_{1}\left(ya_{1}\cdot\!k+w_{ 1}a_{1}\cdot\!v_{2}\right)}{4w_{1}^{3}},\] \[c_{67} \rightarrow\frac{a_{1}\cdot\!F_{k}\cdot\!v_{1}\left(-\left(v_{1} \cdot\!F_{k}\cdot\!v_{2}\right)\left(a_{1}\cdot\!ka_{1}\cdot\!v_{2}+w_{1}ya_ {1}\cdot\!a_{1}\right)-w_{2}a_{1}\cdot\!v_{2}a_{1}\cdot\!F_{k}\cdot\!v_{1} \right)}{4w_{1}^{2}},\] \[c_{68} \rightarrow\frac{2a_{1}\cdot\!a_{1}\left(2w_{1}y-w_{2}\right)v_{1} \cdot\!F_{k}\cdot\!v_{2}+a_{1}\cdot\!k\left(a_{1}\cdot\!F_{k}\cdot\!v_{1}+2a_{ 1}\cdot\!v_{2}v_{1}\cdot\!F_{k}\cdot\!v_{2}\right)}{4w_{1}^{2}},\] \[c_{69} \rightarrow-\frac{v_{1}\cdot\!F_{k}\cdot\!v_{2}\left(w_{2}a_{1} \cdot\!F_{k}\cdot\!v_{1}+a_{1}\cdot\!kv_{1}\cdot\!F_{k}\cdot\!v_{2}\right)}{4 w_{1}^{2}},\] \[c_{70} \rightarrow\frac{v_{1}\cdot\!F_{k}\cdot\!v_{2}\left(w_{2}a_{1} \cdot\!F_{k}\cdot\!v_{1}+a_{1}\cdot\!kv_{1}\cdot\!F_{k}\cdot\!v_{2}\right)}{2w_ {1}^{2}},\] \[c_{71} \rightarrow\frac{v_{1}\cdot\!F_{k}\cdot\!v_{2}\left((a_{1}\cdot\!k ){}^{2}a_{1}\cdot\!v_{2}+2w_{1}ya_{1}\cdot\!a_{1}\cdot\!k+w_{1}^{2}a_{1} \cdot\!a_{1}\cdot\!v_{2}\right)+w_{2}a_{1}\cdot\!F_{k}\cdot\!v_{1}\left(a_{1} \cdot ka_{1}\cdot\!v_{2}+w_{1}ya_{1}\cdot\!a_{1}\right)}{4w_{1}^{2}},\] \[c_{72} \rightarrow-\frac{a_{1}\cdot\!F_{k}\cdot\!v_{1}\left(4ya_{1} \cdot 
ka_{1}\cdot\!v_{2}+a_{1}\cdot\!a_{1}\left(w_{1}-2w_{2}y\right)\right)}{4 w_{1}},\] \[c_{73} \rightarrow-\frac{a_{1}\cdot\!F_{k}\cdot\!v_{1}\left(w_{2}a_{1} \cdot\!F_{k}\cdot\!v_{1}+a_{1}\cdot\!kv_{1}\cdot\!F_{k}\cdot\!v_{2}\right)}{4 w_{1}},\] \[c_{74} \rightarrow\frac{a_{1}\cdot\!F_{k}\cdot\!v_{1}\left((w_{1}-2w_{2}y \right)a_{1}\cdot\!F_{k}\cdot\!v_{1}-2ya_{1}\cdot\!kv_{1}\cdot\!F_{k}\cdot\!v_{2} \right)}{2w_{1}},\] \[c_{76} \rightarrow\frac{3ia_{1}\cdot\!F_{k}\cdot\!v_{1}\left(w_{2}a_{1} \cdot\!F_{k}\cdot\!v_{1}+a_{1}\cdot\!kv_{1}\cdot\!F_{k}\cdot\!v_{2}\right)}{4 w_{1}},\] \[c_{80} \rightarrow\frac{2v_{1}\cdot\!F_{k}\cdot\!v_{2}\mathrm{tr}\left(F_{k} \cdot\!S_{1}\right)\left(w_{1}a_{1}\cdot\!v_{2}-ya_{1}\cdot\!k\right)+a_{1} \cdot\!F_{k}\cdot\!v_{1}\left((w_{1}-2w_{2}y)\,\mathrm{tr}\left(F_{k} \cdot\!S_{1}\right)+2k\cdot\!S_{1}\cdot\!F_{k}\cdot\!v_{1}\right)}{16w_{1}},\]
\[c_{81} \rightarrow-\frac{3i\text{tr}\left(F_{k}\!\cdot\!S_{1}\right)\left( \left(w_{1}-2w_{2}y\right)a_{1}\!\cdot\!F_{k}\!\cdot\!v_{1}-2ya_{1}\!\cdot\!kv_ {1}\!\cdot\!F_{k}\!\cdot\!v_{2}\right)}{16w_{1}},\] \[c_{82} \rightarrow\frac{\left(\left(a_{1}\!\cdot\!k\right){}^{2}+w_{1}^{2 }a_{1}\!\cdot\!a_{1}\right)\left(v_{1}\!\cdot\!F_{k}\!\cdot\!v_{2}\right){}^{2 }+2w_{2}a_{1}\!\cdot\!kv_{1}\!\cdot\!F_{k}\!\cdot\!v_{2}a_{1}\!\cdot\!F_{k}\! \cdot\!v_{1}+w_{2}^{2}\left(a_{1}\!\cdot\!F_{k}\!\cdot\!v_{1}\right){}^{2}}{4w _{1}^{2}}\,. \tag{111}\]
|
2303.03252 | Extremely Large Area (88 mm X 88 mm) Superconducting Integrated Circuit
(ELASIC) | Superconducting integrated circuit (SIC) is a promising "beyond-CMOS" device
technology that enables speed-of-light, nearly lossless communications to advance
cryogenic (4 K or lower) computing. However, the lack of large-area
superconducting IC has hindered the development of scalable practical systems.
Herein, we describe a novel approach to interconnect 16 high-resolution deep UV
(DUV EX4, 248 nm lithography) full reticle circuits to fabricate an extremely
large (88mm X 88 mm) area superconducting integrated circuit (ELASIC). The
fabrication process starts by interconnecting four high-resolution DUV EX4 (22
mm X 22 mm) full reticles using a single large-field (44 mm X 44 mm) I-line
(365 nm lithography) reticle, followed by I-line reticle stitching at the
boundaries of 44 mm X 44 mm fields to fabricate the complete ELASIC field (88
mm X 88 mm). The ELASIC demonstrated a 2X-12X reduction in circuit features and
maintained high-stitched line superconducting critical currents. We examined
quantum flux parametron (QFP) circuits to demonstrate the viability of common
active components used for data buffering and transmission. Considering that no
stitching requirement for high-resolution EX4 DUV reticles is employed, the
present fabrication process has the potential to advance the scaling of
superconducting quantum devices. | Rabindra N. Das, Vladimir Bolkhovsky, Alex Wynn, Jeffrey Birenbaum, Evan Golden, Ravi Rastogi, Scott Zarr, Brian Tyrrell, Leonard M. Johnson, Mollie E. Schwartz, Jonilyn L. Yoder, Paul W. Juodawlkis | 2023-03-06T16:11:44Z | http://arxiv.org/abs/2303.03252v2 | # Extremely Large Area (88 mm X 88 mm) Superconducting Integrated Circuit (ELASIC)
###### Abstract
Superconducting integrated circuit (SIC) is a promising "beyond-CMOS" device technology that enables speed-of-light, nearly lossless communications to advance cryogenic (4 K or lower) computing. However, the lack of large-area superconducting ICs has hindered the development of scalable practical systems. Herein, we describe a novel approach to interconnect 16 high-resolution deep UV (DUV EX4, 248 nm lithography) full reticle circuits to fabricate an extremely large (88 mm \(\times\) 88 mm) area superconducting integrated circuit (ELASIC). The fabrication process starts by interconnecting four high-resolution DUV EX4 (22 mm \(\times\) 22 mm) full reticles using a single large-field (44 mm \(\times\) 44 mm) I-line (365 nm lithography) reticle, followed by I-line reticle stitching at the boundaries of the 44 mm \(\times\) 44 mm fields to fabricate the complete ELASIC field (88 mm \(\times\) 88 mm). The ELASIC demonstrated a 2X-12X reduction in circuit features and maintained high stitched-line superconducting critical currents. We examined quantum flux parametron (QFP) circuits to demonstrate the viability of common active components used for data buffering and transmission. Considering that no stitching of the high-resolution EX4 DUV reticles is required, the present fabrication process has the potential to advance the scaling of superconducting quantum devices.
## Introduction
Superconducting integrated circuits (SIC), such as single-flux-quantum-based (SFQ) digital integrated circuits[1], use Josephson junctions (JJs) as switching devices with an extremely high switching speed (\(\sim\)1 ps), ultralow switching power (\(\sim\) 2 aJ/bit), and nearly lossless signal propagation to encode, process, and transport data with a significantly increased clock rate (10x) and power efficiency (100x) relative to advanced-node CMOS at scale[2]. The existing SIC technology is practically limited to 10 mm \(\times\) 10 mm, or more typically, 5 mm \(\times\) 5 mm, for single-chip systems. However, because of their relatively low integration scale, systems based on superconducting technology require a large number of interconnected SIC chips for practical applications, and a scalable approach is necessary to achieve this goal.
The maximum size of an electronic integrated circuit (EIC) chip is typically limited by the reticle area of the lithographic stepper tool used to pattern the integrated circuit. This limitation is compounded for superconducting integrated circuits (SICs), in which the basic switching element, the Josephson junction (JJ), is orders of magnitude larger than a state-of-the-art CMOS transistor.
High-density JJ fabrication processes have produced chips with a maximum area[3] of \(\sim\)100 mm\({}^{2}\) and maximum circuit densities[4] of 7.4\(\cdot\)10\({}^{6}\) JJ/cm\({}^{2}\), which limits the total number of JJs in a large (22 mm \(\times\) 22 mm) reticle to 3.5 \(\times\) 10\({}^{7}\) in an ideal case. Methods to increase the number of JJs in a SIC beyond this limit include the introduction of niobium nitride (NbN) to increase kinetic inductance[5], niobium titanium nitride (NbTiN) with a short coherence length (\(\sim\)5 nm) for narrow line widths and shorter wire lengths for high-density circuits[6], introduction of higher Jc JJ layers, and introduction of multiple JJ layers within the process stack[4]. The challenges in applying these methods include composition variation[7] for multicomponent systems, limited critical current densities, variability of inductors and JJs, and mutual inductances leading to low isolation[8, 9]. In addition to increasing the circuit density at the chip level, we propose the integration of a number of high-density SICs by flip-chip interconnection in a large format, but with a lower-density active chip carrier, the ELASIC. We believe that the ELASIC concept provides a significant advance in SIC scaling and has the potential to transform a wide variety of SIC applications, including sensors[10]-[13], cryogenic digital control[14, 15, 16, 17, 18] circuits, amplifiers[19, 20], and classical cryogenic computing[21, 22, 23]. Superconducting multilayer circuit technology[24], a passive chip-carrier approach, is the key to building a scalable superconducting system owing to its large area of integration and the ability to preselect and rework individual component chips (chiplets) within the carrier, bypassing single-chip yield constraints.
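The reticle-level junction budget quoted above is simply the reticle area times the reported density; a one-line check with the numbers from the text:

```python
reticle_side_cm = 2.2                      # 22 mm x 22 mm DUV EX4 reticle
density_jj_per_cm2 = 7.4e6                 # reported maximum JJ density
max_jj_per_reticle = density_jj_per_cm2 * reticle_side_cm**2
print(f"{max_jj_per_reticle:.2e}")         # ~3.6e7, i.e. the ~3.5e7 quoted above
```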
MIT Lincoln Laboratory (MIT LL) has developed several passive superconducting circuit-based chip carrier fabrication and integration processes [24, 25, 26, 27, 28, 29, 30, 31, 32] for cryogenic computing. For example, passive large-area superconducting circuits (chip carriers) with low passive transmission line (PTL)losses afforded by superconducting materials enable a record-high 10 GHz serial chip-to-chip communication bandwidth covering a distance of over a meter [28]. The technology further demonstrated [28] synchronous communications between eight superconducting Reciprocal Quantum Logic (RQL) chips powered by a passive large-area (32 mm \(\times\) 32 mm) circuit with a resonant clock distribution network at a data rate of up to 8 GHz with 3 fJ/bit dissipation. Furthermore, the passive superconducting circuit technology extended [29] isochronous data links across RQL-passive carrier-superconducting (niobium) flex operated with a clock margin of 3 dB @ 3.6 GHz with 5 fJ/bit dissipation. Heterogeneous integration [30, 31, 32] used various superconducting interconnect materials for microbumps to integrate up to 20 mm \(\times\) 20 mm SICs. In all these cases, a large-area (32 mm \(\times\) 32 mm ) passive superconducting circuit-based chip carrier was used for chip-to-chip communications, and the PTL-based interconnection potentially limits their ability to provide sufficient bandwidth and low latency for next-generation superconducting electronics. A large-area active superconducting chip carrier with a Josephson transmission line ( JTL), passive transmission line ( PTL), and driver-receiver circuits to distribute information without loss between superconducting integrated circuits is highly desirable for realizing complex hybrid computing architectures. However, this is yet to be demonstrated. In this paper, we present an implementation of such a large cryogenic active superconducting chip carrier.
This study demonstrates an active chip carrier known as an extremely large-area superconducting integrated circuit (ELASIC), fabricated by interconnecting 16 EX4 reticles with larger field-size I-line reticles. A traditional stitching approach for interconnecting 16 reticles uses 24 stitch boundaries between Deep UV EX4 reticles per layer; this large number of interconnect masks increases the number and complexity of process steps relative to our approach, resulting in a significant risk of yield loss and limiting design flexibility. In this study, we used a novel fabrication technique for implementing ELASIC with the intent of achieving maximal flexibility and complexity with minimum yield loss. ELASIC (active carrier) fabrication uses Deep UV EX4 reticles (22 mm \(\times\) 22 mm) to create individual building blocks, with groups of four EX4 reticles interconnected by larger-field i-line reticles (44 mm \(\times\) 44 mm), followed by i-line reticle stitching to fabricate a full ELASIC field of 88 mm \(\times\) 88 mm (7,744 mm\({}^{2}\)). We used a high-resolution Deep UV stepper (Canon EX4 reticles) for the junction layer and a large-field i-line (365 nm) stepper for the interconnection and stitching. Reticle stitching is ubiquitous in the semiconductor industry in high-performance computing applications. TSMC reported 2500 mm\({}^{2}\) stitched passive interposer circuits [33, 34] for chip-on-wafer-on-substrate-based multi-chip integration technology for high-performance computing (HPC). Although reticle stitching is not new, interconnecting 16 high-resolution DUV stepper (Canon EX4) reticles (7,744 mm\({}^{2}\)) without introducing stitching at the individual EX4 reticles, instead stitching with large-field i-line reticles, as a method to significantly reduce the number of fabrication steps and improve yield, has not been demonstrated before.
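The mask-count argument can be made concrete with a small counting sketch: direct stitching of the 4 x 4 grid of EX4 reticles makes every shared edge a stitch boundary, whereas the hierarchical scheme only stitches the 2 x 2 grid of I-line fields. The toy multiplicative yield model below is purely illustrative (the per-boundary yield value is a placeholder, not a measured number):

```python
def stitch_boundaries(n):
    """Number of shared edges (stitch boundaries) in an n x n grid of fields."""
    return 2 * n * (n - 1)

direct = stitch_boundaries(4)        # stitching all 16 EX4 reticles directly
hierarchical = stitch_boundaries(2)  # stitching only the four 44 mm I-line fields
print(direct, hierarchical)          # 24 vs 4 boundaries per layer, as in the text

# toy yield model: each stitch boundary succeeds independently with probability p
p = 0.98                             # placeholder per-boundary yield
print(p**direct, p**hierarchical)    # ~0.62 vs ~0.92 for this illustrative value
```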
## Results
An ELASIC is a large (88 mm \(\times\) 88 mm) active superconducting chip carrier technology that leverages standard SFQ5ee [35, 37] processes to integrate junction devices into the chip carrier, featuring active and passive superconducting transmission lines, driver-receiver circuits to distribute information without loss of signal integrity between widely spaced integrated circuits, and the potential for data buffering or memory within the ELASIC. The development of ELASIC for integrating a large number of superconducting chips into a single system could have impacts on a range of important technological areas. This approach improves the functionality (number of JJs, latency, and bandwidth) within a single interconnected system, enabling larger and more capable system designs. Additionally, the ELASIC platform, when used as an active chip carrier, enables the integration of a wide variety of cryogenic components [24, 32]. This enables system designers to move buffering, synchronization, local caches, or other standard circuit elements to the ELASIC for chip-to-chip communication. By adding an active junction layer within the carrier, this approach favors active-to-active bonding with short distances between active elements such as logic and memory (see **Supplemental Materials S1**), enabling higher bandwidth and lower latency communications than an equivalent active-to-passive platform [28, 29].
Previous studies at MIT LL on passive superconducting chip carrier fabrication used I-line (365 nm) lithography and reticle stitching. However, the addition of a Josephson junction layer to the chip carrier interconnect
layer requires high-resolution deep UV (DUV) EX4 (248 nm) lithography to ensure sufficiently low process variability. Component variation with standard deviations below approximately 5% is desirable to achieve high yield and low timing variation at an integration scale greater than \(10^{6}\) Junctions [35, 36, 37]. Beyond the addition of active transmission lines and amplifiers, active chip carrier functional circuits can be designed using Josephson junctions, inductors, resistors, transformers, and transmission lines that are compatible with the requirement of less than 5% variation in key parametric margins (critical current, self-and mutual inductance, and resistance). ELASIC offers several advantages over conventional passive superconducting chip carriers.
Our interconnection approach to join 16 Canon EX4 DUV reticles is enabled by first interconnecting four DUV reticles using large-field I-line reticles, connecting adjacent blocks of four EX4 reticles with one I-line mask per block, followed by four additional I-line stitching masks to interconnect the four blocks along their edges. **Figure 1** shows the GDS layout artwork and optical images of the key steps during interconnection with I-line reticles, which include DUV reticles for junction layers, interconnection of four DUV reticles with a single large-format I-line reticle, and reticle stitching to create a complete 88 mm \(\times\) 88 mm ELASIC. The design consists of 16 identical reticles interconnected using i-line reticles, with each reticle performing as an individual computing module with the same or different functions. The DUV reticle contains JJs, which are the basic functional active elements of the ELASIC. The I-line reticle has DC and microwave lines to interconnect individual DUV reticles, with superconducting niobium (Nb) vias providing connectivity between the two reticles (DUV to I-line). The I-line reticle uses a stitching process to extend the active Josephson junction functionalities and wire routing to the entire 88mm \(\times\) 88 mm field. The traditional approach for interconnecting 16 reticles involves stitching between the reticle boundaries [31, 32]. This approach requires 24 stitch boundaries between the reticles per layer, and has a significant risk of yield loss and limited flexibility. The yield and cost are both proportional to the number of masks and the processing steps. Therefore, it is highly
Figure 1: ELASIC fabrication process flow. (A) As-designed GDS image of 16 individual 22 mm x 22 mm deep UV EX4 reticle building blocks, (B) as-fabricated optical image of a 44 mm x 44 mm I-line reticle interconnecting 4 EX4 DUV reticles, and (C) as-fabricated optical image of the extremely large area (88 mm x 88 mm) superconducting integrated circuit (ELASIC) interconnecting 16 DUV Canon EX4 reticles. Four EX4 reticle based circuit layers are interconnected with an I-line reticle based circuit layer using sub-micron vias. Four I-line reticles (as shown in panel B) are connected via stitching, ultimately allowing full connectivity of 16 EX4 reticles through the I-line circuit layer to create the ELASIC.
desirable to reduce these values to a minimum. The current fabrication approach has two significant advantages over the traditional approaches. First, the approach enables the extension of narrow linewidths and low variability achievable with DUV lithography to a large-format 88 mm \(\times\) 88 mm field, without individual DUV reticle stitching. This is particularly advantageous for simplifying the fabrication process, which is expected to improve the yield, and is typically a limiting factor for large-format ICs. Process simplification can be achieved because deep UV lithography with a small overlay (~50 nm) can be used where necessary on JJ-based building blocks within the reticle, and less precise I-line lithography with an overlay of less than 100 nm can be used for the remaining passive interconnections between the reticles. Therefore, the mixing of I-line lithography with DUV allows for manageable stitching along with accurate features of the key layers. Second, ELASIC has a surface compatible with micro-bump fabrication [28, 29, 30, 31, 32] which provides the flexibility to use 2-stack and 3-stack integration with known good dies for heterogeneous integration. Furthermore, microbumps on the ELASIC allow superconducting flex integration [29] to distribute signals between ELASICs within a cryogenic system (mK to 4 K). Deep UV reticles further help create small micro-bumps (see **Supplemental Materials S1**), enabling a chip-like wiring density.
We characterized an 88 mm \(\times\) 88 mm ELASIC at room temperature using an automated wafer probe. Approximately 150 test structures were measured for rapid feedback **(Supplemental Materials S2)** regarding the new fabrication process. We compared the results with the MIT LL standard SFQ5ee [35, 36, 37] fabrication process, which uses all DUV reticles, to gauge the parametric variation and yield. **Figure 2** shows representative results of the 1 \(\upmu\)m Cross Bridge Kelvin Resistance (CBKR) junction resistance across the wafer for various SFQ5ee runs and compares it with the new combined DUV-I-line (ELASIC) fabrication process. The JJ resistance across the wafer for the previous SFQ5ee runs is comparable to that of the new approach, which indicates that the new fabrication approach to implementing the ELASIC platform has a comparable junction uniformity.
An ELASIC was attached to a printed circuit board (PCB) and wire bonded to enable measurement of the I-V characteristics of 0.8-\(\upmu\)m-linewidth stitched "snake & combs" test structures at 4.2 K. **Figures 3A and 3B** show 0.8 \(\upmu\)m stitched snake & comb lines going back and forth across the stitch boundary. We measured multiple 0.8 \(\upmu\)m stitched snake and comb lines, going back and forth for approximately 20 times in a series of 5 mm wire lengths, which had critical currents in the range of 30-40 mA at 4.2 K, which is comparable to typical measurements obtained in the SFQ5ee process. **Figure 3D** shows the I-V curve of the 0.8 \(\upmu\)m snake/combs stitched line as a representative example. From the I-V data shown in **(Figure 3C)** the Nb-stitched lines had a critical current of approximately 38 mA at 4.2 K. The large number of stitch boundaries for a long narrow line with 0.8 \(\upmu\)m width and 2 \(\upmu\)m space, and the high critical current/high current-carrying capacity of the stitched Nb
Figure 3: A) ELASIC GDS image, (B-C) Enlarged 0.8 \(\upmu\)m stitched snake & combs lines, and (D) critical current (I\({}_{c}\)) of a 0.8 \(\upmu\)m stitched snake & comb Nb lines at 4.2K.
Figure 2: Comparison of room-temperature Junction resistance variability data for the ELASIC platform and MIT LL’s standard DUV-reticle-based SIC fabrication processes (SFQ5A3, SF5A4)
lines make this process suitable for building extremely large-area integrated circuits (ELASICs) and capable of sustaining interconnect requirements for superconducting computing systems for heterogeneous integration.
We characterized the quantum flux parametron (QFP) circuits at cryogenic temperatures to validate the fabrication process. Ultra-low-energy superconducting logic gates based on QFPs[38, 39, 40] are promising, in part owing to the use of identical unit cells with relatively wide parametric operating margins. QFP data transmission was used to test the process, which is limited in communication distance by the inductance between cells, and therefore requires multiple send/receive inverters or buffer pairs on large-area circuits to cover the distance between circuit elements. The objective of this measurement was to test the operation of the QFP inverters fabricated in this process and to demonstrate the use of inverter pairs to send and receive data across an on-chip transmission line. The QFP circuit generates a larger output current in response to a small input current flowing into the QFP during the clock arrival. This mechanism reamplifies data at each inverter or buffer stage. **Figure 4** shows an example of a QFP inverter in series, demonstrating data communication across two pairs of inverters.
Various test circuits are designed to demonstrate the benefits of the proposed fabrication scheme. Test circuits composed of pairs of identical QFP inverters as driver and receiver elements, with strip-line and microstrip connections between active elements, were fabricated and tested to evaluate circuit functionality and evaluate data transmission across stitch boundaries. An example layout is shown in **Figure 5** along with the measured results for structures separated by a transmission distance of 200 \(\mu\)m. Overall, circuits functioned as designed for a typical QFP data signal level of 5-10 \(\mu\)A; minor DC offsets limited the minimum transmissible current amplitude to approximately 2 \(\mu\)A, equivalent to roughly a 300 pH transmission inductor for \(\Phi_{0}\):0.3, where 0.3 is the approximate coupling constant of the mutual inductor at the output of an inverter.
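Reading the quoted figure as a flux-to-current conversion (our interpretation: a signal of \(0.3\,\Phi_{0}\) developed across a 300 pH transmission inductance), the \(\sim\)2 \(\upmu\)A floor follows directly:

```python
PHI_0 = 2.067833848e-15        # magnetic flux quantum in Wb
k = 0.3                        # approximate coupling constant quoted in the text
L = 300e-12                    # 300 pH transmission inductance
print(k * PHI_0 / L)           # ~2.07e-6 A, i.e. the ~2 uA floor stated above
```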
Figure 4: Demonstration of QFP data transmission at 4.2 K. (a) Layout of a QFP circuit composed of two inverters to transmit the input data, followed by a 200-\(\mu\)m-long transmission line with two receiving inverters and a final DC SQUID that is used for readout of the final QFP state (b) Measurement results show operation of all elements, including correct data transmission between pairs of inverters across the transmission line.
Figure 5: (a) Layout of a QFP driver-receiver circuit, composed of two pairs of QFP inverters, one acting as a driver and one acting as a receiver, followed by a DC-SQUID for isolation and amplification of the QFP data-level signal (~5 \(\mu\)A) to a ~50 \(\mu\)V output signal. (b) Measurement results show a consistent DC offset from readout SQUID in the data output, with output data patterns matching the transmitted data pattern (1,0,1,0,1,0), indicating a successful transmission and correct circuit operation. A tendency to skew towards positive values in the output data pattern can be seen as overlapping red bars (data “1”) in a plot of readout SQUID voltage for data levels below about 2 \(\mu\)A.
Consistent with this DC-offset-limited communication, driver-receiver pairs functioned for test structures with 200-\(\upmu\)m spacing and inductances below 100 pH and failed for significantly higher inductance connections. From tests of similar structures in the SFQ5ee process [35, 36, 37], it is anticipated that the addition of a third inverter element in the driver and receiver circuits would reduce the DC overlap region and would improve the maximum transmission inductance.
## Discussion
In summary, we demonstrated a new approach for fabricating extremely large-area superconducting integrated circuits (ELASICs) on a 200-mm-diameter silicon wafer interconnecting 16 high-resolution DUV (248 nm lithography) reticles (each of 22 mm \(\times\) 22 mm) into a large superconducting system. Our approach uses I-line reticle stitching with only four stitch boundaries per layer to interconnect 16 DUV reticles. This helps minimize the yield loss, which is roughly proportional to the number of mask steps, and consequently allows for the creation of larger format systems. We believe that this is the first demonstration of such a large (88 mm \(\times\) 88 mm) superconducting integrated circuit produced by interconnecting high-resolution DUV reticles without stitching individual reticles.
The ELASIC provides a platform for interconnecting a large number of discrete SICs. Room-temperature electrical measurements indicated that the present fabrication approach is comparable to standard SICs fabricated using traditional DUV lithography, and maintains run-to-run consistency with unstitched circuits fabricated using the MIT LL SFQ5ee process. The ELASIC-stitched I-line test structures exhibited critical currents in the range of 30-40 mA at 4.2 K for 0.8-\(\upmu\)m-wide stitched lines. Cryogenic measurements of the QFP circuits showed data transmission through the QFP blocks, further supporting the ELASIC fabrication approach. The new fabrication approach is capable of transmitting QFP data signals in the range of 5-10 \(\upmu\)A. Overall, we developed a versatile interconnection approach to fabricate very large superconducting integrated circuits for heterogeneous integration suitable for computing scalability beyond arrays of a few chips. The ELASIC fabrication process has achieved an important milestone towards large-area active superconducting circuit fabrication and has the potential to advance the scaling of superconducting qubits [41] and other tri-layer Josephson junction based devices [15].
We believe that the current interconnection scheme can be extended to CMOS circuits for fabricating large-area active interposers. High-performance exascale computing [42] uses an active interposer for active-to-active bonding, which is necessary to increase the bandwidth and reduce latency. The current approach for creating large-area active interposers can overcome the existing interposer size limitations for advanced high-performance computing [43].
Figure 6: 3D view of the ELASIC (active superconducting chip carrier) fabrication process, starting from individual EX4 reticle based circuits and their interconnection schemes to fabricate the ELASIC. Four EX4 reticle based circuit layers (22 mm \(\times\) 22 mm) are interconnected with an I-line reticle based circuit layer (44 mm \(\times\) 44 mm) using submicron DUV bias (yellow). I-line reticles (44 mm \(\times\) 44 mm) are connected via stitching, ultimately allowing full connectivity of the 16 EX4 reticles through the I-line circuit layer.
## Methods
To fabricate the ELASIC, we utilized Lincoln Laboratory's SFQ5ee [35, 36, 37] process to fabricate niobium-based integrated circuits using Nb/Al-AlO\({}_{x}\)/Nb tri-layer Josephson junctions with a \(J_{c}\) of 10 kA/cm\({}^{2}\) and junction diameters down to 500 nm. This process utilizes high-resolution deep UV (248-nm photolithography) for multilayer Nb wiring with minimum circuit feature sizes down to 350 nm, Mo-based shunt resistors, and Nb-based superconducting via interconnects between all metal layers separated by a silica-based dielectric. **Figure 1 and Figure 6** show the design schemes for the ELASIC and the corresponding images of the fabricated devices. We use a 200 mm wafer fabrication process consisting of 13 photomasks: 8 (tiled) DUV photomasks with a 22 mm \(\times\) 22 mm field size were used to create junctions, resistors, interconnects, etc., and 4 (tiled) I-line photomasks with a field size of 44 mm \(\times\) 44 mm were used to interconnect the DUV reticles and for reticle stitching. Circuit wiring on individual I-line reticles with a field size of 44 mm \(\times\) 44 mm was stitched/joined [32] together at a stitch boundary to create an ELASIC field of 88 mm \(\times\) 88 mm. In summary, four EX4 reticles were interconnected with each other using a single I-line reticle, and four I-line reticles used reticle stitching to interconnect with each other to create a complete ELASIC from 16 interconnected EX4 reticles. The detailed stitching process has been described in our previous paper [32].
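The reticle hierarchy described above amounts to simple tiling arithmetic; the sketch below is purely illustrative (the variable names and coordinates are hypothetical, in millimeters) and only lays out how 16 EX4 fields and 4 I-line fields compose one ELASIC field.

```python
# Illustrative tiling arithmetic only; coordinates are hypothetical (mm).
EX4 = 22.0          # EX4 (DUV) reticle field size
ILINE = 2 * EX4     # I-line reticle field size (44 mm)

# 4 x 4 grid of EX4 fields and 2 x 2 grid of I-line fields covering the same area.
ex4_origins = [(col * EX4, row * EX4) for row in range(4) for col in range(4)]
iline_origins = [(col * ILINE, row * ILINE) for row in range(2) for col in range(2)]

elasic_size = 2 * ILINE                  # stitched I-line fields: 88 mm x 88 mm
assert elasic_size == 4 * EX4 == 88.0
print(len(ex4_origins), "EX4 fields,", len(iline_origins), "I-line fields,",
      f"ELASIC field {elasic_size:.0f} mm x {elasic_size:.0f} mm")
```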
Individual test structures were initially tested in liquid He to evaluate the fabrication approach. The ELASIC sample was then diced to 88 \(\times\) 88 mm\({}^{2}\) and mounted on a custom copper plate using silver paint and Apiezon N grease. A custom FR4 printed circuit board (PCB) was attached to the copper plate with screws, and wire-bond connections were made from the PCB to the silicon sample through cutouts in the PCB that allow access to the silicon below. The packaged sample assembly was mounted on a custom motherboard PCB installed on a 4 K cryocooler. A total of 300 signals were passed to the motherboard via two Ardent compression mount connectors and carried to room temperature on six 51-pin micro-d flex cables. The cryocooler was equipped with a high-permeability shield, and the materials used near the sample (including PCBs, connectors, and cables) were carefully selected to avoid any residual magnetic field. A full ELASIC (88 mm \(\times\) 88 mm) sample was assembled as shown in **Figure 7**. The ELASIC devices were thermally cycled multiple times in a cryocooler to check their integration stability, wiring reliability, and I-V characteristics.
## Characterization
Details of the various characterization methods are provided in the supplementary section (S1).
## Measurements
Detailed room-temperature measurement data from the wafer probe are provided in supplementary section (S2).
|
2310.02074 | ACE: A fast, skillful learned global atmospheric model for climate
prediction | Existing ML-based atmospheric models are not suitable for climate prediction,
which requires long-term stability and physical consistency. We present ACE
(AI2 Climate Emulator), a 200M-parameter, autoregressive machine learning
emulator of an existing comprehensive 100-km resolution global atmospheric
model. The formulation of ACE allows evaluation of physical laws such as the
conservation of mass and moisture. The emulator is stable for 100 years, nearly
conserves column moisture without explicit constraints and faithfully
reproduces the reference model's climate, outperforming a challenging baseline
on over 90% of tracked variables. ACE requires nearly 100x less wall clock time
and is 100x more energy efficient than the reference model using typically
available resources. Without fine-tuning, ACE can stably generalize to a
previously unseen historical sea surface temperature dataset. | Oliver Watt-Meyer, Gideon Dresdner, Jeremy McGibbon, Spencer K. Clark, Brian Henn, James Duncan, Noah D. Brenowitz, Karthik Kashinath, Michael S. Pritchard, Boris Bonev, Matthew E. Peters, Christopher S. Bretherton | 2023-10-03T14:15:06Z | http://arxiv.org/abs/2310.02074v2 | # ACE: A fast, skillful learned global atmospheric model for climate prediction
###### Abstract
Existing ML-based atmospheric models are not suitable for climate prediction, which requires long-term stability and physical consistency. We present ACE (AI2 Climate Emulator), a 200M-parameter, autoregressive machine learning emulator of an existing comprehensive 100-km resolution global atmospheric model. The formulation of ACE allows evaluation of physical laws such as the conservation of mass and moisture. The emulator is stable for 10 years, nearly conserves column moisture without explicit constraints and faithfully reproduces the reference model's climate, outperforming a challenging baseline on over 80% of tracked variables. ACE requires nearly 100x less wall clock time and is 100x more energy efficient than the reference model using typically available resources.
## 1 Introduction
The last year has seen a revolution in the field of numerical weather prediction. Multiple groups have shown improvements in key metrics over the state of the art physics-based medium-range weather prediction system using deep learning methods [2; 5; 13]. However, the applicability of these methods to climate modeling is unclear. Nearly all machine learning based weather prediction systems [2; 5; 11; 13; 16] have reported results for forecasts up to only 14 days -- with notable exceptions of [3; 23] -- and instabilities or unphysical artifacts often occur for longer simulations.
We claim the requirements of an ML-based atmospheric model for climate prediction are as follows. Such a model should maintain realistic weather variability and be stable for indefinite periods. Conservation of mass, moisture and energy is key. Surface and top-of-atmosphere fluxes of energy, moisture and momentum must be predicted to enable assessment of climate sensitivity and coupling with other components such as the ocean. Appropriate forcings should be used -- e.g. sea surface temperature (SST) in the case of an atmosphere-only model. Its long-term averages should be unbiased compared to a reference dataset. Finally, the model's performance must generalize across a broad range of plausible SST distributions and CO\({}_{2}\) concentrations.
Here we present ACE (AI2 Climate Emulator), a neural network based atmospheric model which satisfies many of the criteria listed above. ACE uses the Spherical Fourier Neural Operator (SFNO)
architecture [3] and is trained to emulate an existing physics-based atmospheric model with 6-hour temporal resolution. ACE runs stably for at least a decade and can do such a simulation in one hour of wall clock time, nearly 100 times faster than the reference atmospheric model and 100 times more energy efficiently. ACE predicts diagnostics such as the fluxes of energy and moisture through the top of atmosphere and Earth surface (e.g. precipitation). The model is framed so that the precise evaluation of conservation of mass and moisture is possible and we find that column moisture is very nearly conserved across individual timesteps. External forcings, such as incoming solar radiation and sea surface temperature, are used as inputs. Finally, ACE replicates the near-surface climatology of the reference model better than a 2x coarser but otherwise identical version of that model.
Related work includes ClimateBench [22] which proposes directly predicting climate metrics such as annual mean precipitation from input forcing variables like CO\({}_{2}\). The disadvantage of such an approach is the limited physical interpretability: for example what sub-annual variability gives rise to the annual mean? Another study [15] trains on climate model output but makes forecasts with 14- or 30-day lead times, leading to smooth predictions near the climatological mean (e.g. Fig. 20 of [15]).
## 2 Methods
### Dataset
Most ML-based weather prediction systems have been trained on the ERA5 reanalysis dataset [10]. While appealing due to its relatively accurate representation of historical atmospheric conditions, reanalysis data has downsides for the development of machine learning based climate models: it has a limited number of samples restricted to the recent past; models trained on reanalysis may not be reliable for future climates [1; 6]; and analysis increment terms (adjustments from observations) have no clear physical interpretation [19]. Therefore, we generate training data with an existing global atmospheric model (FV3GFS, the atmospheric component of the United States weather model [20; 24]).
The training data are an 11-member initial condition ensemble of 10-year simulations (hereafter the "reference" simulation; 10 years is the length after discarding a 3-month spin-up period) performed on NOAA/GFDL's GAEA computer. Ten ensemble members are used for training and the eleventh for validation. For simplicity, we use annually repeating climatological sea surface temperature (1982-2012 average) and fixed greenhouse gas and aerosol concentrations. The resolution of the reference simulation is about 100 km on the cubed sphere [17] with 63 vertical layers. Model state is saved every 6 hours, with a combination of snapshot and interval-mean variables. See Table 1 for a complete description of variables used for training. For compatibility with SFNO we regrid conservatively from the cubed-sphere geometry of FV3GFS to a 1\({}^{\circ}\) Gaussian grid [21], additionally filtering the data with a spherical harmonic transform round-trip to remove artifacts in the high latitudes. We coarsen the vertical coordinate to 8 layers while conserving moisture and energy (Appendix A).
### Training
ArchitectureWe use the SFNO architecture [3] to predict the state of the atmosphere at time \(t+6\mathrm{hr}\) using the state at time \(t\) as input. SFNO is a Fourier Neural Operator-based architecture which uses spherical harmonic transforms to enable efficient global convolutions on the sphere. Hyperparameters are described in Appendix B; the number of parameters is about 200M. Unlike many prior ML atmospheric prediction systems, we use prognostic variables \(P\) which are both inputs and outputs, forcing variables \(F\) which are inputs only and diagnostic variables \(D\) which are outputs only (Table 1). Explicitly, with \(t\) representing the time index: \([P_{t+1},D_{t+1}]=f(P_{t},F_{t})\) where \(f\) represents the SFNO module and forcing variables \(F_{t}\) are read from an external dataset (Figure 4). The diagnostic variables do not inform the next step, which is typical for physics-based atmospheric models and has the important benefit that fields such as precipitation are not necessary to initialize a simulation. However, because they are predicted by the same architecture that predicts the prognostic variables one can enforce physical constraints such as moisture conservation which affect prognostic model weights. This is different than some previous approaches which use a separate model to predict precipitation [16].
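A minimal sketch of this autoregressive formulation follows; the function and variable names (`rollout`, `toy_model`, array shapes) are placeholders rather than the actual ACE implementation, and a toy function stands in for the SFNO module.

```python
import numpy as np

def rollout(model, prognostic0, forcings, n_steps):
    """Autoregressive rollout sketch: [P_{t+1}, D_{t+1}] = f(P_t, F_t).

    `model(P, F) -> (P_next, D_next)` stands in for the SFNO emulator;
    `forcings` is an externally prescribed time series (e.g. SST, insolation).
    Diagnostics are collected but never fed back into the next step.
    """
    prognostic, diagnostics = prognostic0, []
    for t in range(n_steps):                      # each step advances 6 hours
        prognostic, diag = model(prognostic, forcings[t])
        diagnostics.append(diag)                  # e.g. precipitation, TOA fluxes
    return prognostic, diagnostics

# Toy stand-in for the emulator, only to make the sketch runnable.
toy_model = lambda P, F: (0.9 * P + 0.1 * F, {"precip": float(np.abs(P).mean())})
P0 = np.zeros((8, 180, 360))            # 8 vertical layers on a 1-degree grid
F = np.random.rand(4, 8, 180, 360)      # 4 forcing snapshots
P_final, diags = rollout(toy_model, P0, F, n_steps=4)
```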
Data NormalizationVariables are normalized using a "residual scaling" approach such that predicting outputs equal to input would result in each variable contributing equally to the loss function (similar to [11]). See Appendix F for details and Appendix G.1 for an ablation of this choice.
This "residual scaling" approach has the largest impact on the surface pressure, which ends up having a normalized standard deviation about 20 times larger than otherwise (Figure 9).
Loss FunctionThe loss function is a relative mean squared error. Given \(\mathbf{x}_{i}\) representing the normalized target for the \(i\)'th sample of a batch for all spatial points and channels, and \(\hat{\mathbf{x}}_{i}\) as the corresponding prediction, the loss for a batch of size \(B\) is \(\frac{1}{B}\sum_{i=1}^{B}\frac{\|\hat{\mathbf{x}}_{i}-\mathbf{x}_{i}\|_{2}^{2 }}{\|\hat{\mathbf{x}}_{i}\|_{2}^{2}}\). The loss is computed after a single forward step. Optimization hyperparameters are listed in Table 4.
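Transcribed literally into NumPy, the batch loss above reads as follows; this is a sketch, not the training code, and the tensor shapes are illustrative only.

```python
import numpy as np

def relative_mse(pred, target):
    """Batch loss mirroring the formula in the text.

    pred, target: arrays of shape (batch, channels, lat, lon), already
    normalized per variable as described above.
    """
    num = np.sum((pred - target) ** 2, axis=(1, 2, 3))   # ||x_hat_i - x_i||^2
    den = np.sum(pred ** 2, axis=(1, 2, 3))              # ||x_hat_i||^2
    return float(np.mean(num / den))

loss = relative_mse(np.random.rand(2, 5, 45, 90), np.random.rand(2, 5, 45, 90))
```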
### Evaluation
We are not aware of any existing purely machine learning based system that allows for long (at least 10-year) forecasts and includes a vertically resolved view of the entire atmosphere. Therefore, we formulate baselines using our physics-based model. Two cases will be considered: first a "reference" perfect emulator in which we compare members of the initial condition ensemble against each other, giving an upper bound on model skill. Because of the chaotic nature of the atmosphere and the limited duration (10 years) of our validation dataset, even a perfect emulator will have non-zero errors. Second, we run a difficult-to-beat 200 km "baseline": the same physics-based FV3GFS model but using both horizontal resolution and dynamics time step that are 2x coarser. This mimics the typical climate modeling strategy for faster simulation: use coarser resolution at the cost of accuracy.
Three metrics are used for evaluation: the time-dependent area-weighted global mean and the area-weighted global mean bias and RMSE of time-mean fields. These are defined in Appendix C.
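The precise definitions live in Appendix C and are not reproduced here; the sketch below assumes the standard cos-latitude area weighting on a regular latitude-longitude grid, which is one natural reading of these metrics, and uses placeholder array shapes.

```python
import numpy as np

def global_mean(field, lat_deg):
    """Area-weighted global mean on a regular lat-lon grid (cos-latitude weights).

    field: (..., lat, lon); lat_deg: (lat,).
    """
    w = np.cos(np.deg2rad(lat_deg))[:, None] * np.ones(field.shape[-1])
    return np.sum(field * w, axis=(-2, -1)) / np.sum(w)

def time_mean_bias_rmse(pred, ref, lat_deg):
    """Area-weighted bias and RMSE of the time-mean fields."""
    diff = pred.mean(axis=0) - ref.mean(axis=0)
    return global_mean(diff, lat_deg), np.sqrt(global_mean(diff ** 2, lat_deg))

lat = np.linspace(-89.5, 89.5, 180)
bias, rmse = time_mean_bias_rmse(np.random.rand(8, 180, 360),
                                 np.random.rand(8, 180, 360), lat)
```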
## 3 Results
Long-term stabilityInitializing from the start of the validation dataset, ACE is able to maintain a stable simulation and an unbiased global mean evolution of temperature and total water path for at least 10 years (Figure 1). While there is some year-to-year variability, the seasonal cycle of both of these fields is well represented by ACE. This result is dependent on choice of forcing variables, the normalization strategy (Appendix F) and using the SFNO architecture which has been shown to have favorable stability properties compared to Fourier Transform-based FourCastNet [3]. The one prognostic variable whose global mean drifts unrealistically is the upper-stratospheric water \(q_{0}^{T}\) (Figure 7).
The column moisture budget can be written in terms of total water path:
\[\frac{\partial TWP}{\partial t}=E-P+\left.\frac{\partial TWP}{\partial t}\right|_{adv} \tag{1}\]
where \(TWP=\frac{1}{g}\sum_{k}q_{k}^{T}\,dp_{k}\) is the amount of water in an atmospheric column and \(\left.\frac{\partial TWP}{\partial t}\right|_{adv}\) is the tendency of the total water path due to advection. \(E\) and \(P\) are the surface evaporation (\(E=LHF/L_{v}\)) and precipitation rate respectively. The physical model exactly satisfies Equation 1 by design but an ML emulator may not unless explicitly designed to do so. Nonetheless, ACE very nearly obeys the column-wise conservation of moisture (Eq. 1). Figure 3 shows the magnitude of the violation of the budget is very small compared to the individual terms in the budget: standard deviation of total water path tendency is \(\sim\)35 times that of the column budget violation term. This is true even one year into the inference simulation. Global mean budgets are described in Appendix E.
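A budget check of this kind reduces to comparing the storage term against the sum of sources; the sketch below is illustrative only, with hypothetical variable names, and assumes the advective tendency is available as a model output.

```python
import numpy as np

def column_moisture_residual(twp, evap, precip, adv_tendency, dt=6 * 3600.0):
    """Residual of Eq. (1): d(TWP)/dt - (E - P + advective tendency).

    twp: (time, lat, lon) total water path [kg/m^2]; evap, precip, adv_tendency:
    (time-1, lat, lon) interval means [kg/m^2/s].  A model that conserves
    column moisture exactly returns a residual of zero.
    """
    storage = (twp[1:] - twp[:-1]) / dt
    return storage - (evap - precip + adv_tendency)

res = column_moisture_residual(np.random.rand(5, 45, 90),
                               np.random.rand(4, 45, 90),
                               np.random.rand(4, 45, 90),
                               np.random.rand(4, 45, 90))
```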
Computational expenseTraining time was 45 hours on four NVIDIA-A100s. Running inference on a single A100 requires about one second of wall clock time per simulated day. For comparison, the reference simulation ran on 96 cores of AMD EPYC 7H12 processors and took \(\sim\)77 seconds per simulated day. The 2x coarser resolution baseline ran on 24 cores in \(\sim\)45 seconds per simulated day.
## 4 Conclusions and future work
This work demonstrates the potential of deep learning for skillful and fast climate model emulation. A 100x speed-up in run time and 100x greater energy efficiency would democratize the use of climate models, open new research avenues and potentially reduce energy usage. However, there are additional steps before this system is a useful climate model [8]. Generalizability is a key challenge. To reduce confounding factors, in this study we focused on a training dataset with simplified (annually-repeating) forcing. It may be necessary to expand the training and input variable set to be able to handle a changing climate. Our suggested approach is to train on a broad range of simulation data that covers the regimes of interest [6]. However, simulated data are not the real world and have their share of biases. Potential solutions are to fine-tune on reanalysis data (e.g. [9]) or on a smaller amount of high-resolution simulation data that has smaller biases [4]. Further improvements to our current training regime are possible, e.g., using appropriate constraints in the loss function [18] to reduce global non-physical sources of moisture and mass. Finally, coupling to other components (ocean, sea-ice, land) of the climate system is necessary. Tackling these challenges is an exciting opportunity for the growing field of machine learning based climate modeling.
Figure 3: Snapshot of the terms in the column moisture budget (Equation 1) one year into the simulation for (top) reference data and (bottom) the ACE simulation. Given the chaotic nature of the atmosphere, we do not expect details to match between the reference and ACE simulations. If column-integrated moisture is exactly conserved, the rightmost column should equal zero, as it does for the reference data.
Figure 2: 10-year mean bias in surface precipitation rate. Titles show global and time-mean RMSE and bias in units of mm/day (Equations 6 and 7).
## Acknowledgments and Disclosure of Funding
Thanks to NOAA's Geophysical Fluid Dynamics Laboratory for HPC resources used to create ML training/testing data.
|
2304.00924 | Limit theorems for random Motzkin paths near boundary | We consider Motzkin paths of length $L$, not fixed at zero at both end
points, with constant weights on the edges and general weights on the end
points. We investigate, as the length $L$ tends to infinity, the limit
behaviors of (a) boundary measures induced by the weights on both end points
and (b) the segments of the sampled Motzkin path viewed as a process starting
from each of the two end points, referred to as boundary processes.
Our first result concerns the case when the induced boundary measures have
finite first moments. Our second result concerns when the boundary measure on
the right end point is a generalized geometric measure with parameter
$\rho_1\ge 1$, so that this is an infinite measure and yet it induces a
probability measure for random Motzkin path when $\rho_1$ is not too large. The
two cases under investigation reveal a phase transition. In particular, we show
that the limit left boundary processes in the two cases have the same
transition probabilities as random walks conditioned to stay non-negative. | Wlodzimierz Bryc, Yizao Wang | 2023-04-03T12:28:41Z | http://arxiv.org/abs/2304.00924v2 | # Limit theorems for random Motzkin paths near boundary
###### Abstract.
We consider Motzkin paths of length \(L\), not fixed at zero at both end points, with constant weights on the edges and general weights on the end points. We investigate, as the length \(L\) tends to infinity, the limit behaviors of (a) boundary measures induced by the weights on both end points and (b) the segments of the sampled Motzkin path viewed as a process starting from each of the two end points, referred to as boundary processes. Our first result concerns the case when the induced boundary measures have finite first moments. Our second result concerns when the boundary measure on the right end point is a generalized geometric measure with parameter \(\rho_{1}\geq 1\), so that this is an infinite measure and yet it induces a probability measure for random Motzkin path when \(\rho_{1}\) is not too large. The two cases under investigation reveal a phase transition. In particular, we show that the limit left boundary processes in the two cases have the same transition probabilities as random walks conditioned to stay non-negative.
Key words and phrases:Random Motzkin paths 2010 Mathematics Subject Classification: 60C05
## 1. Introduction
### Weighted Motzkin paths without fixed boundary points
A Motzkin path of length \(L\in\mathds{Z}_{\geq 1}\) is a sequence of lattice points \((\mathbf{x}_{0},\ldots,\mathbf{x}_{L})\) in \(\mathds{Z}_{\geq 0}\times\mathds{Z}_{\geq 0}\), such that \(\mathbf{x}_{j}=(j,n_{j})\) with \(|n_{j}-n_{j-1}|\leq 1\). An edge \((\mathbf{x}_{j-1},\mathbf{x}_{j})\) is called an up step if \(n_{j}-n_{j-1}=1\), a down step if \(n_{j}-n_{j-1}=-1\) and a horizontal step if \(n_{j}-n_{j-1}=0\). Each such path can be identified with a sequence of non-negative integers that specify the starting point \((0,n_{0})\) and consecutive values \(n_{j}\) along the vertical axis at step \(j\geq 1\). We shall write \(\mathbf{\gamma}=(\gamma_{0},\gamma_{1},\ldots,\gamma_{L})\) with \(\gamma_{j}=n_{j}\) for such a sequence and refer to \(\mathbf{\gamma}\) as a Motzkin path. By \(\mathcal{M}_{i,j}^{(L)}\) we denote the family of all Motzkin paths of length \(L\) with the initial altitude \(\gamma_{0}=i\) and the final altitude \(\gamma_{L}=j\). We also refer to \(\gamma_{0}\) and \(\gamma_{L}\) as the boundary/end points of the path. Here, we follow the standard terminology; see Flajolet and Sedgewick (2009, Definition V.4, page 319) or Viennot (1985).
To introduce random Motzkin paths, we start by assigning weights to the edges and to the endpoints of a Motzkin path. In general, weights for the edges arise from three sequences \(\mathbf{a}=(a_{j})_{j\geq 0}\), \(\mathbf{b}=(b_{j})_{j\geq 0}\), \(\mathbf{c}=(c_{j})_{j\geq 0}\) of positive real numbers. For a path \(\mathbf{\gamma}=(\gamma_{0}=i,\gamma_{1},\ldots,\gamma_{L-1},\gamma_{L}=j)\in \mathcal{M}_{i,j}^{(L)}\) we define its (edge) weight
\[w(\mathbf{\gamma})=\prod_{k=1}^{L}a_{\gamma_{k-1}}^{\varepsilon_{k}^{+}}b_{\gamma _{k-1}}^{\varepsilon_{k}^{0}}c_{\gamma_{k-1}}^{\varepsilon_{k}^{-}}, \tag{1.1}\]
where
\[\varepsilon_{k}^{+}(\mathbf{\gamma}):=\mathbf{1}_{\gamma_{k}>\gamma_{k-1}},\ \varepsilon_{k}^{-}(\mathbf{\gamma}):=\mathbf{1}_{\gamma_{k}<\gamma_{k-1}},\ \varepsilon_{k}^{0}(\mathbf{\gamma}):=\mathbf{1}_{\gamma_{k}=\gamma_{k-1}},k=1, \ldots,L.\]
That is, the edge weight is multiplicative in the edges, we take \(\mathbf{a}\), \(\mathbf{b}\) and \(\mathbf{c}\) as the weights of the up steps, horizontal steps and down steps, and the weight of a step depends on the altitude at which the step begins.
In addition to the edge weights, we assign weights to the end points of a path through two sequences \(\boldsymbol{\alpha}=(\alpha_{n})_{n\geq 0}\) and \(\boldsymbol{\beta}=(\beta_{n})_{n\geq 0}\) of non-negative numbers. Write

\[\mathfrak{W}^{(L)}_{m,n}:=\sum_{\boldsymbol{\gamma}\in\mathcal{M}^{(L)}_{m,n}}w(\boldsymbol{\gamma}) \tag{1.2}\]

for the total edge weight of the paths in \(\mathcal{M}^{(L)}_{m,n}\), and let \(\mathcal{M}^{(L)}:=\bigcup_{m,n\geq 0}\mathcal{M}^{(L)}_{m,n}\) denote the family of all Motzkin paths of length \(L\) with free end points. A random Motzkin path of length \(L\) is a random element \(\boldsymbol{\gamma}^{(L)}=(\gamma^{(L)}_{0},\gamma^{(L)}_{1},\ldots,\gamma^{(L)}_{L})\) of \(\mathcal{M}^{(L)}\) with law

\[\mathds{P}_{L}(\boldsymbol{\gamma})\equiv\mathds{P}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}(\boldsymbol{\gamma})=\frac{1}{\mathfrak{C}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}}\,\alpha_{\gamma_{0}}\,\beta_{\gamma_{L}}\,w(\boldsymbol{\gamma}),\qquad\boldsymbol{\gamma}\in\mathcal{M}^{(L)}, \tag{1.3}\]

where \(\mathfrak{C}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}=\sum_{m,n\geq 0}\alpha_{m}\beta_{n}\mathfrak{W}^{(L)}_{m,n}\) is the normalizing constant, which we assume to be finite. When the sequences \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) are summable, their normalizations define probability measures on \(\mathds{Z}_{\geq 0}\), to which we refer as the boundary measures. Throughout the paper we consider constant edge weights

\[a_{n}=c_{n}=1,\quad b_{n}=\sigma\geq 0,\qquad n\geq 0. \tag{1.4}\]

We are interested in the behavior of the path near its two end points. The left boundary process is the path \(\big(\gamma^{(L)}_{k}\big)_{0\leq k\leq L}\) itself, read from the left end point, and the right boundary process is \(\big(\widetilde{\gamma}^{(L)}_{k}\big)_{0\leq k\leq L}\) with \(\widetilde{\gamma}^{(L)}_{k}:=\gamma^{(L)}_{L-k}\), i.e., the same path read from the right end point.
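For small \(L\) these objects can be enumerated by brute force; the following sketch is illustrative only (it is not part of the paper) and computes \(w(\boldsymbol{\gamma})\), the sets \(\mathcal{M}^{(L)}_{m,n}\) and their total weights under the constant edge weights (1.4).

```python
from itertools import product

def motzkin_paths(L, i, j):
    """All Motzkin paths of length L from altitude i to altitude j."""
    paths = []
    for steps in product((-1, 0, 1), repeat=L):
        heights = [i]
        for s in steps:
            heights.append(heights[-1] + s)
        if min(heights) >= 0 and heights[-1] == j:
            paths.append(heights)
    return paths

def weight(path, a, b, c):
    """Edge weight w(gamma) of (1.1); a, b, c map the starting altitude to a step weight."""
    w = 1.0
    for prev, cur in zip(path, path[1:]):
        w *= a(prev) if cur > prev else (c(prev) if cur < prev else b(prev))
    return w

sigma = 0.5                       # constant weights (1.4): a_n = c_n = 1, b_n = sigma
a = c = (lambda n: 1.0)
b = lambda n: sigma
W = lambda L, m, n: sum(weight(p, a, b, c) for p in motzkin_paths(L, m, n))
print(W(3, 0, 1))                 # the quantity W^{(3)}_{0,1} in the notation of (1.2)
```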
Our contributions are limit theorems for the boundary processes \(\mathbf{\gamma}^{(L)}\) and \(\widetilde{\mathbf{\gamma}}^{(L)}\). Our limit theorems concern two cases. The first is when the boundary measures are both finite and have finite first moments. In the second one, we consider _general_ geometric weights at the right end point with parameter \(\rho_{1}\) which we allow to be larger than one. So in this latter case the right boundary measure may become infinite and we reveal a phase transition when parameter \(\rho_{1}\) crosses \(1\). In the limit, the left boundary processes have the same transition probabilities as the random walks conditioned to stay non-negative (Bertoin and Doney, 1994).
### First result: when the boundary measures have finite first moments
Consider the following transition probabilities for a Markov chain on \(\mathds{Z}_{\geq 0}\) from \(n\) to \(m\):
\[\mathsf{P}_{n,m}:=\begin{cases}\dfrac{1}{2+\sigma}\dfrac{n+2}{n+1},&\text{ if }m=n+1,\\ \dfrac{\sigma}{2+\sigma},&\text{ if }m=n,\\ \dfrac{1}{2+\sigma}\dfrac{n}{n+1},&\text{ if }m=n-1\geq 0,\\ 0,&\text{ otherwise},\end{cases}\qquad n\geq 0. \tag{1.5}\]
With \(\sigma=0\), this corresponds to the discrete \(3\)-dimensional Bessel process introduced in Pitman (1975).
**Theorem 1.1**.: _Assume (1.4) with \(\sigma>0\) and_
\[\sum_{n=0}^{\infty}n\alpha_{n}<\infty\quad\text{ and }\quad\sum_{n=0}^{\infty}n \beta_{n}<\infty. \tag{1.6}\]
_Then,_
\[\left(\left\{\gamma_{k}^{(L)}\right\}_{k\geq 0},\left\{\widetilde{\gamma}_{k}^ {(L)}\right\}_{k\geq 0}\right)\Rightarrow\left(\left\{X_{k}\right\}_{k\geq 0}, \left\{X_{k}^{\prime}\right\}_{k\geq 0}\right),\]
_as \(L\to\infty\), where the right-hand side are two independent Markov chains with transition probabilities \(\{\mathsf{P}_{n,m}\}_{n,m\geq 0}\) in (1.5), and initial laws of \(X_{0}\) and \(X_{0}^{\prime}\) respectively as_
\[\mathds{P}(X_{0}=n)=\frac{1}{C_{\mathbf{\alpha}}}(n+1)\alpha_{n},\quad\mathds{P}( X_{0}^{\prime}=n)=\frac{1}{C_{\mathbf{\beta}}}(n+1)\beta_{n},\;n\geq 0, \tag{1.7}\]
_with the normalization constants_
\[C_{\mathbf{\alpha}}=\sum_{m=0}^{\infty}(m+1)\alpha_{m},\quad C_{\mathbf{\beta}}=\sum_ {m=0}^{\infty}(m+1)\beta_{m}.\]
_Remark 1.2_.: We assume \(\sigma>0\). We note that when \(\sigma=0\), even the asymptotic independence of the end points \(\gamma_{0}^{(L)},\gamma_{L}^{(L)}\) does not hold. For example, with \(\alpha_{n}=\beta_{n}=0\) for \(n\geq 2\) and \(L=2N\), the end-points are dependent, \(\gamma_{0}=\gamma_{2N}\in\{0,1\}\), and the limiting law is different and depends on both sequences \(\boldsymbol{\alpha},\boldsymbol{\beta}\):
\[\lim_{N\to\infty}\mathds{P}(\gamma_{2N}^{(2N)}=1)=\frac{4\alpha_{1}\beta_{1}}{ \alpha_{0}\beta_{0}+4\alpha_{1}\beta_{1}}.\]
This implies that Theorem 1.1 does not hold for \(\sigma=0\).
_Remark 1.3_.: Theorem 1.1 implies that, as expected, the end-points of a random Motzkin path become asymptotically independent as the length \(L\) of the path tends to infinity. The Markov chain with kernel \(\mathsf{P}\) and \(\sigma=0\), the discrete \(3\)-dimensional Bessel process, has shown up in limit theorems for simple random walks conditioned to stay non-negative, see Bertoin and Doney (1994); Keener (1992). However, for the limit marginal laws, each can be interpreted as a size-biased sampling from the initial law. We find the appearance of this specific bias in the limit somewhat unexpected.
_Remark 1.4_.: Note that we scale neither time nor the altitudes, and our limit theorems should be considered as _microscopic_ and _local_: both limit processes, despite being of infinite length, characterize asymptotic behaviors near the end-points of a random Motzkin path as the distance between the end-points goes to infinity. Our results should be compared to limit theorems for random walks conditioned to stay non-negative, which concern _initial behaviors of the random walk_ (Bertoin and Doney, 1994).
Another type of limit theorem concerns the _macroscopic_ limit, with scaling in both time and altitude. See for example Kaigh (1976) and Bryc and Wang (2019).
### Second result: phase transition with general geometric boundary measures
The assumption (1.6) says that the normalized boundary measures on both end points have finite mean. Our next result concerns the situation when the boundary measure on the right end point is an infinite measure. In fact, we focus on choosing geometric weights \(\boldsymbol{\beta}\) as
\[\beta_{n}=\rho_{1}^{n},\quad n\geq 0,\]
with \(\rho_{1}\geq 1\). For \(\mathds{P}_{L}\equiv\mathds{P}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}\) to be a well-defined probability measure, we need
\[\sum_{m=0}^{\infty}m\alpha_{m}\rho_{1}^{m}<\infty, \tag{1.8}\]
to guarantee \(\mathfrak{C}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}<\infty\).
In this regime, we obtain a different Markov chain in the limit when \(\rho_{1}>1\). Its transition probabilities are,
\[\mathsf{Q}_{n,m}^{(\rho)}:=\begin{cases}\frac{1}{\rho+1/\rho+ \sigma}\;\frac{\rho^{n+2}-1/\rho^{n+2}}{\rho^{n+1}-1/\rho^{n+1}},&\text{ if }m=n+1,\\ \\ \frac{1}{\rho+1/\rho+\sigma}\sigma,&\text{ if }m=n,\\ \\ \frac{1}{\rho+1/\rho+\sigma}\;\frac{\rho^{n}-1/\rho^{n}}{\rho^{n+1}-1/\rho^{n+ 1}},&\text{ if }m=n-1,\\ \\ 0&\text{ otherwise},\end{cases} \tag{1.9}\]
where \(\rho\neq 1\). We note that \(\rho=1\) is a removable singularity, and in a more compact form one can write
\[\mathsf{Q}_{n,m}^{(\rho)}:=\frac{1}{\rho^{2}+1+\rho\sigma}\frac{[m+1]_{\rho^{ 2}}}{[n+1]_{\rho^{2}}}\left(\mathbf{1}_{\{m-n=1\}}+\rho\sigma\mathbf{1}_{\{m -n=0\}}+\rho^{2}\mathbf{1}_{\{m-n=-1\}}\right),\quad\rho\geq 0,\]
with \([n]_{\rho^{2}}:=1+\rho^{2}+\cdots+\rho^{2(n-1)}\), and in particular \(\mathsf{P}_{n,m}=\mathsf{Q}_{n,m}^{(1)}\).
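A quick numerical check (a sketch, not part of the paper) confirms that each row of \(\mathsf{Q}^{(\rho)}\) sums to one, that the compact \(q\)-bracket form agrees with (1.9), and that \(\mathsf{P}_{n,m}\) is recovered in the limit \(\rho\to 1\); the parameter values below are arbitrary.

```python
def Q(n, m, rho, sigma):
    """Transition kernel (1.9) (take rho close to 1 for the rho = 1 case)."""
    D = rho + 1 / rho + sigma
    u = lambda k: rho ** k - rho ** (-k)
    if m == n + 1:
        return u(n + 2) / (D * u(n + 1))
    if m == n:
        return sigma / D
    if m == n - 1 and m >= 0:
        return u(n) / (D * u(n + 1))
    return 0.0

def Q_compact(n, m, rho, sigma):
    """Compact form with [k]_{rho^2} = 1 + rho^2 + ... + rho^{2(k-1)}."""
    br = lambda k: sum(rho ** (2 * i) for i in range(k))
    ind = {1: 1.0, 0: rho * sigma, -1: rho ** 2}.get(m - n, 0.0)
    return br(m + 1) * ind / (br(n + 1) * (rho ** 2 + 1 + rho * sigma))

rho, sigma = 1.3, 0.5
for n in range(6):
    row = [Q(n, m, rho, sigma) for m in range(n + 2)]
    assert abs(sum(row) - 1.0) < 1e-12                    # rows sum to one
    assert all(abs(Q(n, m, rho, sigma) - Q_compact(n, m, rho, sigma)) < 1e-12
               for m in range(n + 2))
# P_{n,m} of (1.5) is recovered as rho -> 1:
assert abs(Q(3, 4, 1 + 1e-7, sigma) - (1 / (2 + sigma)) * 5 / 4) < 1e-6
```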
_Remark 1.5_.: When \(\rho=1\) (\(\rho>1\) resp.), \(\mathsf{Q}^{(\rho)}\) with \(\sigma=0\) correspond to the conditional law of a simple random walk (a biased random walk drifting to \(\infty\) resp.) staying non-negative, as summarized in Bertoin and Doney (1994). For \(\sigma=0\), the Markov process appeared also previously in Miyazaki and Tanaka (1989, (2.a)).
**Theorem 1.6**.: _Consider weighted random Motzkin paths with general weights \(\boldsymbol{\alpha}\) on the left end points and geometric weights \(\beta_{n}=\rho_{1}^{n},n\geq 0\) on the right end points as above. Suppose \(\rho_{1}\geq 1\) and (1.8) holds. Then \(\gamma_{L}^{(L)}\) is not tight as \(L\to\infty\), and_
\[\left(\left\{\gamma_{k}^{(L)}\right\}_{k\geq 0},\left\{\widetilde{\gamma}_{k}^{(L)}-\widetilde{\gamma}_{k-1}^{(L)}\right\}_{k\geq 1}\right)\Rightarrow\left(\left\{Z_{k}\right\}_{k\geq 0},\left\{\xi_{k}\right\}_{k\geq 1}\right),\]
_where on the right-hand side, \(\{Z_{k}\}_{k\geq 0}\) is the Markov chain with transition probabilities \(\{\mathsf{Q}_{n,m}^{(\rho_{1})}\}_{n,m\geq 0}\) in (1.9) and initial law \(Z_{0}\) given by_
\[\mathds{P}(Z_{0}=n)=\frac{1}{\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1}}} \alpha_{n}\frac{\rho_{1}^{n+1}-\rho_{1}^{-(n+1)}}{\rho_{1}-\rho_{1}^{-1}},\;n \geq 0, \tag{1.10}\]
_with \(\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1}}=\sum_{n=0}^{\infty}\alpha_{n}( \rho_{1}^{n+1}-\rho_{1}^{-(n+1)})/(\rho_{1}-\rho_{1}^{-1})\), the family \(\{\xi_{k}\}_{k\geq 1}\) are i.i.d. random variables with_
\[\mathds{P}(\xi_{1}=\epsilon)=\frac{1}{\rho_{1}+1/\rho_{1}+\sigma}\begin{cases} 1/\rho_{1},&\text{ if }\epsilon=1,\\ \sigma,&\text{ if }\epsilon=0,\\ \rho_{1},&\text{ if }\epsilon=-1,\end{cases}\]
_and the two families are independent._
_Remark 1.7_.: There are two reasons that we work with the general geometric measure on the right end point. First, unlike under the first-moment assumption (1.6), where a unified approach based on a formula by Viennot (1985) is available, when (1.6) is violated one seems to have to go through model-specific calculations. Second, the choice of geometric boundary measures is motivated by recent advances on asymmetric simple exclusion processes with open boundary. See Section 1.4. The choice of geometric boundary measures is crucial at two steps. See Remark 4.3. It would be interesting to work out another example of infinite boundary measures. It is not clear to us whether other types of limit boundary processes may arise.
_Remark 1.8_.: It is remarkable that in the result above, in contrast to Theorem 1.1, the boundary measure on the right end point has a non-negligible influence on the initial law and the transitional law of boundary process near the left end point.
### Motivation from physics literature: geometric boundary measures on both ends
Our original motivation came from recent results in the mathematical physics literature, where generalized geometric measures are assumed for both end points in a representation for the stationary measure of an asymmetric simple exclusion process in Derrida et al. (2004) and Barraquand and Le Doussal (2022). Namely, consider \(\alpha_{n}=\rho_{0}^{n},\beta_{n}=\rho_{1}^{n},n\geq 0\) with \(\rho_{0}\in(0,1)\). Then, the condition (1.8) becomes
\[\rho_{1}<\frac{1}{\rho_{0}}.\]
Our limit theorems then state a phase transition regarding the boundary processes. Let \(G_{p}\) denote a geometric random variable with parameter \(0<p<1\) (i.e., \(\mathds{P}(G_{p}=n)=(1-p)p^{n}\), \(n\geq 0\)).
**Corollary 1.9**.: _Consider the random Motzkin paths with \(\alpha_{n}=\rho_{0}^{n},\beta_{n}=\rho_{1}^{n},n\geq 0\), with \(\rho_{0}\in(0,1)\) and \(\rho_{0}\rho_{1}<1\). Then with \(\hat{\rho}:=\max\{1,\rho_{1}\}\),_
\[\left\{\gamma_{k}^{(L)}\right\}_{k\geq 0}\Rightarrow\left\{Z_{k}\right\}_{k\geq 0 },\]
_as \(L\to\infty\), where on the right-hand side, \(\{Z_{k}\}_{k\geq 0}\) is the Markov chain with transition probabilities \(\{\mathsf{Q}_{n,m}^{(\hat{\rho})}\}_{n,m\geq 0}\) in (1.9) and \(Z_{0}=G_{\rho_{0}\hat{\rho}}+\widetilde{G}_{\rho_{0}/\hat{\rho}}\) is the sum of two independent geometric random variables._
Proof.: If \(\rho_{1}<1\), then \(\hat{\rho}=1\) and the conclusion follows from Theorem 1.1, with (1.7) giving the negative binomial law for \(Z_{0}\). If \(\rho_{1}\in[1,1/\rho_{0})\), then \(\hat{\rho}=\rho_{1}\) and the result follows from Theorem 1.6. In this case,
\[\mathds{E}z^{Z_{0}}=\frac{\sum_{n=0}^{\infty}\rho_{0}^{n}z^{n}\left(\rho_{1}^ {n+1}-1/\rho_{1}^{n+1}\right)/(\rho_{1}-1/\rho_{1})}{\sum_{n=0}^{\infty}\rho_ {0}^{n}\left(\rho_{1}^{n+1}-1/\rho_{1}^{n+1}\right)/(\rho_{1}-1/\rho_{1})}= \frac{(1-\rho_{0}\rho_{1})(1-\rho_{0}/\rho_{1})}{(1-z\rho_{0}\rho_{1})(1-z\rho _{0}/\rho_{1})},\]
identifying the law as desired. The expression (1.10) gives the so-called \(q\)-negative binomial law, see (Charalambides, 2016, Theorem 3.1 with parameters \(k=2\), \(q=\rho_{1}^{2}\), \(\theta=\rho_{0}/\rho_{1}\)).
The limit process \(\{Z_{k}\}_{k\geq 0}\) can be viewed as a discrete version of a Bessel process, with random initial position. Furthermore, by choosing the parameters \(\rho_{0},\rho_{1}\) appropriately, the increment process scales to the process corresponding to the non-Gaussian components of the conjectured _stationary measure of the half-line KPZ fixed point_ (Barraquand and Le Doussal, 2022; Bryc and Kuznetsov, 2022), represented by the right-hand side of (1.11) below. In particular, in the forthcoming paper by Bryc and Wesolowski (2023) the following is shown. With \(\rho_{0}^{(n)}=1-u/\sqrt{n}\), \(\rho_{1}^{(n)}=1-v/\sqrt{n}\), for fixed \(u>0\), \(u+v>0\), and letting \(\{Z_{k}^{(n)}\}_{k\geq 0}\) denote the process \(Z\) with parameters \(\rho_{0}^{(n)},\rho_{1}^{(n)}\), we have
\[\left\{Z_{\lfloor nt\rfloor}^{(n)}-Z_{0}^{(n)}\right\}_{t\geq 0}\Rightarrow \left\{2\left(\sup_{0\leq s\leq t}B_{s}^{(v)}-\frac{1}{u+v}\gamma\right)_{+}-B _{t}^{(v)}\right\}_{t\geq 0}, \tag{1.11}\]
as \(n\to\infty\) in the space of \(D([0,\infty))\), where on the right-hand side \(B_{t}^{(v)}:=\mathbb{B}_{2t/(2+\sigma)}+vt/(2+\sigma),t\geq 0\) with \(\{\mathbb{B}_{t}\}_{t\geq 0}\) a standard Brownian motion, and \(\gamma\) a standard exponential random variable independent from \(\{B_{t}^{(v)}\}_{t\geq 0}\).
The paper is organized as follows. In Section 2 we recall Viennot's formula. In Sections 3 and 4 we prove Theorems 1.1 and 1.6 respectively.
## 2. Viennot's formula
A key ingredient of our proof is a formula in Viennot (1985). Given three sequences \(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c}\) of the edge-weights for the Motzkin paths, following Viennot (1985) and Flajolet and Sedgewick (2009) we define real polynomials \(p_{-1}(x)=0\), \(p_{0}(x)=1,p_{1}(x),\dots\) by the three step recurrence
\[xp_{n}(x)=a_{n}p_{n+1}(x)+b_{n}p_{n}(x)+c_{n}p_{n-1}(x),\quad n=0,1,2,\dots \tag{2.1}\]
(With the usual conventions that \(p_{-1}(x)=0\), \(p_{0}(x):=1\) and \(a_{n}>0\), the recursion determines polynomials \(\{p_{n}(x)\}\) uniquely, with \(p_{1}(x)=(x-b_{0})/a_{0}\).) By Favard's theorem (Ismail, 2009), polynomials \(\{p_{n}(x)\}\) are orthogonal with respect to a (possibly non-unique) probability measure \(\nu\) on the real line. It is well known that the \(L_{2}\) norm \(\|p_{n}\|_{2}^{2}:=\int_{\mathds{R}}(p_{n}(x))^{2}\nu(dx)\) is given by the formula
\[\|p_{n}\|_{2}^{2}=\prod_{k=1}^{n}\frac{c_{k}}{a_{k-1}}.\]
**Proposition 2.1** (Viennot's formula).: _Assume that_
\[\phi_{\boldsymbol{\alpha}}(x,z)=\sum_{n=0}^{\infty}\alpha_{n}z^{n}p_{n}(x),\;\psi _{\boldsymbol{\beta}}(x,z)=\sum_{n=0}^{\infty}\beta_{n}z^{n}\frac{p_{n}(x)}{ \|p_{n}\|_{2}^{2}} \tag{2.2}\]
_converge absolutely on the support of \(\nu\) for all \(z\in\mathds{C},|z|\leq 1\), the function \(x^{L}\phi_{\alpha}(x,1)\psi_{\beta}(x,1)\) is integrable with respect to the measure \(\nu\), and_
\[\int_{\mathds{R}}\sum_{m,n=0}^{\infty}\alpha_{n}\frac{\beta_{m}}{\|p_{m}\|_{2 }^{2}}\left|x^{L}p_{n}(x)p_{m}(x)\right|\nu(dx)<\infty. \tag{2.3}\]
_Then,_
\[\int_{\mathds{R}}p_{m}(x)p_{n}(x)x^{L}\nu(dx)=\|p_{n}\|_{2}^{2}\sum_{\gamma \in\mathcal{M}^{(L)}_{m,n}}w(\boldsymbol{\gamma}). \tag{2.4}\]
Proof.: The identity (2.4) can be found in Viennot (1985, (5)). Since the weights in Viennot (1985) are slightly less general and the proof there is only sketched, we provide a self-contained proof for (2.4). We give a proof by induction on \(L\), writing \(M^{(L)}_{m,n}:=\int_{\mathds{R}}p_{m}(x)p_{n}(x)x^{L}\nu(dx)\) for the left-hand side of (2.4). Recall \(\mathfrak{W}^{(L)}_{m,n}\) in (1.2). It is easy to see that for \(L=1\) the only non-zero weights are \(\mathfrak{W}^{(1)}_{m,m-1}=c_{m}\), \(\mathfrak{W}^{(1)}_{m,m}=b_{m}\) and \(\mathfrak{W}^{(1)}_{m,m+1}=a_{m}\). On the other hand, from the three step recursion (2.1) we see that the integrals \(\int xp_{m}(x)p_{n}(x)\nu(dx)\) are zero, except for the following three cases: \(\int xp_{m}(x)p_{m-1}(x)\nu(dx)=c_{m}\|p_{m-1}\|_{2}^{2}\), \(\int xp_{m}(x)p_{m}(x)\nu(dx)=b_{m}\|p_{m}\|_{2}^{2}\) and \(\int xp_{m}(x)p_{m+1}(x)\nu(dx)=a_{m}\|p_{m+1}\|_{2}^{2}\). Thus \(M^{(1)}_{m,n}=\mathfrak{W}^{(1)}_{m,n}\|p_{n}\|_{2}^{2}\) for all \(m,n\geq 0\), i.e. (2.4) holds for \(L=1\).
Next, we use (1.1) to observe that
\[\mathfrak{W}^{(L+1)}_{m,n}=a_{m}\mathfrak{W}^{(L)}_{m+1,n}+b_{m}\mathfrak{W}^ {(L)}_{m,n}+c_{m}\mathfrak{W}^{(L)}_{m-1,n}.\]
From the three step recursion (2.1) we see that the same recursion holds for \(M^{(L+1)}_{m,n}\). This proves (2.4) by mathematical induction.
## 3. Proof of Theorem 1.1
We first specify Viennot's formula to the setting of Theorem 1.1. In this case, recursion (2.1) becomes
\[xp_{n}(x)=p_{n+1}(x)+\sigma p_{n}(x)+p_{n-1}(x),\]
so polynomials \(\{p_{n}\}\) are just the monic Chebyshev polynomials of the second kind with shifted argument, \(p_{n}(x)=u_{n}(x-\sigma)\) where \(u_{n}(2\cos\theta)=\sin((n+1)\theta)/\sin\theta\) satisfy recursion
\[xu_{n}(x)=u_{n+1}(x)+u_{n-1}(x),\quad n=0,1,\ldots\]
with \(u_{-1}(x)\equiv 0,u_{0}(x)\equiv 1\). One readily checks from the above that
\[u_{n}\left(z+z^{-1}\right)=z^{-n}\sum_{k=0}^{n}z^{2k}=\begin{cases}\frac{z^{n +1}-z^{-(n+1)}}{z-z^{-1}},&\text{ if }z\neq 1,\\ n+1,&\text{ if }z=1.\end{cases} \tag{3.1}\]
In particular, polynomials \(\{p_{n}\}_{n\geq 0}\) are orthogonal with respect to probability measure
\[\nu(dx)=\frac{\sqrt{4-(x-\sigma)^{2}}}{2\pi}\mathbf{1}_{\{|x-\sigma|<2\}}dx.\]
Introduce the probability generating function for the end-points of a random Motzkin path of length \(L\),
\[\mathds{E}\left[z_{0}^{\gamma_{0}^{(L)}}z_{1}^{\gamma_{L}^{(L)}}\right]:=\sum_{ \gamma\in\mathcal{M}^{(L)}}z_{0}^{\gamma_{0}}z_{1}^{\gamma_{L}}\mathds{P}_{L}( \gamma),\ L\geq 1.\]
**Lemma 3.1**.: _Under the assumptions of Theorem 1.1, we have_
\[\frac{1}{2\pi}\int_{-2}^{2}u_{m}(x)u_{n}(x)(x+\sigma)^{L}\sqrt{4-x^{2}}dx=\sum _{\boldsymbol{\gamma}\in\mathcal{M}_{m,n}^{(L)}}w(\boldsymbol{\gamma}), \tag{3.2}\]
_and_
\[\mathds{E}\left[z_{0}^{\gamma_{0}^{(L)}}z_{1}^{\gamma_{L}^{(L)}}\right]= \frac{\mathsf{M}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}(z_{0},z_{1})}{ \mathsf{M}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}(1,1)}, \tag{3.3}\]
_where_
\[\mathsf{M}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}(z_{0},z_{1}):=\frac{1}{ 2\pi}\int_{-2}^{2}\Phi_{\boldsymbol{\alpha}}(z_{0},x)\Phi_{\boldsymbol{\beta} }(z_{1},x)(x+\sigma)^{L}\sqrt{4-x^{2}}dx. \tag{3.4}\]
Proof.: It is clear that \(\|p_{n}\|_{2}^{2}=1\). Also under the assumptions (1.6) and \(z\leq 1\), (2.2) and (2.3) are satisfied as \(|u_{n}(x)|\leq u_{n}(2)=n+1\) for \(|x|\leq 2\). So the series
\[\Phi_{\boldsymbol{\alpha}}(z,x)=\sum_{n=0}^{\infty}\alpha_{n}z^{n}u_{n}(x), \quad\Phi_{\boldsymbol{\beta}}(z,x)=\sum_{n=0}^{\infty}\beta_{n}z^{n}u_{n}(x) \tag{3.5}\]
converge uniformly in \(x\in[-2,2]\). Then, (3.2) follows from (2.4) in Proposition 2.1 and a change of variables. For (3.3), we have
\[\mathds{E}\left[z_{0}^{\gamma_{0}^{(L)}}z_{1}^{\gamma_{L}^{(L)}}\right]=\frac {\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\alpha_{m}z_{0}^{m}\beta_{n}z_{1}^{n} \sum_{\boldsymbol{\gamma}\in\mathcal{M}_{m,n}^{(L)}}w(\boldsymbol{\gamma})}{ \sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\alpha_{m}\beta_{n}\sum_{\boldsymbol{ \gamma}\in\mathcal{M}_{m,n}^{(L)}}w(\boldsymbol{\gamma})}.\]
So it suffices to show
\[\mathsf{M}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}(z_{0},z_{1})=\sum_{m=0} ^{\infty}\sum_{n=0}^{\infty}\alpha_{m}z_{0}^{m}\beta_{n}z_{1}^{n}\sum_{ \boldsymbol{\gamma}\in\mathcal{M}_{m,n}^{(L)}}w(\boldsymbol{\gamma}).\]
To see this, it suffices to start from the right-hand side above and apply (3.2) and Fubini's theorem.
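Formula (3.2) is also easy to confirm numerically for small \(L\) by comparing a quadrature of the left-hand side with a brute-force enumeration of Motzkin paths; the sketch below is illustrative only, with arbitrary choices of \(\sigma\), \(L\), \(m\), \(n\).

```python
import numpy as np
from itertools import product

def u(n, x):
    """Monic Chebyshev polynomials of the second kind, via their recurrence."""
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    for _ in range(n):
        p_prev, p = p, x * p - p_prev
    return p

def path_sum(L, m, n, sigma):
    """Right-hand side of (3.2): total weight of Motzkin paths from m to n."""
    total = 0.0
    for steps in product((-1, 0, 1), repeat=L):
        h, ok, w = m, True, 1.0
        for s in steps:
            w *= sigma if s == 0 else 1.0
            h += s
            ok = ok and h >= 0
        if ok and h == n:
            total += w
    return total

sigma, L, m, n = 0.7, 4, 1, 3
x = np.linspace(-2.0, 2.0, 200001)
f = u(m, x) * u(n, x) * (x + sigma) ** L * np.sqrt(4 - x ** 2) / (2 * np.pi)
lhs = np.sum((f[:-1] + f[1:]) / 2) * (x[1] - x[0])   # trapezoid rule
assert abs(lhs - path_sum(L, m, n, sigma)) < 1e-4
```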
We also need the following.
**Lemma 3.2**.: _If \(\sigma>0\) and \(F\) is a continuous function on \([-2,2]\) then_
\[\lim_{L\to\infty}\frac{\int_{-2}^{2}F(x)(x+\sigma)^{L}\sqrt{4-x^{2}}dx}{\int_ {-2}^{2}(x+\sigma)^{L}\sqrt{4-x^{2}}dx}=F(2). \tag{3.6}\]
This is Lemma A.1 applied to the semicircle law \(\mu\) with \(R=2\).
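A numerical illustration of (3.6), a sketch with arbitrary choices of \(F\) and \(\sigma\), shows how the ratio approaches \(F(2)\) as \(L\) grows.

```python
import numpy as np

sigma, F = 0.5, (lambda x: x ** 2)          # arbitrary continuous F; F(2) = 4
x = np.linspace(-2.0, 2.0, 400001)
w = np.sqrt(4 - x ** 2)
dx = x[1] - x[0]
for L in (10, 100, 1000):
    r = ((x + sigma) / (2 + sigma)) ** L    # rescaled to avoid overflow
    num = np.sum(F(x) * r * w) * dx
    den = np.sum(r * w) * dx
    print(L, num / den)                     # approaches F(2) = 4 as L grows
```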
Proof of Theorem 1.1.: Recall that weak convergence of discrete-time processes means convergence of finite-dimensional distributions (Billingsley, 1999). For integer-valued random variables, the
latter follows from convergence of probability generating functions. We will therefore show that for every fixed \(K\), \(z_{0},z_{1}\in(0,1]\) and \(t_{1},\dots,t_{K},s_{1},\dots,s_{K}>0\)
\[\lim_{L\to\infty}\mathds{E}\left[z_{0}^{\gamma_{0}^{(L)}}\prod_{j=1 }^{K}t_{j}^{\gamma_{j}^{(L)}-\gamma_{j-1}^{(L)}}\prod_{j=1}^{K}s_{j}^{\gamma_{L -j}^{(L)}-\gamma_{L+1-j}^{(L)}}z_{1}^{\gamma_{L}^{(L)}}\right]\\ =\mathds{E}\left[z_{0}^{X_{0}}\prod_{j=1}^{K}t_{j}^{X_{j}-X_{j-1} }\right]\mathds{E}\left[z_{1}^{Y_{0}}\prod_{j=1}^{K}s_{j}^{Y_{j}-Y_{j-1}}\right]. \tag{3.7}\]
Indeed, the above expressions determine uniquely the corresponding probability generating functions for small enough arguments. For example,
\[\mathds{E}\left[\prod_{j=0}^{K}v_{j}^{X_{j}}\right]=\mathds{E}\left[z_{0}^{X _{0}}\prod_{j=1}^{K}t_{j}^{X_{j}-X_{j-1}}\right]\]
with \(z_{0}=v_{0}\dots v_{K}\) and \(t_{j}=v_{j}v_{j+1}\dots v_{K}\).
We introduce a tri-diagonal matrix
\[\boldsymbol{M}_{t}:=\begin{bmatrix}\sigma&t&0&0&\cdots\\ 1/t&\sigma&t&0\\ 0&1/t&\sigma&t\\ 0&0&1/t&\ddots&\ddots\\ \vdots&&&\ddots&\ddots\end{bmatrix},\]
and column vectors
\[\vec{V}_{\boldsymbol{\alpha}}(z):=\begin{bmatrix}\alpha_{0}\\ \alpha_{1}z\\ \vdots\\ \alpha_{n}z^{n}\\ \vdots\end{bmatrix},\quad\vec{W}_{\boldsymbol{\beta}}(z):=\begin{bmatrix} \beta_{0}\\ \beta_{1}z\\ \vdots\\ \beta_{n}z^{n}\\ \vdots\end{bmatrix},\quad\vec{U}(x):=\begin{bmatrix}u_{0}(x)\\ u_{1}(x)\\ \vdots\\ u_{n}(x)\\ \vdots\end{bmatrix},\]
where \(\{u_{k}(x)\}_{k\geq 0}\) are the monic Chebyshev polynomials of the second kind that already appeared in (3.5). The key identity of the proof is the following:
\[\mathds{E}\left[z_{0}^{\gamma_{0}^{(L)}}\prod_{j=1}^{K}t_{j}^{ \gamma_{j}^{(L)}-\gamma_{j-1}^{(L)}}\prod_{j=1}^{K}s_{j}^{\gamma_{L-j}^{(L)}- \gamma_{L+1-j}^{(L)}}z_{1}^{\gamma_{L}^{(L)}}\right]\\ =\frac{1}{\mathfrak{C}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}} \vec{V}_{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\boldsymbol{M} _{t_{2}}\cdots\boldsymbol{M}_{t_{K}}\boldsymbol{M}_{1}^{L-2K}\boldsymbol{M}_{1 /s_{K}}\cdots\boldsymbol{M}_{1/s_{2}}\boldsymbol{M}_{1/s_{1}}\vec{W}_{ \boldsymbol{\beta}}(z_{1}). \tag{3.8}\]
Here \(\mathfrak{C}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}=\mathsf{M}_{ \boldsymbol{\alpha},\boldsymbol{\beta},L}(1,1)\) is given by (3.4) with \(z_{0}=z_{1}=1\) and
\[\boldsymbol{M}_{1}^{L-2K}=\frac{1}{2\pi}\int_{-2}^{2}\vec{U}(x)\vec{U}(x)^{T} (x+\sigma)^{L-2K}\sqrt{4-x^{2}}dx. \tag{3.9}\]
Indeed, the \((m,n)\)-entry of \(\boldsymbol{M}_{1}^{L-2K}\) is \(\mathfrak{W}_{m,n}^{(L-2K)}=\sum_{\boldsymbol{\gamma}\in\mathcal{M}_{m,n}^{(L-2K)}}w(\boldsymbol{\gamma})\), to which we applied Viennot's formula (3.2).
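Identity (3.9) can be checked numerically on a truncated matrix, since the band structure of \(\boldsymbol{M}_{1}\) means the \((m,n)\)-entry of its \(L\)-th power only involves indices up to \(\max(m,n)+L\); the sketch below is illustrative, with arbitrary small parameters.

```python
import numpy as np

sigma, L, N = 0.7, 6, 20      # truncation size N > max(m, n) + L is enough here
M1 = (np.diag(np.full(N, sigma)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1))
ML = np.linalg.matrix_power(M1, L)

def u(n, x):
    """Monic Chebyshev polynomials of the second kind, via their recurrence."""
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    for _ in range(n):
        p_prev, p = p, x * p - p_prev
    return p

x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
for m, n in [(0, 0), (1, 3), (2, 2)]:
    f = u(m, x) * u(n, x) * (x + sigma) ** L * np.sqrt(4 - x ** 2) / (2 * np.pi)
    integral = np.sum((f[:-1] + f[1:]) / 2) * dx       # trapezoid rule
    assert abs(ML[m, n] - integral) < 1e-3
```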
Note that on the right-hand side of (3.8), the terms depending on \(L\) are
\[\frac{\boldsymbol{M}_{1}^{L-2K}}{\mathfrak{C}_{\boldsymbol{\alpha}, \boldsymbol{\beta},L}} =\frac{\boldsymbol{M}_{1}^{L-2K}}{\mathsf{M}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}(1,1)}\] \[=\frac{\frac{1}{2\pi}\int_{-2}^{2}(x+\sigma)^{L-2K}\sqrt{4-x^{2} }dx}{\mathsf{M}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}(1,1)}\cdot\frac{ \boldsymbol{M}_{1}^{L-2K}}{\frac{1}{2\pi}\int_{-2}^{2}(x+\sigma)^{L-2K}\sqrt{4 -x^{2}}dx},\]
and for each ratio we apply Lemma 3.2. Namely, for the first ratio we have
\[\frac{\mathsf{M}_{\boldsymbol{\alpha},\boldsymbol{\beta},L}(1,1)} {\frac{1}{2\pi}\int_{-2}^{2}(x+\sigma)^{L-2K}\sqrt{4-x^{2}}dx} =\frac{\frac{1}{2\pi}\int_{-2}^{2}(x+\sigma)^{L}\Phi_{\boldsymbol {\alpha}}(1,x)\Phi_{\boldsymbol{\beta}}(1,x)\sqrt{4-x^{2}}dx}{\frac{1}{2\pi} \int_{-2}^{2}(x+\sigma)^{L-2K}\sqrt{4-x^{2}}dx}\] \[\to(2+\sigma)^{2K}\Phi_{\boldsymbol{\alpha}}(1,2)\Phi_{ \boldsymbol{\beta}}(1,2)=C_{\boldsymbol{\alpha}}C_{\boldsymbol{\beta}}(2+ \sigma)^{2K},\]
as \(L\to\infty\). For the second ratio, formally applying Lemma 3.2 entry-wise we get
\[\lim_{L\to\infty}\frac{\boldsymbol{M}_{1}^{L-2K}}{\frac{1}{2\pi}\int_{-2}^{2}( x+\sigma)^{L-2K}\sqrt{4-x^{2}}dx}=\vec{U}(2)\times\vec{U}(2)^{T}=\begin{bmatrix}1\\ 2\\ 3\\ \vdots\end{bmatrix}\times\begin{bmatrix}1&2&3&\cdots\end{bmatrix}. \tag{3.10}\]
Putting this into (3.8), and using the identities \(\vec{a}^{T}\vec{b}=\vec{b}^{T}\vec{a}\) and \(\boldsymbol{M}_{1/s}^{T}=\boldsymbol{M}_{s}\) we rewrite the second inner product that arises on the right hand side of (3.8) as
\[\begin{bmatrix}1&2&3&\cdots\end{bmatrix}\boldsymbol{M}_{1/s_{K}}\cdots \boldsymbol{M}_{1/s_{2}}\boldsymbol{M}_{1/s_{1}}\vec{W}(z_{1})=\vec{W}(z_{1}) ^{T}\boldsymbol{M}_{s_{1}}\cdots\boldsymbol{M}_{s_{K}}\begin{bmatrix}1\\ 2\\ 3\\ \vdots\end{bmatrix}.\]
We arrive at
\[\lim_{L\to\infty}\mathds{E}\left[z_{0}^{\gamma_{0}^{(L)}}\prod_{ j=1}^{K}t_{j}^{\gamma_{j}^{(L)}-\gamma_{j-1}^{(L)}}\prod_{j=1}^{K}s_{j}^{ \gamma_{L-j}^{(L)}-\gamma_{L+1-j}^{(L)}}z_{1}^{\gamma_{L}^{(L)}}\right]\\ =\left(\frac{1}{C_{\boldsymbol{\alpha}}(2+\sigma)^{K}}\vec{V}_{ \boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\boldsymbol{M}_{t_{2}} \cdots\boldsymbol{M}_{t_{K}}\begin{bmatrix}1\\ 2\\ 3\\ \vdots\end{bmatrix}\right)\\ \times\left(\frac{1}{C_{\boldsymbol{\beta}}(2+\sigma)^{K}}\vec{W}_{ \boldsymbol{\beta}}(z_{1})^{T}\boldsymbol{M}_{s_{1}}\cdots\boldsymbol{M}_{s_{ K}}\begin{bmatrix}1\\ 2\\ 3\\ \vdots\end{bmatrix}\right). \tag{3.11}\]
The rigorous argument that avoids the formal entry-wise limit (3.10) is based on expanding the entire expression on the right-hand side of (3.8) using the integral representation (3.9) for the middle matrix \(\boldsymbol{M}_{1}^{L-2K}\). To apply Lemma 3.2 we rewrite this expression so that all series and sums appear under the integral. This use of Fubini's theorem is justified by convergence of the series \(\sum_{n}\alpha_{n}u_{n}(x)\) and \(\sum_{n}\beta_{n}u_{n}(x)\), which is uniform over \(x\in[-2,2]\). After an application of Lemma 3.2, all appearances of \(u_{n}(x)\) in the series get replaced by \(u_{n}(2)=n+1\). The same expression arises by expanding into a series the matrix product on the right-hand side of (3.11). We omit the cumbersome details of this rewrite.
In view of this decomposition of the limit (3.11) into two factors of the same form, we need only to identify the first factor. Write \(\vec{X}_{j:K}=(X_{j},\ldots,X_{K})\) for simplicity. To this end we note that for any integrable function \(F\) we have
\[\mathds{E}\left[t^{X_{j}-X_{j-1}}F(\vec{X}_{j:K})|X_{j-1}=n\right]\] \[\quad=\sum_{\epsilon\in\{0,\pm 1\}}t^{\epsilon}\mathds{E}\left[F( \vec{X}_{j:K})|X_{j}=n+\epsilon\right]\mathds{P}(X_{j}=n+\epsilon|X_{j-1}=n)\] \[\quad=t\cdot\frac{n+2}{(2+\sigma)(n+1)}\mathds{E}\left[F(\vec{X} _{j:K})\Big{|}X_{j}=n+1\right]\] \[\quad\quad+1\cdot\frac{\sigma}{2+\sigma}\mathds{E}\left[F(\vec{X }_{j:K})\Big{|}X_{j}=n\right]\] \[\quad\quad+\frac{1}{t}\cdot\frac{n}{(2+\sigma)(n+1)}\mathds{E} \left[F(\vec{X}_{j:K})\Big{|}X_{j}=n-1\right].\]
In matrix notation, the above is the same as
\[\frac{1}{2+\sigma}\mathbf{M}_{t}\begin{bmatrix}1\cdot\mathds{E}\left[F(\vec{X}_{j :K})\Big{|}X_{j}=0\right]\\ 2\cdot\mathds{E}\left[F(\vec{X}_{j:K})\Big{|}X_{j}=1\right]\\ \vdots\end{bmatrix}=\begin{bmatrix}1\cdot\mathds{E}\left[t^{X_{j}-X_{j-1}}F( \vec{X}_{j:K})\Big{|}X_{j-1}=0\right]\\ 2\cdot\mathds{E}\left[t^{X_{j}-X_{j-1}}F(\vec{X}_{j:K})\Big{|}X_{j-1}=1\right] \\ \vdots\end{bmatrix}. \tag{3.12}\]
Starting with \(j=K\), \(t=t_{K}\) and constant \(F(X_{K})=1\), we get
\[\frac{1}{C_{\mathbf{\alpha}}(2+\sigma)^{K}}\vec{V}_{\mathbf{\alpha}}(z_{0 })^{T}\mathbf{M}_{t_{1}}\mathbf{M}_{t_{2}}\ldots\mathbf{M}_{t_{K}}\begin{bmatrix}1\\ 2\\ \vdots\end{bmatrix}\] \[\quad=\frac{1}{C_{\mathbf{\alpha}}(2+\sigma)^{K-1}}\vec{V}_{\mathbf{ \alpha}}(z_{0})^{T}\mathbf{M}_{t_{1}}\mathbf{M}_{t_{2}}\ldots\mathbf{M}_{t_{K-1}}\begin{bmatrix} 1\cdot\mathds{E}[t_{k}^{X_{k}-X_{K-1}}|X_{K-1}=0]\\ 2\cdot\mathds{E}[t_{k}^{X_{k}-X_{K-1}}|X_{K-1}=1]\\ \vdots\end{bmatrix}.\]
By applying iteratively (3.12), we arrive at
\[\frac{1}{C_{\boldsymbol{\alpha}}(2+\sigma)^{K}}\vec{V}_{ \boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\boldsymbol{M}_{t_{2}} \ldots\boldsymbol{M}_{t_{K}}\begin{bmatrix}1\\ 2\\ \vdots\end{bmatrix}\] \[=\frac{1}{C_{\boldsymbol{\alpha}}}\vec{V}_{\boldsymbol{\alpha}} (z_{0})^{T}\begin{bmatrix}1\cdot\mathds{E}[t_{1}^{X_{1}-X_{0}}\ldots t_{K-1}^{ X_{K-1}-X_{K-2}}t_{K}^{X_{K}-X_{K-1}}|X_{0}=0]\\ 2\cdot\mathds{E}[t_{1}^{X_{1}-X_{0}}\ldots t_{K-1}^{X_{K-1}-X_{K-2}}t_{K}^{X_{K }-X_{K-1}}|X_{0}=1]\\ \vdots\end{bmatrix}\] \[=\frac{1}{C_{\boldsymbol{\alpha}}}\sum_{n=0}^{\infty}\alpha_{n}z_{0 }^{n}(n+1)\mathds{E}\left[\prod_{r=1}^{K}t_{r}^{X_{r}-X_{r-1}}\ \middle|\ X_{0}=n\right]=\mathds{E}\left[z_{0}^{X_{0}}\prod_{j=1}^{K}t_{j}^{X_{ j}-X_{j-1}}\right],\]
with \(\mathds{P}(X_{0}=n)=\frac{\alpha_{n}(n+1)}{C_{\boldsymbol{\alpha}}}\) as claimed. Similarly, the second factor is \(\mathds{E}\left[z_{1}^{Y_{0}}\prod_{j=1}^{K}s_{j}^{Y_{j}-Y_{j-1}}\right]\), proving (3.7).
## 4. Proof of Theorem 1.6
As previously, we fix \(K\) and use matrix representation (3.8) for the generating function. The analysis is a little more involved. We let \(\boldsymbol{M}_{t}\) and \(\vec{U}(x)\) be as before, but this time work with the column vector
\[\vec{W}_{\rho_{1}}(z):=\begin{bmatrix}1\\ \rho_{1}z\\ \vdots\\ (\rho_{1}z)^{n}\\ \vdots\end{bmatrix},\quad\rho_{1}>0.\]
Taking \(s_{j}=1\), \(z_{1}=1\) in (3.8) we write
\[\mathds{E}\left[z_{0}^{\gamma_{(L)}^{(L)}}\prod_{j=1}^{K}t_{j}^{ \gamma_{j}^{(L)}-\gamma_{j-1}^{(L)}}\prod_{k=1}^{K}s_{k}^{\gamma_{L-k}^{(L)}- \gamma_{L+1-k}^{(L)}}\right]\\ =\frac{1}{\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1},L}}\vec{V}_ {\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\boldsymbol{M}_{t_{2}} \cdots\boldsymbol{M}_{t_{K}}\boldsymbol{M}_{1}^{L-2K}\boldsymbol{M}_{1/s_{K}} \cdots\boldsymbol{M}_{1/s_{1}}\vec{W}_{\rho_{1}}(1), \tag{4.1}\]
where this time the normalizing constant is \(\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1},L}:=\vec{V}_{\boldsymbol{\alpha}}( 1)^{T}\boldsymbol{M}_{1}^{L}\vec{W}_{\rho_{1}}(1)\).
We observe that
\[\boldsymbol{M}_{t}\vec{W}_{\rho_{1}}(1)=\left(\frac{1}{t\rho_{1}}+\sigma+\rho_ {1}t\right)\vec{W}_{\rho_{1}}(1)-\vec{R}_{1},\]
where vector \(\vec{R}_{1}=[\frac{1}{t\rho_{1}},0,0,\ldots]^{T}\) has only one non-zero entry. Using this recurrently, we see that
\[\boldsymbol{M}_{1/s_{K}}\cdots\boldsymbol{M}_{1/s_{1}}\vec{W}_{\rho_{1}}(1)= \prod_{j=1}^{K}\left(\frac{1}{s_{j}\rho_{1}}+\sigma+\rho_{1}s_{j}\right)\vec{W} _{\rho_{1}}(1)-\vec{R}_{K},\]
where vector \(\vec{R}_{K}=[w_{0},w_{1},\ldots,w_{K-1},0,\ldots]^{T}\) has only \(K\) non-zero entries \(w_{j}\). These entries do not depend on \(L\), and their exact expressions are irrelevant for the limit theorem as shown a moment later. That is, writing
\[\mathds{E}\left[z_{0}^{\gamma_{0}^{(L)}}\prod_{j=1}^{K}t_{j}^{\gamma_{j}^{(L)} -\gamma_{j-1}^{(L)}}\prod_{k=1}^{K}s_{k}^{\gamma_{L-k}^{(L)}-\gamma_{L+1-k}^{( L)}}\right]=\Phi_{L}-r_{L}, \tag{4.2}\]
with
\[\Phi_{L} :=\frac{1}{\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1},L}}\cdot \prod_{j=1}^{K}\left(\frac{1}{s_{j}\rho_{1}}+\sigma+\rho_{1}s_{j}\right)\cdot \vec{V}_{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\boldsymbol{M} _{t_{2}}\cdots\boldsymbol{M}_{t_{K}}\boldsymbol{M}_{1}^{L-2K}\vec{W}_{\rho_{ 1}}(1),\] \[r_{L} :=\frac{1}{\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1},L}}\vec{V} _{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\boldsymbol{M}_{t_{2}} \cdots\boldsymbol{M}_{t_{K}}\boldsymbol{M}_{1}^{L-2K}\vec{R}_{K},\]
we shall show that \(\Phi_{L}\) has the desired limit as \(L\to\infty\) and \(\lim_{L\to\infty}r_{L}=0\).
The key step for \(\rho_{1}\geq 1\) is to first extend the integral representation used in the proof of Theorem 1.1 in Lemma 4.1 below. To this end, for \(\rho>0\), we introduce a probability measure \(\mu_{\rho}\) of mixed type given by
\[\mu_{\rho}(dx):=\frac{1}{2\pi}\frac{\sqrt{4-x^{2}}}{1-x\rho+\rho^{2}}\mathbf{1 }_{\{|x|<2\}}dx+\left(1-\frac{1}{\rho^{2}}\right)_{+}\delta_{\rho+\frac{1}{ \rho}}(dx),\quad\rho>0.\]
(Here \(x_{+}:=\max\{0,x\}\).) We also write
\[\mu_{0}(dx):=\frac{\sqrt{4-x^{2}}}{2\pi}\mathbf{1}_{\{|x|<2\}}dx.\]
At the critical value \(\rho=1\), we have
\[\mu_{1}(dx)=\frac{1}{2\pi}\sqrt{\frac{2+x}{2-x}}\mathbf{1}_{\{|x|<2\}}dx.\]
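As a quick sanity check on the mixed-type measure \(\mu_{\rho}\), one can verify numerically that the absolutely continuous part and the atom add up to total mass one for \(\rho<1\), \(\rho=1\) and \(\rho>1\) alike. A minimal sketch (parameter values chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import quad

def total_mass(rho):
    # density of the absolutely continuous part of mu_rho on (-2, 2)
    dens = lambda x: np.sqrt(4 - x**2) / (2 * np.pi * (1 - x * rho + rho**2))
    ac, _ = quad(dens, -2, 2)
    atom = max(0.0, 1 - 1 / rho**2)   # weight of the atom at rho + 1/rho
    return ac + atom

print([round(total_mass(r), 6) for r in (0.3, 1.0, 1.7)])   # each value should be 1.0
```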
**Lemma 4.1**.: _For \(\rho_{1}>0,L>0\) we have_
\[\boldsymbol{M}_{1}^{L}\vec{W}_{\rho_{1}}(1)=\int_{\mathds{R}}(x+\sigma)^{L} \vec{U}(x)\mu_{\rho_{1}}(dx). \tag{4.3}\]
Proof.: Recall that \(\vec{W}_{\rho_{1}}(1)=[1,\rho_{1},\rho_{1}^{2},\rho_{1}^{3},\ldots]^{T}\). The equation is about two infinite-dimensional vectors. We examine the entries indexed by \(m\in\mathbb{Z}_{\geq 0}\) (with the convention that the first entry is indexed by \(0\), the second by \(1\), etc). Introduce
\[h_{m}(\rho):=\sum_{n=0}^{\infty}\rho^{n}\sum_{\boldsymbol{\gamma}\in\mathcal{ M}_{m,n}^{(L)}}w(\boldsymbol{\gamma}),\quad m\in\mathbb{Z}_{\geq 0},\rho\in \mathds{C}.\]
Note that for every \(m\) fixed, the summation over \(n\) is a sum of a finite number of non-zero terms (for those \(n\) such that \(|n-m|\leq L\)), and hence is analytic for all \(\rho\in\mathds{C}\). The \(m\)-indexed entry of the left-hand side of (4.3) is then
\[\left(\boldsymbol{M}_{1}^{L}\vec{W}_{\rho_{1}}(1)\right)_{m}=\sum_{n=0}^{ \infty}\rho_{1}^{n}\sum_{\boldsymbol{\gamma}\in\mathcal{M}_{m,n}^{(L)}}w( \boldsymbol{\gamma})=h_{m}(\rho_{1}).\]
Thus for \(|\rho|<1\) from (3.2) we see that \(h_{m}(\rho)=h_{m}^{(0)}(\rho)\) where
\[h_{m}^{(0)}(\rho) :=\sum_{n=0}^{\infty}\rho^{n}\frac{1}{2\pi}\int_{-2}^{2}u_{m}(x)u_{n }(x)(x+\sigma)^{L}\sqrt{4-x^{2}}dx\] \[=\int_{-2}^{2}(x+\sigma)^{L}u_{m}(x)\frac{\sqrt{4-x^{2}}}{2\pi(1- \rho x+\rho^{2})}dx. \tag{4.4}\]
Note that in the above we used the well-known identity for the Chebyshev polynomials
\[\sum_{n=0}^{\infty}\rho^{n}u_{n}(x)=\frac{1}{1-\rho x+\rho^{2}},\text{ for all }|\rho|\leq 1. \tag{4.5}\]
Our goal is to extend the integral representation in the last formula of (4.4) to a larger domain by explicit analytic continuation.
Assume \(|\rho|>0\) from now on. We first re-write (4.4) as a complex integral. Substituting \(x=2\cos\theta\), and then \(z=e^{i\theta}\), in (4.4) we get
\[h_{m}^{(0)}(\rho) =\frac{1}{2\pi}\int_{-2}^{2}u_{m}(x)\frac{(\sigma+x)^{L}}{1-x\rho +\rho^{2}}\sqrt{4-x^{2}}dx\] \[=\frac{1}{2\pi}\int_{0}^{\pi}u_{m}(2\cos\theta)\frac{4\sin^{2} \theta(\sigma+2\cos\theta)^{L}}{1-2\rho\cos\theta+\rho^{2}}d\theta\] \[=\frac{1}{4\pi}\int_{-\pi}^{\pi}u_{m}(2\cos\theta)\frac{4\sin^{2} \theta(\sigma+2\cos\theta)^{L}}{1-2\rho\cos\theta+\rho^{2}}d\theta\] \[=\left(-\frac{1}{2}\right)\cdot\frac{1}{2\pi i}\oint_{|z|=1}u_{m }\left(z+\frac{1}{z}\right)\frac{(z^{2}-1)^{2}\left(\sigma+z+\frac{1}{z} \right)^{L}}{(1-\rho z)(z-\rho)}\frac{dz}{z^{2}}, \tag{4.6}\]
valid for \(0<|\rho|<1\). Next, in the last line of (4.6) one can replace the contour \(|z|=1\) by \(|z|=r\), and this replacement is valid as long as the circle does not cross any pole of the integrand, that is, for \(r\in(1,1/|\rho|)\). Letting the contour cross the pole at \(1/\rho\) and adding half of the residue at \(z=1/\rho\) (because of the additional factor \(-1/2\) in front of the integral), we then arrive at
\[h_{m}(\rho)=h_{m}^{(0)}(\rho)=h_{m,r}^{(1)}(\rho)\quad\text{ with }\quad|\rho|<1,r>\frac{1}{|\rho|},\]
with
\[h_{m,r}^{(1)}(\rho):=-\frac{1}{4\pi i}\oint_{|z|=r}u_{m}\left(z +\frac{1}{z}\right)\frac{(z^{2}-1)^{2}\left(\sigma+z+\frac{1}{z}\right)^{L}}{( 1-\rho z)(z-\rho)}\frac{dz}{z^{2}}\\ +\frac{1}{2}u_{m}\left(\rho+\frac{1}{\rho}\right)\frac{\rho^{2}- 1}{\rho^{2}}\left(\rho+\frac{1}{\rho}+\sigma\right)^{L}. \tag{4.7}\]
By analytic extension, for \(r\) fixed \(h_{m,r}^{(1)}(\rho)\) can be extended to all \(\rho\) such that \(1/r<|\rho|<r\). In particular,
\[h_{m}(\rho)=h_{m,r}^{(1)}(\rho),\quad\text{ for all }\rho\text{ such that }\frac{1}{r}<|\rho|<r.\]
Next, consider the expression (4.7) for \(r>1\) and \(\rho\) such that \(|\rho|\in(1,r)\), and deform the contour of integration back to \(|z|=1\). This subtracts half of the residue of the integrand at \(z=\rho\). So for
\[h_{m}^{(2)}(\rho):=-\frac{1}{4\pi i}\oint_{|z|=1}u_{m}\left(z+ \frac{1}{z}\right)\frac{(z^{2}-1)^{2}\left(\sigma+z+\frac{1}{z}\right)^{L}}{(1 -\rho z)(z-\rho)}\frac{dz}{z^{2}}\\ +u_{m}\left(\rho+\frac{1}{\rho}\right)\left(1-\frac{1}{\rho^{2}} \right)\left(\rho+\frac{1}{\rho}+\sigma\right)^{L}, \tag{4.8}\]
we have \(h_{m,r}^{(1)}(\rho)=h_{m}^{(2)}(\rho)\) for all \(\rho\in\mathds{C}\) such that \(|\rho|\in(1,r)\). Note that \(r\) can be taken arbitrarily large. Therefore,
\[h_{m}(\rho)=h_{m}^{(2)}(\rho)\quad\text{ for all }\rho\text{ such that }|\rho|>1. \tag{4.9}\]
Returning back to the real arguments, we see that (4.6), (4.8) and (4.9) can be combined together into a single formula which for \(\rho>0,\rho\neq 1\) gives
\[h_{m}(\rho) = \frac{1}{2\pi}\int_{-2}^{2}u_{m}(x)\frac{(\sigma+x)^{L}}{1-x\rho +\rho^{2}}\sqrt{4-x^{2}}dx\ +\ \left(1-\frac{1}{\rho^{2}}\right)_{+}u_{m}\left(\rho+\frac{1}{\rho}\right) \left(\rho+\frac{1}{\rho}+\sigma\right)^{L}.\]
This formula extends to \(\rho=1\) by continuity. This proves (4.3).
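For sample parameter values, (4.3) can also be confirmed numerically with truncated matrices. The sketch below takes \(\boldsymbol{M}_{1}\) to be the tridiagonal matrix with \(\sigma\) on the diagonal and \(1\) on the two adjacent diagonals (as can be read off from the recursions used above), and compares the first few entries of both sides; it is only an illustration, not part of the argument.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_chebyu   # u_n(x) = U_n(x/2), Chebyshev of the second kind

sigma, rho, L, N = 0.7, 1.3, 12, 300    # assumed sample parameters; N = truncation size

# left-hand side of (4.3): M_1^L applied to W_rho(1) = [1, rho, rho^2, ...]
M1 = np.diag(np.full(N, sigma)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
lhs = np.linalg.matrix_power(M1, L) @ (rho ** np.arange(N))

def rhs(m):
    # m-th entry of  int (x+sigma)^L U(x) mu_rho(dx)
    dens = lambda x: ((x + sigma)**L * eval_chebyu(m, x / 2) * np.sqrt(4 - x**2)
                      / (2 * np.pi * (1 - x * rho + rho**2)))
    val, _ = quad(dens, -2, 2)
    if rho > 1:                          # atom of mu_rho at x = rho + 1/rho
        val += (1 - rho**-2) * eval_chebyu(m, (rho + 1 / rho) / 2) * (rho + 1 / rho + sigma)**L
    return val

print([lhs[m] / rhs(m) for m in range(4)])   # ratios should all be close to 1
```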
We also need the following.
**Lemma 4.2**.: _If \(\sigma>0\), \(\rho_{1}\geq 1\) and \(F\) is a continuous function on \([-2,\rho_{1}+1/\rho_{1}]\) then_
\[\lim_{L\to\infty}\frac{\int_{\mathds{R}}F(x)(x+\sigma)^{L}\mu_{\rho_{1}}(dx)} {\int_{\mathds{R}}(x+\sigma)^{L}\mu_{\rho_{1}}(dx)}=F\left(\rho_{1}+\frac{1}{ \rho_{1}}\right). \tag{4.10}\]
This is Lemma A.1 applied to \(\mu=\mu_{\rho_{1}}\) with \(R=\rho_{1}+1/\rho_{1}\).
Proof of Theorem 1.6.: We first show that with \(\rho_{1}\geq 1\), the end-point of a random Motzkin path is not tight. This can be easily seen from the generating function which, using (3.8), for \(z_{1}\in(0,1/\rho_{1})\) takes the form
\[\mathds{E}\left[z_{1}^{\gamma_{L}^{(L)}}\right]=\frac{\vec{V}_{\boldsymbol{ \alpha}}(1)^{T}\boldsymbol{M}_{1}^{L}\vec{W}_{\rho_{1}}(z_{1})}{\vec{V}_{ \boldsymbol{\alpha}}(1)^{T}\boldsymbol{M}_{1}^{L}\vec{W}_{\rho_{1}}(1)}=\frac{ \int_{\mathds{R}}(x+\sigma)^{L}\vec{V}_{\boldsymbol{\alpha}}(1)^{T}\vec{U}(x) \mu_{\rho_{1}z_{1}}(dx)}{\int_{\mathds{R}}(x+\sigma)^{L}\vec{V}_{\boldsymbol{ \alpha}}(1)^{T}\vec{U}(x)\mu_{\rho_{1}}(dx)},\]
where we used (4.3) twice. Since \(z_{1}\rho_{1}<1\), by (3.6) the numerator becomes
\[\int_{\mathds{R}}\frac{(x+\sigma)^{L}}{1-xz_{1}\rho_{1}+z_{1}^{2}\rho_{1}^{2}} \vec{V}_{\boldsymbol{\alpha}}(1)^{T}\vec{U}(x)\mu_{0}(dx)\sim\frac{\vec{V}_{ \boldsymbol{\alpha}}(1)^{T}\vec{U}(2)}{(1-z_{1}\rho_{1})^{2}}\int_{\mathds{R} }(x+\sigma)^{L}\mu_{0}(dx).\]
Using (4.10) in the denominator, we get
\[\int_{\mathds{R}}(x+\sigma)^{L}\vec{V}_{\boldsymbol{\alpha}}(1)^{T}\vec{U}(x )\mu_{\rho_{1}}(dx)\sim\vec{V}_{\boldsymbol{\alpha}}(1)^{T}\vec{U}(\rho_{1}+1/ \rho_{1})\int_{\mathds{R}}(x+\sigma)^{L}\mu_{\rho_{1}}(dx).\]
Therefore,
\[\mathds{E}\left[z_{1}^{\gamma_{L}^{(L)}}\right]\sim\frac{\vec{V}_{\boldsymbol {\alpha}}(1)^{T}\vec{U}(2)}{(1-z_{1}\rho_{1})^{2}\vec{V}_{\boldsymbol{\alpha}}( 1)^{T}\vec{U}(\rho_{1}+1/\rho_{1})}\ \frac{\int_{\mathds{R}}(x+\sigma)^{L}\mu_{0}(dx)}{\int_{\mathds{R}}(x+\sigma)^{ L}\mu_{\rho_{1}}(dx)}\to 0\text{ as }L\to\infty.\]
Indeed, using (A.1) with \(R=\rho_{1}+1/\rho_{1}\) and \(F(x)=1-\rho_{1}x+\rho_{1}^{2}\) we have
\[\lim_{L\to\infty}\frac{\int_{\mathds{R}}(x+\sigma)^{L}\mu_{0}(dx)}{\int_{ \mathds{R}}(x+\sigma)^{L}\mu_{\rho_{1}}(dx)}=\lim_{L\to\infty}\frac{\int_{ \mathds{R}}F(x)(x+\sigma)^{L}\mu_{\rho_{1}}(dx)}{\int_{\mathds{R}}(x+\sigma)^{ L}\mu_{\rho_{1}}(dx)}=F(\rho_{1}+1/\rho_{1})=0. \tag{4.11}\]
We next prove the joint convergence. Applying Lemma 4.1 to (4.1) and recalling (4.2), we have
\[\begin{split}\Phi_{L}&=\prod_{j=1}^{K}\left(\frac{1}{s_ {j}\rho_{1}}+\sigma+\rho_{1}s_{j}\right)\cdot\frac{\vec{V}_{\boldsymbol{ \alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\boldsymbol{M}_{t_{2}}\cdots \boldsymbol{M}_{t_{K}}\boldsymbol{M}_{1}^{L-2K}\vec{W}_{\rho_{1}}(1)}{\vec{V}_ {\boldsymbol{\alpha}}(1)^{T}\boldsymbol{M}_{1}^{L}\vec{W}_{\rho_{1}}(1)}\\ &=\prod_{j=1}^{K}\left(\frac{1}{s_{j}\rho_{1}}+\sigma+\rho_{1}s_{ j}\right)\cdot\frac{\vec{V}_{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}} \boldsymbol{M}_{t_{2}}\cdots\boldsymbol{M}_{t_{K}}\int_{\mathds{R}}(x+\sigma) ^{L-2K}\vec{U}(x)\mu_{\rho_{1}}(dx)}{\int_{\mathds{R}}(x+\sigma)^{L-2K}\mu_{ \rho_{1}}(dx)}\\ &\quad\times\frac{\int_{\mathds{R}}(x+\sigma)^{L-2K}\mu_{\rho_{1} }(dx)}{\int_{\mathds{R}}(x+\sigma)^{2K}(x+\sigma)^{L-2K}\vec{V}_{\boldsymbol{ \alpha}}(1)^{T}\vec{U}(x)\mu_{\rho_{1}}(dx)}.\end{split} \tag{4.12}\]
Note that the function
\[F(x):=\vec{V}_{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}} \boldsymbol{M}_{t_{2}}\cdots\boldsymbol{M}_{t_{K}}\vec{U}(x)\]
is continuous on \([-2,\rho_{1}+1/\rho_{1}]\). Indeed, since \(|u_{n}(x)|\leq n+1\), continuity is obvious for \(x\in[-2,2]\). Next, any \(x\in[2,\rho_{1}+1/\rho_{1}]\) can be written as \(x=\rho+1/\rho\) with a unique \(\rho\in[1,\rho_{1}]\), and we have for some constant \(C>0\),
\[\left|F\left(\rho+\frac{1}{\rho}\right)\right|\leq C\sum_{n=0}^{ \infty}\alpha_{n}\rho^{-n}\left|\sum_{k=0}^{n}\rho^{2k}\right|\leq C\sum_{n=0} ^{\infty}n|\rho|^{n}\alpha_{n}+C\sum_{n=0}^{\infty}\frac{n\alpha_{n}}{\rho^{n} }<\infty,\]
where we needed the assumption (1.8). It then follows that \(F\) is also continuous on \([2,\rho_{1}+1/\rho_{1}]\). Therefore, applying Lemma 4.2 to the two fractions on the right hand side of (4.12), we get
\[\lim_{L\to\infty}\Phi_{L}=\prod_{j=1}^{K}\frac{(s_{j}\rho_{1})^{- 1}+\sigma+\rho_{1}s_{j}}{\rho_{1}+1/\rho_{1}+\sigma}\times\frac{\vec{V}_{ \boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\boldsymbol{M}_{t_{2}} \cdots\boldsymbol{M}_{t_{K}}\vec{U}(\rho_{1}+1/\rho_{1})}{\mathfrak{C}_{ \boldsymbol{\alpha},\rho_{1}}(\rho_{1}+1/\rho_{1}+\sigma)^{K}}, \tag{4.13}\]
where
\[\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1}}:=\vec{V}_{\boldsymbol{\alpha}}(1)^{T}\vec{U}\left(\rho_{1}+\frac{1}{\rho_{1}}\right)=\sum_{n=0}^{\infty}\alpha_{n}u_{n}\left(\rho_{1}+\frac{1}{\rho_{1}}\right).\]
(In the last calculation, recall (3.1).)
We first recognize
\[\prod_{j=1}^{K}\frac{(s_{j}\rho_{1})^{-1}+\sigma+\rho_{1}s_{j}}{ \rho_{1}+1/\rho_{1}+\sigma}=\prod_{j=1}^{K}\mathds{E}s_{j}^{\xi_{j}}. \tag{4.14}\]
For the second fraction on the right-hand side of (4.13), as previously we write \(\vec{Z}_{j:K}=(Z_{j}^{(\rho_{1})},\ldots,Z_{K}^{(\rho_{1})})\) for simplicity, and we drop \(\rho_{1}\) from the notation. We first write the formulas
for \(\rho_{1}>1\). We note that for any integrable function \(F\) we have, in matrix notation,
\[\frac{1}{\rho_{1}+1/\rho_{1}+\sigma}\boldsymbol{M}_{t}\begin{bmatrix}(\rho_{1}-1/\rho_{1})\cdot\mathds{E}\left[F(\vec{Z}_{j:K})\Big{|}Z_{j}=0\right]\\ (\rho_{1}^{2}-1/\rho_{1}^{2})\cdot\mathds{E}\left[F(\vec{Z}_{j:K})\Big{|}Z_{j}=1\right]\\ \vdots\end{bmatrix}\\ =\begin{bmatrix}(\rho_{1}-1/\rho_{1})\cdot\mathds{E}\left[t^{Z_{j}-Z_{j-1}}F(\vec{Z}_{j:K})\Big{|}Z_{j-1}=0\right]\\ (\rho_{1}^{2}-1/\rho_{1}^{2})\cdot\mathds{E}\left[t^{Z_{j}-Z_{j-1}}F(\vec{Z}_{j:K})\Big{|}Z_{j-1}=1\right]\\ \vdots\end{bmatrix}. \tag{4.15}\]
Recall that (see (3.1))
\[\vec{U}\left(\rho_{1}+\frac{1}{\rho_{1}}\right)=\frac{1}{\rho_{1}-1/\rho_{1} }\begin{bmatrix}\rho_{1}-1/\rho_{1}\\ \rho_{1}^{2}-1/\rho_{1}^{2}\\ \vdots\end{bmatrix}.\]
We get
\[\frac{\vec{V}_{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{ 1}}\cdots\boldsymbol{M}_{t_{K}}\vec{U}(\rho_{1}+1/\rho_{1})}{\mathfrak{C}_{ \boldsymbol{\alpha},\rho_{1}}(\rho_{1}+1/\rho_{1}+\sigma)^{K}}\] \[=\frac{\vec{V}_{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t _{1}}\boldsymbol{M}_{t_{2}}\ldots\boldsymbol{M}_{t_{K}}}{\mathfrak{C}_{ \boldsymbol{\alpha},\rho_{1}}(\rho_{1}-1/\rho_{1})(\rho_{1}+1/\rho_{1}+\sigma )^{K}}\begin{bmatrix}\rho_{1}-1/\rho_{1}\\ \rho_{1}^{2}-1/\rho_{1}^{2}\\ \vdots\end{bmatrix}\] \[=\frac{\vec{V}_{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t _{1}}\boldsymbol{M}_{t_{2}}\ldots\boldsymbol{M}_{t_{K-1}}}{\mathfrak{C}_{ \boldsymbol{\alpha},\rho_{1}}(\rho_{1}-1/\rho_{1})(\rho_{1}+1/\rho_{1}+\sigma )^{K-1}}\begin{bmatrix}(\rho_{1}-1/\rho_{1})\cdot\mathds{E}[t_{K}^{Z_{K}-Z_{K- 1}}|Z_{K-1}=0]\\ (\rho_{1}^{2}-1/\rho_{1}^{2})\cdot\mathds{E}[t_{K}^{Z_{K}-Z_{K-1}}|Z_{K-1}=1] \\ \vdots\end{bmatrix}.\]
Eventually, we have
\[\frac{1}{\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1}}(\rho_{1}+1/\rho_{1}+\sigma)^{K}}\vec{V}_{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\cdots\boldsymbol{M}_{t_{K}}\vec{U}\left(\rho_{1}+\frac{1}{\rho_{1}}\right)\] \[=\frac{1}{\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1}}}\sum_{n=0}^{\infty}\alpha_{n}z_{0}^{n}\frac{\rho_{1}^{n+1}-1/\rho_{1}^{n+1}}{\rho_{1}-1/\rho_{1}}\mathds{E}\left[\prod_{r=1}^{K}t_{r}^{Z_{r}-Z_{r-1}}\;\middle|\;Z_{0}=n\right]\] \[=\mathds{E}\left[z_{0}^{Z_{0}}\prod_{j=1}^{K}t_{j}^{Z_{j}-Z_{j-1}}\right], \tag{4.16}\]
with
\[\mathds{P}(Z_{0}=n)=\frac{\alpha_{n}(\rho_{1}^{n+1}-1/\rho_{1}^{n+1})}{\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1}}(\rho_{1}-1/\rho_{1})},\quad n\in\mathds{Z}_{\geq 0}. \tag{4.17}\]
For \(\rho_{1}=1\), replacing \((\rho_{1}^{n+1}-1/\rho_{1}^{n+1})/(\rho_{1}-1/\rho_{1})\) by \(n+1\), the same derivation goes through and yields the corresponding versions of (4.16) and (4.17). In fact, (4.15) becomes (3.12) (with \(X\) replaced by \(Z\)).
Combining (4.13), (4.14) and (4.16), we have proved that
\[\lim_{L\to\infty}\Phi_{L}=\prod_{j=1}^{K}\mathds{E}s_{j}^{\xi_{j}}\mathds{E} \left[z_{0}^{Z_{0}}\prod_{j=1}^{K}t_{j}^{Z_{j}-Z_{j-1}}\right].\]
In view of (4.2), it remains to show that \(\lim_{L\to\infty}r_{L}=0\). First, recall that
\[r_{L}=\frac{1}{\mathfrak{C}_{\boldsymbol{\alpha},\rho_{1},L}}\vec{V}_{ \boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{t_{1}}\boldsymbol{M}_{t_{2}} \cdots\boldsymbol{M}_{t_{K}}\boldsymbol{M}_{1}^{L-2K}\vec{R}_{K}.\]
We have
\[\vec{V}_{\boldsymbol{\alpha}}(z_{0})^{T}\boldsymbol{M}_{1}^{L-2K }\vec{R}_{K} =\sum_{m=0}^{\infty}\alpha_{m}z_{0}^{m}\sum_{n=0}^{K-1}w_{n}\int _{\mathds{R}}(x+\sigma)^{L-2K}u_{m}(x)u_{n}(x)\mu_{0}(dx)\] \[=\sum_{n=0}^{K-1}w_{n}\int_{\mathds{R}}(x+\sigma)^{L-2K}\left( \sum_{m=0}^{\infty}\alpha_{m}z_{0}^{m}u_{m}(x)\right)u_{n}(x)\mu_{0}(dx)\] \[\sim\left(\sum_{m=0}^{\infty}(m+1)\alpha_{m}z_{0}^{m}\right)\sum _{n=0}^{K-1}w_{n}u_{n}(2)\cdot\int_{\mathds{R}}(x+\sigma)^{L-2K}\mu_{0}(dx),\]
as \(L\to\infty\), where the last step follows from Lemma 3.2, and
\[\vec{V}_{\boldsymbol{\alpha}}(1)^{T}\boldsymbol{M}_{1}^{L}\vec{ W}_{\rho_{1}}(1) =\int_{\mathds{R}}(x+\sigma)^{L}\vec{V}_{\boldsymbol{\alpha}}(1)^ {T}\vec{U}(x)\mu_{\rho_{1}}(dx)\] \[=\int_{\mathds{R}}(x+\sigma)^{L}\sum_{n=0}^{\infty}\alpha_{n}u_{n} (x)\mu_{\rho_{1}}(dx)\] \[\sim\sum_{n=0}^{\infty}\alpha_{n}u_{n}\left(\rho_{1}+\frac{1}{ \rho_{1}}\right)\cdot\int_{\mathds{R}}(x+\sigma)^{L}\mu_{\rho_{1}}(dx),\]
as \(L\to\infty\), where the last step follows from Lemma 4.2. Since \(\rho_{1}\geq 1\), as in (4.11), \(\int_{\mathds{R}}(x+\sigma)^{L-2K}\mu_{0}(dx)=o(\int_{\mathds{R}}(x+\sigma)^{ L}\mu_{\rho_{1}}(dx))\) as \(L\to\infty\). The proof is completed.
_Remark 4.3_.: The choice of geometric boundary measures is crucial at two steps. The first is that it leads to the generating function for Chebyshev polynomials in (4.5), which is key for Lemma 4.1. Second, it is key for the derivation of (4.16).
### Acknowledgement
We thank Jacek Wesolowski for formula (3.1) and references. WB's research was partially supported by Simons Foundation/SFARI Award Number: 703475, US. YW's research was partially supported by Army Research Office, US (W911NF-20-1-0139). Both authors acknowledge support from the Taft Research Center at the University of Cincinnati.
## Appendix A A technical lemma
We need the following version of (Bryc and Wesolowski, 2017, Lemma 1).
**Lemma A.1**.: _Suppose \(\mu\) is a probability measure such that \(\operatorname{supp}(\mu)=[-2,R]\) for some \(R\geq 2\). If \(\sigma>0\) and \(F\) is a continuous function on \([-2,R]\), then \(\int_{\mathds{R}}(x+\sigma)^{L}\mu(dx)\) is non-zero for large \(L\) and_
(A.1) \[\lim_{L\to\infty}\frac{\int_{\mathds{R}}F(x)(x+\sigma)^{L}\mu(dx)}{\int_{\mathds{R}}(x+\sigma)^{L}\mu(dx)}=F(R).\]
Proof.: Subtracting \(F(R)\) from both sides, we see that it suffices to prove (A.1) with \(F(R)=0\).
Fix \(\varepsilon>0\). By continuity of \(F\) at \(x=R\), there is \(\delta>0\) with \(\delta\leq\sigma/2\) and \(\delta\leq 2\) such that for all \(x\in[R-\delta,R]\) we have \(|F(x)|<\varepsilon\).
We rewrite the fraction under the limit in (A.1) as follows:
\[\frac{\int_{-2}^{R}F(x)(x+\sigma)^{L}\mu(dx)}{\int_{-2}^{R}(x+\sigma)^{L}\mu(dx)}\] \[\qquad=\frac{\int_{-2}^{R-\delta}F(x)(x+\sigma)^{L}\mu(dx)+\int_{R-\delta}^{R}F(x)(x+\sigma)^{L}\mu(dx)}{\int_{0}^{R}(x+\sigma)^{L}\mu(dx)+\int_{-2}^{0}(x+\sigma)^{L}\mu(dx)}\] \[\qquad=\frac{\int_{-2}^{R-\delta}F(x)(x+\sigma)^{L}\mu(dx)}{\int_{0}^{R}(x+\sigma)^{L}\mu(dx)\left(1+\frac{\int_{-2}^{0}(x+\sigma)^{L}\mu(dx)}{\int_{0}^{R}(x+\sigma)^{L}\mu(dx)}\right)}\] \[\qquad+\frac{\int_{R-\delta}^{R}F(x)(x+\sigma)^{L}\mu(dx)}{\int_{0}^{R}(x+\sigma)^{L}\mu(dx)\left(1+\frac{\int_{-2}^{0}(x+\sigma)^{L}\mu(dx)}{\int_{0}^{R}(x+\sigma)^{L}\mu(dx)}\right)}.\]
Since \(|F(x)|\) is bounded on \([-2,R]\) by its supremum \(\|F\|_{\infty}\),
\[\left|\int_{-2}^{R-\delta}F(x)(x+\sigma)^{L}\mu(dx)\right|\leq\|F\|_{\infty} \int_{-2}^{R-\delta}|x+\sigma|^{L}\mu(dx).\]
By continuity of \(F\),
\[\left|\int_{R-\delta}^{R}F(x)(x+\sigma)^{L}\mu(dx)\right|\leq\varepsilon\int _{R-\delta}^{R}(x+\sigma)^{L}\mu(dx)\leq\varepsilon\int_{0}^{R}(x+\sigma)^{L} \mu(dx).\]
To end the proof it suffices to show that
(A.2) \[\frac{\int_{-2}^{R-\delta}|x+\sigma|^{L}\mu(dx)}{\int_{0}^{R}(x+\sigma)^{L}\mu (dx)}\to 0\quad\text{ and }\quad\frac{\int_{-2}^{0}|x+\sigma|^{L}\mu(dx)}{\int_{0}^{R}(x+\sigma)^{L} \mu(dx)}\to 0.\]
To prove (A.2), we note that for \(x\in[-2,R-\delta]\) we have \(\sigma-2\leq x+\sigma\leq R+\sigma-\delta\), so \(|x+\sigma|^{L}\leq\left(\max\{2-\sigma,R+\sigma-\delta\}\right)^{L}\leq(R+\sigma-\delta)^{L}\) (where in the last bound we used \(R\geq 2\) and \(\delta\leq\sigma/2\)), and similarly for \(-2\leq x<0\) we have \(|x+\sigma|^{L}\leq\left(\max\{2-\sigma,\sigma\}\right)^{L}\). Thus the numerators in (A.2) are bounded from above by \((R+\sigma-\delta)^{L}\) and \(\left(\max\{2-\sigma,\sigma\}\right)^{L}\), respectively.
Next, we tackle the denominator in (A.2). Since \(\lim_{L\to\infty}\mathds{E}\left[|X|^{L}\right]^{1/L}=\|X\|_{\infty}\) for a bounded random variable \(X\), we see that \(\left(\int_{0}^{R}(x+\sigma)^{L}\mu(dx)\right)^{1/L}\) is arbitrarily close to \(R+\sigma\) for large \(L\). Thus, for any \(\widetilde{\delta}>0\) with \(\widetilde{\delta}<R+\sigma\) and all \(L\)
large enough, we have
\[\int_{0}^{R}(x+\sigma)^{L}\mu(dx)\geq(R+\sigma-\widetilde{\delta})^{L}.\]
For the first limit in (A.2), we choose \(\widetilde{\delta}=\delta/2\), so that the expression is bounded by \((R+\sigma-\delta)^{L}/(R+\sigma-\widetilde{\delta})^{L}\to 0\). For the second limit, we choose \(\widetilde{\delta}=\min\{\sigma,1\}\) so that the expression is bounded by \((\max\{2-\sigma,\sigma\})^{L}/(R+\sigma-\widetilde{\delta})^{L}\leq(\max\{2-\sigma,\sigma\})^{L}/(2+\sigma-\widetilde{\delta})^{L}=\max\left\{\tfrac{2-\sigma}{2},\tfrac{\sigma}{1+\sigma}\right\}^{L}\to 0\). This proves (A.2), ending the proof.
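The concentration phenomenon behind Lemma A.1 is also easy to see numerically. The following sketch, with an arbitrary choice of \(\mu\), \(F\) and \(R\) made purely for illustration, shows the ratio in (A.1) approaching \(F(R)\) as \(L\) grows.

```python
import numpy as np

sigma, R = 0.5, 3.0                                 # assumed sample parameters
F = lambda x: np.cos(x)                             # any continuous test function on [-2, R]
mu_density = lambda x: 2 * (x + 2) / (R + 2)**2     # a probability density supported on [-2, R]

x = np.linspace(-2, R, 200_001)                     # fine grid; simple Riemann sums are enough here

def ratio(L):
    w = (x + sigma)**L * mu_density(x)              # unnormalized tilted weights
    return np.sum(F(x) * w) / np.sum(w)

print([round(ratio(L), 4) for L in (5, 50, 500)], "->", round(F(R), 4))
```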
|
2303.15430 | TextMI: Textualize Multimodal Information for Integrating Non-verbal
Cues in Pre-trained Language Models | Pre-trained large language models have recently achieved ground-breaking
performance in a wide variety of language understanding tasks. However, the
same model can not be applied to multimodal behavior understanding tasks (e.g.,
video sentiment/humor detection) unless non-verbal features (e.g., acoustic and
visual) can be integrated with language. Jointly modeling multiple modalities
significantly increases the model complexity, and makes the training process
data-hungry. While an enormous amount of text data is available via the web,
collecting large-scale multimodal behavioral video datasets is extremely
expensive, both in terms of time and money. In this paper, we investigate
whether large language models alone can successfully incorporate non-verbal
information when they are presented in textual form. We present a way to
convert the acoustic and visual information into corresponding textual
descriptions and concatenate them with the spoken text. We feed this augmented
input to a pre-trained BERT model and fine-tune it on three downstream
multimodal tasks: sentiment, humor, and sarcasm detection. Our approach,
TextMI, significantly reduces model complexity, adds interpretability to the
model's decision, and can be applied for a diverse set of tasks while achieving
superior (multimodal sarcasm detection) or near SOTA (multimodal sentiment
analysis and multimodal humor detection) performance. We propose TextMI as a
general, competitive baseline for multimodal behavioral analysis tasks,
particularly in a low-resource setting. | Md Kamrul Hasan, Md Saiful Islam, Sangwu Lee, Wasifur Rahman, Iftekhar Naim, Mohammed Ibrahim Khan, Ehsan Hoque | 2023-03-27T17:54:32Z | http://arxiv.org/abs/2303.15430v2 | TextMI: Textualize Multimodal Information for Integrating Non-verbal Cues in Pre-trained Language Models
###### Abstract
Pre-trained large language models have recently achieved ground-breaking performance in a wide variety of language understanding tasks. However, the same model can not be applied to multimodal behavior understanding tasks (e.g., video sentiment/humor detection) unless non-verbal features (e.g., acoustic and visual) can be integrated with language. Jointly modeling multiple modalities significantly increases the model complexity, and makes the training process data-hungry. While an enormous amount of text data is available via the web, collecting large-scale multimodal behavioral video datasets is extremely expensive, both in terms of time and money. In this paper, we investigate whether large language models alone can successfully incorporate non-verbal information when they are presented in textual form. We present a way to convert the acoustic and visual information into corresponding textual descriptions and concatenate them with the spoken text. We feed this augmented input to a pre-trained BERT model and fine-tune it on three downstream multimodal tasks: sentiment, humor, and sarcasm detection. Our approach, TextMI, significantly reduces model complexity, adds interpretability to the model's decision, and can be applied for a diverse set of tasks while achieving superior (multimodal sarcasm detection) or near SOTA (multimodal sentiment analysis and multimodal humor detection) performance. We propose TextMI as a general, competitive baseline for multimodal behavioral analysis tasks, particularly in a low-resource setting.
## I Introduction
Humans are experts at understanding the nuances of non-verbal communication. To enable machines to understand those non-verbal signals, we often encode them in three modalities - text, acoustic, visual - and then fuse them using neural encoders. Recently, pre-trained large language models like BERT [1] have become extremely effective in providing highly contextualized representation of the text modality. Acoustic and visual modalities, on the other hand, are typically converted into mid to high-level features - such as pitch, MFCC, facial expression, etc. - and then fed into transformers or LSTM encoders for fusion [2, 3, 4, 5, 6]. As a result, the encoders for acoustic and visual modalities are often trained from scratch compared to the pre-trained text encoders. Therefore, the fusion process is usually dominated by the highly contextualized text modality, making it difficult to infuse acoustic and visual information properly. Moreover, multimodal (text, acoustic, visual) behavioral datasets are typically smaller in size due to the higher cost of collecting them. The scarcity of large and diverse datasets makes it more challenging to train parameter-heavy multimodal models.
In contrast, the pre-trained models like BERT [1] can be easily fine-tuned to achieve state-of-the-art results for many NLP tasks like text-based humor detection [7] and sentiment analysis [8]. In this paper, we ask - can the language model understand non-verbal cues presented in text format? If we feed the language spoken and its associated nonverbal cues in textual format to a pre-trained language model, can it analyze the multimodal content? How does it perform in learning human behavior compared to the existing resource-hungry multimodal models?
In this paper, we propose methods to textualize visual and acoustic cues present in multimodal (video) behavior understanding tasks: multimodal sentiment analysis, multimodal humor detection, and multimodal sarcasm detection. We utilize widely-used tools such as Openface [9] and Opensmile [10] for extracting visual and acoustic features from videos. We then cluster these extracted visual and acoustic features into a finite number of groups using K-means clustering, as shown in Figure 1 (middle). We generate textual descriptions of these visual and acoustic clusters, and refer to them as _visual-text_ and _acoustic-text_ respectively. We prepare an extended text input by combining the text modality with the visual-text and acoustic-text (when available) as shown in Fig. 1 (right) and feed the extended text into a pre-trained BERT model.
As _multimodal_ datasets, we use CMU-MOSI [11] and CMU-MOSEI [12] for sentiment analysis, UR-FUNNY [13] for humor detection, and MUStARD [14] for the sarcasm detection task. TextMI outperforms all state-of-the-art (SOTA) models in the multimodal sarcasm detection task and achieves near SOTA performance in the multimodal humor detection task. It also achieves better/near SOTA performances across all metrics in the multimodal sentiment analysis task.
These results demonstrate that our approach can be generalized across many diverse behavior-analysis tasks. The proposed methodology, while fairly simple, can serve as a
strong baseline for multimodal behavior understanding tasks, particularly with limited training data.
In summary, the main contributions of this paper are:
* We propose a framework to convert visual and acoustic cues into natural language text for analyzing multimodal human behavioral tasks.
* We demonstrate that large language models can readily integrate visual and acoustic information provided in text format and achieve superior (or competitive) performance than the baseline multimodal models that use intricate fusion mechanisms.
* TextMI is a simple and general methodology for multimodal behavior analysis that can act as a strong baseline for a diverse set of tasks. Our approach is _interpretable_ and particularly valuable for the tasks with limited data.
## II Background
In this section, we discuss some of the state-of-the-art multimodal models, their complexity and how the scarcity of large multimodal (video) behavioral datasets makes the training challenging.
### _Multimodal Models_
A lot of research exists on learning joint representations of multimodal data using LSTM, CNN, and fully-connected neural networks [15, 16, 17, 18, 19, 20, 21, 22, 23]. We will focus on Transformer-based models [24], which are the current state of the art for modeling the multimodal (text, acoustic, visual) setting.
Tsai et al. [25] trained a model with a set of transformers: each transformer encoder learns its modality-specific encoding while interacting with the other encoders to capture the cross-modal interactions. Similarly, HKT [4] has modality-specific encoders for attending to each modality and a bimodal cross attention layer to jointly represent pairs of modality groups effectively. Several frameworks have been proposed where pre-trained **BERT** is used as language encoder and LSTM/Transformer based architectures are used to encode other modalities [2, 3, 5, 6]. These architectures also introduced different intricate fusion mechanisms to achieve state-of-the-art results. Pham et al. [26] translates from a source modality to a target modality while maintaining cycle consistency to ensure maximal retention of information in the joint representation. Another approach named Multimodal Routing [27] learns to dynamically adjust weights between input modalities and output vector for each data sample - providing interpretation about cross-modality interactions for each example and whole dataset. In the domain of Graph-Neural-Networks, Model Temporal Attention Graph (MTAG) represents the sequential and temporal interactions in the multimodal sequence as nodes and edges [28]. These models have separate neural network components for capturing unimodal and cross-modal interactions that increase the number of parameters by the factor of the number of modalities. On the other hand, our approach uses a single pre-trained language model, and thus significantly reduces the model complexity and training time.
### _Scarcity of Large Multimodal Behavioral Datasets_
Collecting large multimodal datasets for human behavioral tasks such as analyzing sentiment, job interview performance, disrespectful behavior, humor, etc. is extremely challenging, time-consuming, and expensive. The ratings for these tasks are often subtle and subjective and require expert knowledge and careful considerations. As a result, multimodal behavioral datasets are typically small and contain from hundreds to a few thousand training examples.
Fig. 1: Instead of using complex models that try to fuse multiple modalities, we rely on a single pre-trained language model (left). To textualize nonverbal (visual and acoustic) cues, we group unimodal features that frequently appear together into finite number of clusters, and then describe the clusters using text (middle). Finally, we extend the utterance text with the extracted visual-text and acoustic-text and pass it to a language model (right).
For example, two commonly used multimodal sentiment analysis datasets are CMU-MOSI [11] and CMU-MOSEI [12], consisting of 2199 and 23500 opinions respectively. The ICT-MMMO dataset [29], consisting of review videos, is also used for sentiment analysis with only 340 data instances. Similarly, the MOUD [30] dataset contains 400 Spanish videos, each labeled with a positive, negative, or neutral sentiment. The IEMOCAP [31] dataset consists of 302 videos - each annotated with the presence of 9 emotions as well as valence, arousal, and dominance. The UR-FUNNY dataset, the largest multimodal dataset for detecting humorous instances, contains 5k videos of humorous punchlines, with an equal number of negative samples [13]. The multimodal sarcasm detection dataset [14] has only 690 video segments collected from several sitcoms. The scarcity of large multimodal behavioral datasets makes it very challenging to train multimodal models with a large number of parameters. In contrast, TextMI requires fewer parameters and is easier to train on smaller datasets.
## III Non-verbal Text Extraction
In this section, we describe how we extract features from acoustic and visual modalities and convert them into corresponding text representations - denoted as _acoustic-text_ and _visual-text_ (Figure 1).
### _Visual-text_
OpenFace2 [9] is used to extract facial Action Units (AU) features, based on the Facial Action Coding System (FACS) [32]. It is widely used in human affect analysis [33, 13, 3]. Each of the action units represents a specific facial muscle movement. For example, AU02, AU04, and AU06 represent 'Outer Brow Raiser', 'Brow Lowerer', and 'Cheek Raiser' respectively. These descriptions of the action units are readily understandable to humans. However, humans typically use a combination of these muscle movements simultaneously - each denoted by an action unit - to exhibit an expression. To mimic this behavior, we use K-means clustering to group the facial action units that co-occur often. In this paper, we use the following set of action units: [AU2, AU4, AU5, AU6, AU7, AU9, AU12, AU15, AU23, AU26, AU45].
First, we extract these eleven AU features from each frame of a video segment. The timing information of each word is used to slice off the relevant range of action unit features for that word. Then, for each word, we average out the sliced visual feature array across the time dimension and get a visual feature vector (11 dimensions). Extracting word-aligned visual/acoustic vector is a common practice in multimodal behavior analysis tasks [34, 2, 17, 3]. Once we have these visual feature vectors for each word across all videos, K-means clustering is used to group them into distinct sets. We use silhouette score to determine the optimal number of clusters. By analyzing the word-aligned visual features that belong to each cluster, we find the dominant (high intensity) action units in each cluster. Then, we represent each cluster by the text descriptions of the dominant action units. Table I shows the clusters, their dominant action units, and the corresponding descriptions of each action unit. These resulting textual descriptions are used to generate the visual-text.
Suppose there are \(n\) words in a video segment \(U=[w_{1},w_{2},....,w_{n}]\). For the \(i\)th word (\(w_{i}\)), we can use the corresponding facial unit vector to extract the relevant cluster id. Thus, we can represent the cluster ids of the video utterance as: \(C_{v}=[c_{1},c_{2},....,c_{n}]\). Each cluster-id represents a set of dominant AUs (e.g., Table I). We sort all the AUs based on how many times they appear in the video utterance; the most commonly occurring ones are put at the beginning. For example, in Figure 1, the visual cluster id 4 has the highest frequency. This cluster id is represented by the dominant action units _brow raiser_ and _jaw drop_. That is why these visual words appear at the beginning of the facial expression description. We concatenate all the visual words extracted from the sorted (based on frequency) AUs to generate the visual-text. We use k-means clustering since it gives a hard label to each of the clusters. Other techniques like the Gaussian mixture model give the probabilities of a data point belonging to each of the K clusters - making it difficult to convert these probabilities into words.
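A minimal sketch of this clustering step is shown below. It assumes the word-aligned AU vectors have already been extracted; the array file name, the cluster-count search range, and the AU description strings (standard FACS names for the AU list above) are illustrative rather than the exact choices behind Table I.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# (num_words_in_corpus, 11) word-aligned action-unit intensities, assumed precomputed with OpenFace
au_vectors = np.load("word_aligned_aus.npy")          # hypothetical file name

# pick the number of clusters by silhouette score, as described above
scores = {}
for k in range(4, 13):
    labels = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(au_vectors)
    scores[k] = silhouette_score(au_vectors, labels)
best_k = max(scores, key=scores.get)
kmeans = KMeans(n_clusters=best_k, random_state=0, n_init=10).fit(au_vectors)

# describe each cluster by its dominant (high mean intensity) action units
AU_NAMES = ["outer brow raiser", "brow lowerer", "upper lid raiser", "cheek raiser",
            "lid tightener", "nose wrinkler", "lip corner puller", "lip corner depressor",
            "lip tightener", "jaw drop", "blink"]     # order follows the AU list above
cluster_desc = {}
for c in range(best_k):
    center = kmeans.cluster_centers_[c]
    dominant = np.argsort(center)[::-1][:2]           # e.g., the two strongest AUs per cluster
    cluster_desc[c] = " ".join(AU_NAMES[i] for i in dominant)
print(cluster_desc)
```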
### _Acoustic-text_
Similar to extracting visual-text, we extracted the following interpretable acoustic features using Opensmile: pitch, loudness, jitter, and shimmer. Similar to the visual features, we extract word-aligned acoustic features of the whole dataset and apply K-Means clustering to them. Each cluster is assigned descriptions based on the intensity of the features present within it. A normal distribution is fitted to find the threshold of low, normal, and high intensity. Table II shows an example of the UR-FUNNY (multimodal humor detection) dataset. We denote these resulting descriptions as acoustic-text.
For a video segment, first, we extract the cluster ids that are associated with the word-aligned acoustic vectors. Suppose there are \(n\) words in a video utterance \(U=[w_{1},w_{2},....,w_{n}]\). Similar to the visual words, we extract the corresponding acoustic cluster ids \(C_{a}=[c_{1},c_{2},....,c_{n}]\). Then, we create a textual description of the acoustic features by following the same methodology as in III-A: replace each cluster-id with the underlying set of features (Table II), sort the features by placing the most frequently appearing ones at the beginning, remove repeated features and concatenate all the texts.
### _Combining Text, Acoustic-text and Visual-text_
We append the acoustic-text and visual-text at the end of the text utterances separated by the separator token. As we
experiment with the BERT language model, we represent the multimodal text as: [CLS] utterance text [SEP] Facial expressions shown: visual-text and acoustic expressions shown: acoustic-text [SEP] (Figure 1).
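Concretely, the augmented input can be assembled and tokenized as a sentence pair, as in the sketch below (the example strings are illustrative, and the Hugging Face tokenizer is only one possible implementation).

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

utterance = "and that is how i ended up on stage"        # spoken words (illustrative)
visual_text = "cheek raiser lip corner puller"           # from the visual clusters (illustrative)
acoustic_text = "high loudness high pitch"               # from the acoustic clusters (illustrative)

second_segment = (f"Facial expressions shown: {visual_text} and "
                  f"acoustic expressions shown: {acoustic_text}")

# encode as a sentence pair; the tokenizer inserts [CLS] ... [SEP] ... [SEP] automatically
encoded = tokenizer(utterance, second_segment,
                    padding="max_length", truncation=True,
                    max_length=128, return_tensors="pt")
print(tokenizer.decode(encoded["input_ids"][0])[:120])
```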
## IV Experiments
In this section, we discuss the datasets we use, the baseline models we compare with and the hyper parameter settings we experiment with.
### _Datasets_
**CMU-MOSI & CMU-MOSEI:** Both the CMU-MOSI [11] and the CMU-MOSEI [12] are widely used benchmark datasets for evaluating a model's performance in predicting multimodal sentiment intensity. The CMU-MOSI is composed of 2199 video utterances segmented from 93 movie review videos. Each video utterance is manually annotated with a real number score ranging from -3 (most negative) to +3 (most positive) sentiment. The CMU-MOSEI dataset is an extension of the CMU-MOSI, but it increased the size of the dataset to 23,454 video utterances.
We use five different metrics following the previous works to evaluate the performance: mean absolute error (MAE), Pearson correlation, seven class accuracy (Acc-7), binary accuracy (Acc-2), and F1 score. Both Acc-2 and F1 score are computed for positive/negative (excluding zero).
**UR-FUNNY:** The UR-FUNNY [13] is a multimodal dataset for humor detection. It contains 10k video segments sampled from TED talks, where each punchline sentence is accompanied by context sentences. Each video segment is annotated with a binary label indicating if the punchline is humorous or not.
**MUStARD:** The Multimodal Sarcasm Detection Dataset [14] is compiled from popular TV shows like Friends, The Big Bang Theory, The Golden Girls, and Sarcasmaholics. A total of 690 video segments are manually annotated with binary sarcastic/non-sarcastic labels. Each video segment has a target punchline sentence and the associated historical dialogues as context. Binary accuracy is used as the performance metric for both UR-FUNNY and MUStARD since both datasets have balanced test sets.
### _Baseline Models_
Numerous methods have been proposed to learn the multimodal representation of text, acoustic and visual. We compare TextMI with the most recent and competitive baselines as mentioned below.
**TFN**[33] Tensor fusion network learns unimodal tensors and fuses them by three fold Cartesian product.
**LMF**[22] creates multimodal fusion from the modality-specific low-rank factors by decomposing high dimensional tensors into many low dimensional factors.
**MFM**[17] is a generative-discriminative model that factorizes representations into multimodal discriminative factors and modality-specific generative factors.
**ICCN**[35] learns the correlations between all three modalities of text, acoustic and visual via deep canonical correlation analysis.
**MulT**[17] uses a set of transformer encoders to model inter-modal and intra-modal interactions and combines their outputs in a late fusion manner.
**MISA**[2] projects all the video utterances into three modality-specific and one modality invariant spaces and then aggregates all those projections.
**MAG-BERT**[36] introduced Multimodal Adaption Gate (MAG) to fuse acoustic and visual information in pretrained language transformers. During fine tuning, the MAG shifts the internal representations of BERT in the presence of the visual and acoustic modalities.
**BBFN**[3] is an end-to-end network that performs fusion (relevance increment) and separation (difference increment) on pairwise modality representations.
**Self-MM**[5] design a unimodal label generation strategy based on the self-supervised method that helps to learn modality specific representation.
**MMIM**[6] hierarchically maximizes the Mutual Information (MI) in multimodal fusion. It applies MI lower bounds for the unimodal inputs and the fusion stage.
**HKT**[4] models multimodal humorous punchline using a set of transformer encoders and bi-modal cross attention layer. It also incorporates some humor-centric features extracted from external knowledge.
**State of the Art:**_MMIM_ is the state-of-the-art model for the multimodal sentiment analysis task (on both the CMU-MOSI and CMU-MOSEI datasets). Only MISA and HKT have experimented with the tasks of multimodal humor detection (UR-FUNNY) and multimodal sarcasm detection (MUStARD), where _HKT_ has achieved SOTA performance.
### _Experimental Design_
The Adam optimizer, a linear scheduler with warmup, and a reduce-learning-rate-on-plateau scheduler are used to train the BERT language model. The search space of the learning rates is {1e-05, 3e-05, 5e-05, 1e-06}. MSE loss is used to train models on CMU-MOSI and CMU-MOSEI, as the sentiment intensity label is a real number between -3 and +3. Binary cross-entropy is used for the other datasets. Dropout \([0.05-0.30]\) is used to regularize the model. All the experiments are run on K-80 & A-100 GPUs.
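A compact fine-tuning loop consistent with this setup might look as follows. This is only a sketch: it assumes a `train_loader` that yields batches of the tokenized extended-text inputs with a `label` field, and the particular learning rate and epoch count are illustrative picks, not the exact values behind the reported results.

```python
import torch
from transformers import BertForSequenceClassification, get_linear_schedule_with_warmup

device = "cuda" if torch.cuda.is_available() else "cpu"
# num_labels=1 + MSE loss for CMU-MOSI/MOSEI regression; use num_labels=2 + cross-entropy for UR-FUNNY/MUStARD
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)     # one value from the search space above
num_epochs = 3                                                # illustrative
num_training_steps = len(train_loader) * num_epochs           # train_loader assumed to exist
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=int(0.1 * num_training_steps),
                                            num_training_steps=num_training_steps)
loss_fn = torch.nn.MSELoss()

for epoch in range(num_epochs):
    model.train()
    for batch in train_loader:
        optimizer.zero_grad()
        out = model(input_ids=batch["input_ids"].to(device),
                    attention_mask=batch["attention_mask"].to(device))
        loss = loss_fn(out.logits.squeeze(-1), batch["label"].float().to(device))
        loss.backward()
        optimizer.step()
        scheduler.step()
```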
## V Results
In this section, we present the performance of TextMI compared to the baseline multimodal models. We also report an ablation study to show the importance of having acoustic-text and visual-text alongside the main text.
### _Multimodal Sentiment Analysis_
The results of the multimodal sentiment analysis tasks are presented in Table III. TextMI achieves superior performance on the CMU-MOSEI dataset in terms of binary accuracy (0.48% increase), F1 score (0.44% increase), and Pearson correlation (0.25% relative improvement). Significance tests are run between the TextMI and MMIM models for these metrics. We run both models configured with the hyperparameters of their best performances and change only the random seed across runs. Significant differences (\(p<0.05\)) are observed for these three metrics between TextMI and MMIM. For the other metrics, it is a very close competitor of the SOTA model. TextMI also attains very competitive results on the CMU-MOSI dataset across _all_ metrics. It achieves the second best performance in terms of binary accuracy and F1 score. These datasets are well studied for analyzing the performance of multimodal models. All these baseline models use intricate fusion mechanisms to summarize multimodal information. In contrast, our simple approach based only on a pre-trained language encoder achieves superior/near-SOTA performances. These results corroborate our hypothesis that the language model has the capability of understanding non-verbal cues presented in textual form.
### _Multimodal Humor and Sarcasm Detection_
The results of the multimodal humor (UR-FUNNY) and sarcasm detection (MUStARD) tasks are presented in Table IV. Since both datasets have a balanced test set, binary accuracy is reported as the performance metric. TextMI achieves superior performance (2.94% increase) on the smaller MUStARD dataset (690 utterances) compared to HKT. In comparison, HKT is superior on the relatively larger UR-FUNNY dataset (10K utterances). The HKT model incorporates external, task-specific features inspired by the theories of humor, and the availability of a large dataset helps train a complex model. However, the performance of TextMI, which does not include any external task-specific feature, is not far behind (2.71% less accurate than HKT). This brings about an interesting perspective - while generalizability of models is strongly desired, task-specific knowledge can be valuable and researchers might benefit from blending both general and task-specific knowledge into their models.

Fig. 2: A multimodal sentiment analysis example to illustrate how the model puts importance on text, acoustic and visual words. (a) Word importances are highlighted by color. (b) shows how the visual-text and acoustic-text are extracted. [CLS] and [SEP] are special tokens of BERT.
### _Interpretation of Model Output_
The results discussed above indicate that incorporating nonverbal text into a language model generalizes well across various types of multimodal tasks. In addition, converting the facial and acoustic expressions into textual form makes the model easy to interpret. Figure 2 shows an example of multimodal sentiment analysis where input tokens are highlighted with colors based on their importance to the TextMI model. The integrated gradients [37] method is used to decipher how each input token (across modalities) contributed to the model's final decision. We can see that the nonverbal-text, such as high loudness, plays an important role in identifying the negative sentiment in this utterance. Since one of the major objectives of affective computing is understanding human emotion, such interpretability is highly valuable to the community.
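One possible way to compute such token-level attributions is via the Captum implementation of integrated gradients, sketched below. It reuses the `model`, `tokenizer`, and `encoded` example from the earlier sketches; the all-padding baseline is a simplification we adopt for illustration, not necessarily the exact configuration behind Figure 2.

```python
import torch
from captum.attr import LayerIntegratedGradients

def forward_fn(input_ids, attention_mask):
    # scalar prediction per example (regression head with num_labels=1)
    return model(input_ids=input_ids, attention_mask=attention_mask).logits.squeeze(-1)

lig = LayerIntegratedGradients(forward_fn, model.bert.embeddings)
baseline_ids = torch.full_like(encoded["input_ids"], tokenizer.pad_token_id)
attributions = lig.attribute(inputs=encoded["input_ids"],
                             baselines=baseline_ids,
                             additional_forward_args=(encoded["attention_mask"],),
                             n_steps=50)
token_scores = attributions.sum(dim=-1).squeeze(0)   # one importance score per input token
tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"][0])
print(list(zip(tokens, token_scores.tolist()))[:10])
```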
### _Role of Non-verbal Text_
To understand the role of nonverbal-text quantitatively, we fine-tune the BERT encoder with text-only, text+acoustic-text, and text+visual-text information separately. The results are presented in Table V. Adding acoustic-text and visual-text improves the accuracy significantly, especially on the smaller datasets of CMU-MOSI and MUStARD. A qualitative analysis is presented in Figure 3. In the first three examples, TextMI correctly predicted not only the polarity but also adjusted the sentiment intensity more accurately compared to the BERT model trained with text-only information. Additional information is present in the visual-text and acoustic-text which the text-only encoder could not utilize. The cross-attentions among the words of the text utterance and the nonverbal-text generate better scores. We also show an example where TextMI can fail (example 4). These examples and results demonstrate that the language model can integrate non-verbal cues (in text form) for affective computing tasks.
## VI Discussion and Future Work
**Generalizability:** TextMI can be generalized across diverse multimodal tasks as the input is presented in textual format. Moreover, the use of pre-trained language models makes it easier to fine-tune on downstream tasks. We experimented with three multimodal behavior understanding tasks with four publicly available datasets. The superior results across these diverse tasks indicate the generalizability of TextMI. One limitation of our approach is that we depend on existing tools such as Openface and Opensmile. They may limit TextMI's performance since the error of these tools will propagate to the language model. Besides, if these tools are trained from biased data, it can hurt the fairness of our model as well. However, all the baseline multimodal models also have the same limitations as they used similar tools and features [12, 33, 17, 13, 2, 3]. Building an end-to-end model trained from representative and unbiased data that can textualize the acoustic and visual features, and make inferences using a language model is a direction we plan to explore.
**Interpretability vs. Performance:** In a wide array of human behavior understanding tasks (e.g., identifying hateful speech videos in social media), it is of utmost importance to identify the key factors behind the model's decision. Since TextMI describes acoustic and visual information in textual format, it is much easier to interpret and visualize. Figure 2 and Figure 3 illustrate how easy it is for humans to interpret the key acoustic and visual nuances when they are presented in textual format. However, as we concentrate on a set of interpretable features, our approach can result in a loss of the information and performance gained from incorporating complex features.

Fig. 3: Example from the CMU-MOSI dataset. The ground truth sentiment labels are between strongly negative (-3) and strongly positive (+3). For each example, we show the ground truth and the prediction output of both the _text+acoustic-text+visual-text_ and _text only_ models. The integrated gradients [37] method is used to decipher how each input token (across modalities) contributed to the model's final decision. [CLS] and [SEP] are special tokens of BERT.
Typically, visual/acoustic information is modeled from scratch using low to mid-level features [2, 3, 4] through complex fusion processes. A complex multimodal model trained on a lot of data may be able to outperform our approach by sacrificing interpretability. However, our approach can be a very useful baseline for assessing models deployed in resource-constrained applications that must be interpretable.
**Dataset Size vs. Model's Complexity:** Unlike the existing multimodal models, TextMI uses only pre-trained language encoders, significantly reducing the model complexity and the number of parameters that are trained from scratch. Pre-trained large language models are generally sample efficient, and can learn well from fewer examples to solve downstream tasks. As a result, our model can be particularly useful for multimodal tasks with limited training data (CMU-MOSI & MUStARD) - making our approach very suitable for application in new, exciting domains. A computationally efficient benchmark would make research of exciting problems more accessible since not all researchers have access to huge computing facilities.
## VII Conclusion
In this paper, we show that large pre-trained language models can efficiently integrate non-verbal cues if they are described in textual form. Our approach achieves superior performance in multimodal sarcasm detection, and demonstrates better/near-SOTA performances in multimodal sentiment analysis and multimodal humor detection tasks, compared to the established baseline multimodal models that use intricate fusion mechanisms. An ablation study also indicates that the pre-trained language model performs better with acoustic and visual information than textual information only. As our approach reduces the model's complexity and improves interpretability, it can be very useful for the scenarios where a large dataset is scarce or interpretability is requisite. Though a multimodal model trained on a large dataset might provide better performance, our approach can still serve as a very strong and simpler baseline for future studies on multimodal behavioral tasks.
|
2310.15508 | Spacetime surgery for black hole fireworks | We construct an explicit model for the black hole to white hole transition
(known as the black hole fireworks scenario) using the cut-and-paste technique.
We model a black hole collapse using the evolution of a time-like shell in the
background of the loop quantum gravity inspired metric. We then use the
space-like shell analysis to construct the firework geometry. Our simple and
well defined analysis removes some subtle issues that were present in the
previous literature. In particular, we demonstrate that the null energy
condition must be violated for the bounce. We also calculate the proper time
scales required for the black to white hole transition, which in any valid
scenario must be shorter than the evaporation time scale. In contrast, we show
that the bouncing time for the distant observer can be chosen arbitrarily,
since it is determined by how one cuts and pastes the spacetimes outside the
event horizon, and thus does not have any obvious connection to quantum gravity
effects. | Wei-Chen Lin, Dong-han Yeom, Dejan Stojkovic | 2023-10-24T04:32:03Z | http://arxiv.org/abs/2310.15508v1 | # Spacetime surgery for black hole fireworks
###### Abstract
We construct an explicit model for the black hole to white hole transition (known as the black hole fireworks scenario) using the cut-and-paste technique. We model a black hole collapse using the evolution of a time-like shell in the background of the loop quantum gravity inspired metric. We then use the space-like shell analysis to construct the firework geometry. Our simple and well defined analysis removes some subtle issues that were present in the previous literature. In particular, we demonstrate that the null energy condition must be violated for the bounce. We also calculate the proper time scales required for the black to white hole transition, which in any valid scenario must be shorter than the evaporation time scale. In contrast, we show that the bouncing time for the distant observer can be chosen arbitrarily, since it is determined by how one cuts and pastes the spacetimes outside the event horizon, and thus does not have any obvious connection to quantum gravity effects.
###### Contents
* I Introduction
* II Bouncing black hole model
* III Time-like thin-shells and gravitational collapses
* III.1 Junction equations
* III.2 Analysis of the solution
* IV Space-like thin-shells and black hole fireworks
* IV.1 Junction equations
* IV.2 Conditions for thin-shells
* IV.3 Construction of the black hole firework geometry
* V Bouncing time-scale for black hole firework scenarios
* V.1 Bouncing time scale with the \(\delta\) parameter
* V.2 Bouncing time for the comoving observer
* V.3 Interpretation: coordinate time and proper time
* VI Discussion
## I Introduction
The issue of formation and evaporation of a black hole is very important for understanding the nature of quantum gravity. In particular, this issue is related to the information loss problem of an evaporating black hole [1]. Is there a unitary theory of quantum gravity that explains the unitary evolution of evaporating black holes? If there is, is this theory consistent with the semi-classical description [2]? Will the classical singularity survive in the regime where quantum gravitational effects are dominant [3; 4]?
It is clear that understanding the fate of the singularity is very important to obtain a complete answer to the black hole evaporation and information loss problems. Intuitively, we may distinguish two approaches. First, we may address this problem _by introducing a wave function_, i.e., by solving the Wheeler-DeWitt equation [5]. In this approach, we need to solve the Wheeler-DeWitt equation (or some version of it) and interpret the solution in the classical background, which is sometimes a subtle problem [6; 7; 8; 9]; for an attempt to model quantum radiation from a quantum background, see [10]. Second, we may remove the singularity _by introducing an effective matter_[11; 12; 13; 14; 15]. As a result, one could extend the effectively classical spacetime beyond the singularity. However, we need to justify the ad hoc introduced matter from first principles, which is usually a difficult task.
Interestingly, an approach coming from loop quantum gravity provides a method that lies in between these two approaches. In that approach, one first needs to solve the Wheeler-DeWitt equation in order to obtain the physical quantum state of the singularity. Usually, it is not easy to solve the Wheeler-DeWitt equation directly. However, one may reasonably expect an effective modification of the Hamiltonian which includes loop quantum gravitational effects [16]. With this modified Hamiltonian, one can solve a set of semi-classical equations and obtain a spacetime that includes loop quantum gravitational effects, e.g., resolution of the singularity.
A typical solution in the framework of the loop quantum gravity includes _bouncing_ of the collapsing object [17; 18]. Bouncing inside the horizon is not a very surprising scenario, except for some technical issues [19]. However, in reality, this is not easy to generalize to global spacetimes in an evaporating background. In some cases, inconsistencies may arise [20]. In an evaporating background, the bouncing spacetimes have to consistently connect not only inside but also outside the horizon [6; 21]. This might be realized by cutting and pasting spacetimes, e.g. like in the Haggard-Rovelli model [22]. Moreover, if we modify the interior of the black hole solution, one can obtain a bouncing model that has two horizons [23]. The scenario proposed in [22; 23] is also known as the _black hole fireworks_.
One can revisit the cut-and-paste technique of [22] and [23] using the thin-shell approximation [24]. This spacetime surgery could explain the global spacetime of the Haggard-Rovelli model [25]. For this purpose, one needs to introduce a space-like shell and paste two space-like hypersurfaces. To do this in a self-consistent way, a space-like matter shell that violates the null energy condition and reaches asymptotic infinity is required.
In this paper, we further extend this idea to the model in [23], which contains two horizons. We consider a time-like shell that describes a collapsing star interior and the dynamical formation of a black hole. In addition, we cut and paste two spacetimes to accommodate a bouncing spacetime, and also cover both the outer and inner apparent horizons. This approach is technically well-defined and hence allows a more concrete and reliable way to evaluate the transition time scale from the collapsing to the bouncing phase. Unless this time scale is sufficiently long, this process will be already excluded by astrophysical observations.
This paper is organized as follows. In Sec. II, we describe the black hole bouncing model of [23] in which a black hole phase is followed by a white hole phase. In Sec. III, we consider a collapsing time-like shell and the dynamical process of black hole formation. In Sec. IV, we consider a space-like shell that separates the black hole and white hole phases in a cut-and-paste procedure. In Sec. V, we discuss the bouncing time scales from the black hole and white hole phases. Finally, in Sec. VI, we summarize our results and discuss possible future research.
## II Bouncing black hole model
We consider the black hole model defined in [23], which has a quantum-corrected center. The metric is
\[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}d\Omega^{2}, \tag{1}\]
with
\[f(r)=1-\frac{2M}{r}+\frac{AM^{2}}{r^{4}}, \tag{2}\]
where \(M\) is the black hole mass, while \(A\) is a constant. Generically, this geometry has two horizons, labeled with \(r_{\pm}\), with a time-like center (Fig. 1). We note here that the bounce in this model is driven by the \(AM^{2}/r^{4}\) term in Eq. (2). This term dominates only at small values of \(r\), and provides a repulsive gravity. Thus, directly from this form, we expect an oscillating behavior. At large values of \(r\), the attractive term, \(2M/r\), dominates and drives the collapse. At some minimal value of \(r\), the repulsive term causes bounce and pushes the collapsing object out to larger values of \(r\) where the attractive term again dominates and the cycle starts again.
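Since \(f(r)=0\) is equivalent to the quartic \(r^{4}-2Mr^{3}+AM^{2}=0\), the horizon radii can be obtained numerically. The following short Python sketch is purely illustrative; the parameter values \(M=10\), \(A=0.1\) are the ones used for the example in Sec. III.

```python
import numpy as np

def horizons(M, A):
    """Positive real roots of f(r) = 1 - 2M/r + A M^2/r^4 = 0,
    i.e. of the equivalent quartic r^4 - 2M r^3 + A M^2 = 0."""
    roots = np.roots([1.0, -2.0 * M, 0.0, 0.0, A * M ** 2])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[real > 0.0])

r_inner, r_outer = horizons(M=10.0, A=0.1)
print(r_inner, r_outer)   # about 0.8046 and 19.9987, the values quoted below
```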
## III Time-like thin-shells and gravitational collapses
We first consider a thin time-like shell in order to explain the process of gravitational collapse in the framework of the model given in Eq. (1).
### Junction equations
The metric outside and inside the shell is
\[ds_{\pm}^{2}=-f_{\pm}(r)dt^{2}+\frac{1}{f_{\pm}(r)}dr^{2}+r^{2}d\Omega^{2}, \tag{3}\]
where \(+\) and \(-\) stand for outside and inside the shell, respectively. The metric of the time-like shell is
\[ds_{\rm shell}^{2}=-d\tau^{2}+r^{2}(\tau)d\Omega^{2}. \tag{4}\]
Here, we assume \(f_{+}=f(r)\) and \(f_{-}=1\).
After imposing the junction equation [24], we obtain
\[\epsilon_{-}\sqrt{\dot{r}^{2}+f_{-}}-\epsilon_{+}\sqrt{\dot{r}^{2}+f_{+}}=4 \pi r\sigma(r), \tag{5}\]
where \(\sigma(r)\) is the tension of the shell, and \(\epsilon_{\pm}=\pm 1\) are the signs of the extrinsic curvatures. Here, extrinsic curvatures \(\beta_{\pm}\) are
\[\beta_{\pm}\equiv\frac{f_{-}-f_{+}\mp 16\pi^{2}\sigma^{2}r^{2}}{8\pi\sigma r }=\epsilon_{\pm}\sqrt{\dot{r}^{2}+f_{\pm}}. \tag{6}\]
Note that if \(\epsilon_{\pm}=+1\), \(r\) increases along the outward normal direction, while if \(\epsilon_{\pm}=-1\), \(r\) decreases along the outward normal direction. Therefore, we have to assume \(\epsilon_{\pm}=1\).
After simple computations, we obtain the equation:
\[\dot{r}^{2}+V_{\rm eff}(r)=0, \tag{7}\]
where
\[V_{\rm eff}(r)=f_{+}-\frac{\left(f_{-}-f_{+}-16\pi^{2}\sigma^{2}r^{2}\right)^ {2}}{64\pi^{2}\sigma^{2}r^{2}}. \tag{8}\]
Here, we interpret that \(V_{\rm eff}<0\) corresponds to the region where classical trajectories are allowed.
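Note that, combining Eqs. (6) and (8), the effective potential can equivalently be written in terms of the extrinsic curvatures,

\[V_{\rm eff}(r)=f_{+}-\beta_{+}^{2}=f_{-}-\beta_{-}^{2},\]

so the classically allowed region \(V_{\rm eff}<0\) is simply the region where \(\dot{r}^{2}=\beta_{\pm}^{2}-f_{\pm}\geq 0\).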
Figure 1: The Penrose diagram of the model in Eq. (1). The solution has two horizons, \(r_{\pm}\), and the time-like center.
Figure 2: Dynamics of the time-like shell. Top left: \(V_{\rm eff}\) with \(M=10\), \(A=0.1\), and \(\sigma_{0}=0.04\). There are two bouncing points located at \(r_{\rm max}\simeq 19.9993\) and \(r_{\rm min}\simeq 0.795\). Note that the outer horizon is \(r_{+}=19.9987\) and the inner horizon is \(r_{-}=0.8046\). Top right: \(V_{\rm eff}\) around \(r_{\rm min}\simeq 0.795\). Bottom: \(\beta_{+}\) (black) and \(\beta_{-}\) (red). This shows that for \(r_{\rm min}\leq r\leq r_{\rm max}\), \(\beta_{\pm}>0\) conditions are satisfied.
### Analysis of the solution
In order to form a black hole, one can set \(\sigma=\sigma_{0}\), and assume \(\lambda/\sigma=-1\), where \(\lambda\) is the pressure of the shell. For example, a scalar field can satisfy such a condition [26].
Fig. 2 is an example that describes the gravitational collapse of a time-like shell and the formation of a black hole. Top left and right of Fig. 2 are \(V_{\rm eff}\), where we choose \(M=10\), \(A=0.1\), and \(\sigma_{0}=0.04\). For these values of parameters, \(r_{+}=19.9987\) and \(r_{-}=0.8046\). By evaluating \(V_{\rm eff}\), we find two bouncing points \(r_{\rm max}\simeq 19.9993\) and \(r_{\rm min}\simeq 0.795\). Therefore, \(r_{\rm min}<r_{-}\) and \(r_{\rm max}>r_{+}\), and hence, the shell propagates from the region outside of the outer horizon to the region inside of the inner horizon. In addition, bottom of Fig. 2 shows \(\beta_{+}\) (black) and \(\beta_{-}\) (red), which indicates that for a classically allowed region \(r_{\rm min}\leq r\leq r_{\rm max}\), the extrinsic curvatures \(\beta_{\pm}\) are always positive, as we expected.
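The turning points in figures of this type can be located with a few lines of numerics. The sketch below is illustrative only: it tabulates \(V_{\rm eff}\) of Eq. (8) and brackets its sign changes with a root finder. The tension profile \(\sigma(r)\) is left as a user-supplied function, since the turning points depend on this choice; the constant value shown corresponds to the domain-wall case assumed above.

```python
import numpy as np
from scipy.optimize import brentq

M, A = 10.0, 0.1
f_plus  = lambda r: 1.0 - 2.0 * M / r + A * M ** 2 / r ** 4   # exterior metric function, Eq. (2)
f_minus = lambda r: 1.0                                         # Minkowski interior
sigma   = lambda r: 0.04                                        # constant tension sigma_0; profile left to the user

def V_eff(r):
    """Effective potential of Eq. (8) for the time-like shell."""
    s = sigma(r)
    num = (f_minus(r) - f_plus(r) - 16.0 * np.pi ** 2 * s ** 2 * r ** 2) ** 2
    return f_plus(r) - num / (64.0 * np.pi ** 2 * s ** 2 * r ** 2)

# Bracket the sign changes of V_eff on a grid and refine them with Brent's
# method; the resulting zeros are the classical turning points of Eq. (7).
grid = np.linspace(0.5, 25.0, 50000)
vals = np.array([V_eff(r) for r in grid])
turning_points = [brentq(V_eff, a, b)
                  for a, b, va, vb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
                  if va * vb < 0.0]
print(turning_points)
```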
If we summarize these numerical results, one can conceptually reconstruct Fig. 3 as a Penrose diagram. The time-like shell is located between \(r_{\rm min}\leq r\leq r_{\rm max}\), where \(r_{\rm max}\) is outside the outer horizon and \(r_{\rm min}\) is inside the inner horizon. Using the cut-and-paste technique, we paste a Minkowski space inside the shell. On the right side of the Fig. 3, there are dashed curves. These curves apparently do not follow the thin-shell trajectories. However, assuming some properties of a star interior, it is reasonable to assume that such a stationary shell is located outside the horizon [27].
## IV Space-like thin-shells and black hole fireworks
To consider the black hole firework scenario, we need to cut and paste on top of Fig. 3. We will introduce a space-like shell and use it to paste two space-like slices [25].
### Junction equations
The metric outside and inside the shell is
\[ds_{\pm}^{2}=-\frac{1}{\tilde{f}_{\pm}(r)}dr^{2}+\tilde{f}_{\pm}(r)dt^{2}+r^{2 }d\Omega^{2}, \tag{9}\]
where \(+\) and \(-\) denote outside and inside the shell. The metric of the space-like shell is
\[ds_{\rm shell}^{2}=ds^{2}+r^{2}(s)d\Omega^{2}. \tag{10}\]
Here, we impose that
\[\tilde{f}_{\pm}(r)=-f(r)=-1+\frac{2M}{r}-\frac{AM^{2}}{r^{4}}, \tag{11}\]
in other words, the regions outside and inside the shell correspond to the black hole solution in question.
After imposing the junction equation [28], the result is
\[\epsilon_{-}\sqrt{\dot{r}^{2}+\tilde{f}_{-}}-\epsilon_{+}\sqrt{\dot{r}^{2}+ \tilde{f}_{+}}=4\pi r\sigma(r), \tag{12}\]
where \(\sigma(r)\) is the tension of the shell, and \(\epsilon_{\pm}=\pm 1\) are the signs of the extrinsic curvatures. Here, the extrinsic curvatures \(\tilde{\beta}_{\pm}\) are
\[\tilde{\beta}_{\pm}\equiv\frac{\tilde{f}_{-}-\tilde{f}_{+}\mp 16\pi^{2}\sigma^{2}r ^{2}}{8\pi\sigma r}=\epsilon_{\pm}\sqrt{\dot{r}^{2}+\tilde{f}_{\pm}}. \tag{13}\]
Note that if \(\epsilon_{\pm}=+1\), \(r\) increases along the outward normal direction (direction from future to the past), while if \(\epsilon_{\pm}=-1\), \(r\) decreases along the outward normal direction. Therefore, in our case, we assume that \(\epsilon_{+}=+1\) and \(\epsilon_{-}=-1\). Hence, \(\sigma<0\) is required, and the null energy condition must be violated. This is expected because of the repulsive term in Eq. (2).
After simple computations, we obtain the equation
\[\dot{r}^{2}+\tilde{V}_{\rm eff}(r)=0, \tag{14}\]
Figure 3: Left: The Penrose diagram of the black hole solution, where the red lines are outer horizons and the blue lines are inner horizons. There exists a time-like shell solution that is oscillating between \(r_{\rm min}\leq r\leq r_{\rm max}\), where \(r_{\rm max}\) is outside the outer horizon and \(r_{\rm min}\) is inside the inner horizon. Right: Inside the shell, the geometry is Minkowski. This diagram represents the resulting spacetime of the black hole formation.
where
\[\tilde{V}_{\rm eff}(r)=\tilde{f}_{+}-\frac{\left(\tilde{f}_{-}-\tilde{f}_{+}-16 \pi^{2}\sigma^{2}r^{2}\right)^{2}}{64\pi^{2}\sigma^{2}r^{2}}. \tag{15}\]
### Conditions for thin-shells
We now need to assume the condition for the thin-shell. The energy conservation equation is
\[\dot{\sigma}=-2\frac{\dot{r}}{r}\left(\sigma-\lambda\right), \tag{16}\]
where \(\lambda\) is the pressure of the shell. If we assume the equation of state of the space-like shell \(w_{i}=-\lambda_{i}/\sigma_{i}\) to be a constant, the generic solution of this equation is
\[\sigma(r)=\sum_{i}\frac{\sigma_{0i}}{r^{2(1+w_{i})}}, \tag{17}\]
where \(\sigma_{0i}\) are constants.
By assuming a specific function of the tension, we want to impose the following conditions:
1. The shell covers the region from \(r_{0}<r_{\rm min}\) to infinity, i.e., \(\tilde{V}(r)<0\) for \(r_{0}\leq r\leq\infty\).
2. Extrinsic curvatures satisfy \(\tilde{\beta}_{+}>0\) and \(\tilde{\beta}_{-}<0\) for \(r_{\rm min}\leq r\leq\infty\).
### Construction of the black hole firework geometry
In order to satisfy the extrinsic curvature conditions, the null energy condition of the shell must be violated. For example, Fig. 4 shows the case when the shell has a constant negative tension (a domain wall case). Fig. 5 shows the case where the tension asymptotically approaches zero at infinity (\(w=-0.5\) and \(\sigma\sim 1/r\)), and thus the negative tension effects disappear at infinity.
Figure 6: Left: The space-like shells with \(\epsilon_{-}=-1\) (upper) and \(\epsilon_{+}=+1\) (lower), where small black arrows denote the outward normal direction. We paste the future of the upper shell (\(\tilde{f}_{-}\), yellow-colored region) and the past of the lower shell (\(\tilde{f}_{+}\), orange-colored region). Right: After we paste the two regions, we obtain the final causal structure of the black hole fireworks.
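The two conditions of Sec. IV.2 can be checked numerically for a given tension profile. A minimal sketch for the constant negative tension (domain-wall) case is given below; the value \(\sigma=-0.5\) is our own illustrative choice, not one taken from the figures.

```python
import numpy as np

M, A, sigma = 10.0, 0.1, -0.5          # sigma < 0: the null energy condition is violated

f       = lambda r: 1.0 - 2.0 * M / r + A * M ** 2 / r ** 4
f_tilde = lambda r: -f(r)               # both sides of the shell, Eq. (11)

def beta_tilde(r, sign):
    """Extrinsic curvatures of Eq. (13) for f_tilde_- = f_tilde_+; sign = +1 or -1."""
    return (-sign * 16.0 * np.pi ** 2 * sigma ** 2 * r ** 2) / (8.0 * np.pi * sigma * r)

def V_tilde(r):
    """Effective potential of Eq. (15) when the two sides share the same metric function."""
    return f_tilde(r) - beta_tilde(r, +1.0) ** 2

r = np.linspace(0.1, 200.0, 100000)
print(np.all(V_tilde(r) < 0.0),            # condition 1: classically allowed on the sampled range
      np.all(beta_tilde(r, +1.0) > 0.0),   # condition 2: beta_tilde_+ > 0 ...
      np.all(beta_tilde(r, -1.0) < 0.0))   # ... and beta_tilde_- < 0
```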
After we cut and paste the spacetimes outside and inside the shell, we obtain the causal structure in Fig. 6. Outside the shell satisfies \(\epsilon_{-}=-1\), while inside the shell satisfies \(\epsilon_{+}=+1\). We paste the future of the outer shell (yellow-colored region) and the past of the inner shell (orange-colored region). As a result, we obtain the final causal structure of the black hole fireworks (right of Fig. 6).
Using the thin-shell approximation, it is possible to justify such a causal structure. If \(r_{0}\neq r_{\rm min}\), the time-like and space-like shell can intersect. A complete description of this intersection perhaps belongs to the regime of quantum gravity, however it is still interesting to ask what happens there in the framework of general relativity.
The only price that we need to pay for this construction is a violation of the null energy condition, even outside of the horizon [25]. In addition, we need to ask whether the causal structure of the analytic solution will still be valid in dynamical situations. The inner horizon might be unstable due to the mass inflation [29]. If we take this into account, we may not be able to trust the causal structure inside the event horizon.
## V Bouncing time-scale for black hole firework scenarios
In this section, we discuss the bouncing time observed by different observers. We use the same assumption that the quantum gravity corrections should be small, _i.e._\(A\sim m_{Pl}^{2}\ll M^{2}\). In this limit, the parameter \(A\) plays no role in the leading order estimate, and therefore, the mundane Schwarzschild solution is sufficient for this particular discussion.1 We first re-derive the bouncing time for a distant observer at a fixed \(R\) as presented in Ref. [23]. After that, we calculate the bouncing time measured by the observer who is comoving with the shell, which we believe is more relevant for the fireworks scenario. We discuss the slicing dependence of the two different bouncing times in Sec. V.3.
Footnote 1: Though, we comment on the possible relation between \(\delta\) and \(A\) later in Sec. V.3.
### Bouncing time scale with the \(\delta\) parameter
Outside the horizon, the black hole is well approximated by the Schwarzschild solution. Assuming the Schwarzschild solution, the double-null coordinates \(U\) and \(V\) satisfy
\[UV=\left(1-\frac{r}{2M}\right)e^{r/2M} \tag{18}\]
and
\[\frac{U}{V}=-e^{-t/2M}, \tag{19}\]
in the region of interest (see Fig. 7). In Fig. 7, events A, B, and C are given by the intersections of a specific ingoing light ray \(V=V_{1}\) with the constant-\(t\) hypersurfaces, \(t=0\) and \(t=t_{B}\), and a constant-\(r\) trajectory \(r=R\), respectively. Since \(t_{B}\) is an arbitrary constant, we can also consider an event A to be a special case where we consider \(t_{B}=0\).
In terms of the Schwarzschild coordinate \((t,r)\), the locations of B and C are given by \((t_{B},2M+\Delta)\) and \((t_{C},R)\), respectively. Using (18) and (19), one can show that the quantities \(\Delta\), \(t_{B}\), and \(t_{C}\) satisfy the following relation
\[\frac{\left(\frac{R}{2M}-1\right)e^{R/2M}}{\frac{\Delta}{2M}e^{(1+\Delta/2M)}}= e^{\tilde{T}/4M}, \tag{20}\]
where \(\tilde{T}\equiv 2(t_{B}-t_{C})\). Assuming \(\Delta\ll 2M\ll R\), the above relation reduces to
\[\tilde{T}\approx 2R+4M\ln R-4M\ln\Delta. \tag{21}\]
By choosing \(t_{B}=0\), _i.e._ considering event A with location \((t,r)=(0,2M+\delta)\), we obtain the relation given in Ref. [23]
\[T\approx 2R+4M\ln R-4M\ln\delta, \tag{22}\]
where \(T/2\approx-t_{C}\). Since we used the relation \(R\gg 2M\), this interval of the Schwarzschild time \(T\) is approximately the bouncing time measured by a distant observer fixed at \(r=R\). Eq. (21) has a simple physical interpretation. A distant observer shoots a ray of light radially into the black hole (event C). After \(\tilde{T}/2\) of this observer's proper time has elapsed, he/she would conclude that the light ray is \(\Delta\) away from the event horizon.
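A short numerical check of Eq. (22) makes the last point explicit: since \(T\) depends on \(\delta\) only logarithmically, changing \(\delta\) by many orders of magnitude barely changes the bouncing time seen at \(r=R\). The values of \(M\), \(R\) and \(\delta\) below are illustrative only.

```python
import numpy as np

def T_distant(R, M, delta):
    """Bouncing time of Eq. (22) for an observer fixed at r = R."""
    return 2.0 * R + 4.0 * M * np.log(R) - 4.0 * M * np.log(delta)

M, R = 10.0, 1.0e4                       # 2M << R, as assumed in the derivation
for delta in (1.0e-2, 1.0e-10, 1.0e-30):
    print(delta, T_distant(R, M, delta))
```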
### Bouncing time for the comoving observer
Figure 7: Bouncing time in the Schwarzschild metric. In Ref. [23], the trajectory of \(V=V_{1}\) is uniquely determined by the corresponding spacetime diagram therein. However, the bouncing time, defined by \(\mathcal{T}\equiv-4M\ln\delta\), can be shown to follow from a geometric relation in the Schwarzschild spacetime, which is related to the arbitrary cutting of the spacetime.
Apart from the asymptotic observer, whose coordinate system is incomplete, there is another observer who is perhaps more relevant to the bouncing process. This is an observer comoving with the collapsing shell. Thus, a more appropriate physical time scale can be calculated using the proper time of the observer that crosses the event horizon. One can easily evaluate the proper time of the time-like shell that transitions from the black hole to the white hole phase as
\[\tau=2\left|\int_{R}^{r_{\rm min}}\frac{dr}{\sqrt{-V(r)}}\right|. \tag{23}\]
Here we use a collapsing shell of (pressureless) dust as a demonstration. In this case, the rest mass of the dust \(\alpha\) is assumed to be conserved and is given by \(\alpha=4\pi r^{2}\sigma=const.\). From the Israel junction conditions, we obtain
\[M=\alpha\sqrt{1+\dot{r}^{2}}-\frac{\alpha^{2}}{2r}, \tag{24}\]
where the overdot is the derivative with respect to the proper time along the timelike trajectory of the infalling shell. From this, one can compute the proper time elapsed along the shell trajectory for one complete cycle as follows
\[\tau=2\left|\int_{R_{\rm max}}^{r_{\rm min}}\frac{dr}{\sqrt{\left(\frac{M}{ \alpha}+\frac{\alpha}{2r}\right)^{2}-1}}\right|. \tag{25}\]
To have \(R_{\rm max}\) finite, _i.e._ the shell is bounded, we must have \(\alpha>M\), for which \(R_{\rm max}=\frac{\alpha^{2}}{2(\alpha-M)}\). In this case, the above integral is given by
\[\tau=\left.2\sqrt{\frac{1}{1-\frac{M^{2}}{\alpha^{2}}}}\left(\frac{1}{2}C \tan^{-1}\left(\frac{2r-C}{2\sqrt{Cr+B-r^{2}}}\right)-\sqrt{Cr+B-r^{2}}\right) \right|_{r_{\rm min}}^{R_{\rm max}}, \tag{26}\]
where \(C=\frac{M}{1-\frac{M^{2}}{\alpha^{2}}}\) and \(B=\frac{\alpha^{2}}{4\left(1-\frac{M^{2}}{\alpha^{2}}\right)}\). If we further consider large \(R_{\rm max}\gg M\), based on the relation for \(R_{\rm max}\), we also have \(\alpha\sim 2R_{\rm max}\). Thus, the above integration is approximately given by
\[\tau\sim 2\left|\int_{R_{\rm max}}^{r_{\rm min}}\frac{dr}{\sqrt{\left(\frac{ \alpha}{2r}\right)^{2}-1}}\right|\sim 2\left|\int_{R_{\rm max}}^{r_{\rm min}} \frac{dr}{\sqrt{\left(\frac{R_{\rm max}}{r}\right)^{2}-1}}\right|=2\sqrt{R_{ \rm max}^{2}-r_{\rm min}^{2}}. \tag{27}\]
In this limit, the bouncing time is mostly determined by \(R_{\rm max}\), while the exact value of \(r_{\rm min}\) is not that important. This is physically reasonable since in this limit, the shell's velocity relative to the center of the black hole is high when \(r\) is small. If we include the quantum gravity modification, _i.e._ using the metric in Eq. (2) instead of the Schwarzschild solution, the shell will be repelled at some minimal radius due to the extra repulsive term \(AM^{2}/r^{4}\). Thus, we have to set the \(r_{\rm min}\) to be the bouncing point inside the horizon, which is determined by the three parameters \(\{\alpha,M,A\}\). However, due to the smallness of \(AM^{2}\), the bouncing point must be deep inside the event horizon and the modifications to Eqs. (26) and (27) are small.
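The cycle time of Eq. (25) and its large-\(R_{\rm max}\) approximation, Eq. (27), can be compared directly; the sketch below uses illustrative values of \(M\), \(\alpha\) and \(r_{\rm min}\) (in the full model, \(r_{\rm min}\) would be fixed by the bounce of the metric in Eq. (2)).

```python
import numpy as np
from scipy.integrate import quad

M, alpha = 1.0, 20.0                          # dust shell with alpha > M (bound trajectory)
R_max = alpha ** 2 / (2.0 * (alpha - M))      # outer turning point implied by Eq. (24)
r_min = 0.05                                  # illustrative inner bounce radius

def drdtau(r):
    """|dr/dtau| along the shell, from Eq. (24): rdot^2 = (M/alpha + alpha/2r)^2 - 1."""
    return np.sqrt((M / alpha + alpha / (2.0 * r)) ** 2 - 1.0)

# Proper time for one full cycle, Eq. (25); the upper limit is nudged below
# R_max, where rdot -> 0 (an integrable endpoint singularity).
tau, _ = quad(lambda r: 1.0 / drdtau(r), r_min, R_max * (1.0 - 1.0e-12))
tau *= 2.0

tau_approx = 2.0 * np.sqrt(R_max ** 2 - r_min ** 2)   # large-R_max estimate, Eq. (27)
print(tau, tau_approx)
```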
### Interpretation: coordinate time and proper time
We now analyze the coordinate time difference between two slices (Fig. 8). Due to the _time-translation symmetry_, one can choose an arbitrary coordinate time (at infinity) for the space-like hypersurface (left of
Fig. 8). This means that the time difference between the \(t=t_{i}\) (that can be chosen in the sufficient past) and \(t=t_{b}\) (the bouncing time inside the horizon) is arbitrary in this setup (right of Fig. 8). According to the discussion in Sec. V.1, one may find a corresponding \(\delta\) parameter to denote the \(t=t_{b}\) hypersurface. As already mentioned in Ref. [23], the bouncing time for the distant observer is determined by how one cuts and pastes the spacetime outside the event horizon. The same argument is valid for the spacelike slicing considered in our case (see Fig. 8). Therefore, this parameter \(\delta\) is not really appropriate to parameterize the physical bouncing time. On the other hand, the bouncing time measured by the comoving observer discussed in Sec. V.2 is very different in this aspect. Based on the previous discussion, the contribution to the bouncing time around \(r_{\rm min}\) is small, so even if we cut out a certain portion of the spacetime as in Fig. 6, the corresponding proper time is only mildly affected by the cut-and-paste procedure. Interestingly, this bouncing time can be unambiguously determined in the model considered in Ref. [23] since the trajectory of the shell (or surface of the collapsing star) is intact by the designed cut (see Fig. 4 therein). One can easily construct the scenario in which two observers (the comoving and the fixed-\(r\) one) begin their journeys at the same spacetime event when the shell is at some \(R_{\rm max}\). After the whole period of the bounce, in the absence of any dissipation (as in Refs. [22] and [23]), the two observers meet each other again at the next \(R_{\rm max}\) defined for the comoving observer. It is possible that some relation exists between the parameter \(A\), which modifies the proper time of the comoving observer due to the quantum gravity correction to the Schwarzschild solution, and the parameter \(\delta\), which controls the proper time of the observer fixed at \(R_{\rm max}\).
Finally, in addition to these considerations, we need to include Hawking radiation and its back-reaction. To smoothly describe the evolution of the collapsing object from infinity to the black hole horizons, we need to study the dynamical causal structure of the spacetime from formation to evaporation, including the back-reaction on the background geometry. When a black hole model has two horizons that disappear within a finite proper time, i.e., the apparent horizon has a circular shape in the Penrose diagram, there should be a smooth way to describe the entire spacetime without any divergences; for example, see [30; 31]. Applications to the present model are left for a future research project.
## VI Discussion
In this paper, we revisited some aspects of the black hole fireworks (i.e., a black hole to white hole transition) scenario proposed in [22; 23]. We constructed an explicit model for the black hole fireworks using the cut-and-paste technique. First, we used the evolution of a time-like shell in the background of the loop quantum gravity inspired metric to model the process of gravitational collapse. Then, using the space-like shell analysis, we constructed the firework geometry. We used the well-defined thin-shell techniques, where all the relevant quantities are clearly defined. Thus, our analysis removes some subtle issues that were present in the previous literature.
We showed that the firework scenario requires specific conditions outside the event horizon, in particular the violation of the energy conditions. This can be expressed in terms of the tension of the space-like junction where
the two metrics meet. In particular, we used a rather simple and well-studied space-like junction technique to create the black to white hole bounce with a single asymptotic region. For comparison, in Ref. [23], a more complicated cut-and-paste procedure is utilized to achieve the same goal, without violating the null energy condition away from the horizon. However, such a cut corresponds to a hypersurface which changes its characteristic from spacelike to null. The tension conditions for such a scenario are highly non-trivial and might not be physically justifiable. We leave this issue for future work.
We also calculated the proper and coordinate time scales required for the black hole to white hole transition. The proper time scale is classical and hence it must be sufficiently shorter than the evaporation time scale. However, we point out that the coordinate time scale (related to the \(\delta\)-parameter in the black hole firework scenario in [23]) can be chosen arbitrarily. The bouncing time for the distant observer is determined by how one cuts and pastes the spacetimes outside the event horizon, and thus does not have any obvious connection to quantum gravity effects.
Figure 8: Left: Due to the time-translation symmetry, there can be several equivalent spacelike slices (red curves) that have a different coordinate time at infinity. Black dotted curves correspond to constant \(t\) hypersurfaces. Right: The bouncing time is the difference between \(t=t_{i}\) and \(t=t_{b}\), where \(t_{b}\) is arbitrary.
## Acknowledgment
DY and WL were supported by the National Research Foundation of Korea (Grant Nos. 2021R1C1C1008622 and 2021R1A4A5031460). DS is partially supported by the US National Science Foundation under Grants No. PHY-2014021 and PHY-2310363.
|
2305.09397 | EXPRESSNET: An Explainable Residual Slim Network for Fingerprint
Presentation Attack Detection | Presentation attack is a challenging issue that persists in the security of
automatic fingerprint recognition systems. This paper proposes a novel
explainable residual slim network that detects the presentation attack by
representing the visual features in the input fingerprint sample. The
encoder-decoder of this network along with the channel attention block converts
the input sample into its heatmap representation while the modified residual
convolutional neural network classifier discriminates between live and spoof
fingerprints. The entire architecture of the heatmap generator block and
modified ResNet classifier works together in an end-to-end manner. The
performance of the proposed model is validated on benchmark liveness detection
competition databases i.e. Livdet 2011, 2013, 2015, 2017, and 2019 and the
classification accuracy of 96.86\%, 99.84\%, 96.45\%, 96.07\%, 96.27\% are
achieved on them, respectively. The performance of the proposed model is
compared with the state-of-the-art techniques, and the proposed method
outperforms state-of-the-art methods in benchmark protocols of presentation
attack detection in terms of classification accuracy. | Anuj Rai, Somnath Dey | 2023-05-16T12:29:50Z | http://arxiv.org/abs/2305.09397v2 | # EXPRESSNET: An Explainable Residual Slim Network for Fingerprint Presentation Attack Detection
###### Abstract
Presentation attack is a challenging issue that persists in the security of automatic fingerprint recognition systems. This paper proposes a novel explainable residual slim network that detects the presentation attack by representing the visual features in the input fingerprint sample. The encoder-decoder of this network along with the channel attention block converts the input sample into its heatmap representation while the modified residual convolutional neural network classifier discriminates between live and spoof fingerprints. The entire architecture of the heatmap generator block and modified ResNet classifier works together in an end-to-end manner. The performance of the proposed model is validated on benchmark liveness detection competition databases i.e. Livdet 2011, 2013, 2015, 2017, and 2019 and the classification accuracy of 96.86%, 99.84%, 96.45%, 96.07%, 96.27% are achieved on them, respectively. The performance of the proposed model is compared with the state-of-the-art techniques, and the proposed method outperforms state-of-the-art methods in benchmark protocols of presentation attack detection in terms of classification accuracy.
Fingerprint Biometrics, Explainable Deep Learning, Presentation Attack Detection.
## 1 Introduction
An Automatic Fingerprint Recognition System (AFRS) is a user-friendly and cost-effective solution for biometric-based person recognition. It takes less time, computing resources and human effort to verify a person than other biometric recognition systems. Due to its ease of use and automation, AFRS is being used for the verification or authentication of a person in security-related applications such as Aadhaar verification, airports [24], international borders, etc. Its usage in such security-sensitive applications makes it vulnerable to various threats. A Presentation Attack (PA) is one of them; it is carried out by creating an artifact of a genuine user's finger and presenting it to the sensing device of an AFRS. PAs can be created in two ways, i.e., the non-cooperative and the cooperative method of spoofing. In the non-cooperative method, the latent fingerprint left on a surface is captured and then fabricated using a spoofing material after digitization. In the cooperative method, on the other hand, the user provides an impression of their finger to create the spoof. Apart from this, the discovery of novel spoofing materials also poses a significant challenge to the security of AFRS, as these materials are used to fabricate more realistic artifacts of fingers. Fingerprint Presentation Attack Detection (FPAD) is a countermeasure to PAs. FPAD methods can be classified into two broad categories, namely hardware-based methods and software-based methods. Hardware-based methods require additional devices for the measurement of natural properties of the finger, such as temperature, pulse rate and humidity, which makes them costly. On the other hand, software-based methods require only the fingerprint sample, which makes them user-friendly and cost-effective. Therefore, our focus is on the development of a software-based method that is able to detect PAs created with the help of known as well as unknown spoofing materials.
The state-of-the-art software-based methods are further classified as perspiration and pore-based methods [5, 27, 33], statistical and handcrafted feature-based methods [2, 11, 18, 25, 39, 40, 45, 46] and deep learning-based methods [3, 4, 6, 7, 22, 30, 41, 43]. Perspiration-based methods have proven to be insufficient because this property is affected by external temperature and other environmental factors. Along with this limitation, the feature extraction process of these methods requires multiple impressions of the same finger, which makes them less user-friendly. Pore-based methods require the input samples to be of high resolution (\(>\)1000 pixels per inch), which increases the cost of the FPAD system. Similarly, the quality of the sensing device impacts the performance of statistical and handcrafted feature-based methods. In recent times, deep learning approaches have been adopted by various researchers due to their superior image classification capability, since their sets of convolutional filters extract minute features from input fingerprint samples. However, although Convolutional Neural Networks (CNNs) have an unmatched capability of extracting discriminating features from general images, they do not exhibit the same capability on fingerprint databases. The lack of texture and color information in fingerprint images is one of the possible reasons behind this. The depth of these networks makes them suffer from the vanishing gradient problem due to the lack of discriminating information. Hence, some pre-processing of fingerprint samples is required to obtain good classification results.
In this paper, we propose a novel end-to-end architecture that consists of a heatmap generator and a modified ResNet classifier. The Heatmap generator is composed of an encoder-decoder block and a channel attention block. It converts the input sample into a heatmap by emphasizing the important features present in an input fingerprint sample. The encoder-decoder block highlights the features present in the region of interest in an image while the channel attention block finds discriminant features in the sample. The outcome of these aforementioned blocks is a single-channel heatmap which is fed to the modified ResNet classifier for the classification. The ResNet architecture [19] is modified to make it less computationally expensive while being trained and tested on the fingerprint samples. The modification is done by removing the redundant convolutional blocks while maintaining their spatial properties and reducing the number of learnable parameters as well. The proposed EXPlainable RESIdual Slim NETwork (EXPRESSNET) model is validated using Liveness Detection Competition (LivDet) 2011, 2013, 2015, 2017 and 2019 databases. It outperforms existing FPAD methods in intra-sensor same-material and unknown-material protocols. The main contributions of this paper are discussed as follows.
1. To the best of our knowledge, we are the first to introduce the concept of explainability of deep CNN in the area of FPAD.
2. The proposed model highlights the driving features of input fingerprint samples by converting them into a single-channel heatmap. In this way, discriminating features such as wetness, ridge and valley clarity and scars are highlighted for better classification.
3. The proposed heatmap generator block can be attached to any CNN classifier to enhance its classification performance.
4. The spatial properties of ResNet's feature maps are preserved along with a reduction in the number of learnable parameters by proposing modifications in the original ResNet architecture.
5. A detailed comparison of the proposed model has been done against the spoofs created using cooperative and non-co-operative subjects as well as known and unknown spoofing materials.
The remainder of this paper is organized as follows. Section 2 discusses existing methodologies suggested by various researchers. Section 3 describes the design and working of the proposed architecture. In section 4, experimental results, as well as comparative analysis are given. Finally, the paper is concluded in section 6.
## 2 Related Work
FPAD is an essential tool for the AFRS to deal with PAs. As a countermeasure to PAs, researchers have proposed a variety of software-based solutions, which may be further categorized as pore and perspiration-based methods, statistical and handcrafted feature-based methods and deep learning-based methods. This section discusses the most recent approaches that fall into these categories, as well as their advantages and limitations.
### _Perspiration and pore based-methods_
The presence of small holes or pores in human skin causes perspiration in fingers. This natural property is not present in the spoofs fabricated with different materials. An initial study was proposed by Derakshani et al. [10]. They utilized the diffusion pattern of sweat as a feature to discriminate between live and spoof fingerprints. Later, Abhyankar et al. [2] proposed a wavelet-based method that utilizes the sweat feature of the fingerprint to detect PAs. Since pores are hard to reproduce in the spoofs at the time of fabrication, the number of pores may differ between a live fingerprint and its spoofs created with different materials. This dissimilarity is utilized as a discriminating feature by Espinoza [13]. The proposed method is validated using a custom-made fingerprint database. Similarly, Marcialis et al. [28] captured two fingerprint impressions at an interval of five seconds and then detected the pores in both impressions. The proposed method utilizes the number of pores present in both impressions as a feature for detecting PAs. The proposed method is validated using a custom-made fingerprint database that consists of 8960 live and 7760 spoof images. Although the perspiration pattern can be used for the detection of PAs, its presence depends on the ambient temperature. A live finger in a dry environment does not exhibit this property, which causes an FPAD system relying on this feature to discard the live sample. Moreover, the extraction of pores has been shown to be expensive, since the fingerprint sensor must be capable of capturing high-definition samples (\(\geq\)1000 pixels per inch). For the reasons stated above, perspiration and pore-based approaches are less user-friendly and cost-effective.
### _Statistical and handcrafted feature based-methods_
The skin of a finger and its counterpart, the fabricated spoofs, have different natural properties such as color, wetness and elasticity level which are reflected in the quality of the samples captured with fingerprint sensors. Statistical and handcrafted feature-based methods use quality features of the fingerprints for the detection of their liveness. Choi et al. [5] extracted histogram, directional contrast, ridge thickness and ridge signal features for detecting PAs. They utilized these features for the training of an SVM classifier. The proposed method is validated using the custom-made fingerprint database. Similarly, Park et al. [32] utilized statistical features including standard deviation, variance, skewness, kurtosis, hyper-skewness and hyper-flatness along with three additional features i.e. average brightness, standard deviation and differential image for the training of SVM to detect PAs. They validated their method using the ATVSFFp database which contains 272 real fingerprints and 270 spoof fingerprint samples. Further, Xia et al. [45] extracted second and third-order occurrence of gradients from fingerprint samples. They used these features for the training of the SVM classifier. The proposed method is validated using LivDet 2009 and 2011 databases. In another work [46], Xia et al. suggested a novel image descriptor that extracts intensity variance along with gradient properties of the fingerprint samples to form a feature vector. This feature vector is further used for the training of the SVM classifier. The proposed work is validated using LivDet 2011 and
2013 databases. Yuan et al. [50] in continuation to the work of [46], proposed a method that utilizes gradient property for the detection of the PAs. It creates two co-occurrence matrices using the Laplacian operator that compute image gradient values for different quantization operators. Further, The matrices are utilized as a feature vector for the training of the back-propagation neural network. The suggested method is validated using LivDet 2013 database. Since the live finger and its spoof have different levels of elasticity, it is reflected in the varying width of the ridges and valleys as well as the quality of their image samples. Sharma et al. [40] extracted some quality features such as Ridge and Valley Clarity (RVC), Ridge and Valley Smoothness (RVS), Frequency Domain Analysis (FDA) and Orientation Certainty Level (OCL) which are combined together for the training of Random-Forest classifier. The proposed method is validated on LivDet 2009, 2011, 2013 and 2015 databases. Sharma et al. [39] suggested a novel feature named the Local Adaptive Binary Pattern (LABP) which is the modification to the existing Local Binary Pattern (LBP). They have combined this feature with existing BSIF and Complete Local Binary Pattern (CLBP) and used them for the training of the SVM classifier. The proposed method is validated on LivDet 2009, 2011, 2013 and 2015 databases. Ghiani et al. [14] utilized BSIF which is obtained by applying a set of pre-defined filters whose output is then converted to a binary sequence. This binary sequence is used as a feature vector for the training of the SVM classifier. The proposed method is tested on LivDet 2011 database. The varying elasticity of the live fingers and corresponding spoofs causes a significant difference in their shapes and textures also. Further, Dubey et al. [11] suggested a shape and texture feature-based method. They utilized Speeded Up Robust Feature (SURF) and Pyramid extension of Histogram of Gradient (PHOG) to extract shape information from the fingerprint sample. Along with the aforementioned features, the Gabor wavelet is used by them to extract the texture information. The proposed method is validated using LivDet 2011 and 2013 databases. Ajita et al. [36] proposed a novel method for the detection of PAs created with unknown materials. They suggested the use of an Adaptive-Boost (AdaBoost) multi-class classifier that classifies an input fingerprint as live, spoof and unknown. The Fingerprint samples detected as 'unknown' are further used to train the classifier to detect their presence in the future. The proposed method is tested on LivDet 2011 database. In continuation to their previous work [36], Ajita et al. [37] suggested the use of a Weibull-calibrated SVM classifier for the detection of PAs. This SVM is a combination of 1-class as well as binary SVM. This modification shows a significant improvement as compared with the results on LivDet 2011 database. Kim et al. [25] proposed a novel image descriptor that utilizes the local coherence of the fingerprint sample as a feature for the training of SVM. The proposed method is validated using ATVSFP and LivDet 2009, 2011, 2013 and 2015 databases. The efficacy of these methods depends on the quality of the input fingerprint sample which further depends on the sensing device. Some of the aforementioned methods [25, 39, 40] have shown remarkable performance against the PAs created using known fabrication materials but do not resemble the same against the spoofs created using novel materials.
### _Deep learning based-methods_
Deep CNNs can extract minute information from image samples since they have convolutional layers. These models have shown excellent classification capabilities when evaluated on imagenet [8], CIFAR [44] and MNIST [9] databases. This benefit led researchers to use CNNs in the detection of PAs as well. This section discusses state-of-the-art deep learning-based FPAD methods. Arora et al. [4] proposed a robust framework to detect presentation attacks in fingerprint biometric systems that involves contrast enhancement using histogram equalization. Fingerprint samples after preprocessing are fed to the VGG classifier. The proposed work is validated on benchmark fingerprint databases which include FVC 2006, ATVSFP, Finger vein data-set, LivDet 2013 and 2015 databases. Similarly, Nogueira et al. [30] utilized pre-trained CNN architectures using transfer learning. Their method involves existing deep CNN architectures such as VGG, Alexnet and CNN with SVM. The proposed method is tested on LivDet 2009, 2011 and 2013 databases. Uliyan et al. [43] proposed deep features-based methods for the detection of PAs. It utilizes a Deep Boltzmann Machine (DBM) for the extraction of features from fingerprint images. DBM has been utilized by them to find the complex relationship among the features. The proposed work is validated using benchmark fingerprint databases. Chugh et al. [7] suggested a deep learning-based method that uses minutiae-centered fingerprint patches for the training and testing of a MobileNet classifier. A fingerprint is divided into a finite number of patches based on the number of minutiae points present in it. Extracted patches are fed to a CNN model which generates a liveness score for every patch. The liveness score for an input sample is computed using score-level fusion. This proposed method is validated using LivDet 2011, 2013 and 2015 databases and Michigan State University's (MSU) FPAD database. Since, novel fabrication materials are discovered every day, it is hard to generalize an FPAD model to perform FPAD in an open-set or unknown-material protocol. In continuation of their previous work [6], Chugh et al. [7] suggested another method for the detection of spoofs fabricated using unknown materials. They proposed an image synthesis technique to create new fingerprint patches which contribute to better training of the MobileNet classifier. The proposed method is validated using LivDet 2017, ATVSFP and MSU-FPAD databases. Zhang et al. [52] suggested a CNN architecture that outperforms all the feature-based methods in terms of classification accuracy. They proposed an architecture that consists of a series of improved residual connected blocks. This modified architecture results in the detection of PAs without over-fitting and less computing time. The proposed method is validated on Livdet 2013 and 2015 databases.
### _Explainability in Deep Learning_
The term explainability refers to any information that helps the user understand the pattern of the decisions made by the deep learning model for the input samples belonging to different classes. In recent times, various surveys [34, 38]
have been proposed to enlighten this area. Explainability in DNNs can be achieved in three ways, including visualization methods, model distillation and intrinsic methods. Visualization methods, when applied to image classifiers, are further classified as backpropagation-based methods [35], activation maximization methods [12], deconvolution methods [51] and layer-wise relevance propagation-based methods [26], etc. Deconvolution methods utilize inverse convolution operations to visualize high-layer features present in the input image samples. Amir et al. [42] utilized the deconvolution method in an attempt to emphasize the important features present in the input sample. The proposed method is tested on the CIFAR, MNIST and tiny-ImageNet databases, and its performance is compared with state-of-the-art explainability methods. This method performs well on images belonging to different classes based on the shape, color and texture of the objects present in them. Since live and spoof fingerprint samples cannot be discriminated based on these features, the deconvolution method needs to be enhanced for fingerprint databases.
The detailed literature review indicates that deep learning-based methods show remarkable performance when applied to general image classification problems, but they are not sufficient when utilized for live and spoof fingerprint samples. One of the possible reasons may be the limited amount of discriminating features in fingerprint samples. We have developed a novel approach that highlights the key features that play a vital role in the discrimination of live and spoof fingerprint samples without imposing a computational overhead on the entire FPAD system, as discussed in the following sections.
## 3 Proposed Work
In this paper, we propose a novel architecture to detect PAs by generating heatmaps. The architecture, shown in Fig. 1, consists of the encoder-decoder and the channel attention block for heatmap generation and a modified ResNet for classification. The first component highlights the regions as well as the discriminating features that play a vital role in the classification process. In this way, the classifier is empowered for better classification of the input samples. The details of the components of the EXPRESSNET architecture are given in the following subsections.
### _Preprocessing_
The samples captured with different sensing devices have different spatial dimensions. To overcome this problem, the fingerprint samples are resized to \(512\times 512\). This modification, in turn, increases the model training time while having no effect on the number of trainable parameters.
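A minimal preprocessing sketch is given below (TensorFlow is used for the implementation described in Sec. 4.3; the interpolation method is our assumption, as the text only fixes the target size).

```python
import tensorflow as tf

def preprocess(sample):
    """Resize a fingerprint sample of arbitrary spatial size to 512 x 512.
    Only the target size is specified in the text; the interpolation
    method (bilinear, the TensorFlow default) is assumed here."""
    return tf.image.resize(sample, (512, 512))   # (H, W, C) -> (512, 512, C)
```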
### _Heatmap Generator Block_
The resized input sample is passed to the heatmap generator block which constitutes of encoder-decoder, channel attention block and heatmap generation layers. The details of the aforementioned blocks are given in the following subsections.
#### 3.2.1 **Encoder-Decoder Block**
The proposed encoder-decoder block first down-sizes and then up-sizes the input feature maps to highlight the features present in them. In other words, the encoder extracts relevant information and the decoder shows the driving features present in the feature maps while retaining their spatial properties. The encoder part is composed of a convolutional operation along with a pooling operation. The convolutional filter extracts features from the sample while the pooling operation downsamples the input sample. The output of the encoder block is formulated as Eq. (1).
\[Encoder_{out}=Maxpool\bigg{[}\sum_{X,x=0}^{M,m}\sum_{Y,y=0}^{N,n}I_{X,Y} \times K_{x,y}\bigg{]} \tag{1}\]
Fig. 1: Block diagram of the EXPRESSNET architecture
Here, \(I_{X,Y}\) denotes the input fingerprint sample of dimension \(M\times N\) and \(K_{x,y}\) denotes the convolutional filter of size \(x\times y\). After convolution, the max-pooling operation is used to downsample the output feature maps. \(Encoder_{out}\) denotes the output feature maps. The output of the encoder is passed to the decoder to enhance the features. In [42], the decoder consists of a transposed convolution operator, which is computationally more expensive. To keep this cost low, we have constituted the decoder block using an up-sampling operation followed by a convolutional operation. The decoder block can be formulated as Eq. (2).
\[Decoder_{out}=\bigg{[}\sum_{X,x=0}^{M,m}\sum_{Y,y=0}^{N,n}\left(( Upsample(Encoder_{out})\right. \tag{2}\] \[\times K_{x,y})\bigg{]}\]
Here, \(Decoder_{out}\) is the output of the encoder-decoder block which is a set of '\(f\)' feature maps of size \(M\times N\) each. In this model, the value of '\(f\)' is kept as 32. These output feature maps have highlighted pixels that contribute to the classification of the input sample. The feature maps are fed to the channel attention block which is described in the following subsection.
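A minimal Keras sketch of the encoder-decoder of Eqs. (1)-(2) is given below. The kernel sizes, the ReLU activations and the single down/up-sampling stage are assumptions, since the text only fixes the structure (convolution + max-pooling, then up-sampling + convolution), the output resolution \(M\times N\) and the \(f=32\) output feature maps.

```python
import tensorflow as tf
from tensorflow.keras import layers

def encoder_decoder(x, f=32):
    """Encoder-decoder block of Eqs. (1)-(2)."""
    # Encoder, Eq. (1): convolution followed by max-pooling.
    e = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    e = layers.MaxPooling2D(pool_size=2)(e)
    # Decoder, Eq. (2): up-sampling followed by convolution, which is cheaper
    # than the transposed convolution used in [42].
    d = layers.UpSampling2D(size=2)(e)
    d = layers.Conv2D(f, 3, padding="same", activation="relu")(d)
    return d   # f feature maps at the original M x N resolution

inputs = tf.keras.Input(shape=(512, 512, 1))   # pre-processed fingerprint sample
features = encoder_decoder(inputs)
```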
#### 3.2.2 **Channel Attention Block (CAB)**
The CAB produces an attention map that exploits the inter-channel relationship of features. The goal of the encoder-decoder block is to find "where" the important feature is present while the CAB is responsible for finding "what" is important in the image. The calculation of channel attention is formulated as per Eq. (3).
\[CAB_{out}=MLP(AveragePool(Decoder_{out})) \tag{3}\]
Here, Multi-Layer Perceptron (MLP) is a collection of two dense layers. The formation of MLP is denoted with Eq. (4).
\[MLP(x)=ReLU(W_{1}\sigma(W_{0}(x))) \tag{4}\]
Here, \(x\) is the pooled input of Eq. (3), \(W_{1}\) and \(W_{0}\) represent the weights of the fully-connected layers, and ReLU and Sigmoid are the activation functions applied to those layers, respectively. The channel attention map is then multiplied by the feature maps generated by the encoder-decoder block. The feature maps with highlighted information are then merged together to form a single-channel heatmap. A convolutional filter is utilized for this purpose, as described in the following subsection.
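A corresponding sketch of the channel attention block of Eqs. (3)-(4) is given below; the reduction ratio of the hidden layer is an assumption, and the activation ordering (Sigmoid on \(W_{0}\), ReLU on \(W_{1}\)) follows Eq. (4) as written.

```python
from tensorflow.keras import layers

def channel_attention(feature_maps, f=32, reduction=4):
    """Channel attention block of Eqs. (3)-(4): global average pooling, a
    two-layer MLP, and channel-wise re-weighting of the f feature maps."""
    pooled = layers.GlobalAveragePooling2D()(feature_maps)                 # AveragePool in Eq. (3)
    hidden = layers.Dense(f // reduction, activation="sigmoid")(pooled)    # W_0 with Sigmoid
    weights = layers.Dense(f, activation="relu")(hidden)                   # W_1 with ReLU
    weights = layers.Reshape((1, 1, f))(weights)
    return layers.Multiply()([feature_maps, weights])                      # attended feature maps
```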
#### 3.2.3 **Heatmap Generation Layer**
The output of the channel attention block is a set of feature maps that have important features highlighted. These feature maps are then merged to form a single-channel heatmap. For this purpose, a convolutional filter is used that takes the '\(f\)' feature maps as input and produces a single heatmap as output, as formulated in Eq. (5). This operation is followed by the Tanh activation function, which maps input values to the range (-1 to +1). Figure 2 depicts live and spoof fingerprint samples belonging to the LivDet 2011, Biometrika dataset and the respective heatmaps generated by the heatmap generator module.
\[Heatmap=Tanh\bigg{[}\sum_{X,x=0}^{M,m}\sum_{Y,y=0}^{N,n}(Decoder_{out_{f}} \times CAB_{out})\bigg{]} \tag{5}\]
As seen in Fig. 2, it is evident that discriminating features such as wetness, noise, scars, clarity of ridges and valley widths are highlighted by the proposed heatmap generator. The output heatmap is fed as input to the classifier. For the classification of fingerprint heatmaps, a residual CNN is chosen as the classifier and, to reduce the computational cost, its architecture has been modified. The details of the original and modified ResNet classifiers are given in the following subsection.
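Combining the two sketches above with the merging layer of Eq. (5) gives the full heatmap generator; the \(3\times 3\) kernel of the merging convolution is an assumption, as the text only states that a single convolutional filter fuses the \(f\) attended maps into one Tanh-activated heatmap.

```python
import tensorflow as tf
from tensorflow.keras import layers

def heatmap_generator(x, f=32):
    """Heatmap generator: encoder-decoder, channel attention and the merging
    convolution of Eq. (5). Reuses encoder_decoder() and channel_attention()
    from the sketches above."""
    features = encoder_decoder(x, f=f)
    attended = channel_attention(features, f=f)
    return layers.Conv2D(1, 3, padding="same", activation="tanh")(attended)   # Eq. (5)

inputs = tf.keras.Input(shape=(512, 512, 1))
heatmap = heatmap_generator(inputs)   # single-channel heatmap of the input sample
```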
### _Modified Residual CNN (Slim-ResNet) Classifier_
The process of highlighting the driving features by introducing the encoder-decoder and channel attention blocks imposes a computational overhead on the entire system, while an FPAD system should take a minimum amount of time to classify the input fingerprint sample. We reduced the depth of the chosen CNN architecture without tampering with its spatial properties to address the overhead imposed by the heatmap generator block. The original ResNet architecture consists of four building blocks, each having a set of three convolutional layers. In ResNet-50, the first, second, third and fourth blocks are repeated 3, 4, 6 and 3 times, respectively. In this way, the total number of convolutional layers in it is 48 (\(3\times 3+3\times 4+3\times 6+3\times 3\)). This architecture was proposed to deal with the problem of vanishing gradients that occurs when CNNs are trained with images that have fewer features. The skip connections between the blocks maintain the gradient and allow the parameters to keep learning for better classification. The depth of the ResNet can be reduced in two ways, i.e., removing the entire last block or reducing the repetitions of all the blocks.
Fig. 2: Live and spoof samples fabricated with various materials along with their heatmaps generated by the proposed heatmap generator
In the first approach, the number of feature maps is reduced as we remove the last convolutional block along with its repetitions. The major disadvantage of this approach is that we get bigger-sized feature maps at the end of the architecture, resulting in decreased classification performance. We choose to use the second strategy, which reduces the recurrence of each block, resulting in fewer layers as compared with the original ResNet architecture. In this approach, there are 30 convolutional layers, since the first convolutional block repeats twice, the second block twice, the third block four times and the last block twice. As we obtain feature maps of the same size as in the original architecture at every level, removing the layers in this manner preserves the spatial attributes of the feature maps. The convolutional base of the Slim-ResNet architecture produces 2048 feature maps of the size of \(7\times 7\) pixels each. Since the input samples are resized to \(512\times 512\), we get feature maps of the size of \(16\times 16\). The output feature maps undergo a pooling operation, which results in an array of 2048 values. To downsample the output of the convolutional base and make it suitable for binary classification, three fully-connected layers with 512, 256 and 1 neurons, respectively, are added. The original and the modified ResNet architectures are depicted in Fig. 3.
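A Keras sketch of the Slim-ResNet classifier is given below. The block repetitions (2, 2, 4, 2), the three dense layers (512, 256 and 1 neurons) and the 2048-dimensional pooled feature vector follow the description above; the bottleneck filter widths (64/128/256/512, expanded by a factor of four) and the batch-normalization placement are assumptions carried over from the standard ResNet-50 design.

```python
import tensorflow as tf
from tensorflow.keras import layers

def bottleneck(x, filters, stride=1):
    """ResNet bottleneck block: three convolutions plus a skip connection."""
    shortcut = x
    if stride != 1 or x.shape[-1] != 4 * filters:
        shortcut = layers.Conv2D(4 * filters, 1, strides=stride)(x)
        shortcut = layers.BatchNormalization()(shortcut)
    y = layers.Conv2D(filters, 1, strides=stride)(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(4 * filters, 1)(y)
    y = layers.BatchNormalization()(y)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

def slim_resnet(heatmap):
    """Slim-ResNet: block repetitions 2, 2, 4, 2 (30 convolutional layers in
    the residual blocks), global pooling to a 2048-dimensional vector, and
    three fully-connected layers with 512, 256 and 1 neurons."""
    x = layers.Conv2D(64, 7, strides=2, padding="same", activation="relu")(heatmap)
    x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
    for filters, reps, first_stride in [(64, 2, 1), (128, 2, 2), (256, 4, 2), (512, 2, 2)]:
        for i in range(reps):
            x = bottleneck(x, filters, stride=first_stride if i == 0 else 1)
    x = layers.GlobalAveragePooling2D()(x)            # 16 x 16 x 2048 -> 2048 values
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dense(256, activation="relu")(x)
    return layers.Dense(1, activation="sigmoid")(x)   # live vs. spoof score

heatmap_in = tf.keras.Input(shape=(512, 512, 1))      # single-channel heatmap
classifier = tf.keras.Model(heatmap_in, slim_resnet(heatmap_in))
```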
## 4 Experimental Setup
In this section, we discuss the different databases used in our experimental evaluation, the performance metrics for evaluation and the implementation details of our method.
### _Database_
The performance of the proposed model is validated using the LivDet 2011, 2013, 2015, 2017 and 2019 databases. Each database is prepared with multiple sensing devices, and the training and testing fingerprint samples are arranged in separate groups of datasets. The details of all the utilized databases are mentioned in Table I, which describes the sensors, the number of live and spoof samples and the materials utilized for the fabrication of spoofs. The sensors Biometrika (Hi-Scan), Italdata, Digital Persona, Sagem, Crossmatch and GreenBit are optical sensors, while Orcanthus is a thermal sensor. The samples captured with Orcanthus contain noise and scars, making them hard for an FPAD model to classify.
### _Performance Metrics_
The performance of the proposed model is measured using the ISO/IEC IS 30107 criteria [1]. The Attack Presentation Classification Error Rate (APCER) shows the percentage of misclassified spoof fingerprint images, and its counterpart, the Bonafide Presentation Classification Error Rate (BPCER), shows the percentage of misclassified live fingerprint images. APCER and BPCER are denoted by Eq. (6) and Eq. (7), respectively.
\[APCER=\frac{\text{Number of mis-classified fake samples}}{\text{Total fake samples}}\times 100 \tag{6}\]
\[BPCER=\frac{\text{Number of mis-classified live samples}}{\text{Total live samples}}\times 100 \tag{7}\]
Fig. 3: Block diagram of original and modified ResNet architecture
The Average classification error (ACE) is calculated by taking an average of APCER and BPCER and is used to evaluate the system's overall performance. Equation (8) represents the formulation of ACE.
\[ACE=\frac{APCER+BPCER}{2} \tag{8}\]
The ACE is further utilized to derive the accuracy of the proposed model which is formulated as Eq. (9).
\[Accuracy=100-ACE \tag{9}\]
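The error metrics of Eqs. (6)-(9) translate directly into code; a small illustrative helper is:

```python
def fpad_metrics(misclassified_spoof, total_spoof, misclassified_live, total_live):
    """APCER, BPCER, ACE and accuracy as defined in Eqs. (6)-(9)."""
    apcer = 100.0 * misclassified_spoof / total_spoof    # Eq. (6)
    bpcer = 100.0 * misclassified_live / total_live      # Eq. (7)
    ace = (apcer + bpcer) / 2.0                          # Eq. (8)
    return apcer, bpcer, ace, 100.0 - ace                # Eq. (9): accuracy
```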
### _Implementation Details_
The proposed algorithm is implemented in Python using the TensorFlow-Keras library. All training and testing have been done on an NVIDIA Tesla P100 GPU. Each model has been trained from scratch for 250 epochs, which took around 10-12 hours to converge. The learning rate and batch size are kept as 0.0001 and 8, respectively.
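Only the learning rate, batch size and number of epochs are fixed by the text; in the sketch below, the Adam optimizer and the binary cross-entropy loss are our assumptions, and `model`, `x_train` and `y_train` are hypothetical placeholders for the end-to-end EXPRESSNET model and the training data.

```python
import tensorflow as tf

# Hyper-parameters stated in the text: learning rate 0.0001, batch size 8,
# 250 epochs. The optimizer and loss are assumptions for this sketch;
# `model`, `x_train` and `y_train` are hypothetical placeholders.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=8, epochs=250)
```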
## 5 Experimental Results and Comparative Analysis
### _Experimental Results_
The performance of the proposed model is validated in two different benchmark protocols, including intra-sensor and known spoof material and intra-sensor and unknown spoof material based on the arrangement of training and testing spoof samples captured with multiple devices. A description of these protocols along with the findings of the proposed method is discussed in the following subsections.
#### 5.1.1 **Intra-Sensor and Known Spoof Material**
In this experimental setup, the training and testing fingerprint samples are captured using the same sensing device. The spoof samples belonging to both training and testing datasets are fabricated with the same spoofing materials. LivDet 2011 and 2013 are prepared according to this setup while LivDet 2015 partially belongs to this category as two-thirds of the testing samples are captured using known spoof materials. The results on LivDet 2011 and 2013 databases are reported in Table II. Table II indicates that the proposed model attains an average BPCER of 3.50%, APCER of 2.79% and ACE of 3.14% while being tested on the LivDet 2011 database. In the same protocol, the model achieves a BPCER of 0.15%, APCER of 0.17% and ACE of 0.16% while being tested on the LivDet 2013 database. The results on LivDet 2015 are reported in Table III which indicates that the proposed model achieves an average BPCER of 3.23% and APCER of 2.91% as mentioned by the column "APCER (Known)".
#### 5.1.2 **Intra-Sensor and Unknown Spoof Material**
In this experimental setup, the fingerprint samples belonging to the training and testing datasets are captured using the same sensing device; however, the spoof samples in the two datasets are fabricated using different materials. Validation in this protocol measures the robustness of the FPAD system in defending the AFRS in a real-world scenario, since an intruder can present an artifact of a user's fingerprint made with newly discovered fabrication materials that are unseen by the FPAD model. LivDet 2017 and 2019 are prepared in this way, with the training and testing spoof samples fabricated from different materials. The findings of the proposed method on these databases are reported in Table IV, which shows that the proposed model achieves an average BPCER of 4.70%, APCER of 3.28% and ACE of 3.92% on the LivDet 2017 database. Similarly, the proposed model classifies the live and spoof samples with errors of 4.68% and 2.96%, respectively, on LivDet 2019. The proposed method also confronts the spoof samples present in the LivDet 2015 database with an average APCER of 5.82%, as mentioned in the column "APCER (unknown)" of Table III.
TABLE I: Details of the utilized LivDet databases, listing for each sensor the numbers of live and spoof samples and the materials used to fabricate the spoofs (e.g. Ecoflex, gelatine, latex, silgum, wood glue, liquid Ecoflex and RTV).
#### 5.1.3 **Discussion**
The properties of the live and spoof fingerprint samples differ due to the lack of moisture in the spoof. Apart from that, the spoof samples include noise, scars and uneven width of ridges and valleys that are introduced during the fabrication process. These abnormalities are emphasized by the proposed heatmap generator which plays an important role in the detection of PAs. The findings of the proposed method are compared with existing methods tested on benchmark databases which are mentioned in the following subsection.
### _Comparative Analysis_
The findings of the proposed method are compared with state-of-the-art approaches in several benchmark settings. A detailed comparative analysis is given in the following subsections.
#### 5.2.1 **Comparison with existing methods on LivDet 2011 database**
The performance of the proposed model is compared with state-of-the-art methods tested on the LivDet 2011 database in Table V. As per Table V, the proposed method outperforms the methods discussed in [46, 11, 49, 20, 6, 18, 30] on the fingerprint samples collected with the Biometrika, Digital Persona and Sagem sensors. The spoof fingerprint samples in this database were obtained using the cooperative spoofing approach, resulting in effective spoof samples that can readily deceive a CNN-based FPAD model. The suggested heatmap generator emphasizes the presence of moisture in the input fingerprint data; since spoof samples lack this feature, they are easily spotted by the classifier. This advantage elevates the suggested technique over handcrafted feature-based and deep CNN-based FPAD approaches. The proposed method attains an overall classification accuracy of 96.86%.
#### 5.2.2 **Comparison with existing methods on LivDet 2013 database**
The findings of the proposed method are compared with the methods tested on the LivDet 2013 database. This database is captured using the non-cooperative method of spoofing, in which latent fingerprints left on glass, wood or other smooth surfaces are used to fabricate the spoofs. This process adds a significant amount of noise, scars and other irregularities to the spoofs, which are highlighted by the heatmap generator. Table VI shows a detailed comparison of the proposed method's performance with state-of-the-art methods validated on the LivDet 2013 database. It is evident that the proposed method performs better than the methods discussed in [49, 20, 53, 32, 17, 21, 50, 23, 43, 30, 3], and [6], when tested on the datasets captured with the Biometrika and Italdata sensors.
#### 5.2.3 **Comparison with existing methods on LivDet 2015 database**
The LivDet 2015 database is composed of spoof samples captured with known and unknown spoofing materials. The detailed comparison in Table VII clearly indicates that the classification performance of the proposed method is better than that of the methods discussed in [32, 40, 29, 43] and [25]. The heatmap generator finds discriminating features that result in better classification accuracy than state-of-the-art deep CNN-based approaches.
#### 5.2.4 **Comparison with existing methods on LivDet 2017 database**
The performance of the proposed method is also compared with state-of-the-art methods tested on the LivDet 2017 database. The training and testing spoof samples in this database are fabricated using different spoofing materials, which makes them more challenging for an FPAD model to classify. However, the fabrication materials available for the
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline **Method** & **Accuracy (Biometrika)** & **Accuracy (Italdata)** & **Avg.** \\ \hline Yuan et al. [49] & 96.45 & 97.65 & 97.05 \\ \hline Jian et al. [20] & 99.25 & 99.40 & 99.32 \\ \hline Zhang et al. [53] & 99.53 & 96.99 & 98.26 \\ \hline Park et al. [32] & 99.15 & 98.75 & 98.95 \\ \hline Gottschlich et al. [17] & 96.10 & 98.30 & 97.0 \\ \hline Johnson et al. [21] & 98.0 & 98.4 & 98.20 \\ \hline Yuan et al. [50] & 95.65 & 98.6 & 97.12 \\ \hline Jung et al. [23] & 94.12 & 97.92 & 96.02 \\ \hline Uliyan et al. [43] & 96.0 & 94.50 & 95.25 \\ \hline Nogueira et al. [30] & 99.20 & 97.7 & 98.45 \\ \hline Chugh et al. [6] & 99.80 & 99.70 & 99.75 \\ \hline Anusha et al. [3] & 99.76 & 99.68 & 99.72 \\ \hline
**EXPRESSNET** & **99.85** & **99.83** & **99.84** \\ \hline \end{tabular}
\end{table} TABLE VI: Comparison with state-of-the-art methods on LivDet 2013 in intra-sensor protocol
\begin{table}
\begin{tabular}{|l|l|c|c|c|} \hline
**Database** & **Sensor** & **BPCER** & **APCER** & **ACE (\%)** \\ \hline \multirow{4}{*}{**LivDet 2017**} & **Digital Persona** & 5.14 & 3.4 & 4.2 \\ \cline{2-5} & **Orcanthus** & 3.36 & 2.86 & 3.09 \\ \cline{2-5} & **Greenbit** & 5.59 & 3.58 & 4.49 \\ \cline{2-5} & **Average** & **4.70** & **3.28** & **3.93** \\ \hline \multirow{4}{*}{**LivDet 2019**} & **Digital Persona** & 7.6 & 7 & 7.3 \\ \cline{2-5} & **Greenbit** & 5.3 & 1.23 & 3.08 \\ \cline{1-1} \cline{2-5} & **Orcanthus** & 1.12 & 0.65 & 0.87 \\ \cline{1-1} \cline{2-5} & **Average** & **4.67** & **2.95** & **3.75** \\ \hline \end{tabular}
\end{table} TABLE IV: The performance on LivDet 2017 and 2019 databases on intra-sensor unknown materials protocol
\begin{table}
\begin{tabular}{|l|l|c|c|c|} \hline
**Method** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Avg.** \\ \hline Xia et al. [66] & 93.55 & 96.2 & 88.25 & 96.6 & 93.50 \\ \hline Dubrey et al. [11] & 92.11 & 93.75 & 91.9 & 94.64 & 93.1 \\ \hline Yuan et al. [49] & 97.05 & 89.84 & 97.8 & 92.01 & 93.58 \\ \hline Graganello et al. [18] & 93.1 & 92.00 & 87.35 & 96.35 & 92.2 \\ \hline Nogueira et al. [30] & 91.8 & 98.1 & 94.51 & 95.36 & 95.04 \\ \hline Yuan et al. [49] & 90.08 & 98.05 & 87.65 & 97.1 & 93.55 \\ \hline Tian et al. [49] & 90.75 & 98.4 & 94.1 & 98.3 & 96.27 \\ \hline Shamma et al. [49] & 92.7 & 94.4 & 88.6 & 93.3 & 92.25 \\ \hline Chung et al. [49] & 96.76 & 95.9 & 97.58 & 96.1 & 98.35 \\ \hline EXPRESSNET & 95.48 & 98.35 & **94.35** & **96.5** & **96.56** \\ \hline \end{tabular}
\end{table} TABLE V: Comparison with state-of-the-art methods on LivDet 2011 in intra-sensor protocol
spoofing do not resemble the moisture present in the live fingerprint samples. The proposed method is able to find the discriminating features with the help of the heatmap generator. Table VIII shows that the proposed method performs better than the methods discussed in [7, 6, 53] and [16] when tested on the fingerprint samples captured with the Orcanthus and Digital Persona sensors. The proposed method also outperforms the aforementioned methods with an average classification accuracy of 96.07%. This comparison reveals that the heatmap generator can produce a heatmap with discriminating information regardless of the material used for fabrication.
#### 5.2.5 Comparison with existing methods on LivDet 2019 database
Table IX reports a comparison of the proposed model's findings with state-of-the-art methods tested on the LivDet 2019 database. It shows that the proposed method outperforms the method discussed in [6] as well as the participating FPAD algorithms, i.e., Jung CNN, JWL LivDet and ZJUT Det, when tested on the samples collected with the Orcanthus and Digital Persona sensors. The proposed method also outperforms the aforementioned methods in terms of average classification accuracy.
The comparative analysis of the proposed method on the various LivDet databases indicates that it consistently performs better in the intra-sensor FPAD paradigm, regardless of the sensor and of whether the spoof samples are fabricated using known or unknown materials. The heatmap generator enables the classifier to learn better than traditional CNN-based approaches.
### _Evaluation of EXPRESSNET in High-Security Systems_
An FPAD model must also be tested for its performance in high-security systems, since its objective is not only to achieve the minimum APCER, BPCER and ACE. In this paper, we report the findings of the proposed model using the Detection Error Trade-off (DET) curve. A DET curve is a graphical representation of the error rates achieved by a binary classification system as the classification threshold is adjusted. The DET curves for all the datasets of the LivDet 2011, 2013, 2015, 2017 and 2019 databases are depicted in Fig. 4. In Fig. 4, it can be observed that the proposed model attains a BPCER of less than 1% to retain an APCER of 1% on the Biometrika and Digital Persona sensors of the LivDet 2011 database, while it is less than 5% and 22% on the Sagem and Italdata sensors of the same database. On LivDet 2013, the proposed model achieves a BPCER of less than 1% to maintain an APCER of 1% on the Biometrika and Italdata sensors. Similarly, the proposed model achieves a BPCER of less than 5% at an APCER of 1% when tested on the Crossmatch, Digital Persona and GreenBit sensors of the LivDet 2015 database. On LivDet 2017 and 2019, in which the testing spoof samples are fabricated using unknown spoof materials, the model retains a BPCER in the range of 5%-17% on the LivDet 2017 database. In the same way, the model retains a BPCER of less than 5% on the Orcanthus and GreenBit sensors of the LivDet 2019 database.
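The DET curves are obtained by sweeping the decision threshold and recording the resulting (APCER, BPCER) pairs; a minimal NumPy sketch of this procedure, including the BPCER attained at a fixed APCER of 1%, is given below (same assumed label and score conventions as in the earlier metrics sketch).

```python
import numpy as np

def det_points(labels, scores):
    """Sweep the classification threshold and return the (APCER, BPCER) pairs
    (in %) that are plotted as a DET curve."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    live, spoof = labels == 1, labels == 0
    apcer, bpcer = [], []
    for t in np.unique(scores):
        pred_live = scores >= t
        apcer.append(100.0 * np.count_nonzero(pred_live & spoof) / np.count_nonzero(spoof))
        bpcer.append(100.0 * np.count_nonzero(~pred_live & live) / np.count_nonzero(live))
    return np.array(apcer), np.array(bpcer)

def bpcer_at_apcer(labels, scores, target_apcer=1.0):
    """Smallest BPCER (in %) achievable while keeping APCER <= target_apcer."""
    apcer, bpcer = det_points(labels, scores)
    feasible = bpcer[apcer <= target_apcer]
    return feasible.min() if feasible.size else 100.0
```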
### _Processing Time_
The processing time of an FPAD model is the amount of time it takes to decide whether the input fingerprint sample is live or spoof. This time should be minimal, since the sample still has to undergo verification after its liveness is detected. The proposed model, \(EXPRESSNET\), takes 300 milliseconds on an Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz \(6^{th}\) generation processor and 20 milliseconds on an NVIDIA Tesla P100 to classify a single fingerprint image. The small classification time makes it suitable for AFRS in real-time applications.
## 6 Conclusion
AFRS deployed in various security and commercial applications can be deceived by PAs. This paper presents an FPAD mechanism that has shown the capability of detecting spoofs created using cooperative or non-cooperative methods of spoofing, as well as known and unknown fabrication materials. Existing handcrafted and deep learning-based methods are insufficient for detecting PAs when tested in the aforementioned scenarios. One possible reason is the limited feature extraction capability of CNN-based methods, due to the limited amount of discriminating information present in the input fingerprint samples. In this paper, a novel end-to-end model is presented which first converts the input
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**Method** & \begin{tabular}{c} **Accuracy** \\ **(Cronanthus)** \\ \end{tabular} & \begin{tabular}{c} **Accuracy** \\ **(Digital Personala)** \\ \end{tabular} & \begin{tabular}{c} **Accuracy** \\ **(Biometals)** \\ \end{tabular} & **Avg.** \\ \hline \hline \begin{tabular}{l} Chung et al. [7] \\ \end{tabular} & 96.3 & 97.30 & 91.5 & 95.9 & 96.28 \\ \hline \begin{tabular}{l} Shams et al. [20] \\ \end{tabular} & 96.07 & 97.57 & 94.16 & 95.52 & 96.78 \\ \hline \begin{tabular}{l} Zhang et al. [53] \\ \end{tabular} & 97.51 & 97.51 & 95.42 & 97.62 & 96.52 \\ \hline \begin{tabular}{l} Liu et al. [22] \\ \end{tabular} & 96.00 & 96.20 & 90.50 & 90.80 & 90.72 \\ \hline \begin{tabular}{l} Liu et al. [22] \\ \end{tabular} & 96.10 & 96.20 & 96.20 & 96.36 & 95.97 \\ \hline \begin{tabular}{l} Liu et al. [22] \\ \end{tabular} & 96.10 & 96.20 & 96.20 & 96.36 & 95.97 \\ \hline \begin{tabular}{l} Liu et al. [22] \\ \end{tabular} & 96.00 & 96.00 & 96.20 & 96.36 & 95.97 \\ \hline
\begin{tabular}{l} Liu et al. [22] \\ \end{tabular} & 96.30 & 96.00 & 96.20 & 96.36 & 96.27 \\ \hline \end{tabular}
\end{table} TABLE VII: Comparison with state-of-the-art methods on LivDet 2015 in intra-sensor protocol
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Method** & \begin{tabular}{c} **Accuracy** \\ **(Orcanthus)** \\ \end{tabular} & \begin{tabular}{c} **Accuracy** \\ **(Digital Personala)** \\ \end{tabular} & \begin{tabular}{c} **Accuracy** \\ **(Greenbio)** \\ \end{tabular} & **Avg.** \\ \hline \begin{tabular}{l} Jung CNN [31] \\ \end{tabular} & 99.13 & 81.23 & 99.06 & 93.14 \\ \hline \begin{tabular}{l} Chugh et al. [6] \\ \end{tabular} & 97.50 & 83.64 & 99.73 & 93.62 \\ \hline \begin{tabular}{l} JWL LivDet [31] \\ \end{tabular} & 97.45 & 88.86 & 99.20 & 95.17 \\ \hline \begin{tabular}{l} ZJUT Del A [31] \\ \end{tabular} & 97.50 & 88.77 & 99.20 & 95.16 \\ \hline
\begin{tabular}{l} EXPRESSNET \\ \end{tabular} & **99.16** & **92.70** & **96.92** & **96.27** \\ \hline \end{tabular}
\end{table} TABLE IX: Comparison with state-of-the-art methods on LivDet 2019 in intra-sensor protocol
fingerprint sample into a heatmap that represents its most informative regions, i.e., what is important in the input sample. The generated heatmap is then fed to the modified CNN classifier for classification. We have tested the method on benchmark databases in different experimental settings. The efficacy of the proposed model is compared with state-of-the-art methods, including statistical and handcrafted feature-based methods, perspiration- and pore-based methods, and deep learning-based methods. In the future, we will explore the capability of the proposed model for cross-sensor and cross-database validation on benchmark fingerprint databases.
|
2306.10396 | A trace formula for Hecke operators on Fuchsian groups | In this paper we give a trace formula for Hecke operators acting on the
cohomology of a Fuchsian group of finite covolume, with coefficients in a
module $V$. The proof is based on constructing an operator whose trace on $V$
equals the Lefschetz number of the Hecke correspondence on cohomology,
generalizing the operator introduced together with Don Zagier for the modular
group. | Alexandru A. Popa | 2023-06-17T17:32:12Z | http://arxiv.org/abs/2306.10396v1 | # A trace formula for Hecke operators on Fuchsian groups
###### Abstract.
In this paper we give a trace formula for Hecke operators acting on the cohomology of a Fuchsian group of finite covolume \(\Gamma\) with coefficients in a module \(V\). The proof is based on constructing an operator whose trace on \(V\) equals the Lefschetz number of the Hecke correspondence on cohomology, generalizing the operator introduced together with Don Zagier for the modular group.
Key words and phrases:Trace formula; Hecke operators; cohomology of Fuchsian groups 2010 Mathematics Subject Classification: 11F11, 11F25, 11F75 The author was partially supported by the CNCS-UEFISCDI grant PN-III-P4-ID-PCE-2020-2498.
## 1. Introduction
Let \(\Gamma\) be a discrete, finite covolume subgroup of \(\mathrm{PSL}_{2}(\mathbb{R})\), and \(\Sigma\) a double coset of \(\Gamma\) contained in the commensurator \(\widetilde{\Gamma}\subset\mathrm{GL}_{2}^{+}(\mathbb{R})\). For \(V\) a finite dimensional \(\widetilde{\Gamma}\)-module, we consider the action of the Hecke operator defined by the double coset \(\Sigma\), which we denote by \([\Sigma]\), on the cohomology groups \(H^{n}(\Gamma,V)\), \(n=0,1,2\). The goal of this paper is to prove a trace formula of the type
\[\sum_{i}(-1)^{i}\operatorname{tr}\big{(}[\Sigma],H^{i}(\Gamma,V)\big{)}=\sum_ {X\subset\Sigma}\varepsilon_{\Gamma}(X)\cdot\operatorname{tr}(M_{X},V), \tag{1}\]
where the sum is over \(\Gamma\)-conjugacy classes \(X\) with representatives \(M_{X}\in X\), and \(\varepsilon_{\Gamma}(X)\) are certain conjugacy class invariants. We assume throughout that \(V\) is a finite dimensional vector space, over a field in which the orders of finite elements in \(\Gamma\) are invertible.
When \(V=V_{k-2}\), the \(\mathrm{GL}_{2}(\mathbb{R})\)-module of homogeneous polynomials of degree \(k-2\), we obtained such formulas with Zagier in [11] for the modular group, and in [12] for congruence subgroups, under a mild hypothesis on \(\Sigma\). By a version of the Eichler-Shimura correspondence we have a Hecke-equivariant isomorphism
\[H^{1}(\Gamma,V_{k-2})\simeq M_{k}(\Gamma)\oplus S_{k}(\Gamma)^{c}\]
where \(M_{k}(\Gamma)\) is the space of modular forms and \(S_{k}(\Gamma)^{c}\) the space of anti-holomorphic cusp forms. The trace on \(H^{0}\) is easy to compute (and \(H^{2}\) is trivial for congruence subgroups), and one gets explicit formulas on the space of modular forms.
Let \(\mathbb{Q}[\Sigma]\) be the vector space of finite linear combinations of elements of \(\Sigma\), viewed as a left and right \(\mathbb{Z}[\Gamma]\)-module. For the modular group, we showed in [11] that the right-hand-side of (1) arises as the trace on \(V\) of a specific element \(\widetilde{T}_{\Sigma}\in\mathbb{Q}[\Sigma]\), giving the action of the Hecke operator \([\Sigma]\) on the space of period polynomials associated to modular forms. The same operator introduced for the modular group was used in [12] for congruence subgroups, because formula (1) behaves well with respect to induction: if \(\Gamma^{\prime}\subset\Gamma\) is a finite index subgroup and \(V\) a \(\Gamma^{\prime}\)-module, then a formula of type (1) for \(\Gamma\) and the induced module \(\operatorname{Ind}_{\Gamma^{\prime}}^{\Gamma}V\) implies a formula for \(\Gamma^{\prime}\) and \(V\), by the Shapiro lemma and an argument as in [12, Sec. 4]. When \(V\) is also a \(\Gamma\)-module, the trace of elements \(M\in\Sigma\) on \(\operatorname{Ind}_{\Gamma^{\prime}}^{\Gamma}V\) is easy to compute, and in [13] we obtain from (1) uniform formulas for the trace of a composition of Atkin-Lehner and Hecke operators on \(S_{k}(\Gamma_{0}(N))\), which hold without any restriction on the index of the operators.
The present paper is motivated by generalizing a key ingredient of [11] to Fuchsian groups \(\Gamma\) of finite covolume: find elements \(\widetilde{T}_{\Sigma}\in\mathbb{Q}[\Sigma]\) such that the left hand side of (1) is given by \(\operatorname{tr}(\widetilde{T}_{\Sigma},V)\). Such a group \(\Gamma\) admits non-trivial Hecke operators if and only if it is of arithmetic type, namely \(\Gamma\) is commensurable either
with a congruence subgroup of the modular group (if \(\Gamma\) has cusps), or with a congruence subgroup of the unit group in a definite quaternion algebra (if \(\Gamma\) has no cusps). In this paper we treat both cases in a uniform manner, and construct explicitly such elements \(\widetilde{T}_{\Sigma}\), under an assumption on \(V\) when \(\Gamma\) has no cusps. Even for the modular group, our method is different from that used in [11], where we used the theory of period polynomials. Rather, we use the action of Hecke operators on the algebraically defined cohomology groups \(H^{i}(\Gamma,V)\) using a presentation of \(\Gamma\) in terms of generators and relations.
As known to Poincare, a finite covolume Fuchsian group \(\Gamma\) has a presentation in terms of hyperbolic elements \(\gamma_{1},\gamma_{2},\ldots,\gamma_{2g-1},\gamma_{2g}\), elliptic elements \(s_{1},\ldots,s_{l}\) generating the stabilizers of all non-equivalent elliptic points, and parabolic elements \(t_{1},\ldots,t_{h}\) belonging to non-conjugate maximal parabolic subgroups, subject to the relations
\[[\gamma_{1},\gamma_{2}]\cdot\ldots\cdot[\gamma_{2g-1},\gamma_{2g}]\;s_{1} \cdot\ldots\cdot s_{l}\cdot t_{1}\cdot\ldots\cdot t_{h}=1,\quad s_{i}^{m_{i}} =1,\ m_{i}\geqslant 2\;. \tag{2}\]
The signature \((g;m_{1},\ldots,m_{l};h)\) is an invariant of the group, and by the Gauss-Bonnet formula
\[2g-2+\sum_{j=1}^{l}\Big{(}1-\frac{1}{m_{j}}\Big{)}+h=\frac{|\Gamma\backslash \mathcal{H}|}{2\pi}, \tag{3}\]
where \(|\Gamma\backslash\mathcal{H}|\) is the area of a fundamental domain for \(\Gamma\) acting on the upper half plane \(\mathcal{H}\), with respect to the standard hyperbolic metric. Moreover, for every signature such that the left side is positive, there exists such a group \(\Gamma\).
To state the main result, in this introduction we assume \(\Gamma\) has cusps (\(h\geqslant 1\)). Solving for \(t_{h}\) in terms of the other generators eliminates one relation, and consequently we view \(\Gamma\) as a group with generators \(g_{1},\ldots,g_{n}\), where \(n=2g+\ell+h-1\), and relations \(g_{i}^{m_{i}}=1\) for \(i=1,\ldots,\ell\). For every \(g\in\Gamma\), in the group algebra \(\mathbb{Z}[\Gamma]\) we have a decomposition
\[1-g=\sum_{i=1}^{n}(1-g_{j})\frac{\partial g}{\partial g_{j}},\]
for some elements \(\partial g/\partial g_{i}\in\mathbb{Z}[\Gamma]\). These elements are not unique, unless \(\Gamma\) is free (\(\ell=0\)), and in the case of a free group they are called Fox derivatives. They were introduced by Ralph Fox in the 1950s [5], who developed a free differential calculus with applications to topology and knot theory.
We will also need a generalization of these elements \(\partial_{\Sigma}g_{i}/\partial g_{j}\in\mathbb{Z}[\Sigma]\), defined by
\[T_{\Sigma}(1-g_{i})=\sum_{j=1}^{n}(1-g_{j})\frac{\partial_{\Sigma}g_{i}}{ \partial g_{j}}\]
where \(T_{\Sigma}=\sum_{\tau\in\Gamma\backslash\Sigma}\overline{\tau}\) is the usual Hecke operator acting on modular forms, with \(\overline{\tau}\) fixed coset representatives. Note that for \(\Sigma=\Gamma\) the elements \(\partial_{\Sigma}g_{i}/\partial g_{j}\) reduce to the usual Fox derivatives. Define
\[\widetilde{T}_{\Sigma}=T_{\Sigma}-\sum_{i=1}^{n}\frac{\partial_{\Sigma}g_{i}} {\partial g_{i}}+\sum_{i=1}^{\ell}\pi_{i}\cdot\frac{\partial_{\Sigma}g_{i}}{ \partial g_{i}}\in\mathbb{Q}[\Sigma],\]
where \(\pi_{i}=(1+g_{i}+\ldots+g_{i}^{m_{i}-1})/m_{i}\) are idempotents associated to the elliptic elements \(g_{i}\), \(i=1,\ldots,\ell\). In Proposition 9 we prove the following result.
**Proposition**.: _For \(\Gamma\) a finite covolume Fuchsian group with cusps and \(V\) a finite dimensional \(\widetilde{\Gamma}\)-module over a field in which the orders of elliptic elements are invertible we have:_
\[\sum_{i=0,1}(-1)^{i}\operatorname{tr}([\Sigma],H^{i}(\Gamma,V))=\operatorname{ tr}(\widetilde{T}_{\Sigma},V).\]
Writing \(\widetilde{T}_{\Sigma}=\sum_{M}c_{M}M\), with finitely many non-zero coefficients \(c_{M}\), formula (1) immediately follows by taking
\[\varepsilon_{\Gamma}(X)=\sum_{M\in X}c_{M} \tag{4}\]
for \(X\) a conjugacy class. In Theorem 12 we also show that \(\varepsilon_{\Gamma}(X)\) do not depend on the choices made in defining \(\widetilde{T}_{\Sigma}\).
We construct such elements also when \(\Gamma\) has no cusps (Proposition 10), assuming it has no elliptic elements either and under some restriction on the module \(V\), which is likely not necessary (see Remark 11). In constructing the elements \(\widetilde{T}_{\Sigma}\) and proving their properties, we use results of Lyndon [8], so our approach can be seen as belonging to combinatorial group theory.
We now discuss the conjugacy class invariants \(\varepsilon_{\Gamma}(X)\). For \(M\in\Sigma\), let \(n(M)\) be the number of fixed points of \(M\) in \(\mathcal{H}\cup\{\text{cusps of }\Gamma\}\), and let \(\Gamma_{M}\) denote the centralizer of \(M\) in \(\Gamma\). We define:
\[w_{\Gamma}(M)=\frac{(-1)^{n(M)+1}}{|\Gamma_{M}|}\text{ if }M\text{ not scalar}, \qquad w_{\Gamma}(M)=-\frac{|\Gamma\backslash\mathcal{H}|}{2\pi}\text{ if }M\text{ scalar},\]
with the understanding that \(w_{\Gamma}(M)=0\) if \(\Gamma_{M}\neq\Gamma\) is infinite. More explicitly, it can be shown (e.g. [9, Proof of Theorem 2]) that any \(M\in\widetilde{\Gamma}\) for which \(\Gamma_{M}\) is finite is either: elliptic (that is \(\operatorname{tr}^{2}M<4\det M\)) so \(n(M)=1\); or it is split hyperbolic (that is \(\operatorname{tr}^{2}M>4\det M\) and \(M\) fixes two distinct cusps of \(\Gamma\)) so \(n(M)=2\) and \(|\Gamma_{M}|=1\).
Since \(n(M)\) and \(|\Gamma_{M}|\) are conjugacy class invariants, we define \(w_{\Gamma}(X)=w_{\Gamma}(M)\) for a conjugacy class \(X\subset\Sigma\) with representative \(M\). We have shown in [11, 12] that the trace formula (1) holds for
\[\varepsilon_{\Gamma}(X)=w_{\Gamma}(X), \tag{5}\]
if \(\Gamma\) is any finite index subgroup of \(\Gamma_{1}=\operatorname{SL}_{2}(\mathbb{Z})\), and \(\Sigma\) is a double coset satisfying \(|\Gamma\backslash\Sigma|=|\Gamma_{1}\backslash\Gamma_{1}\Sigma|\).
We expect formula (5) to hold for all finite covolume Fuchsian groups. It is an interesting open problem to prove it using only the elements \(\widetilde{T}_{\Sigma}\) defined in this paper, which do not depend on the module \(V\). We can only do it in the case of \(\operatorname{SL}_{2}(\mathbb{Z})\) in Section 3.4, by relating the element introduced here to the explicit one constructed in [11], for which we explicitly computed the coefficient sums (4).
One case when we can prove formula (5) is for the trivial Hecke operator \(\Sigma=\Gamma\). Under some restrictions stated precisely in Corollary 13, we obtain a formula for the Euler-Poincare characteristic of the \(\Gamma\)-module \(V\):
\[\sum_{i}(-1)^{i}\dim H^{i}(\Gamma,V)=-\frac{|\Gamma\backslash\mathcal{H}|}{2\pi}\dim V+\sum_{M\in E(\Gamma)}\frac{1}{|\Gamma_{M}|}\operatorname{tr}(M,V),\]
where \(E(\Gamma)\) is a set of representatives for elliptic conjugacy classes in \(\Gamma\). However in this case the formula is known for a much larger class of groups, by work of Bass [2] and Brown [3]. The constants \(-|\Gamma\backslash\mathcal{H}|/2\pi\) and \(1/|\Gamma_{M}|\) appearing above occur there as homological Euler characteristics of the groups \(\Gamma\) and \(\Gamma_{M}\), as defined for example by Serre [14]. It would be interesting to extend the techniques of the papers above to prove a formula for the Lefschetz number of a Hecke correspondence of the type (1) for a more general class of arithmetic groups. In particular, for Fuchsian groups of finite covolume, it would be interesting to give a more conceptual proof of (5), other than by constructing an explicit element \(\widetilde{T}_{\Sigma}\) as in [11].
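As a quick check of the displayed formula, take \(\Gamma=\mathrm{PSL}_{2}(\mathbb{Z})\) and \(V=\mathbb{Q}\) with the trivial action: then \(H^{0}=\mathbb{Q}\) and \(H^{1}=H^{2}=0\), so the left side equals \(1\), while \(|\Gamma\backslash\mathcal{H}|/2\pi=1/6\) and the elliptic conjugacy classes consist of one class of order-\(2\) elements and two classes of order-\(3\) elements, giving \(-1/6+1/2+1/3+1/3=1\) on the right.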
Formulas for Lefschetz numbers of Hecke correspondences are known by the topological trace formula of Goresky and MacPherson, developed over the course of a decade [6, 7]. Such formulas are proved for \(\Gamma\) any arithmetic subgroup of a reductive group and the algebraic group cohomology replaced by other geometric cohomology theories. The formulas involve local contributions from the fixed point varieties of the correspondence, which lump together contributions from the various conjugacy classes in \(\Gamma\). It would be interesting to investigate whether a simple minded version stated in terms of group cohomology as in (1) holds in this more general setting.
_Acknowledgements._ I would like to thank Vicentiu Pasol for helpful discussions while preparing this paper.
## 2. Hecke operators on the first cohomology of groups defined by generators and relations
### Groups defined by generators and relations
Here we review some results of Lyndon [8], who used them to compute the cohomology of a group defined by a single relation.
Let \(\Gamma=\langle g_{1},\ldots,g_{n}:r_{1},\ldots,r_{m}\rangle\) be a group given in terms of a presentation with generators \(g_{i}\), and relations \(r_{j}=1\). We view also \(\Gamma=F/R\) as a quotient of a free group \(F\) with generators denote by the same symbols \(g_{1},\ldots,g_{n}\), by the normal subgroup \(R\) generated by the elements \(r_{j}\in F\). We identify elements of \(\Gamma\) with right cosets \(Rg\) of \(F/R\). To keep the notation simple, we use the same symbols to denote the elements of \(F\) and their projections onto \(\Gamma\), as the group will be clear from the context.
Let \(\mathbb{Z}[F]\) be the group algebra of \(F\). For \(g\in F\) there are unique elements \(\partial g/\partial g_{i}\in\mathbb{Z}[F]\) such that
\[1-g=\sum_{i=1}^{n}(1-g_{i})\frac{\partial g}{\partial g_{i}}. \tag{6}\]
The map \(\partial/\partial g_{i}:F\mapsto\mathbb{Z}[F]\) is a cocycle for the right action of \(F\) on \(\mathbb{Z}[F]\) (traditionally called a derivation of \(F\)), namely it satisfies
\[\frac{\partial gh}{\partial g_{i}}=\frac{\partial h}{\partial g_{i}}+\frac{ \partial g}{\partial g_{i}}\cdot h.\]
It is the unique derivation of \(F\) such that
\[\frac{\partial g_{j}}{\partial g_{i}}=\delta_{i,j},\quad\frac{\partial g_{i}^{-1}}{\partial g_{j}}=-\delta_{i,j}\,g_{i}^{-1},\]
where \(\delta_{i,j}\) is the Kronecker delta function.
The group algebra \(\mathbb{Z}[\Gamma]\) is identified with the quotient of \(\mathbb{Z}[F]\) by the right ideal generated by elements \((r-1)a\) with \(r\in R,a\in Z[F]\). For \(g\in F\) we can define uniquely the partial derivatives \(\partial g/\partial g_{i}\in\mathbb{Z}[\Gamma]\) by means of this projection. For \(g\in\Gamma\), we define \(\partial g/\partial g_{i}\in\mathbb{Z}[\Gamma]\) to be any elements such that (6) holds. However these elements are no longer unique in \(\mathbb{Z}[\Gamma]\): for each \(r\in R\) we have
\[0=1-r=\sum_{i=1}^{n}(1-g_{i})\frac{\partial r}{\partial g_{i}}\]
(here the symbol \(r\) is viewed both as an element of \(F\), and as its projection on \(\Gamma\) which equals the identity element \(1\)). The failure of uniqueness is precisely described in terms of the relators \(r_{i}\) by Lyndon [8, Lemma 5.1], which we state below. For \(g\in F\), it is convenient to denote by \(\nabla g=(\partial g/\partial g_{1},\ldots,\partial g/\partial g_{n})\in \mathbb{Z}[\Gamma]^{n}\).
**Proposition 1** ([8]).: _Let \(\mathbf{X}=(X_{1},\ldots,X_{n})\in\mathbb{Z}[\Gamma]^{n}\). In \(\mathbb{Z}[\Gamma]\) we have_
\[\sum_{i=1}^{n}(1-g_{i})\cdot X_{i}=0\text{ if and only if }\mathbf{X}=\sum_{j=1}^{m} \nabla r_{j}\cdot k_{j}\text{ for some }k_{j}\in\mathbb{Z}[\Gamma].\]
**Example 2**.: For the modular group \(\Gamma=\mathrm{PSL}_{2}(\mathbb{Z})\), the proposition reduces to the acyclicity lemma [4, Lemma 2]. The group \(\Gamma\) is the quotient of the free group on two elements \(S,U\), by the relations \(r_{1}=S^{2},r_{2}=U^{3}\). We have \(\partial r_{1}/\partial S=1+S\), \(\partial r_{2}/\partial U=1+U+U^{2}\), and \(\partial r_{1}/\partial U=0\), \(\partial r_{2}/\partial S=0\). Therefore in this case the proposition shows that
\[(1-S)Y=(1-U)Z\Leftrightarrow Y\in(1+S)\mathbb{Z}[\Gamma],\ Z\in(1+U+U^{2}) \mathbb{Z}[\Gamma],\]
that is \(\mathrm{Im}(1-S)\cap\mathrm{Im}(1-U)=\{0\}\). Since \(\mathrm{Ker}(1+S)=\mathrm{Im}(1-S)\), \(\mathrm{Ker}(1+U+U^{2})=\mathrm{Im}(1-U)\), we recover the acyclicity lemma.
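For readers who want to experiment, the Fox derivatives are easy to compute mechanically; the following Python sketch (an illustration, not part of the original text) encodes group-algebra elements as dictionaries from freely reduced words to integer coefficients, uses the product rule \(\partial(gh)/\partial g_{i}=\partial h/\partial g_{i}+(\partial g/\partial g_{i})h\) from above, and reproduces the values \(\partial r_{1}/\partial S=1+S\) and \(\partial r_{2}/\partial U=1+U+U^{2}\) of Example 2.

```python
from collections import defaultdict

def reduce_word(word):
    """Freely reduce a word given as a tuple of (generator, +1/-1) letters."""
    out = []
    for g, e in word:
        if out and out[-1][0] == g and out[-1][1] == -e:
            out.pop()
        else:
            out.append((g, e))
    return tuple(out)

def ga_add(a, b):
    """Sum of two group-algebra elements (dict: reduced word -> coefficient)."""
    out = defaultdict(int)
    for d in (a, b):
        for w, c in d.items():
            out[w] += c
    return {w: c for w, c in out.items() if c}

def ga_rmul(a, word):
    """Right-multiply a group-algebra element by a single group word."""
    out = defaultdict(int)
    for w, c in a.items():
        out[reduce_word(w + word)] += c
    return {w: c for w, c in out.items() if c}

def fox(word, x):
    """Fox derivative d(word)/dx, using d(gh)/dx = dh/dx + (dg/dx)*h."""
    word = reduce_word(word)
    if not word:
        return {}
    (g, e), tail = word[0], word[1:]
    if e == 1:
        d_head = {(): 1} if g == x else {}            # d(x)/dx = 1
    else:
        d_head = {((g, -1),): -1} if g == x else {}   # d(x^{-1})/dx = -x^{-1}
    return ga_add(fox(tail, x), ga_rmul(d_head, tail))

S, U = ('S', 1), ('U', 1)
print(fox((S, S), 'S'))       # {(): 1, (('S', 1),): 1}   i.e. 1 + S
print(fox((U, U, U), 'U'))    # 1 + U + U^2
print(fox((U, U, U), 'S'))    # {}                        i.e. 0
```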
Let \(V\) be a right \(\Gamma\)-module, which we view also as a right \(F\)-module with \(R\) acting trivially on \(V\). If \(\varphi:\Gamma\to V\) is a cocycle, namely \(\varphi(gh)=\varphi(h)+\varphi(g)|h\), we view it as a cocycle on \(F\) trivial on \(R\) (by the inflation map \(H^{1}(\Gamma,V)\to H^{1}(F,V)\)). Note that by the cocycle relation we have \(\varphi(rg)=\varphi(gr)=\varphi(g)\) for \(r\in R,g\in F\).
The following lemma can also be found in [8, Sec. 3, eq. (5)].
**Lemma 3**.: _Let \(V\) be a (right) \(\Gamma\)-module, and \(\varphi:\Gamma\to V\) be a cocycle. Viewing \(\varphi\) as a cocycle on \(F\) as above, we have for \(g\in F\):_
\[\varphi(g)=\sum_{i=1}^{n}\varphi(g_{i})\left|\frac{\partial g}{\partial g_{i} }\right.\.\]
Proof.: The statement follows from (6) by induction on the length of \(g\) as a product in the generators \(g_{i}\). Indeed we have \(1-g_{i}g=1-g+(1-g_{i})g\), and \(\varphi(g_{i}g)=\varphi(g_{i})|g+\varphi(g)\).
### Double coset operators on cohomology
Let \(\Gamma\) be a group and \(\widetilde{\Gamma}\) its commensurator inside an ambient group. Let \(\Sigma\subset\widetilde{\Gamma}\) be a double coset of \(\Gamma\), so that the number of cosets \(\Gamma\backslash\Sigma\) is finite. Let \(V\) be right \(\widetilde{\Gamma}\)-module.
In this paper we view the cohomology groups defined algebraically as \(H^{i}(\Gamma,V)=Z^{i}(\Gamma,V)/B^{i}(\Gamma,V)\), where \(Z^{i}(\Gamma,V)\) are (inhomogeneous) cocycles and \(B^{i}(\Gamma,V)\) coboundaries. The double coset operator \([\Sigma]\) acts on cohomology as follows. We fix representatives \(\overline{\tau}\in\Sigma\) for cosets \(\tau\in\Gamma\backslash\Sigma\), and for \(g\in\Gamma\), we let \(\gamma_{\tau,g}\in\Gamma\) be the unique element such that
\[\overline{\tau}g=\gamma_{\tau,g}\overline{\tau g}. \tag{7}\]
If \(\varphi:\Gamma^{i}\to V\) is a a cocycle, following [10] we define
\[\varphi|[\Sigma](h_{1},\dots,h_{i})=\sum_{\tau\in\Gamma\backslash\Sigma} \varphi(\gamma_{\tau h_{1}^{-1},h_{1}},\gamma_{\tau h_{1}^{-1}h_{2}^{-1},h_{2} },\dots,\gamma_{\tau h_{1}^{-1},\dots h_{i}^{-1},h_{i}})|\overline{\tau}.\]
Then \(\varphi|[\Sigma]\) is a cocycle, whose cohomology class is independent of the choice of representatives \(\overline{\tau}\).
We are mostly interested in \(H^{0}\) and \(H^{1}\). We have \(H^{0}(\Gamma,V)=V^{\Gamma}\), the space of invariants of \(\Gamma\), and for \(v\in V^{\Gamma}\) we have
\[v|[\Sigma]=\sum_{\tau\in\Gamma\backslash\Sigma}v|\overline{\tau}.\]
If \(\varphi:\Gamma\to V\) is a cocycle, we have
\[\varphi|[\Sigma](g)=\sum_{\tau\in\Gamma\backslash\Sigma}\varphi(\gamma_{\tau g ^{-1},g})|\overline{\tau}. \tag{8}\]
### Double coset operators for groups generated by generators and relations
Assume now \(\Gamma=F/R\) is given in terms of generators and relations as in section 2.1. Let \(\mathbb{Z}[\Sigma]\) be the set of finite linear combinations \(\sum n_{j}\sigma_{j}\) with \(\sigma_{j}\in\Sigma\), \(n_{j}\in\mathbb{Z}\). We view \(\mathbb{Z}[\Sigma]\) as a left and right \(\mathbb{Z}[\Gamma]\)-module, and also as a left and right \(\mathbb{Z}[F]\)-module by having \(R\) act trivially by left and right multiplication.
Fix representatives \(\overline{\tau}\in\Sigma\) for cosets \(\tau\in\Gamma\backslash\Sigma\), and define
\[T_{\Sigma}=\sum_{\tau\in\Gamma\backslash\Sigma}\overline{\tau}\in\mathbb{Z}[ \Sigma].\]
For a fixed generator \(g_{i}\), the map \(\tau\mapsto\tau g_{i}\) permutes the cosets \(\tau\in\Gamma\backslash\Sigma\), so we can write:
\[T_{\Sigma}(1-g_{i}) =\sum_{\tau\in\Gamma\backslash\Sigma}\overline{\tau}-\overline{ \tau g_{i}^{-1}}g_{i}=\sum_{\tau}(1-\gamma_{\tau g_{i}^{-1},g_{i}})\overline{\tau}\] \[=\sum_{j=1}^{n}(1-g_{j})\sum_{\tau}\frac{\partial\gamma_{\tau g_{i} ^{-1},g_{i}}}{\partial g_{j}}\overline{\tau},\]
where \(\gamma_{\tau,g}\) is defined as in (7). We conclude that
\[T_{\Sigma}(1-g_{i})=\sum_{j=1}^{n}(1-g_{j})\frac{\partial_{\Sigma}g_{i}}{ \partial g_{j}}\;,\;\text{where}\;\frac{\partial_{\Sigma}g_{i}}{\partial g_{j}} :=\sum_{\tau\in\Gamma\backslash\Sigma}\frac{\partial\gamma_{\tau g_{i}^{-1},g_ {i}}}{\partial g_{j}}\cdot\overline{\tau}\in\mathbb{Z}[\Sigma]. \tag{9}\]
The elements \(\partial_{\Sigma}g_{i}/\partial g_{j}\) depend on the choice of representatives \(\overline{\tau}\) for the cosets \(\tau\), and on the non-unique choices of \(\partial\gamma_{\tau g_{i}^{-1},g_{i}}/\partial g_{j}\), but we omit the dependence from the notation. For arbitrary \(g\in F\) we define
\[\frac{\partial_{\Sigma}g}{\partial g_{i}}=\sum_{j=1}^{n}\frac{\partial_{\Sigma }g_{j}}{\partial g_{i}}\frac{\partial g}{\partial g_{j}},\]
so that an easy computation gives
\[T_{\Sigma}(1-g)=\sum_{j=1}^{n}T_{\Sigma}(1-g_{j})\frac{\partial g}{\partial g _{j}}=\sum_{i=1}^{n}(1-g_{i})\frac{\partial_{\Sigma}g}{\partial g_{i}}.\]
Note that for \(\Sigma=\Gamma\), the trivial double coset, we have \(\partial_{\Sigma}g/\partial g_{j}=\partial g/\partial g_{j}\), so this is an extension to double cosets of the usual Fox derivatives. We also denote by \(\nabla_{\Sigma}g\) the column vector of "partial \(\Sigma\)-derivatives:"
\[\nabla_{\Sigma}g=(\partial_{\Sigma}g/\partial g_{1},\dots,\partial_{\Sigma}g/ \partial g_{n})\in\mathbb{Z}[\Sigma]^{n},\]
and by \(M_{\Sigma}\) the \(n\times n\) matrix with entries \(\partial_{\Sigma}g_{j}/\partial g_{i}\). In matrix notation, the formula above becomes:
\[\nabla_{\Sigma}g=M_{\Sigma}\nabla g.\]
Lemma 3 generalizes to express the action of double coset operators on cocycles in terms of partial \(\Sigma\)-derivatives.
**Lemma 4**.: _If \(\varphi:\Gamma\to V\) is a cocycle, for \(1\leqslant i\leqslant n\) we have_
\[\varphi|[\Sigma](g_{i})=\sum_{j=1}^{n}\varphi(g_{j})\left|\frac{\partial_{\Sigma}g_{i}}{\partial g_{j}}\right.\text{.} \tag{10}\]
Proof.: This follows immediately from (8), (9) and Lemma 3.
We have the following immediate generalization of Proposition 1 to double cosets.
**Proposition 5**.: _Let \(\mathbf{X}=(X_{1},\dots,X_{n})\in\mathbb{Z}[\Sigma]^{n}\). We have_
\[\sum_{i=1}^{n}(1-g_{i})\cdot X_{i}=0\text{ if and only if }\mathbf{X}=\sum_{j=1}^{m} \nabla r_{j}\cdot k_{j}\text{ for some }k_{j}\in\mathbb{Z}[\Sigma].\]
Proof.: Write each \(X_{i}=\sum_{\tau\in\Gamma\backslash\Sigma}X_{\tau,i}\overline{\tau}\) with \(X_{\tau,i}\in\mathbb{Z}[\Gamma]\). It follows that
\[\sum_{i}(1-g_{i})X_{\tau,i}=0,\]
for all \(\tau\), and the conclusion follows from Proposition 1.
We apply the proposition to the following situation. Since \(r_{i}=1\) in \(\Gamma\) we have
\[0=T_{\Sigma}(1-r_{i})=\sum_{j=1}^{n}(1-g_{j})\frac{\partial_{\Sigma}r_{i}}{ \partial g_{j}},\]
and we obtain:
**Corollary 6**.: _We have the following decompositions inside \(\mathbb{Z}[\Sigma]^{n}\):_
\[\nabla_{\Sigma}r_{i}=M_{\Sigma}\nabla r_{i}=\sum_{k=1}^{m}\nabla r_{k}\cdot T_{ \Sigma}^{i,k},\]
_for some elements \(T_{\Sigma}^{i,k}\in\mathbb{Z}[\Sigma]\), \(1\leqslant i,k\leqslant m\)._
### Double coset operators and the inflation-restriction exact sequence
Let \(\Gamma=F/R\) as before and \(V\) a right \(\Gamma\)-module. We view \(V\) as an \(F\)-module as well, by letting \(R\) act trivially. Since free groups have cohomological dimension \(1\), we have \(H^{2}(F,V)=0\) and the inflation-restriction exact sequence gives:
\[0\longrightarrow H^{1}(\Gamma,V)\stackrel{{\mathrm{inf}}}{{ \longrightarrow}}H^{1}(F,V)\stackrel{{\mathrm{res}}}{{ \longrightarrow}}\mathrm{Hom}(R,V)^{F}\stackrel{{\mathrm{tg}}}{{ \longrightarrow}}H^{2}(\Gamma,V)\longrightarrow 0. \tag{11}\]
Since \(R\) acts trivially on \(V\), \(H^{1}(R,V)^{F}=\mathrm{Hom}(R,V)^{F}\) consists of the homomorphisms \(f:R\to V\) such that \(f(g^{-1}rg)=f(r)|g\) for \(r\in R,g\in F\).
For \(\Sigma\) a double coset of \(\Gamma\), the operator \([\Sigma]\) acts on two of the four cohomology groups in the exact sequence. We want to define an action of \([\Sigma]\) on the remaining two groups such that it is equivariant with respect to all the maps. In this section we concentrate on the first two maps.
Since \(F\) is free, we have a map \(V^{n}\to H^{1}(F,V)\) taking a vector \(\mathbf{v}=(v_{1}\ldots,v_{n})\) to the class of the cocycle \(\varphi_{\mathbf{v}}\) defined by \(\varphi_{\mathbf{v}}(g_{i})=v_{i}\)[15, Lemma 1.1]. We have an exact sequence
\[0\longrightarrow V^{\Gamma}\longrightarrow V\longrightarrow V^{n} \stackrel{{\mathbf{v}\mapsto[\varphi_{\mathbf{v}}]}}{{ \longrightarrow}}H^{1}(F,V)\longrightarrow 0 \tag{12}\]
where the second map takes \(v\mapsto(v|(1-g_{1}),\ldots,v|(1-g_{n}))\in V^{n}\). We define an action of \([\Sigma]\) on \(V^{n}\) and on cocycles \(\varphi:F\to V\) as follows
\[(\mathbf{v}|[\Sigma])_{i}:=\sum_{j=1}^{n}v_{j}\left|\frac{\partial_{\Sigma}g_ {i}}{\partial g_{j}}\right.,\qquad\varphi|[\Sigma](g_{i}):=\sum_{j=1}^{n} \varphi(g_{j})\left|\frac{\partial_{\Sigma}g_{i}}{\partial g_{j}}\right.. \tag{13}\]
By Lemma 4, the resulting map on \(H^{1}(F,V)\) coincides on the image of \(\mathrm{inf}\) with the action of \([\Sigma]\) on \(H^{1}(\Gamma,V)\), so the inflation map is equivariant with respect to the two actions.
**Proposition 7**.: _With the action of \([\Sigma]\) defined in (13), we have_
\[\mathrm{tr}([\Sigma],H^{1}(F,V))=\sum_{i=1}^{n}\mathrm{tr}(\partial_{\Sigma} g_{i}/\partial g_{i},V)-\mathrm{tr}(T_{\Sigma},V)+\mathrm{tr}([\Sigma],H^{0}( \Gamma,V))\;.\]
Proof.: The kernel of the map \(v\mapsto[\varphi_{\mathbf{v}}]\) is preserved by the map \(\mathbf{v}\mapsto\mathbf{v}|[\Sigma]\) defined above, and we have
\[[v|(1-g_{i})]_{i}|[\Sigma]=[v|T_{\Sigma}(1-g_{i})]_{i}\;. \tag{14}\]
Therefore the map (14) corresponds to the map \(v\mapsto v|T_{\Sigma}\) on \(V\) and \(V^{\Gamma}\). Since
\[\mathrm{tr}(T_{\Sigma},V^{\Gamma})=\mathrm{tr}([\Sigma],H^{0}(\Gamma,V)), \qquad\mathrm{tr}([\Sigma],V^{n})=\sum_{i=1}^{n}\mathrm{tr}(\partial_{\Sigma} g_{i}/\partial g_{i},V),\]
the proof is finished by the exact sequence (12).
The action of \([\Sigma]\) on the restriction of \(F\)-cocycles to \(R\) is given in the next lemma.
**Proposition 8**.: _If \(\varphi:F\to V\) is a cocycle, for \(1\leqslant i\leqslant m\) we have_
\[\varphi|[\Sigma](r_{i})=\sum_{k=1}^{m}\varphi(r_{k})|T_{\Sigma}^{i,k},\]
_with \(T_{\Sigma}^{i,k}\in\mathbb{Z}[\Sigma]\) defined in Corollary 6._
Proof.: From Lemma 4 and Corollary 6 we have:
\[\varphi|[\Sigma](r_{i}) =\sum_{j=1}^{n}\varphi(g_{j})\left|\frac{\partial_{\Sigma}r_{i}}{ \partial g_{j}}\right.=\sum_{j=1}^{n}\sum_{k=1}^{m}\varphi(g_{j})\left|\frac{ \partial r_{k}}{\partial g_{j}}T_{\Sigma}^{i,k}\right.\] \[=\sum_{k=1}^{m}\varphi(r_{k})|T_{\Sigma}^{i,k}.\]
## 3. The trace formula on Fuchsian groups
We now specialize \(\Gamma\) to be a discrete, finite covolume subgroup \(\Gamma\) of \(\mathrm{PSL}_{2}(\mathbb{R})\), so \(\Gamma\) has presentation (2).
### The case \(\Gamma\) has cusps
We assume \(h>0\) in the presentation (2). Then we can solve for \(t_{h}\) in terms of the other generators, which we relabel \(g_{1},\ldots,g_{n}\) where \(n=2g+\ell+h-1\), so that the elliptic generators are \(g_{i}=s_{i}\) for \(1\leqslant i\leqslant\ell\). Therefore \(\Gamma=F/R\) where \(F\) is free on \(g_{1},\ldots,g_{n}\) and \(R\) is the normal closure of the relators \(r_{i}=g_{i}^{m_{i}}\), for \(1\leqslant i\leqslant\ell\).
We assume \(V\) is a (right) \(k\Gamma\)-module where \(k\) is any field in which the orders \(m_{i}\) of elliptic elements are invertible. It follows that \(H^{2}(\Gamma,V)=0\), as \(\Gamma\) is a free product of the cyclic groups generated by \(g_{i}\), and one can use the Mayer-Vietoris sequence for free products of groups with amalgamation due to Swan [15, Sec. 2].
For \(i=1,\ldots,\ell\), let
\[\pi_{i}=\frac{\partial r_{i}/\partial g_{i}}{m_{i}}=\frac{1+g_{i}+\ldots+g_{i} ^{m_{i}-1}}{m_{i}}\in\mathbb{Q}[\Gamma] \tag{15}\]
be the idempotent associated with \(g_{i}\).
**Proposition 9**.: _For \(\Gamma\) a finite covolume Fuchsian group with cusps we have:_
\[\sum_{i=0,1}(-1)^{i}\operatorname{tr}([\Sigma],H^{i}(\Gamma,V))=\operatorname {tr}(\widetilde{T}_{\Sigma},V),\]
_where_
\[\widetilde{T}_{\Sigma}=T_{\Sigma}-\sum_{i=1}^{n}\frac{\partial_{\Sigma}g_{i} }{\partial g_{i}}+\sum_{i=1}^{\ell}\pi_{i}\cdot\frac{\partial_{\Sigma}g_{i}} {\partial g_{i}}\in\mathbb{Q}[\Sigma]. \tag{16}\]
Proof.: If \(\ell=0\), the theorem follows immediately from Proposition 7, so we assume \(\ell>0\). For \(a\in\mathbb{Q}[\Gamma]\), we denote by \(\operatorname{Im}(a),\operatorname{Ker}(a)\subset V\) the image and kernel of \(a\) acting on the right on \(V\).
The inflation-restriction sequence (11) reduces to three terms, and we identify
\[\operatorname{Hom}(R,V)^{F}=\oplus_{i=1}^{\ell}\operatorname{Im}(\partial r_{ i}/\partial g_{i}) \tag{17}\]
by the map \(f\mapsto[f(r_{i})]_{i}\). Indeed \(r_{i}=g_{i}^{m_{i}}\) and \(f(r_{i})=f(g_{i}^{-1}r_{i}g_{i})=f(r_{i})|g_{i}\) so \(f(r_{i})\in\operatorname{Ker}(1-g_{i})=\operatorname{Im}(1+g_{i}+\ldots+g_{i}^{m_{i}-1})\) (one inclusion in the equality of subspaces is clear, and if \(v=v|g_{i}\) it follows that \(v=v|\pi_{i}\) since \(m_{i}\) is invertible in \(V\), so \(v\in\operatorname{Im}(\pi_{i})=\operatorname{Im}(\partial r_{i}/\partial g_{i})\) and the other inclusion follows).
In Section 2.4 we have defined an action of the double coset operator \([\Sigma]\) on cocycles \(\varphi:F\to V\). Proposition 8 shows that under the restriction map \(\varphi\mapsto[\varphi(r_{i})]_{i}\), the action of \([\Sigma]\) on \(\varphi\) corresponds to the map on \(\oplus_{i=1}^{\ell}\operatorname{Im}(\partial r_{i}/\partial g_{i})\) given by
\[[v_{i}]_{i}\mapsto[\sum_{j=1}^{\ell}v_{j}|T_{\Sigma}^{i,j}]_{i}.\]
We conclude that
\[\operatorname{tr}([\Sigma],\operatorname{Hom}(R,V)^{F})=\sum_{i=1}^{\ell} \operatorname{tr}(T_{\Sigma}^{i,i},\operatorname{Im}\partial r_{i}/\partial g_ {i}).\]
Finally we would like to express the left side as the trace of an operator on \(V\) rather than on the subspaces \(\operatorname{Im}\partial r_{i}/\partial g_{i}\). Note that we can view these subspaces also as \(\operatorname{Im}\pi_{i}\), with \(\pi_{i}\) defined in (15). Consider the exact sequence
\[0\longrightarrow\operatorname{Im}(1-\pi_{i})\longrightarrow V\xrightarrow{v \mapsto v|\pi_{i}}\operatorname{Im}(\pi_{i})\longrightarrow 0.\]
We have that \(\partial_{\Sigma}g_{i}/\partial g_{i}\cdot\pi_{i}=\pi_{i}\cdot T_{\Sigma}^{i,i}\) from the definition of \(T_{\Sigma}^{i,i}\) in Corollary 6, and it follows that the operator \(\pi_{i}\partial_{\Sigma}g_{i}/\partial g_{i}\) acting on \(V\) corresponds to \(T_{\Sigma}^{i,i}\) on \(\operatorname{Im}(\pi_{i})\) under the second map above (using that \(\pi_{i}\) is idempotent). Since the same operator vanishes on \(\operatorname{Im}(1-\pi_{i})\), we have
\[\operatorname{tr}(\pi_{i}\partial_{\Sigma}g_{i}/\partial g_{i},V)= \operatorname{tr}(T_{\Sigma}^{i,i},\operatorname{Im}\pi_{i}),\]
and the proof is finished by Proposition 7 and the inflation-restriction exact sequence.
### The case \(\Gamma\) has no cusps
We now assume that \(\Gamma\) has no parabolic elements. By a result of Shimura [10, Prop 8.2] proved by geometric methods we have
\[H^{2}(\Gamma,V)\simeq V/\{v|1-g\ \mid\ v\in V,g\in\Gamma\}. \tag{18}\]
For simplicity we assume \(\Gamma\) has no elliptic elements either, so \(h=\ell=0\) in the presentation (2). Therefore \(\Gamma=F/R\) has a presentation with \(F\) free on \(n=2g\) generators, and \(R\) the normal subgroup generated by the only relator
\[r=[g_{1},g_{2}]\cdot\ldots\cdot[g_{n-1},g_{n}].\]
By Corollary 6, we have \(M_{\Sigma}\cdot\nabla r=\nabla r\cdot T_{\Sigma}^{\vee}\), for an element \(T_{\Sigma}^{\vee}\in\mathbb{Z}[\Sigma]\). Moreover the element \(T_{\Sigma}^{\vee}\) is unique, by [8, Sec. 11].
**Proposition 10**.: _Let \(\Gamma\) be a finite covolume Fuchsian group with no parabolic and no elliptic elements. Let \(V\) be a \(\widetilde{\Gamma}\)-module such that \(H^{2}(\Gamma,V)=0\). We then have:_
\[\sum_{i=0}^{2}(-1)^{i}\operatorname{tr}([\Sigma],H^{i}(\Gamma,V))= \operatorname{tr}(\widetilde{T}_{\Sigma},V),\]
_where_
\[\widetilde{T}_{\Sigma}=T_{\Sigma}+T_{\Sigma}^{\vee}-\sum_{i=1}^{n}\frac{ \partial_{\Sigma}g_{i}}{\partial g_{i}}\in\mathbb{Z}[\Sigma]. \tag{19}\]
Proof.: We identify
\[\operatorname{Hom}(R,V)^{F}=V \tag{20}\]
by mapping \(f\mapsto f(r)\). Under this identification, the image of the restriction map in (11) is \(\sum_{i=1}^{n}\operatorname{Im}(\partial r/\partial g_{i})\) by Lemma 3.
If \(H^{2}(\Gamma,V)=0\), the restriction map in (11) is onto. By Proposition 8, the action of \([\Sigma]\) on \(\operatorname{Hom}(R,V)^{F}\) corresponds to the action of \(T_{\Sigma}^{\vee}\) on \(V\) via the identification (20), and the proof is finished by the exact sequence (11) and Proposition 7.
**Remark 11**.: By (18), we have \(V^{\Gamma}=V\) if and only if \(H^{2}(\Gamma,V)\simeq V\). In this case we have that restriction map in (11) is trivial, and \(\operatorname{tr}([\Sigma],H^{2}(\Gamma,V))=\operatorname{tr}([\Sigma],H^{0}( \Gamma,V))\) by Poincare duality [1, Lemma 1.4.3]. From Proposition 7 we obtain
\[\sum_{i=0}^{2}(-1)^{i}\operatorname{tr}([\Sigma],H^{i}(\Gamma,V))= \operatorname{tr}(\widetilde{T}_{\Sigma}^{\prime},V),\qquad\text{ for }\widetilde{T}_{\Sigma}^{\prime}=2T_{\Sigma}-\sum_{i=1}^{n}\frac{ \partial_{\Sigma}g_{i}}{\partial g_{i}}.\]
However the operator \(\widetilde{T}_{\Sigma}^{\prime}\), unlike \(\widetilde{T}_{\Sigma}\), does not behave nicely with respect to conjugacy classes: the associated sums \(\varepsilon_{\Gamma}^{\prime}(X)\) defined as in (4) will depend on the choice of representatives \(\overline{\tau}\) made in defining \(T_{\Sigma}\). In fact we expect that \(\operatorname{tr}(T_{\Sigma},V)=\operatorname{tr}(T_{\Sigma}^{\vee},V)\) for \(V=V^{\Gamma}\), that is
\[\deg T_{\Sigma}^{\vee}=\deg T_{\Sigma}=|\Gamma\backslash\Sigma|,\]
with \(\deg A\) denoting the sum of the coefficients of \(A\in\mathbb{Z}[\Sigma]\). This would show that Proposition 10 holds if \(H^{2}(\Gamma,V)\simeq V\) as well, and in fact we expect it to hold for arbitrary \(V\), but we have been unable to prove it.
### A trace formula
For an element \(A=\sum c_{M}M\in\mathbb{Q}[\Sigma]\) and a subset \(S\subset\Sigma\), denote by \(\deg_{S}A=\sum_{M\in S}c_{M}\), the sum of the coefficients in \(A\) of elements from \(S\).
**Theorem 12**.: _Let \(\Gamma\) be a Fuchsian group of finite covolume, and assume either \(\Gamma\) has cusps, or it has neither cusps nor elliptic elements. Let \(V\) be a \(\widetilde{\Gamma}\)-module, and if \(\Gamma\) has no cusps assume also \(H^{2}(\Gamma,V)=0\)._
_(a) With the elements \(\widetilde{T}_{\Sigma}\) defined in Propositions 9 and 10, we have_
\[\sum_{i=0}^{2}(-1)^{i}\operatorname{tr}([\Sigma],H^{i}(\Gamma,V))=\sum_{X\subset \Sigma}\varepsilon_{\Gamma}(X)\operatorname{tr}(M_{X},V) \tag{21}\]
_where the sum is over \(\Gamma\)-conjugacy classes \(X\subset\Sigma\) and \(\varepsilon_{\Gamma}(X)=\deg_{X}\widetilde{T}_{\Sigma}\)._
_(b) The coefficients \(\varepsilon_{\Gamma}(X)\) are independent of the choice of representatives used to define \(T_{\Sigma}\)._
_(c) If \(X=\{M\}\) with \(\Gamma_{M}=\Gamma\), then_
\[\varepsilon_{\Gamma}(X)=-\frac{|\Gamma\backslash\mathcal{H}|}{2\pi},\]
_namely it is equal to the homological Euler-Poincare characteristic of \(\Gamma\)._
Proof.: (a) This follows immediately from Propositions 9 and 10, as \(\operatorname{tr}(M,V)\) is constant for \(M\) in a conjugacy class \(X\).
(b) We show that \(\deg_{X}\widetilde{T}_{\Sigma}\) is independent of the choice of coset representatives used to define \(T_{\Sigma}\). Any two choices of \(T_{\Sigma}\) differ by an element \(L\in\mathbb{Z}[\Sigma]\) with \(\deg_{\tau}L=0\) for all \(\tau\in\Gamma\backslash\Sigma\). Therefore we can write
\[L=\sum_{j=1}^{r}(1-g_{j})Y_{j} \tag{22}\]
for some \(Y_{j}\in\mathbb{Z}[\Sigma]\). Now the elements \(\partial_{\Sigma}g_{i}/\partial g_{j}\) for the two choices differ by elements \(L_{ij}\) such that \(L(1-g_{i})=\sum_{j=1}^{n}(1-g_{j})L_{ij}\).
Assume first that \(\Gamma\) has cusps, so \(\widetilde{T}_{\Sigma}\) is given by (16). It follows that the two corresponding \(\widetilde{T}_{\Sigma}\) differ by an element
\[\widetilde{L}=L-\sum_{i=1}^{n}L_{ii}+\sum_{i=1}^{l}\pi_{i}L_{ii}.\]
Multiplying (22) on the right by \((1-g_{i})\) and using Proposition 5 we obtain
\[L_{ii}=Y_{i}(1-g_{i})+\pi_{i}Z_{i}\text{ for }i\leqslant l,\quad L_{ii}=Y_{i}(1 -g_{i})\text{ for }i>l\;,\]
for some \(Z_{i}\in\mathbb{Q}[\Sigma]\). We conclude that
\[\widetilde{L}=\sum_{i=1}^{n}(Y_{i}g_{i}-g_{i}Y_{i})+\sum_{i=1}^{l}\pi_{i}Y_{i }(1-g_{i})\;,\]
and it is clear that each term in both sums has degree \(0\) over each conjugacy class.
Assume now \(\Gamma\) has no cusps nor elliptic elements, and \(\widetilde{T}_{\Sigma}\) is given by (19). Proposition 5 now gives
\[L_{ij}=Y_{j}(1-g_{i})+\frac{\partial r}{\partial g_{j}}Z_{i},\]
for some \(Z_{i}\in\mathbb{Z}[\Sigma]\). By Corollary 6 it follows that the elements \(\widetilde{T}_{\Sigma}^{\vee}\) corresponding to the two choices of representatives differ by \(L^{\vee}\) such that
\[\sum_{j=1}^{n}L_{ji}\frac{\partial r}{\partial g_{j}} =\frac{\partial r}{\partial g_{i}}L^{\vee}\] \[=\sum_{j}Y_{i}(1-g_{j})\partial r/\partial g_{j}+\sum_{j} \partial r/\partial g_{i}Z_{j}\partial r/\partial g_{j},\]
where in the second line we used the formula above for \(L_{ij}\). The first sum vanishes since \(r=1\) in \(\Gamma\), and we conclude that \(L^{\vee}-\sum_{j}Z_{j}\partial r/\partial g_{j}\) is annihilated by left multiplication by \(\partial r/\partial g_{i}\).
The explicit formula for \(r\) gives \(\partial r/\partial g_{i}=(1-g_{i^{\prime}})h_{i}\) for \(i^{\prime}=i\pm 1\) and \(h_{i}\in\Gamma\). If \(g\) is of infinite order, we have \(\operatorname{Ker}(1-g)=0\): if \((1-g)\sum c_{M}M=0\) in \(\mathbb{Q}[\Sigma]\), it follows that \(c_{M}=c_{Mg}=c_{Mg^{n}}\) for all \(n\), and if there is \(c_{M}\neq 0\), one would have infinitely many coefficients \(c_{Mg^{n}}\neq 0\), a contradiction.
It follows that \(L^{\vee}=\sum_{j}Z_{j}\partial r/\partial g_{j}\), and so the two corresponding \(\widetilde{T}_{\Sigma}\) differ by
\[\widetilde{L}=L+L^{\vee}-\sum_{i=1}^{n}L_{ii}=\sum_{i}(Y_{i}g_{i}-g_{i}Y_{i})+ \sum_{i}Z_{i}\frac{\partial r}{\partial g_{i}}-\frac{\partial r}{\partial g_{ i}}Z_{i},\]
and clearly \(\deg_{X}\widetilde{L}=0\) for every conjugacy class \(X\), finishing the proof.
(c) Since formula (21) is additive under taking union of double cosets, it is enough to prove the formula for \(\varepsilon_{\Gamma}(X)\) when \(\Sigma=\Gamma M\) and \(X=\{M\}\). In this case we have \(T_{\Sigma}=M\), and it follows that \(\partial_{\Sigma}g_{i}/\partial g_{i}=M\) for all \(i\). Also \(T_{\Sigma}^{\vee}=M\) if \(\Gamma\) has no cusps, and the coefficient of \(M\) in (16) or (19) gives the formula for \(\varepsilon_{\Gamma}(X)\), by the Gauss-Bonnet formula (3).
For the trivial double coset \(\Sigma=\Gamma\), the operators \(\widetilde{T}_{\Gamma}\) can be explicitly computed:
\[\widetilde{T}_{\Gamma}=1-n-\sum_{i=1}^{n}\pi_{i},\text{ or }\ \widetilde{T}_{\Gamma}=2-n\]
depending on whether \(\Gamma\) has cusps or not. If \(\Gamma\) has cusps, the elements \(g_{i}^{a}\) for \(1\leqslant i\leqslant l\), \(0\leqslant a<m_{i}\) form a set of representatives for conjugacy classes of elliptic elements, which we denote \(E(\Gamma)\), and we obtain.
**Corollary 13**.: _If \(\Gamma\) has cusps, let \(V\) be any \(\widetilde{\Gamma}\)-module, while if \(\Gamma\) has no cusps nor elliptic elements assume either \(H^{2}(\Gamma,V)=0\) or \(H^{2}(\Gamma,V)=V\). Then_
\[\sum_{i}(-1)^{i}\dim H^{i}(\Gamma,V)=-\frac{|\Gamma\backslash\mathcal{H}|}{2 \pi}\dim V+\sum_{g\in E(\Gamma)}\frac{1}{|\Gamma_{g}|}\operatorname{tr}(g,V),\]
Proof.: Only the case \(H^{2}(\Gamma,V)=V\) when \(\Gamma\) has no parabolic elements requires justification. In this case the formula follows from Remark 11.
### The modular group
For \(\Gamma=\operatorname{PSL}_{2}(\mathbb{Z})\), let \(g_{1},g_{2}\) be generators of orders \(2\), \(3\) respectively. Let \(\Sigma=\Sigma_{n}\) be the double coset of integral matrices of determinant \(n\geqslant 1\). In [11], together with Zagier we defined elements \(\widetilde{T}_{n}\) giving the action of the Hecke operator \([\Sigma_{n}]\) on period polynomials of modular forms, and satisfying an extra property that allowed us to prove a trace formula as in Proposition 9, with \(\widetilde{T}_{\Sigma}\) replaced by \(-\widetilde{T}_{n}\).
The element \(\widetilde{T}_{\Sigma}\) defined in (16) does not preserve the space of period polynomials for \(\Gamma\), but we can adjust it so that it does as follows. Let \(X_{i}\in\mathbb{Q}[\Gamma]\) such that \(1-\pi_{i}=(1-g_{i})X_{i}\) (that is \(X_{1}=\frac{1}{2}\), \(X_{2}=\frac{2+g_{2}}{3}\)), and define:
\[\widetilde{T}_{\Sigma}^{\prime}:=T_{\Sigma}-\sum_{i=1}^{2}(1-g_{i})\frac{ \partial_{\Sigma}g_{i}}{\partial g_{i}}X_{i}.\]
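Indeed, since \(\pi_{1}=\frac{1+g_{1}}{2}\) and \(\pi_{2}=\frac{1+g_{2}+g_{2}^{2}}{3}\) (the averaging idempotents for the cyclic subgroups generated by \(g_{1}\) and \(g_{2}\)), the stated \(X_{i}\) can be checked directly:

\[1-\pi_{1}=\frac{1-g_{1}}{2}=(1-g_{1})\cdot\tfrac{1}{2},\qquad 1-\pi_{2}=\frac{2-g_{2}-g_{2}^{2}}{3}=(1-g_{2})\cdot\frac{2+g_{2}}{3}.\]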
It is easily verified that \(\deg_{X}\widetilde{T}_{\Sigma}=\deg_{X}\widetilde{T}_{\Sigma}^{\prime}\) for all conjugacy classes \(X\). For \(i\neq j\in\{1,2\}\), we have by the definition (9)
\[\pi_{i}T_{\Sigma}(1-\pi_{j})=\pi_{i}T_{\Sigma}(1-g_{j})X_{j}=\pi_{i}(1-g_{j}) \frac{\partial_{\Sigma}g_{j}}{\partial g_{j}}X_{i}\]
so \(\pi_{i}\widetilde{T}_{\Sigma}^{\prime}=\pi_{i}T_{\Sigma}\pi_{j}\in\operatorname {Im}\pi_{j}\).
Since \(\deg_{\tau}\widetilde{T}_{\Sigma}^{\prime}=1\) for all cosets \(\tau\in\Gamma\backslash\Sigma\), it follows from [11, Corollary 2] that the adjoint of \(-\widetilde{T}_{\Sigma}^{\prime}\) satisfies properties (A), (B), (C) introduced for the modular group, so it is one of our previously constructed operators \(\widetilde{T}_{n}\). Therefore the constants \(\varepsilon_{\Gamma}(X)\) in Theorem 12 agree with \(w_{\Gamma}(X)\) defined in the introduction, as shown in [11] by giving an example of such an element \(\widetilde{T}_{n}\).
|
2310.12267 | Does Quarkonia Suppression serve as a probe for the deconfinement in
small systems? | In high multiplicity proton-proton $(p-p)$ collisions, the formation of a
deconfined state of quarks and gluons akin to Heavy Ion Collisions (HIC) has
been a subject of significant interest. In proton-proton ($p-p$) collisions,
the transverse size of the system is comparable to the longitudinal (Lorentz
contracted) dimension, unlike the case in Nucleus-Nucleus ($A-A$) collision,
leading to a hitherto unexplored effect of rapid decrease of temperature of the
medium on quark-antiquark bound states. This allows us to probe a unique
possibility of hadronization occurring before quarkonia dissociation within the
medium. In small systems, a rapid change in temperature also introduces sudden
changes in the Hamiltonian. This scenario prompts consideration of
non-adiabatic evolution, challenging the traditional adiabatic framework. We
demonstrate that non-adiabatic evolution may extend the longevity of
quark-anti-quark bound states in $p-p$ collisions, even at higher
multiplicities, offering new insights into the dynamics of strongly interacting
matter produced in smaller collision systems. | Partha Bagchi, Arpan Das, Ananta P. Mishra | 2023-10-18T19:04:35Z | http://arxiv.org/abs/2310.12267v1 | # Does Quarkonia Suppression serve as a probe for the deconfinement in small systems?
###### Abstract
In high multiplicity proton-proton (\(p-p\)) collisions, the formation of a deconfined state of quarks and gluons akin to Heavy Ion Collisions (HIC) has been a subject of significant interest. In proton-proton (\(p-p\)) collisions, the transverse size of the system is comparable to the longitudinal (Lorentz contracted) dimension, unlike the case in Nucleus-Nucleus (\(A-A\)) collision, leading to a hitherto unexplored effect of rapid decrease of temperature of the medium on quark-antiquark bound states. This allows us to probe a unique possibility of hadronization occurring before quarkonia dissociation within the medium. In small systems, a rapid change in temperature also introduces sudden changes in the Hamiltonian. This scenario prompts consideration of non-adiabatic evolution, challenging the traditional adiabatic framework. We demonstrate that non-adiabatic evolution may extend the longevity of quark-anti-quark bound states in \(p-p\) collisions, even at higher multiplicities, offering new insights into the dynamics of strongly interacting matter produced in smaller collision systems.
## I Introduction
Dissociation of various quarkonia states is sensitive to the medium temperature which makes the Quarkonia suppression a probe for the presence of thermalized deconfined matter [1]. In the deconfined medium, the conventional mechanism for quarkonia suppression [2] is the dissociation of quarkonia due to screening of the quark-antiquark potential in Quark-Gluon Plasma (QGP). Quarkonia can experience substantial yield modifications in the presence of QGP, primarily owing to Debye screening effects. When the temperature of the medium is higher than the dissociation temperature of the bound states, the potential between quark and antiquark gets fully screened, and the states will no longer be bound states. Hence, the yield of those states will be suppressed. This picture implicitly assumes that quarkonia have enough time to respond to the medium, and this gives rise to the adiabatic evolution of quarkonia as a quantum state under the time-dependent Hamiltonian. The _adiabatic_ evolution refers to gradually changing conditions allowing the system to adapt its configuration. In such a process, a state corresponding to an initial Hamiltonian \(H_{0}\) will evolve with time to the same eigenstate of the final Hamiltonian(\(H(t)\)). The condition for adiabatic evolution is that the Hamiltonian undergoes gradual changes over time. This allows the initial eigenstates ample time to adjust in response to the evolving Hamiltonian, preventing any transitions to different eigenstates. Qualitatively, for adiabatic evolution the time scale corresponding to the change in Hamiltonian (\(t_{\rm H}\sim\langle m|\dot{H}|n\rangle^{-1}\)) is sufficiently higher than the time scale associated with the transition to the nearest eigenstate (\(t_{\rm tr}\sim|E_{m}-E_{n}|^{-1}\)), i.e., \(t_{\rm H}>>t_{\rm tr}\)[3]. It is important to recognise that the change in Hamiltonian stems from the dynamics of plasma evolution and is sensitive to temporal variation of temperature. Depending upon the system under consideration the time evolution of the strongly interacting plasma can be quite rapid potentially leading to a situation where the condition \(t_{\rm H}>>t_{\rm tr}\) may not always be satisfied.
This necessarily demands a theoretical approach that incorporates a non-adiabatic evolution of bound states. Several attempts have been made to describe the evolution of quarkonia as non-adiabatic evolution ([4; 5; 6; 7; 8; 9]). In these investigations, it has been argued that due to non-adiabatic evolution arising from the rapid temperature evolution, the initial quarkonium states can make a transition to different excited states and also to the continuum states in QGP. Interestingly the presence of a transient magnetic field, which is expected to be produced in noncentral heavy ion collision, can also give rise to a non-adiabatic quarkonia evolution([10; 11; 12; 13; 14]). It is certain that the evolution must depend on the lifetime of QGP, mainly the temperature decay rate along with the medium's initial temperature. It is important to observe that for a rapid decrease in medium temperature, quarkonia may not get sufficient time to dissociate even if the initial temperature becomes more than the dissociation temperature.
Compelling data emerging from \(p-Pb\) collisions at a center-of-mass energy of \(\sqrt{s_{NN}}=5.02\) TeV [15] and from \(p-p\) collisions at \(\sqrt{s}=5\)TeV, \(7\)TeV, and \(13\)TeV at the Large Hadron Collider (LHC) [16], reveals non-zero elliptic flow coefficients indicating the presence of a thermalized partonic medium. A recent study suggested that the suppression of quarkonia could be a signal of QGP formation in \(p-p\) collisions [17]. Considering the adiabatic evolution of quarkonia in a boost invariant system it has been argued in Ref. [17] that if the quark-antiquark bound states are produced in a region where the effective temperature is higher than its dissociation temperature then the bound states will melt in the medium. Conversely, if the effective temperature is lower than the
dissociation temperature, then the dissociation is minimal. We argue that in \(p-p\) collisions small transverse size of the system can lead to a rapid decrease in temperature reducing the lifetime of the deconfined QCD medium. Moreover in small systems, a rapid change in temperature can also introduce sudden changes in the Hamiltonian allowing for a non-adiabatic evolution. In this paper, we study the quarkonia suppression considering non-adiabatic evolution for small systems. We argue that non-adiabatic evolution and fast temperature decay can extend the longevity of quark-anti-quark bound states in \(p-p\) collisions, even at higher multiplicities.
The rest of the paper is organized in the following manner. In Sec. II we discuss the dissociation probability of quark-antiquark bound state under non-adiabatic evolution within the framework of time-dependent perturbation theory. This discussion is followed by the modeling of temperature evolution both in the pre-hydrodynamic stage as well as in the hydrodynamic stage. In Sec. III we present the main outcome of the paper, where we show that for small systems the dissociation of \(J/\Psi\) can be suppressed primarily due to a shorter lifetime of the deconfined medium. In Sec. IV we conclude and summarize the results with an outlook to it.
## II Formalism
### Time-dependent perturbation theory: non-adiabatic evolution
Quarkonia are produced during the early stage (pre-equilibrium stage) of the collision. In a bottom-up thermalization approach, which is based on the QCD kinetic theory description, it can be argued that starting from an interacting out-of-equilibrium state one can achieve a thermalized medium at a later time \(\tau_{0}\). In the absence of a thermalized medium, we can determine the initial state of quarkonia by solving the _zero-temperature_ Hamiltonian (\(H_{0}=\vec{p}^{2}/2M+\sigma r-\frac{4}{3}\alpha_{s}/r\)) [18]. Here \(M\) denotes the reduced mass of the quark-antiquark system. However, as the medium achieves thermalization the zero-temperature Hamiltonian also evolves and transforms into its finite temperature counterpart, \(H=\vec{p}^{2}/2M+\frac{\sigma}{\mu}(1-\exp(-\mu r))-\frac{4}{3}\alpha_{s}\exp(-\mu r)/r\) [19]. Here \(\alpha_{s}\) represents the strong coupling constant, \(\sigma\) stands for the string tension, and \(\mu\) represents the screening mass, which is temperature-dependent and determined by \(\mu=\sqrt{6\pi\alpha_{s}}T\). The time dependence in the Hamiltonian appears through the time dependence of temperature. As the system expands further, the medium temperature eventually drops below the hadronization temperature, effectively reverting the Hamiltonian back to its zero-temperature form. We argue that the time evolution of the Hamiltonian can happen rapidly, within a time scale of the order of 1-2 fm (denoted as \(t_{\rm H}\)), for small systems with a dominant transverse flow. However, the transition time scale \(t_{\rm tr}\), obtained from the energy difference between the quarkonia ground state and its next excited state, is around 1 fm, which is of the same order as \(t_{\rm H}\), invalidating the adiabatic approximation.
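As an illustration of how the temperature enters the Hamiltonian, the following minimal Python sketch (not part of the original analysis) evaluates the vacuum and in-medium potentials with the screening mass \(\mu(T)=\sqrt{6\pi\alpha_{s}}\,T\); the numerical values of \(\alpha_{s}\) and \(\sigma\) below are illustrative choices, not the fitted parameters of Refs. [18; 19].

```python
import numpy as np

hbarc = 0.1973          # GeV fm
alpha_s = 0.4           # illustrative value
sigma = 0.18            # string tension in GeV^2, illustrative value

def mu(T):
    """Screening mass in GeV for temperature T in GeV."""
    return np.sqrt(6.0 * np.pi * alpha_s) * T

def V0(r):
    """Zero-temperature (Cornell) potential; r in fm, result in GeV."""
    r_gev = r / hbarc
    return sigma * r_gev - (4.0 / 3.0) * alpha_s / r_gev

def V_medium(r, T):
    """Screened in-medium potential entering H at temperature T (GeV)."""
    r_gev = r / hbarc
    m = mu(T)
    return (sigma / m) * (1.0 - np.exp(-m * r_gev)) - (4.0 / 3.0) * alpha_s * np.exp(-m * r_gev) / r_gev

# The time-dependent perturbation used below is H'(t) = V_medium(r, T(t)) - V0(r),
# since the kinetic terms cancel in H - H_0.
```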
Since this transition occurs quite rapidly, the initial quarkonia states evolve non-adiabatically and may undergo transitions to other states that are orthogonal to the initial ones. Hence, the probability that the original states make a transition to other orthogonal states can be calculated by generalizing the time-dependent perturbation theory method discussed by Lev Landau and Evgeny Lifshitz [3; 20] for quantum systems. Let's begin by assuming that the initial state \(|i\rangle\) is an eigenstate of the unperturbed Hamiltonian \(H_{0}\) (at \(\tau=0\)), and it evolves to a generic state \(|\psi\rangle\) in response to the perturbation \(H^{\prime}(\tau)=H(\tau)-H_{0}\). We aim to find the transition probability from the initial state to all other states orthogonal to a generic state \(|m\rangle\) eigenstates of the unperturbed Hamiltonian. Here \(|m\rangle\) could be a set of quantum states also. To achieve this, we first introduce a projection operator \(Q=1-\sum_{m}|m\rangle\langle m|\), which projects out all states except for a few eigenstates represented by \(|m\rangle\). Note that the initial states \(|i\rangle\) can also belong to the set of \(|m\rangle\) states. Therefore any state that is orthogonal to all states \(|m\rangle\) can be expressed as \(Q|\psi\rangle\). The transition amplitude \(\cal A\) can be expressed as,
\[{\cal A}=\langle\psi|Q|\psi\rangle. \tag{1}\]
The evolved state \(|\psi\rangle\) can be written in terms of the eigenstates \(|n\rangle\) of the Hamiltonian \(H_{0}\).
\[|\psi\rangle=\sum_{n}c_{n}|n\rangle. \tag{2}\]
The coefficients \(c_{n}\) can be determined using perturbation theory, particularly from first-order perturbation theory [3]:
\[c_{n}=\delta_{ni}-i\Delta\tau\langle n|\left(\frac{1}{\Delta\tau}\int_{0}^{\Delta\tau}H^{\prime}(\tau)d\tau\right)|i\rangle=\delta_{ni}-i\Delta\tau\langle n|\bar{H}^{\prime}|i\rangle. \tag{3}\]
Here, \(\bar{H}^{\prime}\) is defined as 1:
Footnote 1: In the integral above, within the first order of perturbations, we do not consider any leading order temporal dependence of states \(|i\rangle\) and \(|n\rangle\).
\[\bar{H}^{\prime}=\frac{1}{\Delta\tau}\int_{0}^{\Delta\tau}H^{\prime}(\tau)d\tau. \tag{4}\]
Using equations (1), (2), (3), and (4), we arrive at the transition amplitude \(\cal A\) [3]:
\[{\cal A}=\Delta\tau^{2}\langle i|\bar{H}^{\prime}Q\bar{H}^{\prime}|i\rangle+{\cal O}((\Delta\tau\bar{H}^{\prime})^{3}). \tag{5}\]
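For completeness, (5) follows directly from (1)-(3) under the assumptions used below, namely that the initial state \(|i\rangle\) belongs to the retained set \(\{|m\rangle\}\) and that \(\bar{H}^{\prime}\) is hermitian:

\[\langle\psi|Q|\psi\rangle=\sum_{n}|c_{n}|^{2}-\sum_{m}|c_{m}|^{2}=\Delta\tau^{2}\Big(\sum_{n}|\langle n|\bar{H}^{\prime}|i\rangle|^{2}-\sum_{m}|\langle m|\bar{H}^{\prime}|i\rangle|^{2}\Big)+{\cal O}((\Delta\tau\bar{H}^{\prime})^{3}),\]

where the zeroth-order contributions cancel between the two sums; using completeness, \(\sum_{n}|\langle n|\bar{H}^{\prime}|i\rangle|^{2}=\langle i|\bar{H}^{\prime 2}|i\rangle\), this is exactly \(\Delta\tau^{2}\langle i|\bar{H}^{\prime}Q\bar{H}^{\prime}|i\rangle\).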
It's important to note that the value of \(c_{n}\) in Equation 3 is accurate up to the first order, which corresponds to
the order of \(\Delta\tau\bar{H}^{\prime}\). Consequently, the value of \(\mathcal{A}\) is accurate up to the second order in \(c_{n}\), denoted as \((\Delta\tau\bar{H}^{\prime})^{2}\). Furthermore, we can express \(\langle i|\bar{H}^{\prime}Q\bar{H}^{\prime}|i\rangle\) as:
\[\langle i|\bar{H}^{\prime}Q\bar{H}^{\prime}|i\rangle=\langle i|\bar{H}^{\prime 2}|i\rangle-\sum_{m}\langle i|\bar{H}^{\prime}|m\rangle^{2}. \tag{6}\]
This allows us to express \(\mathcal{A}\) as:
\[\mathcal{A}=\Delta\tau^{2}\left(\langle i|\bar{H}^{\prime 2}|i\rangle-\sum_{m} \langle i|\bar{H}^{\prime}|m\rangle^{2}\right). \tag{7}\]
As stated above \(\mathcal{A}\) quantifies the transition amplitude from the initial state to all other states that are orthogonal to all the states represented by \(|m\rangle\). Certainly, for the present scenario, we consider the initial state \(|i\rangle\) to be \(|J/\Psi\rangle\) and the states in \(|m\rangle\) to be \(|J/\Psi\rangle\), \(|\chi\rangle\), and \(|\Psi^{\prime}\rangle\), then the transition amplitude \(\mathcal{A}\) represents the transition amplitude of \(J/\Psi\) into all the states other than \(J/\Psi\), \(\chi\), and \(\Psi^{\prime}\) in the Quark-Gluon Plasma (QGP). This transition amplitude for \(J/\Psi\) can be expressed as,
\[\mathcal{A}=\Delta\tau^{2}\bigg(\langle J/\Psi|\bar{H}^{\prime 2}|J/\Psi\rangle-\langle J/\Psi|\bar{H}^{\prime}|J/\Psi\rangle^{2}-\langle J/\Psi|\bar{H}^{\prime}|\chi\rangle^{2}-\langle J/\Psi|\bar{H}^{\prime}|\Psi^{\prime}\rangle^{2}\bigg). \tag{8}\]
Using the above expression one can obtain \(\Gamma\equiv|\mathcal{A}|^{2}\) which quantifies the dissociation probability of \(J/\Psi\) in the QGP.
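Numerically, once the relevant wavefunctions are available (for instance, reduced radial wavefunctions obtained by solving the Schrodinger equation for \(H_{0}\)), the quantity in Eq. (8) reduces to one-dimensional quadratures, since the kinetic terms cancel in \(H^{\prime}=H-H_{0}\) and \(\bar{H}^{\prime}\) acts as the time-averaged potential difference. A minimal sketch with assumed inputs (not the authors' code) is given below.

```python
import numpy as np

def transition_amplitude(r, u_i, u_m_list, vbar, dtau):
    """Evaluate A = dtau^2 ( <i|Vbar^2|i> - sum_m <i|Vbar|m>^2 ) on a radial grid.

    r        : radial grid (units consistent with vbar)
    u_i      : reduced radial wavefunction of the initial state, normalized on r
    u_m_list : wavefunctions of the retained bound states (including u_i)
    vbar     : time-averaged potential difference Vbar(r)
    dtau     : duration of the perturbation
    """
    w = np.gradient(r)                                    # simple quadrature weights
    exp_v2 = np.sum(w * u_i**2 * vbar**2)                 # <i|Vbar^2|i>
    proj = sum(np.sum(w * u_i * vbar * u_m)**2 for u_m in u_m_list)
    return dtau**2 * (exp_v2 - proj)

# Dissociation probability as used in the text: Gamma = |A|^2.
```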
### Pre-equilibrium dynamics: evolution of effective temperature
The key quantity that enters the expression of the transition probability is the perturbed Hamiltonian which carries the temporal dependence that originates from the time dependence of the system's temperature. modeling the time evolution of temperature is not trivial for the entire evolution of the plasma in heavy ion collision. Fortunately, hydrodynamic evolution plays a crucial role in the space-time evolution of the QCD medium after the partonic medium thermalizes starting from the pre-equilibrium stage. Irrespective of the initial stages, the hydrodynamic evolution unambiguously describes the bulk evolution of the medium. Therefore we can certainly choose a hydrodynamic evolution to model the temperature evolution or the evolution of the Hamiltonian. But to model the pre-equilibrium stages one may rely on the effective QCD kinetic theory description as has been discussed within the framework of bottom-up thermalization [21; 22; 23]. Qualitatively, in this approach, it has been argued that in non-expanding systems gauge bosons can rapidly achieve equilibrium (kinetic) among themselves followed by the equilibration of fermions. On the other hand, if the system undergoes a rapid longitudinal expansion, partons may remain out of equilibrium, but the system can be effectively described by the fluid dynamics. Without going into the details of the model we consider the following ansatz for the proper time evolution of the _effective_ temperature (\(T_{\rm eff}\)) 2,
Footnote 2: It is effective temperature because the temperature is strictly defined in equilibrium only.
\[\frac{T_{\rm eff}}{T_{\rm Hydro}}=\left(\frac{\tau}{\tau_{\rm Hydro}}\right)^{ \frac{1}{\tau}\frac{\alpha-1}{(\alpha+3)}} \tag{9}\]
The physical picture that prompts us to explore the above scaling is that the initial out-of-equilibrium partons scatter with each other to achieve the kinetic/thermal equilibrium. Therefore as the proper time approaches the thermalization time (\(\tau_{\rm Them}\)) system equilibrates as a whole. We also identify the thermalization time scale as the time scale when we can apply the hydrodynamic description (\(\tau_{\rm Hydro}\)). In principle, all these different time scales can form a hierarchy, but we expect that if the thermalization is achieved very fast then the difference between different scales may not be too large not affecting the system dynamics significantly. The parameter \(\alpha\) enters the above equation because the effective temperature can be defined through the \(\alpha\)-th moment of the Fermionic or Bosonic distribution function [21]. Physically the parameter \(\alpha\) determines how fast the system achieves hydronization (onset of hydrodynamic description) or thermalization. In the subsequent discussion, we appropriately choose \(T_{\rm Hydro}\), \(\tau_{\rm Hydro}\), and \(\alpha\) to model the pre-equilibrium dynamics. Instead of going into the microscopic description, naively one can also assume the temperature starts at zero at some initial time and increases linearly until it reaches a value \(T_{\rm Hydro}\) at time \(\tau_{\rm Hydro}\). Such simple approximations can be useful to calculate the average perturbation (as described in Equation 4) in the pre-thermalization stage.
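For concreteness, both pre-equilibrium profiles described above can be encoded in a few lines; this is only a sketch of the modeling, and the power-law exponent is kept as a free parameter, since it is fixed by the choice of \(\alpha\) in Eq. (9).

```python
import numpy as np

def T_eff(tau, tau_hydro=0.3, T_hydro=0.350, profile="linear", exponent=0.25):
    """Pre-equilibrium effective temperature; tau in fm, temperatures in GeV.

    profile="linear": rises linearly from 0 to T_hydro at tau_hydro.
    profile="power" : the ansatz of Eq. (9), with 'exponent' a placeholder
                      for the alpha-dependent exponent.
    """
    tau = np.asarray(tau, dtype=float)
    if profile == "linear":
        return np.where(tau < tau_hydro, T_hydro * tau / tau_hydro, T_hydro)
    return np.where(tau < tau_hydro, T_hydro * (tau / tau_hydro) ** exponent, T_hydro)
```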
### Gubser flow and the associated temperature evolution
Once we have a description of the pre-equilibrium effective temperature that also quantifies the pre-equilibrium dynamics of the Hamiltonian, now we can look into the temperature evolution due to the hydrodynamic flow dynamics. Notably, in \(p-p\) collisions, the size of the produced medium is expected to be quite small, approximately 1.5 fm [24] and transverse expansion can not be ignored, which is otherwise neglected in the Bjorken flow solution for large systems. To take into consideration the transverse expansion of the system in the present calculation we look into the Gubser flow, first explored by Gubser and Yarom [25]. This approach combines a "boost-invariant" longitudinal flow, akin to the Bjorken flow, with consideration for transverse flow. The evolution of key thermodynamic quantities, including energy
density (\(\epsilon\)) and shear stress (\(\pi\)), within the framework of Gubser flow with third-order viscous corrections, is detailed in [26; 27].
\[\frac{d\hat{\epsilon}}{d\rho} = -\left(\frac{8}{3}\hat{\epsilon}-\hat{\pi}\right)\tanh(\rho) \tag{10}\] \[\frac{d\hat{\pi}}{d\rho} = -\frac{\hat{\pi}}{\hat{\tau}_{\pi}}+\tanh(\rho)\left(\frac{4}{3} \hat{\beta}_{\pi}-\hat{\lambda}\hat{\pi}-\hat{\chi}\frac{\hat{\pi}^{2}}{\hat{ \beta}_{\pi}}\right) \tag{11}\]
The dimensionless quantities \(\hat{\epsilon}\) and \(\hat{\pi}\) are expressed as \(\hat{\epsilon}=\hat{T}^{4}=\epsilon\tau^{4}=3\hat{P}\) and \(\hat{\pi}=\pi\tau^{4}\), where \(\tau\) is the proper time and \(\hat{T}\) is related to the temperature. The parameters have been chosen [26] as \(\epsilon=\frac{3}{\pi^{2}}T^{4}\), \(\hat{\tau}_{\pi}(=c/\hat{T})\) is related to the relaxation time with \(c=5\frac{\eta}{s}\), \(\hat{\beta}_{\pi}=4\hat{P}/5\), \(\hat{\lambda}=46/21\), and the third-order correction parameter \(\hat{\chi}=72/245\). The conformal time \(\rho\) can be written as
\[\rho=-\sinh^{-1}\left(\frac{1-q^{2}\tau^{2}+q^{2}x_{T}^{2}}{2q\tau}\right) \tag{12}\]
where \(q\) is an arbitrary energy scale, which is related to the transverse size of the medium (\(r_{T}\)) like \(q=\frac{1}{r_{T}}\), \(x_{T}\) is the position in the transverse plane. One can retrieve the Bjorken flow solution by taking the limit \(r_{T}\rightarrow\infty\) or \(q\to 0\). One can also use the \((3+1)\) dimensional hydrodynamic description for a more accurate description of non-boost invariant flow with nontrivial rapidity dependence. But considering the possible boost invariance in ultra-relativistic collisions we restrict ourselves to the analytically solvable hydrodynamic description with transverse expansion.
To demonstrate the effect of the transverse expansion on the evolution of temperature we solve the evolution equations (10) and (11) with initial conditions \(T=T_{\rm Hydro}=350\) MeV and \(\hat{\pi}=\frac{4}{3}\hat{\beta}_{\pi}\hat{\tau}_{\pi}\) at \(\tau=\tau_{\rm Hydro}=0.3\) fm for various system sizes (\(r_{T}\)). The results shown in Fig. 1 indicate that, as \(r_{T}\) increases, the lifetime of the Quark-Gluon Plasma (QGP) increases. This is because the time scale over which the temperature \(T\) falls below \(T_{c}\) (the QCD transition temperature, taken to be \(\sim 150\) MeV) grows with the transverse size. At sufficiently large \(r_{T}\), the variation of temperature \(T\) with proper time \(\tau\) for Gubser flow closely resembles that of Bjorken flow. Additionally, from Fig. 2 we observe that the decay of temperature is slower for the viscous case compared to the inviscid case (Fig. 1) for identical initial conditions. In Fig. 2, it is evident that the \(T\) vs. \(\tau\) curves for different orders of viscous corrections largely overlap with each other for the lower bound of \(\frac{\eta}{s}\) (equal to \(\frac{1}{4\pi}\)). In contrast, the inviscid case consistently exhibits faster temperature decay compared to all viscous scenarios. The variation of temperature with system size clearly indicates that for small systems the time evolution can be rapid compared to large systems, allowing us to explore the scenario of non-adiabatic evolution.
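A minimal numerical sketch of this procedure (not the authors' code) is given below: Eqs. (10) and (11) are integrated in the conformal time \(\rho\) of Eq. (12) at \(x_{T}=0\), using the identifications \(\hat{\epsilon}=\hat{T}^{4}\), \(\epsilon=\frac{3}{\pi^{2}}T^{4}\), \(\hat{\tau}_{\pi}=c/\hat{T}\) with \(c=5\eta/s\), and the initial conditions quoted above. Scanning \(r_{T}\) and \(\eta/s\) in such a script is how temperature profiles like those in Figs. 1 and 2 can be generated.

```python
import numpy as np
from scipy.integrate import solve_ivp

hbarc = 0.1973                          # GeV fm
eta_over_s = 1.0 / (4.0 * np.pi)        # lower bound used in the text
c_relax = 5.0 * eta_over_s
lam, chi = 46.0 / 21.0, 72.0 / 245.0    # second- and third-order coefficients

def rhs(rho, y):
    """Right-hand sides of Eqs. (10) and (11) in conformal time rho."""
    eps_hat, pi_hat = y
    T_hat = eps_hat ** 0.25
    P_hat = eps_hat / 3.0
    beta_pi = 4.0 * P_hat / 5.0
    tau_pi = c_relax / T_hat
    th = np.tanh(rho)
    deps = -(8.0 / 3.0 * eps_hat - pi_hat) * th
    dpi = -pi_hat / tau_pi + th * (4.0 / 3.0 * beta_pi - lam * pi_hat - chi * pi_hat ** 2 / beta_pi)
    return [deps, dpi]

def rho_of_tau(tau, q):
    """Conformal time of Eq. (12) at x_T = 0."""
    return -np.arcsinh((1.0 - (q * tau) ** 2) / (2.0 * q * tau))

r_T, tau0, T0 = 1.5, 0.3, 0.350         # fm, fm, GeV
q = 1.0 / r_T
T_hat0 = (3.0 / np.pi ** 2) ** 0.25 * (tau0 * T0 / hbarc)   # dimensionless T-hat
eps_hat0 = T_hat0 ** 4
pi_hat0 = (4.0 / 3.0) * (4.0 * eps_hat0 / 15.0) * (c_relax / T_hat0)

taus = np.linspace(tau0, 5.0, 200)
rhos = rho_of_tau(taus, q)
sol = solve_ivp(rhs, (rhos[0], rhos[-1]), [eps_hat0, pi_hat0], t_eval=rhos, rtol=1e-8)
T_of_tau = hbarc * sol.y[0] ** 0.25 / ((3.0 / np.pi ** 2) ** 0.25 * taus)   # GeV
```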
Figure 1: Proper time (\(\tau\)) evolution of temperature (\(T\)) for different values of the transverse size (\(r_{T}\)) of the medium. All lines represent the temperature evolution following the Gubser flow without any viscous corrections. For comparison, we also show the temperature evolution as obtained from the Bjorken flow in the absence of viscosity. It is clear that with a large system size (\(r_{T}=100\) fm) Gubser flow solution boils down to the Bjorken flow solution. With a smaller system size the lifetime of the deconfined medium decreases. In hydrodynamic evolution, we consider the temperature at the center of the transverse plane.
Figure 2: Effect of viscous corrections on the evolution of the medium temperature. In this case, we consider \(4\pi\eta/s=1\), \(r_{T}=1.5\) fm. In this plot, we have considered viscous corrections up to third order. It is clear that the viscous corrections slow down the decrease of temperature with proper time, increasing the lifetime of the deconfined medium. Here first order means the Navier-Stokes limit, where \(\pi\) is not an independent hydrodynamic variable. For the second-order case \(\hat{\chi}=0\) and for the third order \(\hat{\chi}\neq 0\).
## III Results and Discussions
The factors that can affect the dissociation probability of \(J/\Psi\) are (1) the evolution profile of effective temperature in the pre-hydronization/ pre-thermalization stage, (2) the initial thermalization/hydronization temperature (\(T_{\rm Hydro}\)), (3) the time scale for hydronization (\(\tau_{\rm Hydro}\)), (4) the transverse size of the system, and (5) viscous correction to hydrodynamic flow. As mentioned above in this work we have considered two different scenarios for the evolution of the temperature in the pre-hydronization stage. One is the power law profile as indicated in Eq.(9) and the other one is a linear rise of temperature from zero to \(T_{\rm Hydro}\). The results for the power law profile and for the linear profile have been shown in Fig.(3), and Fig.(4) respectively. In both figures to see the effect of \(T_{\rm Hydro}\) on the dissociation probability, we vary \(T_{\rm Hydro}\) from \(T_{c}\) to \(3T_{c}\) for a fixed \(\tau_{\rm Hydro}=0.3\) fm, \(4\pi\eta/s=1\), and \(r_{T}=1.5\) fm. On the other hand, to observe the effect of \(\tau_{\rm Hydro}\) on the dissociation probability in both figures we consider \(T_{\rm Hydro}=350\) MeV, \(4\pi\eta/s=1\), \(r_{T}=1.5\) fm, and vary \(\tau_{\rm Hydro}\) within the range of 0.1 fm to 0.5 fm. Similarly, we keep \(T_{\rm Hydro}=350\) MeV, \(\tau_{\rm Hydro}=0.3\) fm \(4\pi\eta/s=1\) fixed and vary \(r_{T}\) within the range \(1.0-5.0\) fm to demonstrate the effect of the transverse size of the system. Finally, to isolate the effect of viscosity on the \(J/\Psi\) dissociation probability in both figures we only vary the shear viscous coefficient \(4\pi\eta/s\) within the range \(1-5\), keeping all the other parameters fixed, i.e., \(T_{\rm Hydro}=350\) MeV, \(\tau_{\rm Hydro}=0.3\) fm, \(r_{T}=1.5\) fm. We choose this set of parameters to determine the highest possible dissociation of \(J/\Psi\). Moreover in our calculations, we have assumed \(J/\Psi\) is produced at the centre of the medium (\(x_{T}=0\)), with no initial transverse momentum (\(p_{T}\)) facing the QGP medium up to hadronisation. By setting up this scenario, we aim to maximize the time the produced \(J/\Psi\) spends within the QGP, which in turn should maximize the probability of it undergoing dissociation.
In Figs. 3 and 4 we plot the dissociation probability with respect to the dimensionless ratio \(R_{s}\), where \(R_{s}\) denotes the ratio \(T_{\rm Hydro}/T_{c}\) for the black line, \(\tau_{\rm Hydro}({\rm fm})/0.1({\rm fm})\) for the red line, \(r_{T}({\rm fm})/1({\rm fm})\) for the blue line, and \(4\pi\eta/s\) for the brown line. Looking at these figures, we can observe some important trends. First, the dissociation probability tends to increase gradually with both \(\tau_{\rm Hydro}\) and viscosity; however, this increase is not very steep. The dependence of the dissociation probability on \(\tau_{\rm Hydro}\) is rather convoluted, because both the pre-equilibrium and the hydrodynamic stages depend on \(\tau_{\rm Hydro}\). On the other hand, the dissociation probability rises quite rapidly as the system size (\(r_{T}\)) increases. This system-size scaling of the \(J/\Psi\) dissociation probability indicates that in small systems quark-antiquark bound states are more likely to survive than in large systems (if we ignore any dissociation mechanism due to hadronic scattering in the hot and dense hadron gas). This is predominantly because for larger \(r_{T}\) the lifetime of the deconfined medium is larger. Furthermore, the rate of increase becomes even more pronounced with higher viscosity (\(\eta/s\)). This is because the viscous effect tends to reduce the rate
Figure 3: Dissociation Probability of \(J/\Psi\) as a function of the dimensionless quantity \(R_{s}\) (see text for a detailed description). \(R_{s}\) indicates \(T_{\rm Hydro}/T_{c}\) ratio for the black line, \(\tau_{\rm Hydro}({\rm fm})/0.1({\rm fm})\) for the red line, \(r_{T}({\rm fm})/1({\rm fm})\) for the blue line, and \(4\pi\eta/s\) for the brown line. In this case, the evolution of effective temperature in the pre-equilibrium stage has been modeled using the power law in Eq.(9) with \(\alpha=2\). We do not find any significant change in the results for different values of \(\alpha\), e.g., \(\alpha=3\) and 4. We have taken the suitable value of \(\alpha\) as given in Ref. [21].
Figure 4: Dissociation Probability of \(J/\Psi\) as a function of the dimensionless quantity \(R_{s}\) (see text for a detailed description). \(R_{s}\) indicates \(T_{\rm Hydro}/T_{c}\) ratio for the black line, \(\tau_{\rm Hydro}({\rm fm})/0.1({\rm fm})\) for the red line, \(r_{T}({\rm fm})/1({\rm fm})\) for the blue line, and \(4\pi\eta/s\) for the brown line. In this case, the evolution of effective temperature in the pre-equilibrium stage has been modeled using the linear rise of effective temperature starting from zero to \(T_{\rm Hydro}\).
of fall of temperature due to hydrodynamic expansion. However, it's important that even with these increases, as long as the transverse size of the system remains less than 3 fm, the dissociation probability stays of the order of 20%. For a system size of order 1.5 fm the dissociation probability can be as small as 10%. Therefore even in the presence of deconfined matter the dissociation probability of \(J/\Psi\) is not significantly large for small systems. From these plots, one can also conclude that the dissociation probability can be sensitive to the modeling of the effective temperature for the pre-equilibrium dynamics.
We emphasize that we have calculated the probability of dissociation of \(J/\Psi\) with no transverse momentum and produced at the center of the medium (\(x_{T}=0\)). However, the \(J/\Psi\) can have finite \(p_{T}\) and it can be produced anywhere in the transverse plane. In this scenario, the produced \(J/\Psi\) will face QGP for a much shorter time and can get out of the medium well before hadronization. Consequently, the probability of \(J/\Psi\) dissociation in such cases will be significantly lower compared to our previous calculations. Therefore, the overall probability of dissociation may be insignificant in these more realistic conditions.
## IV Summary
In this paper, we estimate an upper bound on the \(J/\Psi\) dissociation probability for small systems. With realistic modeling of the pre-hydronization stage along with a hydrodynamic evolution with the transverse expansion we showed that the survival probability of quark-antiquark bound states can be significantly large in small systems even for large multiplicity. Due to a finite transverse system size in small systems, non-adiabatic evolution can affect the dissociation probability of \(J/\Psi\). Therefore quarkonia suppression may not be a clear signature for deconfinement in small systems even with large multiplicity. The technique we've employed to calculate the dissociation probability of \(J/\Psi\) has broader applications beyond high-energy physics. It can be adapted for use in other fields like atomic or molecular physics and chemistry. For example, one can easily generalize this approach to calculate the ionization of hydrogen-like atoms caused by electric impulses. Moreover, this technique can be readily extended to calculate the ionization of atoms or molecules when exposed to an electromagnetic pulse, making it a versatile tool in various scientific domains.
## Acknowledgements
We thank the organizers of the ICPAQGP 2023 at Puri, India, and the 2nd workshop on the 'Dynamics of QCD matter' organized at the National Institute of Science Education and Research Bhubaneswar (NISER), India for the kind hospitality and for creating the opportunity for fruitful discussions and further development related to this work. In these conferences, this work has been presented by APM and PB. We would also like to acknowledge Ashutosh Dash, Amaresh Jaiswal, Surasree Mazumder, Sukanya Mitra, Tamal K. Mukherjee, and Victor Roy for illuminating discussions.
|
2308.07346 | Py-Tetrad and RPy-Tetrad: A New Python Interface with R Support for
Tetrad Causal Search | We give novel Python and R interfaces for the (Java) Tetrad project for
causal modeling, search, and estimation. The Tetrad project is a mainstay in
the literature, having been under consistent development for over 30 years.
Some of its algorithms are now classics, like PC and FCI; others are recent
developments. It is increasingly the case, however, that researchers need to
access the underlying Java code from Python or R. Existing methods for doing
this are inadequate. We provide new, up-to-date methods using the JPype
Python-Java interface and the Reticulate Python-R interface, directly solving
these issues. With the addition of some simple tools and the provision of
working examples for both Python and R, using JPype and Reticulate to interface
Python and R with Tetrad is straightforward and intuitive. | Joseph D. Ramsey, Bryan Andrews | 2023-08-13T16:29:05Z | http://arxiv.org/abs/2308.07346v1 | [
###### Abstract
We give novel Python and R interfaces for the (Java) Tetrad project for causal modeling, search, and estimation. The Tetrad project is a mainstay in the literature, having been under consistent development for over 30 years. Some of its algorithms are now classics, like PC and FCI; others are recent developments. It is increasingly the case, however, that researchers need to access the underlying Java code from Python or R. Existing methods for doing this are inadequate. We provide new, up-to-date methods using the JPype Python-Java interface and the Reticulate Python-R interface, directly solving these issues. With the addition of some simple tools and the provision of working examples for both Python and R, using JPype and Reticulate to interface Python and R with Tetrad is straightforward and intuitive.
Causal Analysis Workshop Series (CAWS), 1-12, 2023

# Py-Tetrad and RPy-Tetrad: A New Python Interface with R Support for Tetrad Causal Search

Joseph D. Ramsey, Department of Philosophy, Carnegie Mellon University, Pittsburgh, PA

Bryan Andrews, Department of Psychiatry & Behavioral Sciences, University of Minnesota, Minneapolis, MN
## 1 Introduction
Tetrad (Ramsey et al., 2018)1 is a well-established package for causal modeling, search, and estimation that has been continuously developed since the early 1990s. Not only is it the source for several now-classic algorithms, such as the celebrated Peter/Clark (PC) (Spirtes et al., 2000) and Fast Causal Inference (FCI) (Spirtes et al., 2000) algorithms, but it contains implementations of more recent state-of-the-art algorithms as well. Algorithms in Tetrad include, but are not limited to:
Footnote 1: [https://github.com/cmu-phil/tetrad](https://github.com/cmu-phil/tetrad)
* Fast Greedy Equivalence Search (FGES) (Ramsey et al., 2017), an implementation of the Greedy Equivalent Search (GES) (Chickering, 2002) algorithm;
* Build Pure Clusters (BPC) (Silva et al., 2006), an algorithm for searching over latent variable structures using tests of variable tetrads;
* Greedy Relaxations of Sparsest Permutation (GRaSP) (Lam et al., 2022), a permutation-based algorithm that provides high-accuracy output for dense graphs.
However, it is increasingly the case that users need to access the underlying code of Tetrad. This can present an obstacle to those unfamiliar with Java since most data scientists now primarily use Python (VanRossum and Drake, 2010) or R (R Core Team, 2023). Python is beneficial for machine learning and is also increasingly used in the sciences. At the same time, R has been a mainstay of scientific and statistical research for many years. Python and R are scripting languages that can mock up ideas quickly, wrangle data, generate plots, do statistical analyses, etc. In addition, Python has become a go-to language for scientific algorithmic development, so a great deal of software has become readily available for general use in Python that is not readily accessible from Java.
The Python project, JPype (Nelson and Scherer, 2020), provides a ready solution for accessing the underlying code of Tetrad from Python; it allows easy access, using Python-native syntax, to any Java class, method, or field, so we have created a package, py-tetrad,2 to show how to use JPype to interact with Tetrad Java code. Py-tetrad provides three things. First, it provides simple tools for translating datasets and graphs between Python and Tetrad. Second, it provides a class, TetradSearch, that handles everyday operations without needing to resort to JPype programming explicitly. And third, it provides numerous examples of using TetradSearch and JPype to interface Python with Tetrad.
Footnote 2: [https://github.com/cmu-phil/py-tetrad](https://github.com/cmu-phil/py-tetrad)
To access Tetrad from R, we take advantage of the Reticulate R package,3 which provides indirect access to Tetrad through py-tetrad. We call the resulting project rpy-tetrad; it is located in a subdirectory of the py-tetrad project.4 R through Reticulate can share data and graphs directly with py-tetrad, and py-tetrad can then translate them to Java. Routing the connection from R to Java through Python has the advantage of fewer "moving parts"; updates to py-tetrad are immediately accessible to R, so the projects do not need to be updated separately. As with py-tetrad, rpy-tetrad provides numerous examples.
Footnote 3: [https://rstudio.github.io/reticulate/](https://rstudio.github.io/reticulate/)
Installation instructions for py-tetrad and rpy-tetrad may be found in the ReadMe files on their respective GitHub pages, and example files for use may also be found in these directories on GitHub. These instructions will be kept up to date if changes to Java, Tetrad, or Python require it. The example files in py-tetrad and rpy-tetrad will also be kept up to date as changes to Java, Tetrad, or Python happen.
It is worth noting that the idea of interfacing Python and R with Tetrad is not new; previous work had produced the packages py-causal (for Python) and r-causal (for R) using the Causal Command tool5 (see the comparison in Table 1). For those using py-causal or r-causal, we recommend transitioning to py-tetrad and rpy-tetrad for two reasons. First, the Java versions used in py-causal and r-causal are now outdated and no longer receive updates. Using py-tetrad and rpy-tetrad, users can benefit from the significant improvements made to Tetrad in recent years. Second, JPype enables access to the entire Tetrad codebase from Python, not just a select portion. While this may not be essential for users interested in specific methods already supported by py-causal or r-causal, it dramatically facilitates exploring aspects of Tetrad that are not directly supported.6
Footnote 5: [https://github.com/bd2kccd/causal-cmd](https://github.com/bd2kccd/causal-cmd)
## 2 Prepackaged Tools
Prepackaged methods are provided in the 'tools' directory of the project7 to translate datasets between Tetrad and Python and to translate graphs from Tetrad to Python. A class, TetradSearch,8 is provided to give access to Tetrad without using JPype calls; this class encapsulates a wide swath of Tetrad functionality and is accessible from R. These are not necessarily the only tools that will eventually be provided, but they are helpful for a wide variety of tasks.
Footnote 7: [https://github.com/cmu-phil/py-tetrad/tree/main/pytetrad/tools](https://github.com/cmu-phil/py-tetrad/tree/main/pytetrad/tools)
Footnote 8: [https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/tools/TetradSearch.py](https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/tools/TetradSearch.py)
The dataset translators serve two purposes. First, they provide translations of data between Tetrad and Python in a way that makes them useful for most purposes. They are simple, fast, and effective; even large datasets are translated quickly. They can translate data that is continuous, discrete, or a mixture of continuous and discrete columns. From Python to Tetrad, discrete columns are detected by the column type; these column types can be set directly in the data frame if the Python data loader does not already set them to the appropriate type. Second, Python has an abundance of tools for data preprocessing and wrangling that researchers can now directly incorporate into their pipelines.
Several graph translation methods from Tetrad to Python are provided to handle a variety of purposes. Users can, of course, write their own graph translators for special purposes if the provided formats are inadequate. One can do this by wrapping one of these methods and extending its functionality.
A graph can be retrieved from Tetrad in the following formats:
1. As a GeneralGraph object in causal-learn. The causal-learn package9 supports a graph object in Python compatible with Tetrad's EdgeListGraph; the translation is direct, and the returned graph can be manipulated directly in Python. Footnote 9: [https://causal-learn.readthedocs.io/en/latest/index.html](https://causal-learn.readthedocs.io/en/latest/index.html)
2. As an edge matrix in PCALG (Kalisch et al., 2012). This is an edge matrix in which tail, arrow, star, circle, and null (no endpoint) endpoints are represented as distinct integers, which is compatible with Tetrad's EdgeListGraph. Footnote 9: [https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/tools/TetradSearch.py](https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/tools/TetradSearch.py)
3. As a simple DOT string. This is the form of a graph used by Graphviz (Ellson et al., 2002) to plot graphs. Graphviz is a sophisticated graph plotting tool with
\begin{table}
\begin{tabular}{|c|c|c c|c|} \hline & py-causal & r-causal & py-tetrad & rpy-tetrad \\ \hline Tetrad Version & 6.8.0 & 6.8.0 & 7.5.0+ & 7.5.0+ \\ Last Developed & 2017 & 2017 & 2023+ & 2023+ \\ Tetrad Access & javabridge & rjava & JPype & JPype \\ Python Version & 2.7+ & - & 3.5+ & 3.5+ \\ R Version & - & 3.3.0+ & - & 4+ \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of py-causal, r-causal, py-tetrad, rpy-tetrad
many options; our DOT format gives a basic DOT output. For more nuanced DOT outputs, the user can implement a similar method in Python (as we do for our Python example below).
4. As a model specification for the lavaan (Rosseel et al., 2017) R package. Any directed and acyclic graph can be saved in this format.
If one uses JPype directly, the entire codebase of Tetrad is at one's fingertips; code examples are given in the py-tetrad project to show how to do this, but we will give an example below to show how powerful this can be. Of course, when accessing the Tetrad codebase, it helps to know the various classes, methods, and fields that one can use in one's JPype code. For this, it helps to have the documentation for the Tetrad package available for reference. This is given in the usual Java format as a set of Javadocs,10 which can be accessed online, though one should check for updates;11 the current Tetrad version for these is 7.4.0, though these are updated as revisions to Tetrad are made.
Footnote 10: The version of Javadocs at the time of writing of this report is at [https://www.phil.cmu.edu/tetrad-javadocs/7.4.0/](https://www.phil.cmu.edu/tetrad-javadocs/7.4.0/)
It should be noted as well that JPype allows for Java interfaces to be implemented in Python, so it is possible to define tests or scores in Python for use in Java search methods. An example of this is provided in the py-tetrad package.12
Footnote 11: [https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/general_scoring_example.py](https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/general_scoring_example.py).
## 3 A Code Example in Python
A typical py-tetrad workflow starts by loading a dataset, wrangling the data, and then optionally plotting histograms and scatter plots. Python provides robust tools for these tasks, so there is no need to rely on Tetrad for these steps. One then converts the data into a Tetrad dataset object before passing it to a Tetrad search procedure. The procedure returns a graph in Tetrad format, which can optionally be converted back to Python for further processing, such as calculating statistics in a simulation study or aggregating multiple runs in a bootstrapped study; examples of both are available in py-tetrad.
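As a sketch of this workflow (not taken verbatim from the paper's examples), the TetradSearch convenience class used in the R example of Section 4 can be driven from Python in the same way; the import path of TetradSearch and the data file name below are assumptions based on the repository layout cited above, and only the jar path matches the paper's own listing.

```python
import pandas as pd
import jpype
import jpype.imports

# Start the JVM with the Tetrad jar before importing anything from Tetrad.
jpype.startJVM(classpath=[f"resources/tetrad-gui-current-launch.jar"])

# Import path assumed from the repository layout (pytetrad/tools/TetradSearch.py).
from pytetrad.tools.TetradSearch import TetradSearch

df = pd.read_csv("resources/my_continuous_data.txt", sep="\t")  # hypothetical file

ts = TetradSearch(df)            # wraps translation of the pandas data to a Tetrad DataSet
ts.use_sem_bic(penalty_discount=2)
ts.use_fisher_z()
ts.run_grasp()                   # any supported search could be run here
print(ts.get_string())           # textual graph; ts.get_dot() gives a Graphviz DOT string
```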
We give an example of a bootstrapped study below. Tetrad has built-in bootstrapping facilities, which we will showcase in our subsequent R example for comparison, but we will do the bootstrapping in Python to show how it can be done. The bootstrapped study uses the Apple Watch Fitbit data (Fuller et al., 2020), which is available in easily loaded format at the indicated location.13 This code example is included in the py-tetrad repository.14
Footnote 13: [https://github.com/cmu-phil/example-causal-datasets/tree/main/real/apple-watch-fitbit](https://github.com/cmu-phil/example-causal-datasets/tree/main/real/apple-watch-fitbit)
Footnote 14: [https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/jpype_example.py](https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/jpype_example.py)
First, we load the data and make a knowledge object. This example has a mixture of continuous and discrete columns, so we need to correctly set the column types for our variables so they will be translated to Tetrad properly. We also use Tetrad's knowledge facility to put variables into "knowledge tiers" to ensure that variables in later tiers cannot cause variables in earlier tiers. We use the Degenerate Gaussian mixed-type score (Andrews
et al., 2019) with the SP-FCI algorithm (see Raskutti and Uhler (2018) and Ogarrio et al. (2016)).15
Footnote 15: The SP algorithm, since it looks at all permutations of the variables, is exponential and so scales in Tetrad comfortably only to about 11 variables (we have 10 here). Still, the implementation in Tetrad allows up to 11 variables per knowledge tier, so we are able to extend its functionality to more variables. SP is being substituted in SP-FCI for FGES in the GFCI algorithm.
import pandas as pd
import graphviz as gviz
import jpype
import jpype.imports

jpype.startJVM(classpath=[f"resources/tetrad-gui-current-launch.jar"])

import pytetrad.tools.translate as ptt
import pytetrad.tools.visualize as ptv
import edu.cmu.tetrad.search as ts
import edu.cmu.tetrad.data as td

tiers = [['age', 'gender', 'height', 'weight', 'resting_heart', 'device', 'activity'],
         ['steps', 'heart_rate', 'calories', 'distance']]

df = pd.read_csv("resources/aw-fb-pruned18.data.mixed.numeric.txt", sep="\t")
df = df[tiers[0] + tiers[1]]
df = df.astype({col: int for col in ["gender", "device", "activity"]})

knowledge = td.Knowledge()
knowledge.setTierForbiddenWithin(0, True)
for col in tiers[0]:
    knowledge.addToTier(0, col)
for col in tiers[1]:
    knowledge.addToTier(1, col)
Now we bootstrap and record the results. Note that since some columns are discrete and some continuous, we need to use a score suitable for mixed data types. Finally, we visualize and print the results. The bootstrapped results are shown in Table 2; the graph plot is shown in Figure 1.
reps = 10
graphs = []
for rep in range(reps):
    data = ptt.pandas_data_to_tetrad(df.sample(frac=1, replace=True))
    score = ts.score.DegenerateGaussianScore(data, True)
    score.setPenaltyDiscount(2)
    test = ts.test.ScoreIndTest(score, data)
    alg = ts.SpFci(test, score)
    alg.setKnowledge(knowledge)
    graphs.append(alg.search())

probs = ptv.graphs_to_probs(graphs)
graph_attr = {"viewport": "600", "outputorder": "edgesfirst"}
gdot = gviz.Graph(format="pdf", engine="neato", graph_attr=graph_attr)
gdot = ptv.write_gdot(gdot, probs, length=2)
gdot.render(filename="apple_fitbit", cleanup=True, quiet=True)
gdot.clear()
## 4 A Code Example in R
We provide support for R via py-tetrad and the R project, Reticulate, in a project we call "rpy-tetrad," located in the 'R' subdirectory of the py-tetrad project. Instructions for setting up the project to run in R or RStudio; these instructions, and rpy-tetrad itself, have been tested and shown to work on Mac, Linux, and Windows. Note that the interface in R to Tetrad is limited to what is available in the TetradSearch class in py-tetrad, so if more methods are needed in R, more methods must be added to the py-tetrad class so R can access them; this is straightforward. One cannot use JPype commands directly from R; these must be routed through Python.
We give an example in this section; more examples are available in the 'R' subdirectory of py-tetrad. We use data from a NASA Airfoil Self-Noise experiment (Brooks et al., 2014)
Figure 1: Graph for the JPype example in the text for ”Apple Watch Fitbit.”
with six variables. In this example, the data loader in R loads the data as a mixture of continuous and discrete (integer) variables, which the converter in py-tetrad will translate as a mixture of continuous data in Tetrad. To enforce the constraint that all variables are interpreted as continuous, we need first to tell R that the columns in this data frame are all to be interpreted as 'numeric'; a line in the code does this. Then a test defined over continuous variables ('Fisher Z') and a score defined over continuous variables ("SEM BIC") can be used to run the search algorithm. TetradSearch will report if the search, test, or score is for one data type but another data type is given.
As in the Python example, we here appeal to background knowledge. We use the feature in the TetradSearch class that allows us to specify background knowledge as 'temporal tiers,' where variables in later tiers are known not to cause variables in earlier tiers. For this example, we know that 'pressure' (sound pressure) in this NASA experiment cannot cause any other experimental or airfoil design variables, so we put it in a later tier. (One may also forbid or require particular edges as part of background knowledge.) If we use such background knowledge in a search, we expect the algorithm to honor it. Not all algorithms in Tetrad are designed to honor background knowledge; if one chooses an algorithm that does not and provides background knowledge, an error will be reported. Examples of algorithms that are not currently designed to honor background knowledge include ICA LiNGAM, Direct LiNGAM, and CCD, though this may change.
\begin{table}
\begin{tabular}{l c c c c c} Adjacency & \(\leftrightarrow\) & \(\circ\)\(\rightarrow\) & \(\rightarrow\) & \(\leftarrow\)\(\circ\) & \(\leftarrow\) \\ \hline (activity, calories) & 0.93 & 0.07 & - & - & - \\ (activity, heart rate) & 0.93 & 0.07 & - & - & - \\ (calories, device) & 0.22 & - & - & 0.30 & 0.48 \\ (device, distance) & 0.70 & 0.30 & - & - & - \\ (device, heart rate) & - & 0.45 & - & 0.30 & 0.25 \\ (device, steps) & - & 0.19 & 0.51 & 0.30 & - \\ (distance, resting heart) & 0.80 & - & - & 0.20 & - \\ (heart rate, height) & 0.71 & - & - & 0.29 & - \\ (heart rate, resting heart) & 0.80 & - & - & 0.20 & - \\ (distance, weight) & - & - & - & 0.86 & - \\ (distance, height) & 0.71 & - & - & 0.03 & - \\ (heart rate, steps) & - & 0.78 & - & 0.02 & 0.01 \\ (gender, steps) & - & 0.72 & - & - & - \\ (calories, gender) & - & - & - & 0.71 & - \\ (calories, distance) & 0.06 & - & 0.47 & - & - \\ (age, distance) & - & 0.29 & - & - & - \\ (age, calories) & - & 0.05 & - & - & - \\ (age, heart rate) & - & 0.01 & - & - & - \\ (heart rate, weight) & - & - & - & 0.01 & - \\ \end{tabular}
\end{table}
Table 2: Results of a 100-fold bootstrapping of the Apple Watch Fitbit data, as described in the text. Frequencies for each edge type encountered over 100 folds are given.
This example is written to work in RStudio, so that a histogram/scatterplot plot matrix (Figure 2) is displayed in the Plots window, and the plot of the graph returned by the algorithm is displayed in the Viewer window (e.g., Figure 3). We have included this code example in the py-tetrad repository.16
Footnote 16: [https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/R/sample_r_code7.R](https://github.com/cmu-phil/py-tetrad/blob/main/pytetrad/R/sample_r_code7.R)
Tetrad also has a bootstrapping facility that TetradSearch makes available if one prefers to use it. (As in the Python example, the user may prefer to write their own bootstrapping code.) If automatic bootstrapping is used, a graph is plotted that shows frequencies of occurrence of each edge observed in any bootstrapping models. We show bootstrapping results with 30 folds (Figure 4).17 We use here the GRaSP algorithm, a permutation algorithm that relaxes the faithfulness assumption, with the linear Gaussian BIC score.
Footnote 17: It should be noted that the TetradSearch class can also be used in Python as well for those who don’t wish to make JPype calls themselves. Examples of this are provided in the py-tetrad package.
pairs.panels(data, method = "pearson")

source_python("tools/TetradSearch.py")
ts <- TetradSearch(data)

ts$add_to_tier(1, "Attack")
ts$add_to_tier(1, "Chord")
ts$add_to_tier(1, "Velocity")
ts$add_to_tier(1, "Displacement")
ts$add_to_tier(1, "Frequency")
ts$add_to_tier(2, "Pressure")

ts$use_sem_bic(penalty_discount=2)
ts$use_fisher_z()
ts$run_grasp()

print(ts$get_string())

library(DiagrammeR)
dot <- ts$get_dot()
grViz(dot)

ts$set_bootstrapping(numberResampling = 30)
ts$run_grasp()
dot <- ts$get_dot()
grViz(dot)
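As noted above, the user may instead prefer to write their own bootstrapping loop. The following is a minimal Python sketch of such a loop (ours, not code from the repository): it resamples the rows of the data frame with replacement and reruns GRaSP on each resample, using only TetradSearch methods that already appear in the R example above. The data file name and the import path for TetradSearch.py are placeholders and will depend on the local py-tetrad checkout.

```python
import pandas as pd
# Hypothetical import path; point this at tools/TetradSearch.py in the
# py-tetrad checkout (the same class that is sourced in the R example).
from tools.TetradSearch import TetradSearch

# Placeholder file name for the NASA Airfoil Self-Noise data; casting to
# float mirrors the 'numeric' step in the R code above.
df = pd.read_csv("airfoil-self-noise.continuous.txt", sep="\t").astype(float)

graphs = []
for fold in range(30):
    boot = df.sample(n=len(df), replace=True)   # resample rows with replacement
    ts = TetradSearch(boot)
    # (Background-knowledge tiers could be re-added here, as in the R example.)
    ts.use_sem_bic(penalty_discount=2)
    ts.use_fisher_z()
    ts.run_grasp()
    graphs.append(ts.get_string())              # keep each fold's graph as text

# Edge frequencies over the folds (as in Table 2) can then be tallied from
# the stored graph strings.
```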
Figure 3: Plot of the graph for the R code "NASA Airfoil Self-Noise" example; this is displayed in the Viewer window in RStudio. This plot is produced by Graphviz using the DiagrammeR package in R.
## 5 Conclusion
In Python, the JPype package allows access to arbitrary code in Tetrad. In both Python and R, the TetradSearch class allows users to access Tetrad's most commonly used functionality without directly using JPype. If one wishes to do JPype programming, the entire codebase of Tetrad becomes available for Python scripts. These Python or R scripts can be published, shared, and reused. Despite its performance advantages over Python in many areas, Java is not a good scripting language, so publishing Java classes as scripts is a bit forced. Also, accessing other languages from Java is tricky, whereas accessing Java from Python with JPype is not, as we have shown. So py-tetrad scripts can easily take cognizance of the entire functionality available in Python, including functionality from Java.
One known issue is that the installation process can be a bit cumbersome. We hope to simplify this process so users can install py-tetrad using pip and rpy-tetrad using CRAN. Also, initial responses to these tools have been overwhelmingly positive, but feedback is welcome for issues encountered in py-tetrad or rpy-tetrad. Installing Graphviz in Python
Figure 4: Plot of the 30-fold bootstrapping graph for the R code "NASA Airfoil Self-Noise" example displayed in the Viewer window in RStudio. This plot is produced by Graphviz using the DiagrammeR package in R.
is also challenging but must be left as an exercise for the reader. Suggestions for new features to include in py-tetrad, rpy-tetrad, or Tetrad are welcome.18
Footnote 18: Bug reports and feature suggestions may be sent to us by email or, preferably, submitted to our GitHub Issue Tracker at [https://github.com/cmu-phil/pytetrad/issues](https://github.com/cmu-phil/pytetrad/issues).
### Acknowledgments
We thank our anonymous reviewers for their detailed comments. We thank Yasuhiro Shimodaira and Kelvin Lim for their feedback, especially with our R implementation, and for ensuring our projects run well on Windows. We also thank Peter Spirtes for encouraging this project and setting the goal of breaking down barriers between projects. We especially thank the authors of JPype and Reticulate for their excellent tools. Ramsey's work on this project was partly funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. Andrews' work on this project was funded and supported by the Comorbidity: Substance Use Disorders and Other Psychiatric Conditions Training Program T32DA037183.
|
2307.06805 | Path-Integral Formula for Computing Koopman Eigenfunctions | The paper is about the computation of the principal spectrum of the Koopman
operator (i.e., eigenvalues and eigenfunctions). The principal eigenfunctions
of the Koopman operator are the ones with the corresponding eigenvalues equal
to the eigenvalues of the linearization of the nonlinear system at an
equilibrium point. The main contribution of this paper is to provide a novel
approach for computing the principal eigenfunctions using a path-integral
formula. Furthermore, we provide conditions based on the stability property of
the dynamical system and the eigenvalues of the linearization towards computing
the principal eigenfunction using the path-integral formula. Further, we
provide a Deep Neural Network framework that utilizes our proposed
path-integral approach for eigenfunction computation in high-dimension systems.
Finally, we present simulation results for the computation of principal
eigenfunction and demonstrate their application for determining the stable and
unstable manifolds and constructing the Lyapunov function. | Shankar A. Deka, Sriram S. K. S. Narayanan, Umesh Vaidya | 2023-07-13T15:15:41Z | http://arxiv.org/abs/2307.06805v1 | # Path-Integral Formula for Computing Koopman Eigenfunctions
###### Abstract
The paper is about the computation of the principal spectrum of the Koopman operator (i.e., eigenvalues and eigenfunctions). The principal eigenfunctions of the Koopman operator are the ones with the corresponding eigenvalues equal to the eigenvalues of the linearization of the nonlinear system at an equilibrium point. The main contribution of this paper is to provide a novel approach for computing the principal eigenfunctions using a path-integral formula. Furthermore, we provide conditions based on the stability property of the dynamical system and the eigenvalues of the linearization towards computing the principal eigenfunction using the path-integral formula. Further, we provide a Deep Neural Network framework that utilizes our proposed path-integral approach for eigenfunction computation in high-dimension systems. Finally, we present simulation results for the computation of principal eigenfunction and demonstrate their application for determining the stable and unstable manifolds and constructing the Lyapunov function.
## I Introduction
The Koopman operator theory is emerging as a powerful tool for the analysis and synthesis of nonlinear systems [1, 2, 3, 4, 5, 6, 7]. The linear lifting of a nonlinear system provided by the Koopman operator in the space of functions is successfully exploited for control design [8, 9], prediction [10, 11], and uncertainty propagation [12, 13] in a dynamical system. However, the spectral properties, i.e., the eigenvalues and eigenfunctions, of the Koopman operator still need to be explored, especially for control [9, 14].
In this paper, we are specifically interested in identifying the principal eigenfunctions of the Koopman operator. The principal eigenfunctions are associated with the eigenvalues of the linearization of the nonlinear system at an equilibrium point. The principal eigenfunctions provide a powerful tool for analyzing and synthesizing controllers for nonlinear systems. These eigenfunctions can be used as a change of coordinates for the linear representation of a nonlinear system over a large region of the state space [1, 15]. The extent of validity of these eigenfunctions determines the size of the domain over which the linear representation is valid. For example, in a system with a stable equilibrium point, these eigenfunctions are well defined in the domain of attraction of the equilibrium point. The zero-level curves of the eigenfunction are used to identify the stable and unstable manifolds of the dynamical system. More recently, the connection between the principal eigenfunctions of the Koopman operator and the solution of the Hamilton Jacobi equation has been established [9]. This connection provides a systematic approach for formulating and solving various control problems, including optimal control, robust control, and input-output gain analysis of a nonlinear system [16]. For all these reasons, it becomes imperative to develop systematic and robust computational methods for determining the principal spectrum of the Koopman operator. In [17], Taylor and Bernstein's polynomials were used to approximate the eigenfunctions. To reduce the computation cost for high dimensional systems, [18] proposed to decompose the system as a set of interconnected systems and exploit its sparsity structure. A convex formulation to approximate the principal eigenfunctions is provided in [19]. However, these methods cannot be easily extended to a general high-dimensional system.
The main contribution of this paper is to provide a novel approach for the computation of the principal eigenfunctions of the Koopman operator. The approach relies on decomposing principal eigenfunctions into linear and purely nonlinear parts. The linear part of the eigenfunction is obtained as the left eigenvector of the linearization of system dynamics at the equilibrium point. The nonlinear part is shown to satisfy a linear partial differential equation (PDE). The solution of this linear PDE is obtained using a path-integral formulation. In particular, the value of the eigenfunction at any given point \(\mathbf{x}_{0}\), is obtained by integrating a known function along the system trajectory forward in time with \(\mathbf{x}_{0}\) as the initial state. We provide conditions based on the stability properties of the system for the path-integral formula to work. The path-integral approach does not involve a choice of basis function, making it attractive for complex systems. Furthermore, we present a DNN framework to approximate the solution of the PDE for high-dimensional systems. Finally, we demonstrate the application of the developed framework for the computation of stable/unstable manifolds and the construction of Lyapunov functions.
## II Preliminaries and Notations
Consider the continuous-time dynamical system
\[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x}),\ \ \ \mathbf{x}\in\mathbf{X}\subset \mathbb{R}^{n}. \tag{1}\]
The following assumption is made on the vector field in the rest of the paper.
**Assumption 1**.: We assume that the vector field \(\mathbf{f}(\mathbf{x})\) is at least \(\mathcal{C}^{2}(\mathbf{X})\) (twice continuously differentiable) and \(\mathbf{x}=0\) is a hyperbolic equilibrium point of the system, i.e., \(\mathbf{A}:=\frac{\partial\mathbf{f}}{\partial\mathbf{x}}(0)\) has no eigenvalues on the imaginary axis.
**Definition 1** (Koopman Operator).: _Let \(\mathbf{s}_{t}(\mathbf{x})\) be the solution of the dynamical system (1) at time \(t\) starting from the initial condition \(\mathbf{x}\). The Koopman operator \(\mathbb{U}_{t}:\mathcal{L}_{\infty}(\mathbf{X})\rightarrow\mathcal{L}_{\infty}( \mathbf{X})\) associated with the dynamical system (1) is defined as_
\[[\mathbb{U}_{t}\psi](\mathbf{x})=\psi(\mathbf{s}_{t}(\mathbf{x})), \tag{2}\]
_where \(\psi\) (commonly referred to as an observable function) is defined on \(\mathcal{L}_{\infty}(\mathbf{X})\), which is the space of essentially bounded functions on \(\mathbf{X}\). The infinitesimal generator \(\mathcal{K}_{\mathbf{f}}\) for the Koopman operator is given by_
\[\lim_{t\to 0}\frac{(\mathbb{U}_{t}-I)\psi}{t}=\frac{\partial\psi}{ \partial\mathbf{x}}\mathbf{f}(\mathbf{x})=:\mathcal{K}_{\mathbf{f}}\psi,\ \ t\geq 0. \tag{3}\]
**Definition 2** (Eigenvalues and Eigenfunctions).: _A function \(\phi(\mathbf{x})\in\mathcal{C}^{1}(\mathbf{X})\) is said to be an eigenfunction of the Koopman operator associated with eigenvalue \(\lambda\) if_
\[[\mathbb{U}_{t}\phi](\mathbf{x})=e^{\lambda t}\phi(\mathbf{x}),\ \ t\geq 0. \tag{4}\]
_Using the Koopman generator, equation (4) can be written as_
\[\mathcal{K}_{\mathbf{f}}\phi=\frac{\partial\phi}{\partial\mathbf{ x}}\mathbf{f}(\mathbf{x})=\lambda\phi(\mathbf{x}). \tag{5}\]
Notice that equations (4) and (5) provide a "global" definition of Koopman spectrum in the sense that it holds for all \(t\in[0,\infty)\) and all \(x\in\mathbf{X}\). However, the spectrum can be defined over finite time or over a subset of the state space and is of interest to us in this paper. Furthermore, in this paper, we are also interested in computing the spectrum associated with the eigenvalues of the linearization of the nonlinear system at an equilibrium point.
**Definition 3** (Open Eigenfunction [1]).: _Let \(\phi:\boldsymbol{\mathcal{C}}\rightarrow\mathbb{C}\), where \(\boldsymbol{\mathcal{C}}\subset\mathbf{X}\) is not an invariant set. Let \(\mathbf{x}\in\boldsymbol{\mathcal{C}}\), and \(\tau\in(\tau^{-}(\mathbf{x}),\tau^{+}(\mathbf{x}))=I_{\mathbf{x}}\), a connected open interval such that \(\mathbf{s}_{\tau}(\mathbf{x})\in\boldsymbol{\mathcal{C}}\) for all \(\tau\in I_{\mathbf{x}}\). If_
\[[\mathbb{U}_{\tau}\phi](\mathbf{x})=\phi(\mathbf{s}_{\tau}( \mathbf{x}))=e^{\lambda\tau}\phi(\mathbf{x})\quad\forall\tau\in I_{\mathbf{x}},\]
_then \(\phi\) is called an open eigenfunction of the Koopman operator family \(\mathbf{U}_{t}\), for \(t\in\mathbb{R}\) with eigenvalue \(\lambda\)._
If \(\boldsymbol{\mathcal{C}}\) is a proper invariant subset of \(\mathbf{X}\) in which case \(I_{\mathbf{x}}=\mathbb{R}\) for every \(\mathbf{x}\in\boldsymbol{\mathcal{C}}\), then \(\phi\) is called a subdomain eigenfunction. If \(\boldsymbol{\mathcal{C}}=\mathbf{X}\), then \(\phi\) will be an ordinary eigenfunction associated with eigenvalue \(\lambda\) as defined in (4). When \(\boldsymbol{\mathcal{C}}\) is open, the open eigenfunctions as defined above can be extended from \(\boldsymbol{\mathcal{C}}\) to a larger set which is the backward-reachable from the closure of \(\boldsymbol{\mathcal{C}}\), based on the construction procedure outlined in [1, Definition 5.2, Lemma 5.1]. Following Assumption 1, let \(\mathcal{D}\) be the domain of attraction of the equilibrium point at the origin. Our interest is in computing the Koopman eigenfunctions which are defined over this domain \(\mathcal{D}\). Furthermore, these eigenfunctions are associated with the eigenvalues of the dynamic matrix \(\mathbf{A}\) of the linearized system around the equilibrium \(\mathbf{x}=0\). These principal eigenfunctions are connected to the diffeomorphism as established in the famous Hartman Grobman theorem, which transforms the nonlinear system into a linear system in a small neighborhood around the equilibrium point [15, 20]. In fact, these eigenfunctions can be essentially viewed as the extension of the Hartman Grobman diffeomorphism from the local neighborhood around the origin to the entire domain of attraction \(\mathcal{D}\)[1, Theorem 5.6].
## III Main Results
Following Assumption 1, we can write the system dynamics (1) as
\[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})=\mathbf{A}\mathbf{x}+ \mathbf{f}_{n}(\mathbf{x}), \tag{6}\]
where \(\mathbf{A}\mathbf{x}:=\frac{\partial\mathbf{f}}{\partial\mathbf{x}}(0) \mathbf{x}\) is the linear part and \(\mathbf{f}_{n}(\mathbf{x}):=\mathbf{f}(\mathbf{x})-\mathbf{A}\mathbf{x}\) is the purely nonlinear part of the vector field \(\mathbf{f}(\mathbf{x})\). Let \(\lambda\) be an eigenvalue of the linearization, i.e., \(\mathbf{A}\), and let \(\varphi_{\lambda}(\mathbf{x})\) be the eigenfunction associated with the eigenvalue \(\lambda\) (such eigenfunctions are called principal eigenfunctions). Similar to the system decomposition into linear and nonlinear parts, the principal eigenfunction, \(\varphi_{\lambda}(\mathbf{x})\), also admits a decomposition into linear and nonlinear terms as follows:
\[\varphi_{\lambda}(\mathbf{x})=\mathbf{w}_{\lambda}^{\top}\mathbf{x}+h_{ \lambda}(\mathbf{x}), \tag{7}\]
where \(\mathbf{w}_{\lambda}^{\top}\mathbf{x}\) is the linear part and \(h_{\lambda}(\mathbf{x})\) is the purely nonlinear term, which hence satisfies \(\frac{\partial h_{\lambda}}{\partial\mathbf{x}}(0)=0\). Substituting (7) in equation (5) and comparing terms, we obtain
\[\mathbf{w}_{\lambda}^{\top}\mathbf{A}=\lambda\mathbf{w}_{\lambda}^{ \top}, \tag{8}\]
i.e., \(\mathbf{w}_{\lambda}\) is the left eigenvector of \(\mathbf{A}\) with eigenvalue \(\lambda\). Similarly, the nonlinear part, \(h_{\lambda}(\mathbf{x})\), of the eigenfunction satisfies the following linear partial differential equation (PDE)
\[\frac{\partial h_{\lambda}}{\partial\mathbf{x}}\mathbf{f}(\mathbf{x})-\lambda h _{\lambda}(\mathbf{x})+\mathbf{w}_{\lambda}^{\top}\mathbf{f}_{n}(\mathbf{x})=0. \tag{9}\]
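Explicitly, substituting (7) into (5) and using (6),

\[\Big(\mathbf{w}_{\lambda}^{\top}+\frac{\partial h_{\lambda}}{\partial\mathbf{x}}\Big)\big(\mathbf{A}\mathbf{x}+\mathbf{f}_{n}(\mathbf{x})\big)=\lambda\mathbf{w}_{\lambda}^{\top}\mathbf{x}+\lambda h_{\lambda}(\mathbf{x}),\]

and since \(h_{\lambda}\) and its gradient vanish at the origin, matching the terms that are linear in \(\mathbf{x}\) gives (8), while collecting the remaining terms gives the linear PDE (9).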
The main results of this section on the computation of principal eigenfunctions of the Koopman operator present an approach for solving equation (9). We present two different approaches for the computation of the nonlinear part of the principal eigenfunctions. Our first approach relies on the path-integral formula for the computation of principal eigenfunctions. Our second approach relies on the use of a Deep Neural Network for solving the linear PDE (9).
### _Path-Integral Approach for Computation_
Our first results on the path-integral approach for eigenfunction computation provide a solution formula for the linear PDE (9) using the method of characteristics.
**Theorem 1**.: _The solution formula for the first order linear PDE (9) can be written as_
\[h_{\lambda}(\mathbf{x})=e^{-\lambda t}h_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))+\int_{0}^{t}e^{-\lambda\tau}\mathbf{w}_{\lambda}^{\top}\mathbf{f}_{n}(\mathbf{s}_{\tau}(\mathbf{x}))d\tau, \tag{10}\]
_where \(\mathbf{s}_{t}(\mathbf{x})\) is the solution of the system (6)._
Proof.: The PDE (9) can be written as
\[\frac{dh_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))}{dt}-\lambda h_{ \lambda}(\mathbf{s}_{t}(\mathbf{x}))+\mathbf{w}_{\lambda}^{\top}\mathbf{f}_{ n}(\mathbf{s}_{t}(\mathbf{x}))=0. \tag{11}\]
Multiplying throughout by \(e^{-\lambda t}\), we obtain
\[\frac{d(e^{-\lambda t}h_{\lambda}(\mathbf{s}_{t}(\mathbf{x})))}{dt}+e^{-\lambda t} \mathbf{w}_{\lambda}^{\top}\mathbf{f}_{n}(\mathbf{s}_{t}(\mathbf{x}))=0.\]
Next, we integrate the above from \(0\) to \(t\), thus obtaining
\[e^{-\lambda t}h_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))-h_{\lambda}(\mathbf{x})+\int_{0}^{t}e^{-\lambda\tau}\mathbf{w}_{\lambda}^{\top}\mathbf{f}_{n}(\mathbf{s}_{\tau}(\mathbf{x}))d\tau=0,\] \[\implies h_{\lambda}(\mathbf{x})=e^{-\lambda t}h_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))+\int_{0}^{t}e^{-\lambda\tau}\mathbf{w}_{\lambda}^{\top}\mathbf{f}_{n}(\mathbf{s}_{\tau}(\mathbf{x}))d\tau.\]
This completes our proof.
Our first main result establishes conditions under which the solution of the PDE (9) is nonlinear.
**Theorem 2**.: _For the dynamical system (6) that satisfies Assumption 1, let the origin be an asymptotically stable equilibrium point with the domain of attraction \(\mathcal{D}\) and let \(\mathbf{A}\) be Hurwitz. Furthermore, all the eigenvalues of \(\mathbf{A}\) satisfy_
\[-\mathrm{Re}(\lambda)+2\mathrm{Re}(\lambda_{max})<0, \tag{12}\]
_where \(\lambda_{max}\) is the eigenvalue closest to the \(j\omega\) axis and in the left half plane. Let \(h_{\lambda}\) be the solution of PDE (9) as given in (10). Then,_
\[\lim_{t\to\infty}e^{-\lambda t}h_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))=0, \quad\forall\mathbf{x}\in\mathcal{D} \tag{13}\]
_if \(h_{\lambda}(\mathbf{x})\) is a purely nonlinear function of \(\mathbf{x}\), i.e., \(\frac{\partial h_{\lambda}}{\partial\mathbf{x}}(0)=0\)._
Proof.: We show that if \(h_{\lambda}\) is nonlinear then (13) is true. Since \(h_{\lambda}\) is purely nonlinear, \(\nabla_{x}h_{\lambda}(0)=0\) and by construction \(h_{\lambda}(0)=0\). Next, we show that for every \(\varepsilon>0\), there exists \(c_{\varepsilon}>0\) such that
\[\|h_{\lambda}(\mathbf{x})\|\leq c_{\varepsilon}\|\mathbf{x}\|^{2}\]
for all \(\|\mathbf{x}\|\leq\varepsilon\). By applying the mean value theorem inside \(\|\mathbf{x}\|\leq\varepsilon\), we have
\[h_{\lambda}(\mathbf{x}) =h_{\lambda}(0)+\nabla_{\mathbf{x}}h_{\lambda}(0)\mathbf{x}+ \mathbf{x}^{T}\nabla_{\mathbf{x}}^{2}h_{\lambda}(\mathbf{z})\mathbf{x}\] \[\implies\|h_{\lambda}(\mathbf{x})\|\leq\|\nabla_{\mathbf{x}}^{2} h_{\lambda}(\mathbf{z})\|\cdot\|\mathbf{x}\|^{2}\]
for some point \(\mathbf{z}\) on the line segment joining \(0\) and \(\mathbf{x}\). Since \(h_{\lambda}\) is smooth over the compact domain \(\|\mathbf{x}\|\leq\varepsilon\), we can define a constant \(c_{\varepsilon}:=\sup_{\|\mathbf{x}\|\leq\varepsilon}\|\nabla_{\mathbf{x}}^{2} h_{\lambda}(\mathbf{x})\|\), and obtain the uniform bound \(\|h_{\lambda}(\mathbf{x})\|\leq c_{\varepsilon}\|\mathbf{x}\|^{2}\) in the region \(\|\mathbf{x}\|\leq\varepsilon\), where \(c_{\varepsilon}:=\left(\sum_{i}c_{\varepsilon,i}^{2}\right)^{\frac{1}{2}}\). Now for \(\|\mathbf{x}\|\leq\varepsilon\), there exists, by Hartman Grobman theorem, a near identity change of coordinates with inverse in the small neighborhood around the origin, say of size \(\|\mathbf{x}\|\leq\epsilon\), of the form
\[\mathbf{z}=\mathbf{x}+\mathbf{d}(\mathbf{x})=\mathbf{D}(\mathbf{x})\iff \mathbf{x}=\mathbf{D}^{-1}(\mathbf{z})=\mathbf{z}+\bar{\mathbf{d}}(\mathbf{z}), \tag{14}\]
with \(\mathbf{d}(\mathbf{x})\) and \(\bar{\mathbf{d}}(\mathbf{z})\) purely nonlinear such that the nonlinear system is transformed into linear system i.e., \(\dot{\mathbf{x}}=\mathbf{A}\mathbf{x}+\mathbf{f}_{n}(\mathbf{x})\implies\dot{ \mathbf{z}}=\mathbf{A}\mathbf{z}\) and hence
\[\mathbf{s}_{t}(\mathbf{x}) =\mathbf{D}^{-1}(e^{\mathbf{A}t}\mathbf{D}(\mathbf{x}))\implies \mathbf{s}_{t}(\mathbf{x})=\mathbf{D}^{-1}(e^{\mathbf{A}t}(\mathbf{x}+ \mathbf{d}(\mathbf{x})))\] \[=e^{\mathbf{A}t}\mathbf{x}+e^{\mathbf{A}t}\mathbf{d}(\mathbf{x}) +\bar{\mathbf{d}}(e^{\mathbf{A}t}\mathbf{x}+e^{\mathbf{A}t}\mathbf{d}(\mathbf{x })).\]
In the above, we have used (14) for \(\mathbf{D}^{-1}\). Since \(\bar{\mathbf{d}}(\mathbf{z})\) is purely nonlinear, for \(\|\mathbf{x}\|\leq\epsilon\), we can get using mean value theorem
\[\|\bar{\mathbf{d}}(\mathbf{z})\|\leq c_{d}\|\mathbf{z}\|^{2}, \quad\|\mathbf{d}(\mathbf{x})\|\leq c_{d}\|\mathbf{x}\|^{2}.\]
Using the above inequality, Cauchy Schwartz inequality, and the fact that \(\|\mathbf{x}\|\leq\epsilon\), we obtain
\[\|\mathbf{s}_{t}(\mathbf{x})\|\leq c_{1}e^{\mathrm{Re}(\lambda_{max}t)} \implies\|\mathbf{s}_{t}(\mathbf{x})\|^{2}\leq c_{1}^{2}e^{\mathrm{Re}(2 \lambda_{max}t)}\]
for some constant \(c_{1}\) that depends on \(\epsilon\), \(c_{d}\), and \(\bar{c}_{d}\). Now
\[\|h_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))\|\leq c_{\varepsilon}\|\mathbf{s}_{t }(\mathbf{x})\|^{2}\leq c_{2}e^{\mathrm{Re}(2\lambda_{max}t)},\]
where \(c_{2}=c_{\varepsilon}c_{1}^{2}\). Then, the limit in equation (13) follows by noting that
\[\|e^{-\lambda t}h_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))\|\leq c_{2}e^{(- \mathrm{Re}(\lambda)+2\mathrm{Re}(\lambda_{max}))t}.\]
Using the results of the above theorem we have the following results for the computation of Koopman eigenfunctions under the stability assumption on the system dynamics.
**Theorem 3**.: _Consider the dynamical system (6) with origin asymptotically stable and with the domain of attraction \(\mathcal{D}\). Let the eigenvalue \(\lambda\) of matrix \(\mathbf{A}\) satisfy condition (12). Then the principal eigenfunction, \(\phi_{\lambda}\), corresponding to eigenvalue \(\lambda\), is well defined in the domain \(\mathcal{D}\) and is given by following path-integral formula:_
\[\phi_{\lambda}(\mathbf{x})=\mathbf{w}_{\lambda}^{\top}\mathbf{x}+\int_{0}^{ \infty}e^{-\lambda t}\mathbf{w}_{\lambda}^{\top}\mathbf{f}_{n}(\mathbf{s}_{t }(\mathbf{x}))dt \tag{15}\]
_where \(\mathbf{w}_{\lambda}\) satisfies \(\mathbf{w}_{\lambda}^{\top}\mathbf{A}=\lambda\mathbf{w}_{\lambda}^{\top}\)._
Proof.: The eigenfunction corresponding to eigenvalue \(\lambda\) admits a decomposition into linear and nonlinear parts as given in Eqs. (7) and (8). Since \(h_{\lambda}\) is assumed to be nonlinear, the results of Theorem 2 apply and hence \(\lim_{t\to\infty}e^{-\lambda t}h_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))=0\) for all \(\mathbf{x}\in\mathcal{D}\). The result then follows by letting \(t\to\infty\) in the solution formula of Theorem 1 for the linear PDE.
**Remark 1**.: The eigenfunctions \(\phi_{\lambda_{i}}\) for \(i=1,\ldots,n\) can be used as diffeomorphism for the linearization of nonlinear system valid within the domain of attraction \(\mathcal{D}\). In [15, 1], the authors propose an approach for the construction of such diffeomorphism valid within the domain of attraction based on the extension of the Hartman Grobman diffeomorphism, which is known to exist in a small neighborhood of the origin.
The results of Theorem 3 can be extended to compute the Koopman spectrum for the system with linearization having all its eigenvalues in the right half plane by time reversing the vector field. We have the following Corollary in this direction.
**Corollary 1**.: _Consider the dynamical system (6) satisfying Assumption 1. Let the matrix \(\mathbf{A}\) for the linearization of
system dynamics have all its eigenvalues in the strict right half plane with eigenvalue, \(\lambda\), satisfying the condition_
\[\mathrm{Re}(\lambda)-2\mathrm{Re}(\lambda_{max})<0. \tag{16}\]
_The principal eigenfunction, \(\phi_{\lambda}\), with eigenvalue \(\lambda\), are well defined in the domain \(\bar{\mathcal{D}}:=\{\mathbf{x}\in\mathbf{X}:\lim_{t\to\infty}\mathbf{s}_{-t}( \mathbf{x})=0\}\) and is given by the following formula_
\[\phi_{\lambda}(\mathbf{x})=\mathbf{w}_{\lambda}^{\top}\mathbf{x}-\int_{0}^{\infty}e^{\lambda t}\mathbf{w}_{\lambda}^{\top}\mathbf{f}_{n}(\mathbf{s}_{-t}(\mathbf{x}))dt, \tag{17}\]
_where \(\mathbf{w}_{\lambda}\) satisfies \(\mathbf{w}_{\lambda}^{\top}\mathbf{A}=\lambda\mathbf{w}_{\lambda}^{\top}\)._
Theorem 3 and Corollary 1 provide an approach for computing the Koopman principal eigenfunctions for the cases when the equilibrium point is stable and anti-stable. It is important to emphasize that the results of Theorem 3 and Corollary 1 rely on the sufficient condition that can be verified for the computation of principal eigenfunction. The following theorem for principal eigenfunction computation applies to a system with a saddle-type equilibrium point.
**Theorem 4**.: _Consider the dynamical system (6) satisfying Assumption 1 with \(\lambda\) as an eigenvalue of \(\mathbf{A}\) such that \(\mathrm{Re}(\lambda)>0\). Assume that \(h_{\lambda}(\mathbf{x})\), the nonlinear part of the principal eigenfunction corresponding to eigenvalue \(\lambda\) satisfy_
\[\lim_{t\to\infty}|h_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))|\leq M \tag{18}\]
_for some constant \(M\) and for all \(\mathbf{x}\) in some set \(\mathbf{X}_{1}\subseteq\mathbf{X}\). Then the eigenfunction corresponding to eigenvalue \(\lambda\) can be computed using the following path-integral formula for all \(\mathbf{x}\in\mathbf{X}_{1}\) :_
\[\phi_{\lambda}(\mathbf{x})=\mathbf{w}_{\lambda}^{\top}\mathbf{x}+\int_{0}^{\infty}e^{-\lambda t}\mathbf{w}_{\lambda}^{\top}\mathbf{f}_{n}(\mathbf{s}_{t}(\mathbf{x}))dt. \tag{19}\]
Proof.: The condition (18) combined with the fact that \(\mathrm{Re}(\lambda)>0\) ensures that \(\lim_{t\to\infty}e^{-\lambda t}h_{\lambda}(\mathbf{s}_{t}(\mathbf{x}))=0\). The expression (19) then follows from (7) and the PDE solution (10) by letting \(t\to\infty\).
Note that the main issue with applying the results from the above theorem is that the condition (18) cannot be easily verified. For a system with a saddle-type equilibrium point, computation of the eigenfunctions corresponding to eigenvalues with negative real part can be done similarly by applying the results of Theorem 4 to the time-reversed vector field. We would like to emphasize that in applications such as optimal control, it is of interest to compute only the eigenfunctions corresponding to the unstable eigenvalues [9].
### _Deep Neural Network for Principal Eigenfunction_
Deep learning techniques have been successfully applied in the literature for the computation of the Koopman operator and its associated eigenfunctions [21, 22]. In all of these prior works, the main approach is to parameterize the eigenfunctions (or nonlinear 'lifting' functions in other cases) using autoencoders and then utilize sampled trajectory data to compute the loss function for training.
Let \(\mathcal{P}=\left\{(\mathbf{x}_{i},\mathbf{y}_{i})\right\}_{i\in\mathcal{I}}\) be a set of points along system trajectories sampled at a uniform time interval \(\tau\), that is,
\[\mathbf{y}_{i}=\mathbf{s}_{\tau}(\mathbf{x}_{i}),\ i\in\mathcal{I}.\]
Then, the DNN parameterized vector of eigenfunctions or lifting functions \(\psi_{\theta}\) is typically learned by minimizing the loss
\[\min_{K,\theta,\omega}\left[\mathop{\mathbb{E}}_{(\mathbf{x},\mathbf{y})\sim\mathcal{P}}\big[\|\psi_{\theta}(\mathbf{y})-K\psi_{\theta}(\mathbf{x})\|\big]+\mathop{\mathbb{E}}_{\mathbf{x}\sim\mathbf{X}}\big[\|\mathbf{x}-\eta_{\omega}\big(\psi_{\theta}(\mathbf{x})\big)\|\big]\right], \tag{20}\]
where \(\mathbb{E}[\cdot]\) denotes the expected value with respect to the data distribution specified. The function \(\eta_{\omega}\) is a decoder network parameterized by \(\omega\), which maps points from the lifted Koopman space back to the original state-space and \(K\) is the finite-dimensional approximation of the Koopman operator. The second term in the equation above is the auto-encoder loss and is needed to ensure that the DNN does not learn a trivial solution \(\psi_{\theta}\equiv 0\). In place of the first term, it is also common to use Koopman PDE (2) in the loss function, wherein one penalizes the violation in the PDE satisfaction. In the case where the DNN parameterizes the lifting function, one needs to indirectly extract the eigenfunctions using the learned \(K\) matrix and \(\psi_{\theta}\).
Our approach using path-integral can be used to learn the principal Koopman eigenfunctions in a more direct fashion, using the equation (7) to create a labeled training dataset \(\mathcal{D}^{\prime}=\left\{(\mathbf{x}_{i},\phi_{\lambda}(\mathbf{x}_{i})) \right\}_{i\in\mathcal{I}}\), thus leading to the following supervised learning problem:
\[\min_{\theta}\mathop{\mathbb{E}}_{(\mathbf{x},\mathbf{z})\sim\mathcal{D}^{\prime}}\left[\|\mathbf{z}-\mathbf{w}^{\top}\mathbf{x}-\hat{h}_{\theta}(\mathbf{x})\|\right], \tag{21}\]
where \(\theta\) parameterizes the nonlinear part of the principal eigenfunction using the DNN \(\hat{h}_{\theta}\). Additionally, one can introduce the following secondary term in the loss function for regularization:
\[\mathop{\mathbb{E}}_{\mathbf{x}\in X}\left[\left\|\frac{\partial\hat{h}_{ \theta}}{\partial\mathbf{x}}\mathbf{f}(\mathbf{x})-\lambda\hat{h}_{\theta}( \mathbf{x})+\mathbf{w}^{\top}\mathbf{f}_{n}(\mathbf{x})\right\|\right]. \tag{22}\]
This ensures that the network does not overfit to the dataset \(\mathcal{D}^{\prime}\). Note that this secondary term (22) is much cheaper to evaluate compared to the loss term in (21) due to offline computations involved in the generation of labeled dataset \(\mathcal{D}^{\prime}\). Moreover, since PDE (9) does not admit a trivial solution (unlike PDE (2)), we do not need an additional auto-encoder loss term like in equation (20).
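For concreteness, the following is a minimal PyTorch-style sketch (ours, not the implementation used for the results below) of the supervised loss (21) with the residual regularizer (22), written for a single real eigenvalue \(\lambda\). The vector field `f`, its nonlinear part `f_n`, the left eigenvector `w`, the labeled data, and the weight `reg` are all assumed to be supplied by the user; the network width, depth, and sinusoidal activation simply mirror the MLP described in the simulation section.

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    """Sinusoidal activation, matching the MLP used in the simulations."""
    def forward(self, x):
        return torch.sin(x)

d = 4  # state dimension (placeholder)
h_net = nn.Sequential(nn.Linear(d, 128), Sine(),
                      nn.Linear(128, 128), Sine(),
                      nn.Linear(128, 128), Sine(),
                      nn.Linear(128, 1))        # parameterizes h_theta(x)

def loss_fn(X, Z, f, f_n, w, lam, reg=1e-2):
    """X: (N, d) states; Z: (N, 1) path-integral labels phi_lam(x_i);
    f, f_n: callables returning (N, d) tensors; w: (d, 1) left eigenvector."""
    X = X.clone().requires_grad_(True)
    h = h_net(X)                                                # (N, 1)
    sup = (Z - X @ w - h).abs().mean()                          # supervised term (21)
    dh = torch.autograd.grad(h.sum(), X, create_graph=True)[0]  # (N, d), dh/dx
    resid = (dh * f(X)).sum(dim=1, keepdim=True) - lam * h + f_n(X) @ w
    return sup + reg * resid.abs().mean()                       # regularizer (22)

# Training loop sketch:
# opt = torch.optim.Adam(h_net.parameters(), lr=1e-3)
# for _ in range(num_epochs):
#     opt.zero_grad()
#     loss = loss_fn(X, Z, f, f_n, w, lam)
#     loss.backward()
#     opt.step()
```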
## IV Simulation Results
**Analytical Example 1**: Consider the dynamics of a one-dimensional system given by
\[\dot{x}=\alpha(x-x^{3}).\]
The principal eigenfunction for this system can be computed analytically as \(\phi(x)=\frac{x}{\sqrt{1-x^{2}}}\). Note that \(\phi(x)\) is well-defined within the domain \(x\in(-1,1)\). For \(\alpha=-1\), the system has a stable equilibrium point at the origin (with eigenvalue \(\lambda=-1\)). Although \(\phi(x)\) blows up as \(x\to\pm 1\),
since \(\mathbf{s}_{t}(x)\to 0\), condition in Eq. (13) is satisfied. The corresponding eigenfunction can be estimated using Theorem 3 as shown in Fig. 1a. For \(\alpha=1\), the origin is unstable, and hence the results of Theorem 3 do not apply. But the results of Corollary 1 apply, and the estimated eigenfunction using Eq. (17) matches perfectly with the analytical solution.
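For readers who wish to reproduce this example, a minimal numerical sketch (ours, not the paper's code) of the path-integral formula (15) for the case \(\alpha=-1\) is given below. Here \(\mathbf{A}=-1\), \(\lambda=-1\), \(\mathbf{w}_{\lambda}=1\), and \(\mathbf{f}_{n}(x)=x^{3}\), and the result is compared against the analytical eigenfunction \(x/\sqrt{1-x^{2}}\); the step size, horizon, and test points are arbitrary choices.

```python
import numpy as np

alpha, lam = -1.0, -1.0
f = lambda x: alpha * (x - x**3)       # full vector field
f_n = lambda x: f(x) - alpha * x       # purely nonlinear part (= x^3 here)

def eigenfunction(x0, T=20.0, dt=1e-3):
    """phi(x0) ~ x0 + int_0^T exp(-lam*t) * f_n(s_t(x0)) dt, via formula (15)."""
    s, t, integral = x0, 0.0, 0.0
    g_prev = np.exp(-lam * t) * f_n(s)
    while t < T:
        # one classical RK4 step for ds/dt = f(s)
        k1 = f(s); k2 = f(s + 0.5*dt*k1); k3 = f(s + 0.5*dt*k2); k4 = f(s + dt*k3)
        s += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
        t += dt
        g_new = np.exp(-lam * t) * f_n(s)
        integral += 0.5 * dt * (g_prev + g_new)   # trapezoid rule
        g_prev = g_new
    return x0 + integral

for x0 in [-0.8, -0.4, 0.2, 0.6, 0.9]:
    print(x0, eigenfunction(x0), x0 / np.sqrt(1 - x0**2))
```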
**Analytical Example 2:** Consider the dynamics of a two-dimensional system given by
\[\dot{x}_{1}=-2\lambda_{2}x_{2}(x_{1}^{2}-x_{2}-2x_{1}x_{2}^{2}+x_{2}^{4})+\lambda_{1}(x_{1}+4x_{1}^{2}x_{2}-x_{2}^{2}-8x_{1}x_{2}^{3}+4x_{2}^{5})\] \[\dot{x}_{2}=2\lambda_{1}(x_{1}-x_{2}^{2})^{2}-\lambda_{2}(x_{1}^{2}-x_{2}-2x_{1}x_{2}^{2}+x_{2}^{4})\]
where \(\lambda_{1},\lambda_{2}\) are the eigenvalues of the system when linearized about the origin [23]. For this system, the eigenfunctions can be computed analytically as \(\phi_{\lambda_{1}}(x)=x_{1}-x_{2}^{2}\) and \(\phi_{\lambda_{2}}(x)=-x_{1}^{2}+x_{2}+2x_{1}x_{2}^{2}-x_{2}^{4}\). We pick the eigenvalues \(\lambda_{1}=-1\) and \(\lambda_{2}=3\) such that the system has a saddle equilibrium at the origin. The analytical eigenfunction corresponding to \(\lambda_{2}=3\) is shown in Fig. 2a. The eigenfunction corresponding to the unstable eigenvalues can be estimated accurately using Theorem 4 as shown in Figure 2b.
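One can verify these expressions directly: setting \(u=x_{1}-x_{2}^{2}\) and \(v=x_{2}-u^{2}\) (so that \(u=\phi_{\lambda_{1}}\) and \(v=\phi_{\lambda_{2}}\)), the chain rule applied to the dynamics above gives

\[\dot{u}=\dot{x}_{1}-2x_{2}\dot{x}_{2}=\lambda_{1}u,\qquad\dot{v}=\dot{x}_{2}-2u\dot{u}=\lambda_{2}v,\]

so \(u\) and \(v\) satisfy the eigenfunction equation (5) with eigenvalues \(\lambda_{1}\) and \(\lambda_{2}\), respectively.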
**Duffing Oscillator**: The Duffing oscillator dynamics is
\[\dot{x}_{1}=x_{2},\quad\dot{x}_{2}=x_{1}-\delta x_{2}-x_{1}^{3}\]
For eigenfunction computation, we use \(\delta=0.5\). The equilibrium point at the origin is a saddle point. Fig. 3a shows the eigenfunction corresponding to the unstable eigenvalue obtained for the equilibrium point at the origin after \(t=20\) s using Theorem 4. Since the eigenfunctions remain bounded, the condition in equation (18) is satisfied. The stable manifold (shown in yellow in Fig. 3b) is obtained as the zero-level set of this eigenfunction. The magnitude of the (complex) eigenfunction corresponding to the stable eigenvalue obtained after \(t=20\) s for the equilibrium point at \((1,0)\) is shown in Fig. 3c.
The Lyapunov function verifying the stability of the equilibrium dynamics is constructed as \(V(\mathbf{x})=\Phi^{\top}(\mathbf{x})\mathbf{P}\Phi(\mathbf{x})\), where \(\mathbf{P}\) is a positive definite matrix obtained as a solution of the Lyapunov inequality \(\Lambda^{\top}\mathbf{P}+\mathbf{P}\Lambda<0\)[17]. The Lyapunov function for this system is shown in Fig. 3d.
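To see why this construction works (with \(\Phi(\mathbf{x})\) denoting the vector of principal eigenfunctions used in the construction, \(\Lambda\) the corresponding diagonal matrix of eigenvalues, and transposes read as conjugate transposes when the eigenvalues are complex), note that \(\frac{d}{dt}\Phi(\mathbf{s}_{t}(\mathbf{x}))=\Lambda\Phi(\mathbf{s}_{t}(\mathbf{x}))\) along trajectories, so that

\[\frac{d}{dt}V(\mathbf{s}_{t}(\mathbf{x}))=\dot{\Phi}^{\top}\mathbf{P}\Phi+\Phi^{\top}\mathbf{P}\dot{\Phi}=\Phi^{\top}\big(\Lambda^{\top}\mathbf{P}+\mathbf{P}\Lambda\big)\Phi<0\]

whenever \(\Phi(\mathbf{s}_{t}(\mathbf{x}))\neq 0\); this is the standard argument behind the construction in [17].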
**Two Link Robotic Arm:** Consider the following Euler-Lagrange dynamics representing a 2-link manipulator:
\[\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{C}(\mathbf{q},\dot{\mathbf{ q}})\dot{\mathbf{q}}+\mathbf{G}(\mathbf{q})=\mathbf{B}\dot{\mathbf{q}} \tag{23}\]
where \(\mathbf{q}\in\mathbb{R}^{2}\) represents the generalized coordinates of the manipulator. Specifically, we take
\[\mathbf{M}(\mathbf{q})=\left[\begin{array}{cc}2\cos(q_{2})+8.33&\cos(q_{2} )+0.33\\ \cos(q_{2})+0.33&0.33\end{array}\right]\]
\[\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})=\left[\begin{array}{cc}-2\dot{q}_{ 2}\sin(q_{2})&-\dot{q}_{2}\sin(q_{2})\\ \dot{q}_{1}\sin(q_{2})&0\end{array}\right]\]
\[\mathbf{G}(\mathbf{q})=\left[\begin{array}{cc}50\sin(q_{1})+5\sin(q_{1}+q_ {2})\\ 5\sin(q_{1}+q_{2})\end{array}\right]\]
and \(\mathbf{B}=diag[5.5,\quad 0.001]\), where \(diag\) represents a diagonal matrix. We take the \(4\)-dimensional state to be \(\mathbf{x}=[q_{1},q_{2},\dot{q}_{1},\dot{q}_{2}]\), and rewrite the dynamics (23) in standard
Fig. 1: Analytical example 1: (a) eigenfunction corresponding to stable eigenvalue estimated using Theorem 3 (b) eigenfunction corresponding to unstable eigenvalue estimated using Corollary 1.
Fig. 3: Duffing Oscillator: (a) eigenfunction (real) for \(\lambda=0.78\) at the (0,0); (b) zero level set representing the stable manifold; (c) magnitude of the eigenfunction (complex) for \(\lambda=-0.25\pm 1.39\) at \((1,0)\); (d) Lyapunov function obtained from (c)
Fig. 2: Analytical Example 2 with saddle equilibrium point: Eigenfunction corresponding to \(\mathrm{Re}(\lambda)>0\). (a) analytical (b) estimated using Theorem 4.
form as \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\). The linearized system about the stable equilibrium \(\mathbf{x}=0\) has complex eigenvalues \(\lambda_{1,2}=-0.23\pm 2.29j\) and \(\lambda_{3,4}=-0.32\pm 5.32j\), thus leading to complex eigenfunctions. We pick a domain \([-\frac{\pi}{12},\frac{\pi}{12}]^{4}\) over which we compute the path integrals and create a dataset \(\mathcal{D}^{\prime}\) as described in Subsection III-B. This dataset, along with the sum of losses (21) and (22), is then used to train a multi-layer perceptron network (MLP) with a sinusoidal activation function. The MLP has 3 hidden layers, each with 128 neurons. The input layer is of size 4, and the output layer has a size 2, corresponding to the real and imaginary parts of the eigenfunction being learned. Fig. 4 shows the magnitude and phase of the complex eigenfunction along the system trajectory starting at random initial conditions within the domain. It can be seen that the magnitude of the eigenfunction goes to zero along the stable trajectory.
## V Conclusions
We provide a novel approach for the computation of principal eigenfunctions of the Koopman operator based on the path-integral formula. Furthermore, the path-integral formula is used to formulate the DNN-based approach for computing the eigenfunctions. Simulation results show that the path-integral-based approach accurately approximates the principal eigenfunctions of systems with complex dynamics. We demonstrate the applications of eigenfunctions for the computation of stable/unstable manifolds and the Lyapunov function. Simulation results involving analytical examples, the Duffing oscillator, and a two-link robotic arm are presented to show the efficacy of the developed framework. Future research will focus on a data-driven approach for the computation of principal eigenfunctions and its extension to discrete-time dynamical systems.
|
2304.01384 | Large Deviations for Empirical Measures of Self-Interacting Markov
Chains | Let $\Delta^o$ be a finite set and, for each probability measure $m$ on
$\Delta^o$, let $G(m)$ be a transition probability kernel on $\Delta^o$. Fix
$x_0 \in \Delta^o$ and consider the chain $\{X_n, \; n \in \mathbb{N}_0\}$ of
$\Delta^o$-valued random variables such that $X_0=x$, and given $X_0, \ldots ,
X_n$, the conditional distribution of $X_{n+1}$ is $G(L^{n+1})(X_n, \cdot)$,
where $L^{n+1} = \frac{1}{n+1} \sum_{i=0}^{n} \delta_{X_i}$ is the empirical
measure at instant $n$. Under conditions on $G$ we establish a large deviation
principle for the empirical measure sequence $\{L^n, \; n \in \mathbb{N}\}$. As
one application of this result we obtain large deviation asymptotics for the
Aldous-Flannery-Palacios (1988) approximation scheme for quasistationary
distributions of irreducible finite state Markov chains. The conditions on $G$
cover various other models of reinforced stochastic evolutions as well,
including certain vertex reinforced and edge reinforced random walks and a
variant of the PageRank algorithm. The particular case where $G(m)$ does not
depend on $m$ corresponds to the classical results of Donsker and Varadhan
(1975) on large deviations of empirical measures of Markov processes. However,
unlike this classical setting, for the general self-interacting models
considered here, the rate function takes a very different form; it is typically
non-convex and is given through a dynamical variational formula with an
infinite horizon discounted objective function. | Amarjit Budhiraja, Adam Waterbury, Pavlos Zoubouloglou | 2023-04-03T21:17:35Z | http://arxiv.org/abs/2304.01384v2 | # Large Deviations for Empirical Measures of Self-Interacting Markov Chains
###### Abstract
Let \(\Delta^{o}\) be a finite set and, for each probability measure \(m\) on \(\Delta^{o}\), let \(G(m)\) be a transition probability kernel on \(\Delta^{o}\). Fix \(x_{0}\in\Delta^{o}\) and consider the chain \(\{X_{n},\;n\in\mathbb{N}_{0}\}\) of \(\Delta^{o}\)-valued random variables such that \(X_{0}=x\), and given \(X_{0},\ldots,X_{n}\), the conditional distribution of \(X_{n+1}\) is \(G(L^{n+1}(X_{n},\cdot)\), where \(L^{n+1}=\frac{1}{n+1}\sum_{i=0}^{n}\delta_{X_{i}}\) is the empirical measure at instant \(n\). Under conditions on \(G\) we establish a large deviation principle for the empirical measure sequence \(\{L^{n},\;n\in\mathbb{N}\}\). As one application of this result we obtain large deviation asymptotics for the Aldous-Flannery-Palacios (1988) approximation scheme for quasi-stationary distributions of irreducible finite state Markov chains. The conditions on \(G\) cover various other models of reinforced stochastic evolutions as well, including certain vertex reinforced and edge reinforced random walks and a variant of the PageRank algorithm. The particular case where \(G(m)\) does not depend on \(m\) corresponds to the classical results of Donsker and Varadhan (1975) on large deviations of empirical measures of Markov processes. However, unlike this classical setting, for the general self-interacting models considered here, the rate function takes a very different form; it is typically non-convex and is given through a dynamical variational formula with an infinite horizon discounted objective function.
**Keywords: reinforced random walks, quasi-stationary distributions, empirical measure, large deviations, stochastic approximations, self-interacting Markov chains, multiscale systems.**
## 1 Introduction
In this work we are interested in the large deviations behavior of certain types of self-interacting Markov chains. The terminology 'Markov chain' is in fact a misnomer as these processes are very far from being Markovian, and the conditional law of the state at the next time instant, given the past, depends on the whole history of the process through its empirical distribution. The general setting is as follows. Consider a finite set \(\Delta^{o}\doteq\{1,\ldots,d\}\) and let \(G\) be a map from \(\mathcal{P}(\Delta^{o})\) (the space of probability measures on \(\Delta^{o}\)) to the space \(\mathcal{K}(\Delta^{o})\) of transition probability kernels on \(\Delta^{o}\). Fix \(x_{0}\in\Delta^{o}\) and let \(\{X_{n},\;n\in\mathbb{N}_{0}\}\) be a sequence of \(\Delta^{o}\)-valued random variables defined recursively as follows: \(X_{0}=x_{0}\), and given \(X_{0},\ldots,X_{n}\), the conditional law of \(X_{n+1}\) is \(G(L^{n+1})(X_{n},\cdot)\), where \(L^{n+1}=\frac{1}{n+1}\sum_{i=0}^{n}\delta_{X_{i}}\) is the empirical measure at time instant \(n\). Many types of reinforced stochastic dynamical systems fall within this framework and such processes arise in several different contexts, e.g., Monte-Carlo methods for quasi-stationary distributions [1, 5, 9], population growth models in mathematical ecology [33, 34], self organization in dynamical models of social networks, models for random monopolies in economics, models for neuron growth, bandit problems in sequential analysis, generalized Polya's urn models, and many others; see the excellent survey by Pemantle [31] for discussion of these diverse applications. Using techniques from stochastic approximation theory and branching processes, under suitable conditions on \(G\), law of large numbers and central limit results for the empirical measure sequence \(\{L^{n},\;n\in\mathbb{N}\}\) have been studied in various works [1, 5, 7, 31]. The goal of the current work is to establish a large deviation
principle (LDP) for the sequence \(\{L^{n},\ n\in\mathbb{N}\}\) under a broad set of conditions on the map \(G\). Our main result, Theorem 2.4 provides a large deviation upper bound only requiring that the map \(G\) is Lipschitz (Assumption 2.1). Furthermore, this theorem shows that under a stronger condition (Assumption 2.2) the matching large deviation lower bound holds as well, thus establishing a LDP for \(\{L^{n},\ n\in\mathbb{N}\}\). Assumption 2.2 imposes four main conditions on the model: The first condition says that \(G\) is an affine map; the second condition imposes a natural communicability structure on the transition probability matrix \(G(m)\) for \(m\in\mathcal{P}(\Delta^{o})\); the third condition requires that the fixed point equation \(\pi^{*}G(\pi^{*})=\pi^{*}\) admits a strictly positive solution in \(\mathcal{P}(\Delta^{o})\); and, finally, the fourth condition says that the empirical measure \(L^{n}\) eventually charges all points in \(\Delta^{o}\), a.s. As discussed in Example 2.6, Remark 2.3, and Section 8, these conditions are satisfied for many interesting settings.
One of the main motivating applications for this work is the reinforced Markov chain Monte-Carlo scheme for approximating quasi-stationary distributions (QSD) of finite state Markov chains that was introduced in the work of Aldous _et al.[1]_. For an overview of QSD, see [18] and see Example 2.6 for a precise definition of a QSD. Let \(P\) be the transition kernel of a \(\Delta=\Delta^{o}\cup\{0\}\)-valued Markov chain that is absorbed at \(0\), and consider the substochastic kernel \(P^{o}\) obtained by restricting \(P\) to \(\Delta^{o}\). Suppose that \(P^{o}\) is irreducible. Then, there is a unique QSD of \(P\) which is characterized as the normalized Perron-Frobenius eigenvector of \(P^{o}\). The QSD captures the long-term pre-absorption behavior of the Markov chain with transition kernel \(P\), consequently, QSD are widely used to understand metastability behavior of stochastic systems in ecology and biology [13, 25, 26], chemical kinetics [30, 32], epidemiology [2, 3, 4], and other fields. In particular numerical approximation of QSD is of significant interest. Various numerical methods have been proposed to approximate QSD, and one important family of methods are described in terms of self-interacting chains [1, 5, 9, 17]. The precise description of this approximation scheme is recalled in Example 2.6; here we merely note that the scheme corresponds to simulating a self-interacting Markov chain for which the function \(G\) is given as \(G(m)(x,y)\doteq P(x,y)+P(x,0)m(y)\), for \(x,y\in\Delta^{o}\), and \(m\in\mathcal{P}(\Delta^{o})\). The law of large numbers (LLN) for the empirical measure sequence \(\{L^{n},\ n\in\mathbb{N}\}\) associated with this Monte-Carlo method giving a.s. convergence to the QSD has been established in [1, 5]. Under exactly the conditions for the LLN, the current work establishes a LDP for this sequence. Beyond this example, as discussed in Section 8, the Assumptions of Theorem 2.4 cover many other types of self-interacting Markov chains as well, including certain variants of edge reinforced and vertex reinforced random walks, a type of personalized PageRank algorithm, and certain generalized Polya urn schemes.
In the special case where \(G(m)=P^{o}\) is independent of \(m\), the LDP in the current work reduces to the classical empirical measure LDP for finite state Markov chains [19, 20]. As is well known, in this case the rate function takes the following simple form
\[\tilde{I}(m)=\inf_{\gamma\in\mathcal{I}(m)}R(\gamma\|m\otimes P^{o}),\ m\in \mathcal{P}(\Delta^{o}), \tag{1.1}\]
where \(\mathcal{I}(m)\doteq\{\gamma\in\mathcal{P}(\Delta^{o}\times\Delta^{o}):\gamma (\Delta^{o}\times\cdot)=\gamma(\cdot\times\Delta^{o})=m(\cdot)\}\), \(m\otimes P^{o}\in\mathcal{P}(\Delta^{o}\times\Delta^{o})\) is defined as \(m\otimes P^{o}(x,y)=m(x)P^{o}(x,y)\), \(x,y\in\Delta^{o}\), and \(R\) is the relative entropy function. This rate function can be interpreted as saying that the most likely way the empirical measure is asymptotically close to a given probability measure \(m\) is for the realization of the sequence \(\{X_{n},\ n\in\mathbb{N}_{0}\}\) to behave like that from a Markov chain with transition probability kernel \(\gamma_{2|1}(\cdot\ |\ \cdot)\), where with \(x,y\in\Delta^{o}\), \(\gamma(x,y)=\gamma_{2|1}(y\ |\ x)m(x)\) defines the disintegration of the probability measure \(\gamma\) that achieves the infimum in the above variational formula. Indeed, this insight and an appropriate use of the ergodic theorem are the key ingredients in the proof of the large deviation lower bound in this classical setting. In contrast, for the self-interacting Markov chains considered in the current work, the atypical behaviors for which the empirical measure sequence is asymptotically close to a given \(m\in\mathcal{P}(\Delta^{o})\) are significantly more complex. Roughly speaking, after a long period of time, the suitably interpolated path constructed from the empirical measure sequence \(\{L^{n},\ n\in\mathbb{N}\}\) behaves like a trajectory, with a linear velocity, that converges to \(m\) and whose evolution is governed by certain dynamic local equilibria associated with time-dependent transition probability kernels on \(\Delta^{o}\times\Delta^{o}\) (cf. (P2), (P3)). The instantaneous local
averaging that is manifested in the form of the rate function is somewhat akin to the forms of rate functions for large deviations from stochastic averaging principles for multiscale stochastic dynamical systems [15, 22, 35]. This atypical behavior that produces a given \(m\in\mathcal{P}(\Delta^{o})\) can be seen from the definition of the rate function \(I\) in (2.4), which is described in terms of time-reversal of such linear paths, so that the convergence to \(m\) at \(\infty\) is replaced with the initial condition of the path being equal to \(m\). The infinite horizon discount in the variational formula for the rate function arises due to the natural time interpolation that is associated with the discrete evolution of \(L^{n}\) with steps of sizes \(1/(n+1)\) (see (3.2)). Such time interpolation is quite standard in the asymptotic analysis of stochastic approximation schemes [6, 8, 10, 28] and indeed a discounted cost has been previously observed in a rate function for certain large deviation problems arising from some stochastic approximation schemes with Gaussian noises [29]. We remark that in the special case when \(G(m)=P^{o}\) (i.e., \(G(m)\) is independent of \(m\)), the rate function in (2.4) is easily seen to reduce to the classical formula in (1.1) (see Example 1 in Section 8). We also note that the natural analogue of \(\tilde{I}\) in the general self-interacting setting, defined as
\[\tilde{I}(m)=\inf_{\gamma\in\mathcal{I}(m)}R(\gamma\|m\otimes G(m)),\;m\in \mathcal{P}(\Delta^{o}), \tag{1.2}\]
satisfies the inequality \(I(m)\leq\tilde{I}(m)\), \(m\in\mathcal{P}(\Delta^{o})\); see Remark 2.5.
We now make some comments on proof techniques. The basic idea is to use stochastic control representations for Laplace functionals of the form in (3.10) [14, 21]. Using this variational formula, the proof of the upper bound proceeds via natural tightness and weak convergence arguments. The main challenges and novelty are in the proof of the large deviation lower bound and so we limit our remarks to this inequality. The basic idea is to choose a near-optimal control \(\eta\), and the corresponding trajectory \(M\) given through (P3), in the variational formula for the rate function \(I(m)\) in (2.4), and then construct controlled empirical measures as in (3.2) that suitably approximate \(M\) and for which the associated cost, as given by the second term on the right side of (3.10), appropriately approximates the cost associated with \(\eta\) in (2.4). However, such a construction for an arbitrary near-optimal control \(\eta\) appears quite daunting, mainly due to the local equilibrium property (P2) that the constructed stochastic controls are required to achieve asymptotically. In order to handle this, we proceed by a series of approximations that lead to a 'well behaved' simple form near-optimal control that is more tractable for a suitable construction of controlled empirical measures. This is the main content of Section 5. Next, in Section 6 we proceed to the construction of controlled empirical measures that are designed to suitably approximate the simple form near-optimal path constructed in Section 5. This construction and proof of convergence are technically the most involved part of the proof. Detailed discussion of the construction can be found at the start of Section 6, but at a high level the idea is to employ the ergodic theorem in a dynamic fashion to successively approximate all the local equilibria that make up the simple form control \(\eta\) using suitably controlled empirical measures in such a manner that the associated costs also have the correct asymptotic behavior.
Finally, we discuss some related literature on large deviations. The model that we consider can be formulated as a type of urn model. Large deviations for a family of urn models (that are very different from the one considered here) have been studied in [23]. The paper [12] studies large deviations associated with a preferential attachment random graph by viewing it as a special type of an urn model. In the case \(d=2\), large deviations for certain generalized Polya urns have been studied in [24]. For a special choice of the 'urn function' in [24], this model reduces to a model of the form considered in the current work with \(d=2\) and \(G(m)(x,y)=(mP^{o})(y)\), \(x,y\in\Delta^{o}\), \(m\in\mathcal{P}(\Delta^{o})\) where \(P^{o}\) is a \(2\times 2\) transition probability matrix with strictly positive entries. Large deviations for a similar model but with a general \(d\) was recently studied in [16] under the condition that \(P^{o}(x,y)>0\) for all \(x,y\in\Delta^{o}\). The proofs in the latter paper also use stochastic control representations as in the current work, however the arguments there are significantly simpler due to fact that \(G(m)(x,y)\) does not depend on \(x\); in particular the main technical challenge of time varying equilibria does not arise in [16]. In fact as a corollary of the current work we obtain a substantial extension of the result in [16] where the condition \(P^{o}(x,y)>0\) for all \(x,y\in\Delta^{o}\) is relaxed to the requirement that \(P^{o}\) is an irreducible transition probability matrix
(see Section 8, Example 2). Our results also cover certain types of edge reinforced random walks (see Section 8, Example 5). Some results on large deviations for specific kinds of edge reinforced random walks (once-reinforced random walks) can be found in [27] and [37].
### Outline
This paper is organized as follows. In Section 1.2 we introduce some notation that is used throughout this work. In Section 2 we introduce the model of interest, state our main large deviation result (Theorem 2.4), and provide one basic example that motivates this study. In Section 3 we present the stochastic control representation that is used in the proof of Theorem 2.4, both in proving the large deviation upper bound and the lower bound. The large deviation upper bound is proved in Section 4. In Section 5, through appropriate perturbation, mollification, and discretization, we construct simple form near-optimal trajectories and controls that are tractable for constructing suitable controlled empirical measures for the proof of the large deviation lower bound. In Section 6 we proceed with this construction and provide the proof of the convergence of the controlled processes and costs, which finishes the proof of the large deviation lower bound. In Section 7 we show that the function \(I\) introduced in (2.4) is a rate function, namely it has compact sublevel sets. Finally, in Section 8 we present several examples for which the assumptions of Theorem 2.4 are satisfied.
### Notation
In this section we introduce some notation that is used throughout this work. Fix \(d\in\mathbb{N}\), and let \(\Delta^{o}=\{1,\ldots,d\}\). For a metric space \(S\), \(\mathcal{B}(S)\) denotes the corresponding Borel \(\sigma\)-field and \(\mathcal{P}(S)\) denotes the space of probability measures on \((S,\mathcal{B}(S))\) equipped with the topology of weak convergence. When \(S\) is a finite set, we let \(\mathcal{P}_{+}(S)\doteq\{m\in\mathcal{P}(S):\min_{x\in S}m(x)>0\}\). Recall that a function \(I:S\to[0,\infty]\) is called a rate function if it has compact sublevel sets, namely \(S_{k}\doteq\{x\in S:I(x)\leq k\}\) is compact for every \(k\in[0,\infty)\). For \(x\in S\), \(\delta_{x}\in\mathcal{P}(S)\) denotes the Dirac probability measure concentrated at \(x\). For a probability measure \(\eta\) on \(S_{1}\times S_{2}\times S_{3}\), \(\eta_{(i)}\) denotes the marginal distribution of \(\eta\) on \(S_{i}\), \(i=1,2,3\), and for \(i<j\), \(\eta_{(i,j)}\) denotes the marginal distribution of \(\eta\) on \(S_{i}\times S_{j}\). Similar notation is used for probability measures on other product spaces. For \(\mu,\nu\in\mathcal{P}(S)\), we denote the relative entropy of \(\nu\) with respect to \(\mu\) as \(R(\nu\|\mu)\), which is the extended real number defined as
\[R(\nu\|\mu)\doteq\int_{S}\left(\log\frac{d\nu}{d\mu}\right)d\nu,\]
if \(\nu\) is absolutely continuous with respect to \(\mu\), and \(+\infty\) otherwise. Let \(\mathcal{V}^{d}\doteq\{e_{1},\ldots,e_{d}\}\), where \(e_{x}\) is the \(x\)-th unit coordinate vector in \(\mathbb{R}^{d}\). For a locally compact space \(S\), let \(\mathcal{M}(S)\) denote the space of locally finite measures on \(S\) equipped with the vague topology. We denote by \(C_{b}(\mathcal{P}(\Delta^{o}))\) the space of bounded continuous functions from \(\mathcal{P}(\Delta^{o})\) to \(\mathbb{R}\). For \(m,\tilde{m}\in\mathcal{P}(\Delta^{o})\), we write \(\|m-\tilde{m}\|\doteq\sum_{x\in\Delta^{o}}|m(x)-\tilde{m}(x)|\), and we use the same notation for the norm of a vector in \(\mathbb{R}^{d}\). For a Polish space \(S\), \(C([0,\infty):S)\) will denote the space of continuous functions from \([0,\infty)\) to \(S\), equipped with the topology of local uniform convergence. As a convention \(\int_{a}^{b}f(s)ds\) is taken to be \(0\) if \(a\geq b\) and \(\sum_{i=k}^{j}a_{i}\) is taken to be \(0\) if \(k>j\). For \(v\in\mathbb{R}^{d}\) we use \(v_{x}\) and \(v(x)\) interchangeably to denote the \(x\)-th coordinate of \(v\). A transition kernel \(K\) on a finite set \(S\) is a map \(K:S\times S\to[0,1]\) such that \(\sum_{y\in S}K(x,y)=1\) for all \(x\in S\). For such a kernel and \(x,y\in S\), we use \(K_{x,y}\) and \(K(x,y)\) interchangeably, and we write \(\mathcal{K}(S)\) to denote the set of transition kernels on \(S\). We write \(I_{d}\) to denote the \(d\times d\) identity matrix. For a matrix \(A\), we write \(A>0\) to denote that all of its entries are strictly positive. Finally, we write \(\mathbb{R}_{+}\) to denote \([0,\infty)\).
## 2 Setting and Main Result
### Description of the Model
Consider a map \(G:\mathcal{P}(\Delta^{o})\to\mathcal{K}(\Delta^{o})\) and fix \(x_{0}\in\Delta^{o}\). We consider a collection \(\{X_{n},\;n\in\mathbb{N}_{0}\}\) of \(\Delta^{o}\)-valued random variables, a collection \(\{L^{n},\;n\in\mathbb{N}\}\) of \(\mathcal{P}(\Delta^{o})\)-valued random measures, and a filtration \(\{\mathcal{F}_{n},\;n\in\mathbb{N}_{0}\}\) on some probability space \((\Omega,\mathcal{F},P)\), defined recursively as follows. Let \(X_{0}\doteq x_{0}\), \(\mathcal{F}_{0}\doteq\{\emptyset,\Omega\}\), and \(L^{1}\doteq\delta_{x_{0}}\). Having defined \(\{X_{i},L^{i+1},\;0\leq i\leq n\}\) and \(\sigma\)-fields \(\{\mathcal{F}_{i},\;i\leq n\}\) for some \(n\in\mathbb{N}_{0}\), define
\[P(X_{n+1}=y\mid\mathcal{F}_{n})\doteq G(L^{n+1})_{X_{n},y},\;\;y\in\Delta^{o},\]
\(\mathcal{F}_{n+1}\doteq\sigma\{X_{k},\;k\leq n+1\}\), and
\[L^{n+2}\doteq\frac{1}{n+2}\sum_{i=0}^{n+1}\delta_{X_{i}}. \tag{2.1}\]
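Although the analysis below is entirely probabilistic, the recursion above is straightforward to simulate. The following Python sketch (illustrative only; the map \(G\), the dimension \(d\), and the chain length are user-supplied placeholders) generates the states \(X_{n}\) and the empirical measures \(L^{n}\) exactly as prescribed by (2.1).

```python
import numpy as np

def simulate_empirical_measures(G, x0, n_steps, d, rng=None):
    """Simulate the self-interacting chain of Section 2.1.

    G       : callable mapping a probability vector m of length d to a
              d x d stochastic matrix G(m).
    x0      : initial state in {0, ..., d-1} (states are 0-indexed here).
    Returns the array of empirical measures L^1, ..., L^{n_steps+1}.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.zeros(d)
    counts[x0] = 1.0                   # L^1 = delta_{x0}
    x = x0
    history = [counts.copy()]
    for _ in range(n_steps):
        L = counts / counts.sum()      # current empirical measure L^{n+1}
        p = G(L)[x]                    # P(X_{n+1} = . | F_n) = G(L^{n+1})_{X_n, .}
        x = rng.choice(d, p=p)
        counts[x] += 1.0
        history.append(counts / counts.sum())
    return np.array(history)
```

For instance, when \(G(m)\) does not depend on \(m\) the scheme simply records the running occupation measure of an ordinary Markov chain.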
### Statement of Results
We introduce the following two assumptions on the operator \(G\).
**Assumption 2.1**.: _[Lipschitz Continuity] There is \(L_{G}\in(0,\infty)\) such that for all \(m,\tilde{m}\in\mathcal{P}(\Delta^{o})\),_
\[\sum_{x,y\in\Delta^{o}}|G(m)_{x,y}-G(\tilde{m})_{x,y}|\leq L_{G}\|m-\tilde{m}\|.\]
The above assumption is the only requirement for the large deviation upper bound. A \(d\times d\) matrix \(A\) is called an _adjacency matrix_ if it has entries \(0\) or \(1\); and it is called an _irreducible adjacency matrix_ if for each \(x,y\in\Delta^{o}\) there is an \(m\in\mathbb{N}\) such that \(A^{m}(x,y)>0\).
We denote by \(\mathcal{A}\) the collection of all adjacency matrices \(A\) that have the property that, for each \((x,y)\in\Delta^{o}\times\Delta^{o}\), \(A(x,y)=0\) implies \(G(m)(x,y)=0\) for all \(m\in\mathcal{P}(\Delta^{o})\). We write
\[A_{+}\doteq\{(x,y)\in\Delta^{o}\times\Delta^{o}:A_{x,y}=1\}.\]
Note that the class \(\mathcal{A}\) is nonempty as it contains the matrix with all ones (in that case the above property is vacuously true). In fact we will establish a collection of large deviation upper bounds, one for each choice of \(A\in\mathcal{A}\).
For the lower bound we need the following additional assumptions, which in particular associate a 'minimal' adjacency matrix \(A\) with the map \(G\).
**Assumption 2.2**.:
1. _[Linearity] For all_ \(\kappa\in[0,1]\) _and_ \(m,\tilde{m}\in\mathcal{P}(\Delta^{o})\)_,_ \[G(\kappa m+(1-\kappa)\tilde{m})=\kappa G(m)+(1-\kappa)G(\tilde{m}).\]
2. _[Communication Structure] There is an irreducible adjacency matrix_ \(A\) _such that the following hold:_ 1. \(A\in\mathcal{A}\)_._ 2. _There is a_ \(\delta_{0}^{A}\in(0,\infty)\) _such that if_ \((x,y)\in A_{+}\)_, then_ \[G(m)_{x,y}\geq\delta_{0}^{A}\min_{z\in\Delta^{o}}m_{z},\;\;m\in\mathcal{P}( \Delta^{o}).\]
3. _[Positive Fixed Point] There is a_ \(\pi^{*}\in\mathcal{P}_{+}(\Delta^{o})\) _such that_ \(\pi^{*}G(\pi^{*})=\pi^{*}\)_._
4. _[Nondegeneracy of Empirical measure] For every_ \(x\in\Delta^{o}\)__ \[P(\omega\in\Omega:\text{ for some }n\in\mathbb{N},L^{n}(\omega)(x)>0)=1.\]
_Remark 2.3_.:
1. Note that Assumption 2.2 (1) implies that Assumption 2.1 holds with \(L_{G}=1\).
2. In many examples of interest we will have that a strong law of large numbers holds and that the limiting measure is non-degenerate, namely \(L^{n}\to\pi^{*}\) a.s. for some \(\pi^{*}\in\mathcal{P}_{+}(\Delta^{o})\) as \(n\to\infty\). In such a case, Assumption 2.2 (4) clearly holds. Also, in such a case, it is easy to verify that \(\pi^{*}\) is a fixed point, namely \(\pi^{*}G(\pi^{*})=\pi^{*}\). Thus, Assumption 2.2 (3) holds as well.
3. Consider the map \(T:\mathcal{P}(\Delta^{o})\to\mathcal{P}(\Delta^{o})\) given by \(Tm\doteq mG(m)\). Since \(\mathcal{P}(\Delta^{o})\) is compact and convex, Assumption 2.1 and Brouwer's fixed point theorem ensure that there is some \(\pi^{*}\in\mathcal{P}(\Delta^{o})\) such that \(T\pi^{*}=\pi^{*}G(\pi^{*})=\pi^{*}\). In addition, in many situations of interest \(G(m)\) will be an irreducible transition probability kernel for all \(m\in\mathcal{P}(\Delta^{o})\). In such cases we have in fact that \(\pi^{*}\in\mathcal{P}_{+}(\Delta^{o})\) and so Assumption 2.2 (3) holds. Suppose the following stronger form of irreducibility holds: \[\text{For some }K\in\mathbb{N}\text{ and all }m_{1},\dots,m_{K}\in\mathcal{P}(\Delta^{o}),\ \sum_{j=1}^{K}G(m_{1})G(m_{2})\cdots G(m_{j})>0.\] (2.2) Then, as shown in Lemma A.2 in the Appendix, in this case Assumption 2.2 (4) holds as well.
In Section 8 we present several examples for which the assumptions of Theorem 2.4 are satisfied; see also Example 2.6 below.
We now introduce the rate function that governs the large deviation asymptotics. This function will be defined in terms of a matrix \(A\in\mathcal{A}\) but this dependence will be suppressed in the notation.
Let
\[\mathcal{P}^{*}(\Delta^{o})\doteq\{m\in\mathcal{P}(\Delta^{o}):\text{ for some }\bar{m}\in\mathcal{P}(\Delta^{o}\times\Delta^{o}),\]
\[\bar{m}_{(2)}=\bar{m}_{(1)}=m\text{ and }\bar{m}(x,y)=0\text{ for all }(x,y)\in(A_{+})^{c}\}.\]
The class \(\mathcal{P}^{*}(\Delta^{o})\) consists of all probability measures \(m\) that are invariant measures for some transition probability kernel for which, at each state charged by \(m\), jumps can only occur to neighbors as defined by the adjacency matrix \(A\). Let \(\mathcal{U}\) denote the collection of all measurable maps from \(\mathbb{R}_{+}\) to \(\mathcal{P}(\Delta^{o}\times\Delta^{o})\). Fix \(m\in\mathcal{P}(\Delta^{o})\) and suppose that a \(\eta\in\mathcal{U}\) satisfies the following properties:
1. For \(x,y\in\Delta^{o}\) and \(t\in\mathbb{R}_{+}\), let \[\beta(\{x\}\times\{y\}\times[0,t])\doteq\int_{0}^{t}\eta(\{x\}\times\{y\},s)ds.\] Then, \[\beta(\{x\}\times\{y\}\times[0,t])=0,\ \text{ for all }\ (x,y)\in(A_{+})^{c}\text{ and }t\in\mathbb{R}_{+}.\] (P1)
2. For a.e. \(s\in\mathbb{R}_{+}\), the two marginals of \(\eta(\cdot\mid s)\) are the same, namely, disintegrating \(\eta(x,y\mid s)\doteq\eta(\{x\}\times\{y\},s)\) as \[\eta(x,y\mid s)=\eta_{(1)}(x\mid s)\eta_{2\mid 1}(y\mid s,x),\] we have \[\eta_{(1)}(x\mid s)=\sum_{z\in\Delta^{o}}\eta(z,x\mid s)\doteq\eta_{(2)}(x\mid s ),\text{ for all }x\in\Delta^{o},\text{ a.e. }s\in\mathbb{R}_{+}.\] (P2)
3. If \(M:\mathbb{R}_{+}\to\mathbb{R}^{d}\) satisfies, \[M(t)=m-\int_{0}^{t}\eta_{(1)}(s)ds+\int_{0}^{t}M(s)ds,\ \ t\in\mathbb{R}_{+},\] (P3) where \(\eta_{(1)}(s)\doteq\eta_{(1)}(\cdot\mid s)\), then \(M\in C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}))\). Furthermore, there is a \(\mathcal{T}\in C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}\times\Delta^{o}))\) satisfying, for all \(s,t\in\mathbb{R}_{+}\), \(\|\mathcal{T}(t)-\mathcal{T}(s)\|\leq 2|t-s|\), \(M(t)=(\mathcal{T}(t))_{(1)}=(\mathcal{T}(t))_{(2)}\), and \(\mathcal{T}(t)(x,y)=0\) for all \((x,y)\in(A_{+})^{c}\). In particular, for all \(t\in\mathbb{R}_{+}\), \(M(t)\in\mathcal{P}^{*}(\Delta^{o})\).
Let
\[\mathcal{U}(m)=\{\eta\in\mathcal{U}:\eta\,\text{satisfies}\,(a),(b),(c)\}. \tag{2.3}\]
Note that for each \(\eta\in\mathcal{U}\) and \(m\in\mathcal{P}(\Delta^{o})\), there is a unique \(M\in C(\mathbb{R}_{+}:\mathbb{R}^{d})\) that solves (P3); however, for \(\eta\) to belong to \(\mathcal{U}(m)\) we require the stronger condition that for each \(t\geq 0\), \(M(t)\in\mathcal{P}^{*}(\Delta^{o})\). If, for some \(m\in\mathcal{P}(\Delta^{o})\) and \(\eta\in\mathcal{U}(m)\), we have that \(M\in C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}))\) solves (P3), we say that \(M\)_solves_\(\mathcal{U}(m,\eta)\).
For \(m\in\mathcal{P}(\Delta^{o})\), define \(I:\mathcal{P}(\Delta^{o})\to[0,\infty]\) as
\[I(m)\doteq\inf_{\eta\in\mathcal{U}(m)}\int_{0}^{\infty}\exp(-s)\sum_{x\in \Delta^{o}}\eta_{(1)}(x\mid s)R\left(\eta_{2|1}(\cdot\mid s,x)\|G(M(s))(x, \cdot)\right)ds, \tag{2.4}\]
where \(M\) solves \(\mathcal{U}(m,\eta)\). By the chain rule for relative entropy and (P2),
\[I(m)=\inf_{\eta\in\mathcal{U}(m)}\int_{0}^{\infty}\exp(-s)R\left(\eta(\cdot \mid s)\|\eta_{(1)}(\cdot\mid s)\otimes G(M(s))(\cdot,\cdot)\right)ds,\]
where \(\eta_{(1)}(\cdot\mid s)\otimes G(M(s))(\cdot,\cdot)\in\mathcal{P}(\Delta^{o} \times\Delta^{o})\) is defined as
\[\eta_{(1)}(\cdot\mid s)\otimes G(M(s))(\{x\}\times\{y\})\doteq\eta_{(1)}(x \mid s)G(M(s))_{x,y},\ \ x,y\in\Delta^{o}.\]
The following theorem is the main result of this work, which establishes a Large Deviation Principle (LDP) for \(\{L^{n},\ n\in\mathbb{N}\}\).
**Theorem 2.4**.: _Fix \(A\in\mathcal{A}\) and let \(I:\mathcal{P}(\Delta^{o})\to[0,\infty]\) be the function defined in (2.4). Suppose that Assumption 2.1 is satisfied. Then \(I\) is a rate function and the sequence \(\{L^{n+1},\ n\in\mathbb{N}_{0}\}\) defined in (2.1) satisfies the LDP upper bound with rate function \(I\), namely, for each closed set \(F\subseteq\mathcal{P}(\Delta^{o})\),_
\[\limsup_{n\to\infty}n^{-1}\log P(L^{n+1}\in F)\leq-\inf_{m\in F}I(m).\]
_Suppose in addition that Assumption 2.2 is satisfied. Then, the LDP lower bound holds as well, with the rate function \(I\) and the adjacency matrix \(A\) given as in Assumption 2.2, namely, for each open set \(G\subseteq\mathcal{P}(\Delta^{o})\),_
\[\liminf_{n\to\infty}n^{-1}\log P(L^{n+1}\in G)\geq-\inf_{m\in G}I(m).\]
Proof.: Large deviation upper and lower bounds are proved in Theorem 4.1 and Theorem 5.1 respectively. In Section 7 we show that \(I\) is a rate function, namely it has compact sublevel sets.
_Remark 2.5_.: Recall the function \(\tilde{I}\) from (1.2). We now show that \(I(m)\leq\tilde{I}(m)\) for all \(m\in\mathcal{P}(\Delta^{o})\). Without loss of generality suppose that \(\tilde{I}(m)<\infty\) and consider \(\gamma\in\mathcal{I}(m)\) with \(R(\gamma\|m\otimes G(m))<\infty\). Then we must have that \(\gamma(x,y)=0\) for all \((x,y)\in(A_{+})^{c}\). Thus, if we define \(\eta\in\mathcal{U}\) by \(\eta(\cdot\mid s)\doteq\gamma\) for all \(s\in\mathbb{R}_{+}\), then \(\eta\) satisfies (P1) and (P2). Furthermore, since \(\eta_{(1)}(s)=\gamma_{(1)}=m\), (P3) is satisfied with \(M(t)=m\), \(t\in\mathbb{R}_{+}\). This shows that \(\eta\in\mathcal{U}(m)\) and that \(M\) solves \(\mathcal{U}(m,\eta)\). Note that the cost on the right side of (2.4), with this choice of \(\eta\in\mathcal{U}(m)\), is
\[\int_{0}^{\infty}\exp(-s)R\left(\gamma\|\gamma_{(1)}\otimes G(m)\right)ds=R\left( \gamma\|m\otimes G(m)\right).\]
Thus, \(I(m)\leq R\left(\gamma\|m\otimes G(m)\right)\), from which the inequality \(I(m)\leq\tilde{I}(m)\) follows on taking the infimum over \(\gamma\). In Section 8, Example 1, we show that when \(G(m)\) is independent of \(m\) then the reverse inequality holds as well.
We now give one basic example for which our assumptions (namely Assumption 2.2) are satisfied. Several other examples are discussed in Section 8.
_Example 2.6_ (**Quasi-Stationary Distributions**).: Let \(\Delta=\Delta^{o}\cup\{0\}\), \(P\in\mathcal{K}(\Delta)\), and let \(\{Y_{n},\ n\in\mathbb{N}_{0}\}\) be a Markov chain with transition probability kernel \(P\). Then \(\pi^{*}\in\mathcal{P}(\Delta^{o})\) is called a quasi-stationary distribution (QSD) for the chain \(\{Y_{n},\ n\in\mathbb{N}_{0}\}\) if
\[P_{\pi^{*}}(Y_{n}=x\mid Y_{n}\in\Delta^{o})=\pi^{*}_{x},\ \ x\in\Delta^{o},n\in \mathbb{N}_{0},\]
where \(P_{\pi^{*}}\) denotes the probability measure under which \(Y_{0}\) is distributed as \(\pi^{*}\). Suppose that the substochastic matrix \(P^{o}\) defined by
\[P^{o}_{x,y}\doteq P_{x,y},\ \ x,y\in\Delta^{o}\]
is irreducible. Then it is known that there is a unique QSD for the Markov chain \(\{Y_{n},\ n\in\mathbb{N}_{0}\}\)[18]. In [1] a basic Monte-Carlo method for computing this QSD was introduced. Define \(G:\mathcal{P}(\Delta^{o})\to\mathcal{K}(\Delta^{o})\) as
\[G(m)_{x,y}\doteq P_{x,y}+P_{x,0}m_{y},\ \ x,y\in\Delta^{o},m\in\mathcal{P}( \Delta^{o}) \tag{2.5}\]
and construct \(\{X_{n},L^{n+1},\ n\in\mathbb{N}_{0}\}\) as in Section 2.1. Then [1, 5] show that \(L^{n}\) converges a.s. to the unique QSD \(\pi^{*}\) of the chain \(\{Y_{n},\ n\in\mathbb{N}_{0}\}\). The current work establishes a large deviation principle for the sequence \(\{L^{n},\ n\in\mathbb{N}_{0}\}\) under the same irreducibility assumption on \(P^{o}\) made in [5]. To see that Assumption 2.2 is satisfied, note that Part 1 of this Assumption is clearly satisfied by \(G\). Part 3 is also satisfied under the above irreducibility assumption (see [18]). From [1], \(L^{n}\) converges a.s. to the unique QSD \(\pi^{*}\), so Part 4 holds as well. Finally, for Part 2, define \(A_{+}\doteq\{(x,y)\in\Delta^{o}\times\Delta^{o}:P_{x,y}+P_{x,0}>0\}\) and \(A_{x,y}=\mathbf{1}_{\{(x,y)\in A_{+}\}}\). Clearly \(A\) is irreducible and parts 2a and 2b of Assumption 2.2 are satisfied with this choice of the adjacency matrix \(A\). Thus, the conditions for Theorem 2.4 are satisfied and one has a large deviation principle for the empirical measure associated with the self-interacting chain introduced in [1] for the approximation of the QSD of \(\{Y_{n},\ n\in\mathbb{N}_{0}\}\). We remark that the model in (2.5) can also be viewed as a type of a **vertex-reinforced random walk** on \(\Delta^{o}\). In this walk, given that at some instant the walker is at site \(x\), it jumps to a site \(y\) with probability that depends on the fraction of time the walker has previously visited the site \(y\), as given by the formula \(P(x,y)+P(x,0)m_{y}\).
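As an illustration of the scheme in (2.5) (provided only as an informal aid and not used in the proofs), the sketch below simulates the self-interacting chain for a given absorbing kernel \(P\) on \(\{0\}\cup\Delta^{o}\) and returns the empirical measure after a fixed number of steps; by the convergence results of [1, 5] quoted above, this approximates the QSD \(\pi^{*}\). The specific numerical kernel in the usage example is an arbitrary illustrative choice.

```python
import numpy as np

def approximate_qsd(P, x0, n_steps, rng=None):
    """Monte Carlo approximation of the QSD via the kernel G in (2.5).

    P  : (d+1) x (d+1) transition matrix on {0, 1, ..., d}, with 0 the
         absorbing state; the chain of interest lives on states 1..d,
         re-indexed internally as 0..d-1.
    """
    rng = np.random.default_rng() if rng is None else rng
    Po = P[1:, 1:]             # substochastic kernel P^o on Delta^o
    p_abs = P[1:, 0]           # absorption probabilities P_{x,0}
    d = Po.shape[0]
    counts = np.zeros(d)
    counts[x0] = 1.0
    x = x0
    for _ in range(n_steps):
        m = counts / counts.sum()
        p = Po[x] + p_abs[x] * m       # G(m)_{x,.} = P_{x,.} + P_{x,0} m
        x = rng.choice(d, p=p)
        counts[x] += 1.0
    return counts / counts.sum()       # approximates the QSD pi^*

# Illustrative 3-state example (state 0 is absorbing):
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.1, 0.5, 0.3, 0.1],
              [0.2, 0.2, 0.4, 0.2],
              [0.3, 0.1, 0.3, 0.3]])
print(approximate_qsd(P, x0=0, n_steps=200000))
```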
## 3 A Stochastic Control Representation
Throughout this section and the next we fix \(A\in\mathcal{A}\). It will be convenient to consider the following pathwise construction of the collection \(\{X_{n},L^{n+1},\ n\in\mathbb{N}_{0}\}\), which will be used throughout. For this construction it is useful to identify the state space with the space \(\mathcal{V}^{d}\) introduced in Section 1.2. In particular, note that each \(K\in\mathcal{K}(\Delta^{o})\) can be associated with a unique \(K^{\mathcal{V}}\in\mathcal{K}(\mathcal{V}^{d})\) through the identity
\[K^{\mathcal{V}}_{e_{x},e_{y}}=K_{x,y},\ x,y\in\Delta^{o}.\]
Similarly, define the operator \(G^{\mathcal{V}}:\mathcal{P}(\Delta^{o})\to\mathcal{K}(\mathcal{V}^{d})\) by
\[G^{\mathcal{V}}(m)_{e_{x},e_{y}}\doteq G(m)_{x,y},\ x,y\in\Delta^{o}.\]
Let \(\{\nu^{k}(x,m),\ x\in\Delta^{o},m\in\mathcal{P}(\Delta^{o}),k\in\mathbb{N}\}\) be iid \(\mathcal{V}^{d}\)-valued random fields such that, for each \(x\in\Delta^{o}\) and \(m\in\mathcal{P}(\Delta^{o})\),
\[P(\nu^{1}(x,m)=e_{y})=G^{\mathcal{V}}(m)_{e_{x},e_{y}}=G(m)_{x,y},\quad y\in \Delta^{o}.\]
Then, the collection \(\{X_{n},L^{n+1},\ n\in\mathbb{N}_{0}\}\) has the following distributionally equivalent representation: \((X_{0},L^{1})=(x_{0},\delta_{x_{0}})\),
\[L^{k+1}=L^{k}+\frac{1}{k+1}\left[\nu^{k}(X_{k-1},L^{k})-L^{k}\right],\quad e_{X_ {k}}=\nu^{k}(X_{k-1},L^{k}),\ \ k\in\mathbb{N}. \tag{3.1}\]
To prove the upper and lower bounds in Theorems 4.1 and 5.1, we rely on a certain stochastic control representation for exponential moments of functionals of \(\{L^{n},\ n\in\mathbb{N}\}\) presented below. This representation is given in terms of certain controlled analogues of \(\{\nu^{k}(X_{k-1},L^{k}),L^{k+1},\ k\in\mathbb{N}\}\).
For each \(n\in\mathbb{N}\), the controlled stochastic system is a sequence \(\{\bar{L}^{n,k},\ k\in\mathbb{N}\}\) of \(\mathcal{P}(\Delta^{o})\)-valued random variables which is defined recursively in terms of a collection of random probability measures on \(\mathcal{V}^{d}\), \(\{\bar{\mu}^{n,k},\ k\in\mathbb{N}\}\), where for each \(k\in\mathbb{N}\), \(\bar{\mu}^{n,k}\) is \(\bar{\mathcal{F}}^{n,k}\doteq\sigma(\{\bar{L}^{n,j},\ 1\leq j\leq k\})\) measurable, \(\bar{L}^{n,1}\doteq\delta_{x_{0}}\), and, having defined \(\{\bar{L}^{n,j},1\leq j\leq k\}\), \(\bar{L}^{n,k+1}\) is defined as
\[\bar{L}^{n,k+1}\doteq\bar{L}^{n,k}+\frac{1}{k+1}\left[\bar{\nu}^{n,k}-\bar{L}^ {n,k}\right],\ \ k\in\mathbb{N}, \tag{3.2}\]
where \(\bar{\nu}^{n,k}\) is a \(\mathcal{V}^{d}\)-valued random variable such that
\[P[\bar{\nu}^{n,k}=e_{x}\mid\bar{\mathcal{F}}^{n,k}]=\bar{\mu}^{n,k}(e_{x}),\ \ x\in\Delta^{o}. \tag{3.3}\]
We set \(\bar{\nu}^{n,0}\doteq\delta_{x_{0}}\), and, for each \(n\in\mathbb{N}\) and \(k\in\mathbb{N}_{0}\), we let \(\bar{X}^{n,k}\) denote the \(\Delta^{o}\)-valued random variable such that \(e_{\bar{X}^{n,k}}=\bar{\nu}^{n,k}\). We note that the evolution equation (3.2) can be viewed as a 'controlled' analogue of (3.1). For the stochastic control representation we give, it suffices, as discussed below (3.10), to consider controlled processes for which, a.s., for each \(k\in\mathbb{N}_{0}\),
\[\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}(e_{x},e_{y})=0,\ \text{for all}\ (x,y)\in(A_{+})^{c}. \tag{3.4}\]
We denote the collection of all such _control_ sequences \(\{\bar{\mu}^{n,k},\ k\in\mathbb{N}\}\) as \(\Theta^{n}\).
It will be convenient to also consider the following collection of \(\mathcal{P}(\Delta^{o}\times\Delta^{o})\)-valued random variables. Let \(\bar{\mathcal{T}}^{n,1}\doteq\delta_{(x_{0},\bar{X}^{n,1})}\), and, having defined \(\{\bar{\mathcal{T}}^{n,j},\ 1\leq j\leq k\}\), define \(\bar{\mathcal{T}}^{n,k+1}\) as
\[\bar{\mathcal{T}}^{n,k+1}\doteq\bar{\mathcal{T}}^{n,k}+\frac{1}{k+1}\left[\bar {\nu}^{n,k}\otimes\bar{\nu}^{n,k+1}-\bar{\mathcal{T}}^{n,k}\right],\ \ k\in\mathbb{N}, \tag{3.5}\]
where, with a slight abuse of notation, \(\bar{\nu}^{n,k}\otimes\bar{\nu}^{n,k+1}\) is a \(\mathcal{P}(\Delta^{o}\times\Delta^{o})\)-valued random variable defined, for \((x,y)\in\Delta^{o}\times\Delta^{o}\), as \(\bar{\nu}^{n,k}\otimes\bar{\nu}^{n,k+1}(x,y)=1\), if \(\bar{\nu}^{n,k}=e_{x}\) and \(\bar{\nu}^{n,k+1}=e_{y}\); and \(0\) otherwise.
Since it is more convenient to work with processes indexed with a continuous time parameter, we consider the following time interpolation sequence \(\{t_{k},\ k\in\mathbb{N}_{0}\}\) defined by
\[t_{0}\doteq 0,\ \ t_{k}=\sum_{j=1}^{k}(j+1)^{-1},\ \ k\in\mathbb{N}.\]
Such a time interpolation is standard in the study of stochastic approximation schemes [6, 8, 10, 28]. For each \(n\in\mathbb{N}\), define the \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}))\)-valued (resp. \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}\times\Delta^{o}))\)-valued) random variable \(\bar{L}^{n}\) (resp. \(\bar{\mathcal{T}}^{n}\)) by linear interpolation:
\[\bar{L}^{n}(t) \doteq\bar{L}^{n,k+1}+(k+2)(t-t_{k})[\bar{L}^{n,k+2}-\bar{L}^{n,k+ 1}],\ \ t\in[t_{k},t_{k+1}),\ k\in\mathbb{N}_{0}, \tag{3.6}\] \[\bar{\mathcal{T}}^{n}(t) \doteq\bar{\mathcal{T}}^{n,k+1}+(k+2)(t-t_{k})[\bar{\mathcal{T}}^ {n,k+2}-\bar{\mathcal{T}}^{n,k+1}],\ \ t\in[t_{k},t_{k+1}),\ k\in\mathbb{N}_{0},\]
Consider random measures on \(\mathcal{V}^{d}\times[0,t_{n}]\) and \(\mathcal{V}^{d}\times\mathcal{V}^{d}\times[0,t_{n}]\) defined as follows: for \(A\subseteq\mathcal{V}^{d}\), \(C\subseteq\mathcal{V}^{d}\) and \(B\in\mathcal{B}[0,t_{n}]\),
\[\bar{\Lambda}^{n}(A\times B)\doteq\int_{B}\bar{\Lambda}^{n}(A \mid t)dt, \tag{3.7}\] \[\bar{\xi}^{n}(A\times C\times B)\doteq\int_{B}\bar{\xi}^{n}(A \times C\mid t)dt,\ \ \bar{\Xi}^{n}(A\times C\times B)\doteq\int_{B}\bar{\Xi}^{n}(A\times C\mid t)dt\]
where, for \(k\leq n-1\) and \(t\in[t_{k},t_{k+1})\),
\[\bar{\Lambda}^{n}(\cdot\mid t)\doteq\delta_{\bar{\nu}^{n,k+1}}(\cdot),\ \ \bar{\xi}^{n}(\cdot\mid t)\doteq\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}( \cdot),\ \ \bar{\Xi}^{n}(\cdot\mid t)\doteq\delta_{\bar{\nu}^{n,k+1}}\otimes\delta_{\bar{ \nu}^{n,k+2}}(\cdot). \tag{3.8}\]
From (3.4), it follows that if \((x,y)\in(A_{+})^{c}\), then, for a.e. \(t\in\mathbb{R}_{+}\),
\[\bar{\xi}^{n}(e_{x},e_{y}\mid t)=0,\ \ \bar{\Xi}^{n}(e_{x},e_{y}\mid t)=0,\ \text{a.s.} \tag{3.9}\]
The following variational representation follows from [21, Theorem 4.2.2], [14, Theorem 4.5]. For each \(F\in C_{b}(\mathcal{P}(\Delta^{o}))\),
\[-n^{-1}\log E\exp[-nF(L^{n+1})]\\ =\inf_{\{\bar{\mu}^{n,\star}\}\in\Theta^{n}}E\left[F(\bar{L}^{n} (t_{n}))+n^{-1}\sum_{k=0}^{n-1}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu} ^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})( \bar{\nu}^{n,k},\cdot)\right)\right]. \tag{3.10}\]
The fact that in the infimum on the right side we can restrict, without loss of generality, to sequences \(\{\bar{\mu}^{n,i},\ i\in\mathbb{N}\}\) for which (3.4) holds a.s. is because, if this property is violated, then the expression on the right side is \(\infty\) since \(A\in\mathcal{A}\). Using (3.6) and the representation in (3.7), we can rewrite the right side of the identity in (3.10) as follows. For \(s\in\mathbb{R}_{+}\), let
\[m(s)\doteq\sup\{k:t_{k}\leq s\},\quad a(s)\doteq t_{m(s)}. \tag{3.11}\]
For each \(n\in\mathbb{N}\), define the random measure \(\bar{\zeta}^{n}\) on \(\mathcal{V}^{d}\times\mathcal{V}^{d}\times[0,t_{n}]\) as follows: for \(A\subseteq\mathcal{V}^{d}\), \(C\subseteq\mathcal{V}^{d}\) and \(B\in\mathcal{B}[0,t_{n}]\),
\[\bar{\zeta}^{n}(A\times C\times B)\doteq\int_{B}\bar{\zeta}^{n}\left(A\times C \mid t,\bar{L}^{n}(a(t))\right)dt, \tag{3.12}\]
where for \(k\leq n-1\), \(m\in\mathcal{P}(\Delta^{o})\) and \(t\in[t_{k},t_{k+1})\),
\[\bar{\zeta}^{n}\left(\cdot\mid t,m\right)\doteq\delta_{\bar{\nu}^{n,k}}\otimes G ^{\mathcal{V}}(m)(\bar{\nu}^{n,k},\cdot)(\cdot).\]
From (3.2), (3.5), (3.6), and (3.8) it follows that, for \(t\in[0,t_{n}]\),
\[\bar{L}^{n}(t)=\bar{L}^{n}(0)+\int_{0}^{t}\sum_{v\in\mathcal{V}^{d}}(v-\bar{L} ^{n}(a(s)))\bar{\Lambda}^{n}(v\mid s)ds, \tag{3.13}\]
and, for \((x,y)\in\Delta^{o}\times\Delta^{o}\),
\[\bar{\mathcal{T}}^{n}(t)(x,y)=\bar{\mathcal{T}}^{n}(0)(x,y)+\int_{0}^{t}\sum_ {v,v^{\prime}\in\mathcal{V}^{d}}(\mathbf{1}_{\{v=e_{x},v^{\prime}=e_{y}\}}- \bar{\mathcal{T}}^{n}(a(s))(x,y))\bar{\Xi}^{n}(v,v^{\prime}\mid s)ds.\]
Define \(\psi_{e}:\mathbb{R}_{+}\to\{2,3,\dots\}\) as
\[\psi_{e}(t)\doteq\sum_{k=0}^{\infty}(k+2)\mathbf{1}_{[t_{k},t_{k+1})}(t), \tag{3.14}\]
so that
\[\begin{split}& n^{-1}\sum_{k=0}^{n-1}R\left(\delta_{\bar{\nu}^{n,k }}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}( \bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\\ &=n^{-1}\int_{0}^{t_{n}}\psi_{e}(s)R\left(\bar{\xi}^{n}(\cdot \mid s)\|\bar{\zeta}^{n}\left(\cdot\mid s,\bar{L}^{n}(a(s))\right)\right)ds. \end{split} \tag{3.15}\]
Define \(\mathcal{P}(\mathcal{V}^{d}\times\mathbb{R}_{+})\) and \(\mathcal{P}(\mathcal{V}^{d}\times\mathcal{V}^{d}\times\mathbb{R}_{+})\)-valued random variables as follows: for \(t\in\mathbb{R}_{+}\) and \(x,y\in\Delta^{o}\), let
\[\begin{split}\gamma^{n}(\{e_{x}\}\times[0,t])& \doteq n^{-1}\int_{0}^{t_{n}\wedge t}\psi_{e}(t_{n}-s)\bar{\Lambda}^{n}(e_{x} \mid t_{n}-s)ds\\ \beta^{n}(\{e_{x}\}\times\{e_{y}\}\times[0,t])& \doteq n^{-1}\int_{0}^{t_{n}\wedge t}\psi_{e}(t_{n}-s)\bar{\xi}^{n}((e_{x},e_{ y})\mid t_{n}-s)ds\\ \theta^{n}(\{e_{x}\}\times\{e_{y}\}\times[0,t])& \doteq n^{-1}\int_{0}^{t_{n}\wedge t}\psi_{e}(t_{n}-s)\bar{\zeta}^{n}\left((e_{ x},e_{y})\mid t_{n}-s,\bar{L}^{n}(a(t_{n}-s))\right)ds.\end{split} \tag{3.16}\]
The fact that the quantities in (3.16) define probability measures on \(\mathcal{V}^{d}\times\mathbb{R}_{+}\) (resp. \(\mathcal{V}^{d}\times\mathcal{V}^{d}\times\mathbb{R}_{+}\)) follows on observing that, for each \(n\in\mathbb{N}\),
\[n^{-1}\int_{0}^{t_{n}}\psi_{e}(s)ds=1.\]
Also, from (3.9) it follows that if \((x,y)\in(A_{+})^{c}\), then, for each \(t\in\mathbb{R}_{+}\),
\[\beta^{n}(\{e_{x}\}\times\{e_{y}\}\times[0,t])=0,\text{ a.s.} \tag{3.17}\]
From (3.15) and the chain rule for relative entropies (see [14, Corollary 2.7]), it follows that
\[n^{-1}\sum_{k=0}^{n-1}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1} \|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{ n,k},\cdot)\right)=R\left(\beta^{n}\|\theta^{n}\right). \tag{3.18}\]
With the identity in (3.18), the expectation on the right side of (3.10) can be rewritten as
\[E\left[F(\bar{L}^{n}(t_{n}))+R\left(\beta^{n}\|\theta^{n}\right)\right]. \tag{3.19}\]
In our proofs it will be convenient to consider the dynamics of \(\bar{L}^{n}\) viewed backwards in time. Towards that end, for each \(n\in\mathbb{N}\), define the \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}))\)-valued (resp. \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}\times\Delta^{o}))\)-valued) random variable \(\check{\mathbf{L}}^{n}\) (resp. \(\check{\mathcal{T}}^{n}\)) by
\[(\check{\mathbf{L}}^{n}(t),\check{\mathcal{T}}^{n}(t))\doteq\begin{cases}(\bar{L}^{n}(t_{n}-t),\bar{\mathcal{T}}^{n}(t_{n}-t))&0\leq t\leq t_{n}\\ (\bar{L}^{n}(0),\bar{\mathcal{T}}^{n}(0))&t\geq t_{n}.\end{cases} \tag{3.20}\]
Also, for each \(n\in\mathbb{N}\), define the \(\mathcal{M}(\mathcal{V}^{d}\times\mathbb{R}_{+})\)-valued random variable \(\check{\Lambda}^{n}\) by, for \(A\subseteq\mathcal{V}^{d}\) and \(t\in\mathbb{R}_{+}\),
\[\check{\Lambda}^{n}(A\times[0,t])\doteq\int_{t_{n}-t}^{t_{n}}\bar{\Lambda}^{n} (A\mid s)ds=\int_{0}^{t}\check{\Lambda}^{n}(A\mid s)ds, \tag{3.21}\]
where \(\bar{\Lambda}^{n}(A\mid s)\doteq 0\) for \(s\leq 0\), and \(\check{\Lambda}^{n}(A\mid s)\doteq\bar{\Lambda}^{n}(A\mid t_{n}-s)\) for \(s\in\mathbb{R}_{+}\). For each \(n\in\mathbb{N}\), define the quantities \(\check{\Xi}^{n}\) and \(\check{\Xi}^{n}(\cdot\mid s)\) similarly. From (3.13) we see that these time-reversed controlled processes satisfy the following evolution equation: for \(t\in\mathbb{R}_{+}\) and \(n\geq m(t)\),
\[\check{\mathbf{L}}^{n}(t)=\check{\mathbf{L}}^{n}(0)-\int_{0}^{t}\sum_{v\in\mathcal{V}^{d}}v\check{\Lambda}^{n}(v\mid s)ds+\int_{t_{n}-t}^{t_{n}}\check{\mathbf{L}}^{n}(t_{n}-a(s))ds, \tag{3.22}\]
and, for \((x,y)\in\Delta^{o}\times\Delta^{o}\),
\[\check{\mathcal{T}}^{n}(t)(x,y)=\check{\mathcal{T}}^{n}(0)(x,y)-\int_{0}^{t}\sum_{v,v^{\prime}\in\mathcal{V}^{d}}\mathbf{1}_{\{v=e_{x},v^{\prime}=e_{y}\}}\check{\Xi}^{n}(v,v^{\prime}\mid s)ds+\int_{t_{n}-t}^{t_{n}}\check{\mathcal{T}}^{n}(t_{n}-a(s))(x,y)ds. \tag{3.23}\]
Furthermore, from (3.10) and (3.19) we have that
\[-n^{-1}\log E\exp[-nF(L^{n+1})]=\inf_{\{\bar{\mu}^{n,i}\}\in\Theta^{n}}E\left[F(\check{\mathbf{L}}^{n}(0))+R\left(\beta^{n}\|\theta^{n}\right)\right].\]
## 4 Laplace Upper Bound
Recall that we fix \(A\in\mathcal{A}\). The main result of the section is the following theorem, which gives the large deviations upper bound.
**Theorem 4.1**.: _Suppose that Assumption 2.1 is satisfied. Then, for every \(F\in C_{b}(\mathcal{P}(\Delta^{o}))\),_
\[\liminf_{n\to\infty}-n^{-1}\log E\exp[-nF(L^{n+1})]\geq\inf_{m\in\mathcal{P}( \Delta^{o})}[F(m)+I(m)].\]
Assumption 2.1 will be taken to hold for the rest of this section.
### Tightness and Weak Convergence
A key step in the proof of Theorem 4.1 will be establishing the tightness of suitable controlled quantities and identifying their weak limit points. In preparation for that we first recall the following estimate for the harmonic series (cf. [16]). For any \(n\geq 2\)
\[\gamma+\frac{1}{2(n+1)}<\sum_{k=1}^{n}k^{-1}-\log n<\gamma+\frac{1}{2(n-1)}, \tag{4.1}\]
where \(\gamma\approx 0.57721\) is the Euler-Mascheroni constant. Recall the map \(m:\mathbb{R}_{+}\to\mathbb{N}_{0}\) (resp. \(\psi_{e}:\mathbb{R}_{+}\to\{2,3,\ldots\}\)) from (3.11) (resp. (3.14)). By (4.1) and the observation that \(t_{n}-s\leq t_{m(t_{n}-s)+1}\), we see that for all \(n\in\mathbb{N}\) and \(s\in\mathbb{R}_{+}\),
\[\log(n+1)+\frac{1}{2(n+2)}-(s+1)\leq t_{n}-s-\gamma\leq t_{m(t_{n}-s)+1}- \gamma\leq\log(m(t_{n}-s)+2)-1+\frac{1}{2(m(t_{n}-s)+1)}. \tag{4.2}\]
As a consequence of this inequality we have the following lemma.
**Lemma 4.2**.: _For each \(t\in\mathbb{R}_{+}\), as \(n\to\infty\), \(n^{-1}m(t_{n}-t)\to\exp(-t).\) Additionally, for each \(t\in\mathbb{R}_{+}\), as \(n\to\infty\),_
\[\sup_{s\in[0,t]}\left|n^{-1}\psi_{e}(t_{n}-s)-\exp(-s)\right|\to 0. \tag{4.3}\]
Proof.: The first statement in the lemma is immediate from the second on observing that for all \(t\in\mathbb{R}_{+}\), \(\psi_{e}(t_{n}-t)-m(t_{n}-t)=2\). We now prove the second statement. For \(s\in\mathbb{R}_{+}\) and \(n\in\mathbb{N}\), let \(k_{s,n}\doteq m(t_{n}-s)\), so that
\[n^{-1}\psi_{e}(t_{n}-s)=n^{-1}(k_{s,n}+2). \tag{4.4}\]
From (4.2), for all \(s\in\mathbb{R}_{+}\) and \(n\in\mathbb{N}\),
\[\gamma+\log(n+1)+(2n+4)^{-1}-(s+1)\leq t_{n}-s\leq t_{k_{s,n}+1}\leq\log(k_{s,n}+2)-1+\gamma+(2k_{s,n}+2)^{-1}, \tag{4.5}\]
from which it follows that \(e^{-s}\leq(n+1)^{-1}(k_{s,n}+2)e^{\frac{1}{2k_{s,n}+2}}\), and therefore that
\[(e^{-\frac{1}{2k_{s,n}+3}}-1)e^{-s}\leq n^{-1}(k_{s,n}+2)-e^{-s}. \tag{4.6}\]
Next, using the estimate
\[\gamma+\log(k_{s,n}+1)\leq\gamma+\log(k_{s,n}+1)+(2k_{s,n}+4)^{-1}\leq t_{k_{ s,n}}+1\leq t_{n}-s+1\leq\gamma+\log(n+1)+\frac{1}{2n}-s,\]
we have that \(\log\left(\frac{k_{s,n}+1}{n+1}\right)\leq(2n)^{-1}-s\), which, along with the fact that \(k_{s,n}\leq n\), ensures that \(n^{-1}k_{s,n}\leq e^{-s}e^{\frac{1}{2n}}\). Consequently,
\[n^{-1}(k_{s,n}+2)-e^{-s}\leq e^{-s}(e^{\frac{1}{2n}}-1)+2n^{-1}\leq e^{\frac{1} {2n}}+n^{-1}(2-n). \tag{4.7}\]
Once more using (4.5), we see that \(\log(n+1)-s\leq\log(k_{s,n}+2)+(2k_{s,n}+2)^{-1}\), so, for fixed \(t\in\mathbb{R}_{+}\) and for each \(s\in[0,t]\) and all \(n\in\mathbb{N}\),
\[(n+1)e^{-t-1}\leq(n+1)e^{-s-1}\leq k_{s,n}+2. \tag{4.8}\]
Using (4.8), we see that for each \(n\in\mathbb{N}\),
\[\sup_{s\in[0,t]}\left|e^{-\frac{1}{2k_{s,n}+3}}-1\right|\leq 1-\exp\left(-(2(n+ 1)e^{-t-1}-1)^{-1}\right)\]
which shows that, as \(n\to\infty\),
\[\sup_{s\in[0,t]}\left|e^{-\frac{1}{2k_{s,n}+3}}-1\right|\to 0. \tag{4.9}\]
Combining (4.6) and (4.7), we see that for each \(s\geq 0\) and \(n\in\mathbb{N}\),
\[\left(e^{-\frac{1}{2k_{s,n}+3}}-1\right)e^{-s}\leq\frac{k_{s,n}+2}{n}-e^{-s} \leq e^{\frac{1}{2n}}+\frac{2-n}{n},\]
so, from (4.4), for each \(n\in\mathbb{N}\),
\[\sup_{s\in[0,t]}\left|n^{-1}\psi_{e}(t_{n}-s)-e^{-s}\right|\leq\max\left\{ \sup_{s\in[0,t]}(1-e^{-\frac{1}{2k_{s,n}+3}}),e^{\frac{1}{2n}}+n^{-1}(2-n) \right\}. \tag{4.10}\]
Combining (4.9) and (4.10), we obtain (4.3).
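The convergence established in Lemma 4.2 can also be checked numerically; the following small script (a sanity check only, not part of the proof) compares \(n^{-1}\psi_{e}(t_{n}-s)\) with \(\exp(-s)\) for a few values of \(s\).

```python
import numpy as np

# Sanity check of Lemma 4.2: with t_k = sum_{j=1}^k 1/(j+1), the quantity
# psi_e(t_n - s)/n = (m(t_n - s) + 2)/n should be close to exp(-s) for large n.
n = 200000
t = np.cumsum(1.0 / np.arange(2, n + 2))                    # t_1, ..., t_n
for s in [0.0, 0.5, 1.0, 2.0]:
    k = int(np.searchsorted(t, t[-1] - s, side="right"))    # k = m(t_n - s)
    print(f"s = {s:3.1f}:  psi_e(t_n - s)/n = {(k + 2) / n:.5f},  exp(-s) = {np.exp(-s):.5f}")
```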
The next lemma shows that the sequences of various quantities, introduced in Section 3, associated with a sequence of controls \(\{\bar{\mu}^{n,i}\}\in\Theta^{n}\), is tight.
**Lemma 4.3**.: _For each \(n\in\mathbb{N}\), let \(\{\bar{\mu}^{n,i}\}\in\Theta^{n}\). The collection \(\{(\check{\mathbf{L}}^{n},\check{\mathcal{T}}^{n},\check{\Lambda}^{n},\gamma^{n},\beta^{n},\theta^{n}),\;n\in\mathbb{N}\}\), associated with the sequence of controls \(\{\bar{\mu}^{n,i},\;n\in\mathbb{N}\}\), as defined in Section 3, is tight in \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}))\times C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}\times\Delta^{o}))\times\mathcal{M}(\mathcal{V}^{d}\times\mathbb{R}_{+})\times\mathcal{P}(\mathcal{V}^{d}\times\mathbb{R}_{+})\times(\mathcal{P}(\mathcal{V}^{d}\times\mathcal{V}^{d}\times\mathbb{R}_{+}))^{2}\)._
Proof.: We begin by showing that \(\{\check{\mathbf{L}}^{n},\;n\in\mathbb{N}\}\) is tight. Since \(\mathcal{P}(\Delta^{o})\) is compact, it suffices to show that for some \(C\in(0,\infty)\), and for all \(n\in\mathbb{N}\) and \(s,t\in\mathbb{R}_{+}\), \(\|\check{\mathbf{L}}^{n}(t)-\check{\mathbf{L}}^{n}(s)\|\leq C|t-s|\), a.s. However, this is immediate from (3.22) (or equivalently (3.13)), on using the fact that \(\|v-\check{L}^{n}(a(s))\|\leq 2\) for all \(s\in\mathbb{R}_{+}\) and \(v\in\mathcal{V}^{d}\). The tightness of \(\{\check{\mathcal{T}}^{n},\;n\in\mathbb{N}\}\) is argued similarly.
The tightness of \(\{\check{\Lambda}^{n},\;n\in\mathbb{N}\}\) in \(\mathcal{M}(\mathcal{V}^{d}\times\mathbb{R}_{+})\) under the vague topology is immediate on observing that for each \(k\in\mathbb{N}\), \(\sup_{n\in\mathbb{N}}\check{\Lambda}^{n}(\mathcal{V}^{d}\times[0,k])=k\). Next, since \(\mathcal{V}^{d}\) is compact, the sequences \(\{\gamma^{n}_{(1)},\;n\in\mathbb{N}\}\), \(\{\beta^{n}_{(1)},\;n\in\mathbb{N}\}\), \(\{\beta^{n}_{(2)},\;n\in\mathbb{N}\}\), \(\{\theta^{n}_{(1)},\;n\in\mathbb{N}\}\) and \(\{\theta^{n}_{(2)},\;n\in\mathbb{N}\}\), are obviously tight. Also, for each \(n\in\mathbb{N}\), \(\gamma^{n}_{(2)}=\beta^{n}_{(3)}=\theta^{n}_{(3)}\), so to complete the proof it suffices to show that the sequence \(\{\gamma^{n}_{(2)},\;n\in\mathbb{N}\}\) is tight. Observe that, for each \(n\in\mathbb{N}\), if \(n\geq m(t)\), then, since \(t_{n}-t\leq t_{m(t_{n}-t)+1}\),
\[\gamma^{n}_{(2)}([0,t])=n^{-1}\int_{0}^{t}\psi_{e}(t_{n}-s)ds=n^{- 1}\int_{t_{n}-t}^{t_{n}}\psi_{e}(s)ds\\ \geq n^{-1}\sum_{k=m(t_{n}-t)+1}^{n-1}\int_{t_{k}}^{t_{k+1}}\psi_{ e}(s)ds=1-n^{-1}(m(t_{n}-t)+1).\]
Fix \(\varepsilon>0\) and \(t>\log(3\varepsilon^{-1})\). Then, from Lemma 4.2, we can find some \(n_{0}>2\varepsilon^{-1}\) such that \(n_{0}\geq m(t)\) and
\[\sup_{n\geq n_{0}}|n^{-1}m(t_{n}-t)-e^{-t}|\leq 2^{-1}\varepsilon.\]
Thus, \(\inf_{n\geq n_{0}}\gamma^{n}_{(2)}([0,t])\geq 1-\varepsilon\). Since \(\varepsilon>0\) is arbitrary, the desired tightness follows.
The next lemma provides a useful characterization of the weak limit points of the tight collection in Lemma 4.3.
**Lemma 4.4**.: _Let the sequence \(\{(\check{\mathbf{L}}^{n},\check{\mathcal{T}}^{n},\check{\Lambda}^{n},\gamma^{n}, \beta^{n},\theta^{n}),\;\;n\in\mathbb{N}\}\) be as in Lemma 4.3 and let \((\check{\mathbf{L}}^{*},\check{\mathcal{T}}^{*},\check{\Lambda}^{*},\gamma^{*},\beta^{*},\theta^{*})\) be a weak limit point of the sequence. Then, the following hold a.s._
(a) _The measure_ \(\check{\Lambda}^{*}\) _can be disintegrated as_ \(\check{\Lambda}^{*}(dv,ds)=\check{\Lambda}^{*}(dv\mid s)ds\)_._
(b) _For_ \(t\in\mathbb{R}_{+}\)_,_ \[\check{\mathbf{L}}^{*}(t)=\check{\mathbf{L}}^{*}(0)-\int_{0}^{t}\sum_{v\in\mathcal{V}^{d}}v\check{\Lambda}^{*}(v\mid s)ds+\int_{0}^{t}\check{\mathbf{L}}^{*}(s)ds.\]
(c) \(\beta^{*}_{(1,3)}=\beta^{*}_{(2,3)}=\theta^{*}_{(1,3)}=\gamma^{*}\)_._
(d) _For_ \(t\in\mathbb{R}_{+}\) _and_ \(x\in\Delta^{o}\)_,_ \(\gamma^{*}(\{e_{x}\}\times[0,t])=\int_{0}^{t}\exp(-s)\check{\Lambda}^{*}(e_{x}\mid s)ds\)_._
(e) _For all_ \(t\in\mathbb{R}_{+}\) _and_ \((x,y)\in(A_{+})^{c}\)_,_ \(\beta^{*}(\{e_{x}\}\times\{e_{y}\}\times[0,t])=0\)_._
(f) _For_ \(t\in\mathbb{R}_{+}\) _and_ \(x,y\in\Delta^{o}\)_,_ \[\theta^{*}(\{e_{x}\}\times\{e_{y}\}\times[0,t])=\int_{0}^{t}\exp(-s)\check{\Lambda}^{*}(e_{x}\mid s)G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})ds.\]
(g) _For_ \(t\in\mathbb{R}_{+}\)_,_ \((\check{\mathcal{T}}^{*}(t))_{(1)}=(\check{\mathcal{T}}^{*}(t))_{(2)}=\check{\mathbf{L}}^{*}(t)\)_. Furthermore, for all_ \(t\in\mathbb{R}_{+}\) _and_ \((x,y)\in(A_{+})^{c}\)_,_ \(\check{\mathcal{T}}^{*}(t)(x,y)=0\)_, and for all_ \(s,t\in\mathbb{R}_{+}\)_,_ \(\|\check{\mathcal{T}}^{*}(t)-\check{\mathcal{T}}^{*}(s)\|\leq 2|t-s|\)_._
Proof.: Fix a weakly convergent subsequence of \(\{(\check{\mathbf{L}}^{n},\check{\mathcal{T}}^{n},\check{\Lambda}^{n},\gamma^{n},\beta^{n},\theta^{n}),\;n\in\mathbb{N}\}\) and relabel it as \(\{n\}\). We now prove the various statements in the lemma for the limit \((\check{\mathbf{L}}^{*},\check{\mathcal{T}}^{*},\check{\Lambda}^{*},\gamma^{*},\beta^{*},\theta^{*})\) of this sequence.
1. This is immediate on noting that for each \(n\in\mathbb{N}\) and \(t\geq 0\), \(\sum_{x\in\Delta^{o}}\check{\Lambda}^{n}(e_{x}\times[0,t])=t\).
2. By appealing to Skorohod's representation theorem, we assume without loss of generality that \(\{(\check{\mathbf{L}}^{n},\check{\Lambda}^{n}),\;n\in\mathbb{N}\}\) converges almost surely to \((\check{\mathbf{L}}^{*},\check{\Lambda}^{*})\). For \(t\in\mathbb{R}_{+}\) and \(m(t)\leq n\), recall the evolution equation (3.22). Also note that \[\int_{t_{n}-t}^{t_{n}}\check{\mathbf{L}}^{n}(t_{n}-a(s))ds=\left(\int_{t_{n}-t }^{t_{n}}\check{\mathbf{L}}^{n}(t_{n}-a(s))ds-\int_{t_{n}-t}^{t_{n}}\check{ \mathbf{L}}^{n}(t_{n}-s)ds\right)+\int_{0}^{t}\check{\mathbf{L}}^{n}(s)ds,\] (4.11) and, for each \(t\in\mathbb{R}_{+}\), \[\left\|\int_{t_{n}-t}^{t_{n}}\check{\mathbf{L}}^{n}(t_{n}-a(s))ds-\int_{t_{n}- t}^{t_{n}}\check{\mathbf{L}}^{n}(t_{n}-s)ds\right\|\leq t\sup_{s\in[t_{n}-t,t_{n}]} \|\check{L}^{n}(a(s))-\check{L}^{n}(s)\|.\] (4.12) As in the proof of Lemma 4.3, for all \(n\in\mathbb{N}\) satisfying \(t_{n}\geq t\) and \(s\in[t_{n}-t,t_{n}]\), \[\|\check{L}^{n}(a(s))-\check{L}^{n}(s)\|\leq 2(m(t_{n}-t)+2)^{-1}.\] (4.13) Combining (4.11), (4.12), and (4.13) with (3.22), and using the almost sure convergence of \(\{(\check{\mathbf{L}}^{n},\check{\Lambda}^{n}),\;n\in\mathbb{N}\}\) to \((\check{\mathbf{L}}^{*},\check{\Lambda}^{*})\) we see that, as \(n\to\infty\), for each \(t\in\mathbb{R}_{+}\), \[\check{\mathbf{L}}^{n}(t)\to\check{\mathbf{L}}^{*}(0)-\int_{0}^{t}\sum_{v\in \mathcal{V}^{d}}v\check{\Lambda}^{*}(v\mid s)ds+\int_{0}^{t}\check{\mathbf{L} }^{*}(s)ds,\] almost surely. The result follows.
3. As in part (b), we assume that the convergence of \(\{(\gamma^{n},\beta^{n},\theta^{n}),\ n\in\mathbb{N}\}\) holds in the almost sure sense. Observe that, for each \(n\in\mathbb{N}\), \(\beta^{n}_{(1,3)}=\theta^{n}_{(1,3)}\), so the identity \(\beta^{*}_{(1,3)}=\theta^{*}_{(1,3)}\) follows. Now we show that \(\beta^{*}_{(1,3)}=\gamma^{*}\). Towards this end, fix \(x\in\Delta^{o}\) and \(t\in\mathbb{R}_{+}\), and observe that, for each \(n\geq m(t)\), \[\left|\beta^{n}_{(1,3)}(\{e_{x}\}\times[0,t])-\gamma^{n}(\{e_{x}\}\times[0,t]) \right|\leq n^{-1}+n^{-1}\left|\sum_{k=m(t_{n}-t)}^{n-1}\left(\delta_{\tilde{ \nu}^{n,k}}(e_{x})-\delta_{\tilde{\nu}^{n,k+1}}(e_{x})\right)\right|\leq 2n^{-1}.\] The desired identity follows on letting \(n\to\infty\). Now we show that \(\beta^{*}_{(2,3)}=\gamma^{*}\). Once more, fix \(t\in\mathbb{R}_{+}\) and \(x\in\Delta^{o}\), and observe that, for each \(n\geq m(t)\), \[\gamma^{n}(\{e_{x}\}\times[0,t])-\beta^{n}_{(2,3)}(\{e_{x}\} \times[0,t])\] \[=n^{-1}\int_{0}^{t}\psi_{e}(t_{n}-s)\left(\bar{\Lambda}^{n}(e_{x} \mid t_{n}-s)-\bar{\xi}^{n}_{(2)}(e_{x}\mid t_{n}-s)\right)ds\] (4.14) \[=n^{-1}\int_{t_{n}-t}^{t_{n}}\psi_{e}(s)\left(\bar{\Lambda}^{n}(e_{x} \mid s)-\bar{\xi}^{n}_{(2)}(e_{x}\mid s)\right)ds.\] Additionally, for \(1\leq l\leq m\leq n\), \[n^{-1}\int_{t_{l}}^{t_{m+1}}\psi_{e}(s)\left(\bar{\Lambda}^{n}(e_{x}\mid s)- \bar{\xi}^{n}_{(2)}(e_{x}\mid s)\right)ds=n^{-1}\sum_{k=l}^{m}\left(\delta_{ \tilde{\nu}^{n,k+1}}(e_{x})-\bar{\mu}^{n,k+1}(e_{x})\right)\] (4.15) so, recalling (3.3) and using the martingale-difference property, we see that \[\mathbb{E}\left(n^{-1}\sum_{k=l}^{m}\left(\delta_{\tilde{\nu}^{n,k+1}}(e_{x}) -\bar{\mu}^{n,k+1}(e_{x})\right)\right)^{2}\leq n^{-2}(m-l+1).\] (4.16) Combining (4.14), (4.15), and (4.16), we see that for some \(C_{1}\in(0,\infty)\), and all \(n\geq m(t)\), \[\mathbb{E}\left(\left|\beta^{n}_{(2,3)}(\{e_{x}\}\times[0,t])-\gamma^{n}(\{e_ {x}\}\times[0,t])\right|^{2}\right)\leq C_{1}n^{-1}.\] The statement in (c) follows on letting \(n\to\infty\).
4. Once more we assume, without loss of generality, that the sequence \(\{(\check{\mathbf{L}}^{n},\check{\Lambda}^{n},\gamma^{n},\beta^{n},\theta^{n} ),\ n\in\mathbb{N}\}\), converges almost surely to \((\check{\mathbf{L}}^{*},\check{\Lambda}^{*},\gamma^{*},\beta^{*},\theta^{*})\). Fix \(x\in\Delta^{o}\) and \(t\in\mathbb{R}_{+}\), and observe that, for each \(n\geq m(t)\), \[\left|n^{-1}\int_{0}^{t}\psi_{e}(t_{n}-s)\check{\Lambda}^{n}(e _{x}\mid s)ds-\int_{0}^{t}\exp(-s)\check{\Lambda}^{*}(e_{x}\mid s)ds\right|\\ \leq\left|n^{-1}\int_{0}^{t}\psi_{e}(t_{n}-s)\check{\Lambda}^{n}(e _{x}\mid s)ds-\int_{0}^{t}\exp(-s)\check{\Lambda}^{n}(e_{x}\mid s)ds\right|\\ +\left|\int_{0}^{t}\exp(-s)\check{\Lambda}^{n}(e_{x}\mid s)ds-\int _{0}^{t}\exp(-s)\check{\Lambda}^{*}(e_{x}\mid s)ds\right|.\] (4.17) Next, note that \[\left|n^{-1}\int_{0}^{t}\psi_{e}(t_{n}-s)\check{\Lambda}^{n}(e_{x} \mid s)ds-\int_{0}^{t}\exp(-s)\check{\Lambda}^{n}(e_{x}\mid s)ds\right|\\ \leq\int_{0}^{t}\left|n^{-1}\psi_{e}(t_{n}-s)-\exp(-s)\right| \left|\check{\Lambda}^{n}(e_{x}\mid s)\right|ds\leq t\sup_{s\in[0,t]}\left|n^{ -1}\psi_{e}(t_{n}-s)-\exp(-s)\right|,\] (4.18)
and, by convergence of \(\Lambda^{n}\) to \(\check{\Lambda}^{*}\), as \(n\to\infty\),
\[\left|\int_{0}^{t}\exp(-s)\check{\Lambda}^{n}(\{e_{x}\}\times ds)-\int_{0}^{t} \exp(-s)\check{\Lambda}^{*}(\{e_{x}\}\times ds)\right|\to 0. \tag{4.19}\]
The displays in (4.17),(4.18),(4.19), together with Lemma 4.2 show that, as \(n\to\infty\),
\[n^{-1}\int_{0}^{t}\psi_{e}(t_{n}-s)\check{\Lambda}^{n}(e_{x}\mid s)ds\to\int_{ 0}^{t}\exp(-s)\check{\Lambda}^{*}(e_{x}\mid s)ds.\]
On recalling the definition of \(\gamma^{n}\) we now have the statement in (d).
1. This result follows immediately from the observation in (3.17).
2. Once again, we assume that the a.e. convergence as in (d) holds. Fix \(t\in\mathbb{R}_{+}\), \(n\geq m(t)\), and \(x,y\in\Delta^{o}\), and observe that, \[\left|\theta^{n}(\{e_{x}\}\times\{e_{y}\}\times[0,t])-\int_{0}^{t} \exp(-s)\check{\Lambda}^{*}(e_{x}\mid s)G^{\mathcal{V}}(\check{\mathbf{L}}^{* }(s))(e_{x},e_{y})ds\right|\] \[=\left|n^{-1}\int_{0}^{t}\psi_{e}(t_{n}-s)\bar{\zeta}^{n}\left(( e_{x},e_{y})\mid t_{n}-s,\bar{L}^{n}(a(t_{n}-s))\right)ds\right.\] \[\left.\quad-\int_{0}^{t}\exp(-s)\check{\Lambda}^{*}(e_{x}\mid s) G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})ds\right|\] \[\leq\left|\int_{0}^{t}\left(n^{-1}\psi_{e}(t_{n}-s)-\exp(-s) \right)\bar{\zeta}^{n}\left((e_{x},e_{y})\mid t_{n}-s,\bar{L}^{n}(a(t_{n}-s)) \right)ds\right|\] \[\left.\quad+\left|\int_{0}^{t}\exp(-s)\left(\bar{\zeta}^{n}\left( (e_{x},e_{y})\mid t_{n}-s,\bar{L}^{n}(a(t_{n}-s))\right)-\check{\Lambda}^{*}( e_{x}\mid s)G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\right)ds \right|.\]
From Lemma 4.2,
\[\left|\int_{0}^{t}\left(n^{-1}\psi_{e}(t_{n}-s)-\exp(-s)\right)\bar{\zeta}^{n }\left((e_{x},e_{y})\mid t_{n}-s,\bar{L}^{n}(a(t_{n}-s))\right)ds\right|\to 0,\]
as \(n\to\infty\). Additionally, for each \(s\in[0,t]\),
\[\bar{\zeta}^{n}((e_{x},e_{y})\mid t_{n}-s,\bar{L}^{n}(a(t_{n}-s)))-\check{\Lambda}^{*}(e_{x}\mid s)G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\] \[=\delta_{\bar{\nu}^{n,m(t_{n}-s)}}\otimes G^{\mathcal{V}}(\bar{L}^{n}(a(t_{n}-s)))(e_{x},e_{y})-\check{\Lambda}^{*}(e_{x}\mid s)G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\] \[=\delta_{\bar{\nu}^{n,m(t_{n}-s)}}\otimes G^{\mathcal{V}}(\bar{L}^{n}(a(t_{n}-s)))(e_{x},e_{y})-\delta_{\bar{\nu}^{n,m(t_{n}-s)}}\otimes G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\] \[\quad+\delta_{\bar{\nu}^{n,m(t_{n}-s)}}\otimes G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})-\check{\Lambda}^{*}(e_{x}\mid s)G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y}),\]
and
\[\left|\delta_{\bar{\nu}^{n,m(t_{n}-s)}}\otimes G^{\mathcal{V}}(\bar{L}^{n}(a(t_{n}-s)))(e_{x},e_{y})-\delta_{\bar{\nu}^{n,m(t_{n}-s)}}\otimes G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\right| \tag{4.20}\] \[\leq\left|G^{\mathcal{V}}(\bar{L}^{n}(a(t_{n}-s)))(e_{x},e_{y})-G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\right|\] \[\leq\left|G^{\mathcal{V}}(\bar{L}^{n}(a(t_{n}-s)))(e_{x},e_{y})-G^{\mathcal{V}}(\bar{L}^{n}(t_{n}-s))(e_{x},e_{y})\right|\] \[\quad+\left|G^{\mathcal{V}}(\check{\mathbf{L}}^{n}(s))(e_{x},e_{y})-G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\right|.\]
Since by Assumption 2.1\(G^{\mathcal{V}}\) is a Lipschitz map and recalling from (4.13) that
\[\|\bar{L}^{n}(a(t_{n}-s))-\bar{L}^{n}(t_{n}-s)\|\leq 2(m(t_{n}-t)+2)^{-1}, \quad s\in[0,t], \tag{4.21}\]
and that \(\check{\mathbf{L}}^{n}\to\check{\mathbf{L}}^{*}\) almost surely as \(n\to\infty\), it follows from (4.20) that
\[\left|\delta_{\check{\nu}^{n,m(t_{n}-s)}}\otimes G^{\mathcal{V}}(\check{L}^{n}(a (t_{n}-s)))(e_{x},e_{y})-\delta_{\check{\nu}^{n,m(t_{n}-s)}}\otimes G^{ \mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\right|\to 0,\]
as \(n\to\infty\). Now, observe that, for each \(s\in[0,t]\),
\[\delta_{\check{\nu}^{n,m(t_{n}-s)+1}}\otimes G^{\mathcal{V}}( \check{\mathbf{L}}^{*}(s))(e_{x},e_{y})-\check{\Lambda}^{*}(e_{x}\mid s)G^{ \mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\] \[\quad=\check{\Lambda}^{n}(e_{x}\mid t_{n}-s)G^{\mathcal{V}}( \check{\mathbf{L}}^{*}(s))(e_{x},e_{y})-\check{\Lambda}^{*}(e_{x}\mid s)G^{ \mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\] \[\quad=\check{\Lambda}^{n}(e_{x}\mid s)G^{\mathcal{V}}(\check{ \mathbf{L}}^{*}(s))(e_{x},e_{y})-\check{\Lambda}^{*}(e_{x}\mid s)G^{\mathcal{ V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})\]
Letting
\[h(s)\doteq\exp(-s)G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y}),\; s\in\mathbb{R}_{+},\]
we see that
\[\int_{0}^{t}\exp(-s)\left(\delta_{\check{\nu}^{n,m(t_{n}-s)+1}} \otimes G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{y})-\check{ \Lambda}^{*}(e_{x}\mid s)G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(e_{x},e_{ y})\right)ds\] \[\quad=\int_{0}^{t}h(s)\check{\Lambda}^{n}(e_{x}\mid s)ds-\int_{0 }^{t}h(s)\check{\Lambda}^{*}(e_{x}\mid s)ds=\int_{0}^{t}h(s)\check{\Lambda}^ {n}(e_{x}\times ds)-\int_{0}^{t}h(s)\check{\Lambda}^{*}(e_{x}\times ds)\]
which converges to \(0\) a.s. since \(\check{\Lambda}^{n}\to\check{\Lambda}^{*}\) a.s. in \(\mathcal{M}(\mathcal{V}^{d}\times\mathbb{R}_{+})\) and \(h\) is a continuous and bounded function. To complete the proof of (f) it now suffices to show that for all \(x\in\Delta^{o}\) and \(t\in\mathbb{R}_{+}\),
\[\left|\int_{0}^{t}h(s)\delta_{\check{\nu}^{n,m(t_{n}-s)+1}}(e_{x})ds-\int_{0} ^{t}h(s)\delta_{\check{\nu}^{n,m(t_{n}-s)}}(e_{x})ds\right|\to 0,\;\text{ a.s.} \tag{4.22}\]
Fix \(t\in\mathbb{R}_{+}\) and \(\varepsilon>0\), and let \(\kappa>0\) be such that \(|h(s)-h(s^{\prime})|\leq\varepsilon\) whenever \(|s-s^{\prime}|\leq\kappa\) and \(s,s^{\prime}\in[0,t]\). Let, for \(k\in\mathbb{N}\), \(\sigma(k)\doteq\delta_{\check{\nu}^{n,k}}(e_{x})\) and choose \(n_{0}\in\mathbb{N}\) such that \(m(t_{n_{0}}-t)^{-1}<\kappa\). Then, for \(n\geq n_{0}\), the quantity on the left side of (4.22) can be written as
\[\left|\int_{t_{n}-t}^{t_{n}}h(t_{n}-s)(\sigma(m(s)+1)-\sigma(m(s) ))ds\right|\] \[\leq\left|\sum_{k=m(t_{n}-t)}^{n-1}(\sigma(k+1)-\sigma(k))\int_{t _{k}}^{t_{k+1}}h(t_{n}-s)ds\right|+m(t_{n}-t)^{-1}\|h\|_{t,\infty}\] \[\leq\left|\sum_{k=m(t_{n}-t)}^{n-1}\left(\frac{1}{k+2}\sigma(k+1) h(t_{n}-t_{k+1})-\frac{1}{k+2}\sigma(k)h(t_{n}-t_{k})\right)\right|+2 \varepsilon t+m(t_{n}-t)^{-1}\|h\|_{t,\infty},\]
where \(\|h\|_{t,\infty}\doteq\sup_{0\leq s\leq t}|h(s)|\). The last expression can be bounded above by
\[2\varepsilon t+2m(t_{n}-t)^{-1}\|h\|_{t,\infty}+\|h\|_{t,\infty}\sum_{k=m(t_{n} -t)}^{n-1}\left(\frac{1}{k+1}-\frac{1}{k+2}\right)\leq 2\varepsilon t+3m(t_{n}-t)^{-1}\|h\|_{t, \infty}.\]
Taking the limit as \(n\to\infty\), we now have that
\[\limsup_{n\to\infty}\left|\int_{0}^{t}h(s)\delta_{\check{\nu}^{n,m(t_{n}-s)+1}} (e_{x})ds-\int_{0}^{t}h(s)\delta_{\check{\nu}^{n,m(t_{n}-s)}}(e_{x})ds\right| \leq 2\varepsilon t.\]
Since \(\varepsilon>0\) is arbitrary, the statement in (4.22) follows.
* The first statement is immediate on noting that for \(t\in\mathbb{R}_{+}\), \(\check{\mathbf{L}}^{n}(t)=(\check{\mathcal{T}}^{n}(t))_{(1)}\), and for \(k\in\mathbb{N}\), \[\|\bar{\mathcal{T}}^{n,k+1}_{(2)}-\bar{\mathcal{T}}^{n,k+1}_{(1)}\|\leq 2(k+1)^{-1}.\] The second statement follows from the fact that whenever \((x,y)\in(A_{+})^{c}\), \(\bar{\mathcal{T}}^{n,k+1}(x,y)=0\) a.e. for all \(n\in\mathbb{N}\) and \(k\in\mathbb{N}_{0}\). The final statement follows immediately from (3.23) and the fact that \(\check{\mathcal{T}}^{n}\) converges a.s. to \(\check{\mathcal{T}}^{*}\) as \(n\to\infty\).
### Proof of Laplace Upper Bound
Proof of Theorem 4.1.: Fix \(F\in C_{b}(\mathcal{P}(\Delta^{o}))\) and \(\varepsilon>0\). From the variational representation in (3.10), for each \(n\in\mathbb{N}\) we can find \(\{\bar{\mu}^{n,i}\}\in\Theta^{n}\) such that
\[-n^{-1}\log E\exp[-nF(L^{n+1})]\\ \geq E\left[F(\bar{L}^{n}(t_{n}))+n^{-1}\sum_{k=0}^{n-1}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\right]-\varepsilon,\]
where the sequence \(\{\bar{L}^{n,k},\ k\in\mathbb{N}\}\) is defined by (3.2).
For each \(n\in\mathbb{N}\), define the \(\mathcal{P}(\Delta^{o})\)-valued continuous process \(\bar{L}^{n}\) and random measures \(\bar{\Lambda}^{n}\) on \(\mathcal{V}^{d}\times[0,t_{n}]\) according to (3.6) and (3.7) respectively. Also define, for each \(n\in\mathbb{N}\), \(\gamma^{n}\), \(\beta^{n}\), \(\theta^{n}\), \(\check{\mathbf{L}}^{n}\), \(\check{\mathcal{T}}^{n}\), and \(\check{\Lambda}^{n}\) as in (3.16), (3.20), and (3.21). Recalling the identity in (3.18) we have that
\[-n^{-1}\log E\exp[-nF(L^{n+1})]\geq E\left[F(\check{\mathbf{L}}^{n}(0))+R( \beta^{n}\|\theta^{n})\right]-\varepsilon. \tag{4.23}\]
From Lemma 4.3, the collection \(\{(\check{\mathbf{L}}^{n},\check{\mathcal{T}}^{n},\check{\Lambda}^{n},\gamma^ {n},\beta^{n},\theta^{n}),\ n\in\mathbb{N}\}\) is tight in \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}))\times C(\mathbb{R}_{+}:\mathcal{P}( \Delta^{o}\times\Delta^{o}))\times\mathcal{M}(\mathcal{V}^{d}\times\mathbb{R} _{+})\times(\mathcal{P}(\mathcal{V}^{d}\times\mathbb{R}_{+}))\times(\mathcal{P} (\mathcal{V}^{d}\times\mathcal{V}^{d}\times\mathbb{R}_{+}))^{2}\).
Let \((\check{\mathbf{L}}^{*},\check{\mathcal{T}}^{*},\check{\Lambda}^{*},\gamma^ {*},\beta^{*},\theta^{*})\) be a weak limit point of the above sequence and suppose without loss of generality that the convergence holds along the full sequence and in the almost sure sense.
From parts (c) and (d) of Lemma 4.4, the third marginal of \(\beta^{*}\), namely \(\beta^{*}_{(3)}(ds)\), equals \(e^{-s}ds\) a.s. Disintegrate \(\beta^{*}\) as
\[\beta^{*}(\{e_{x}\}\times\{e_{y}\}\times[0,t])=\int_{0}^{t}\exp(-s)\check{ \eta}^{*}(\{e_{x}\}\times\{e_{y}\},s)ds, \tag{4.24}\]
where \(s\mapsto\check{\eta}^{*}(\cdot,s)\) is a measurable map from \(\mathbb{R}^{+}\) to \(\mathcal{P}(\mathcal{V}^{d}\times\mathcal{V}^{d})\). Let, for \(s\in\mathbb{R}_{+}\), \(\eta^{*}(\cdot,s)\in\mathcal{P}(\Delta^{o}\times\Delta^{o})\) be defined as
\[\eta^{*}(\{x\}\times\{y\},s)\doteq\check{\eta}^{*}(\{e_{x}\}\times\{e_{y}\}, s),\ \ x,y\in\Delta^{o}, \tag{4.25}\]
and write \(\eta^{*}(x,y\mid s)\doteq\eta^{*}(\{x\}\times\{y\},s)\), and, for \(i=1,2\), \(\eta^{*}_{(i)}(\cdot\mid s)\doteq\eta^{*}_{(i)}(\cdot,s)\).
From Lemma 4.4(c), \(\eta^{*}_{(1)}(s)=\eta^{*}_{(2)}(s)\) for a.e. \(s\in\mathbb{R}_{+}\). In particular \(\eta^{*}\) satisfies Property (P2) in Section 2.2 (with \(\eta\) replaced by \(\eta^{*}\)). Thus, Property (b) in (2.3) holds.
Moreover, if \((x,y)\in(A_{+})^{c}\), then by part (e) of Lemma 4.4, a.s.,
\[\beta^{*}(\{e_{x}\}\times\{e_{y}\}\times[0,t])=0,\quad t\in\mathbb{R}_{+},\]
which shows that Property (P1) holds with \((\beta,\eta)\) replaced by \((\beta^{*},\eta^{*})\). Consequently, Property (a) in (2.3) is satisfied.
Next, note that, from Lemma 4.4(c) and (d), for \(x\in\Delta^{o}\) and \(t\in\mathbb{R}_{+}\),
\[\int_{0}^{t}\exp(-s)\check{\Lambda}^{*}(e_{x}\mid s)ds=\gamma^{*}(\{e_{x}\}\times[0,t])=\beta^{*}_{(1,3)}(\{e_{x}\}\times[0,t])=\int_{0}^{t}\exp(-s)\eta^{*}_{(1)}(x\mid s)ds,\]
where the last equality is from (4.24) and (4.25). This shows that for all \(x\in\Delta^{o}\) and a.e. \(s\in\mathbb{R}_{+}\), \(\eta^{*}_{(1)}(x\mid s)=\check{\Lambda}^{*}(e_{x}\mid s)\) and so \(\sum_{v\in\mathcal{V}^{d}}v\check{\Lambda}^{*}(v\mid s)(\cdot)=\eta^{*}_{(1)}(\cdot,s)\). In particular, from parts (a), (b), and (g) of Lemma 4.4, it follows that Property (c) in (2.3) also holds (with \(\eta\) replaced by \(\eta^{*}\), \(M\) replaced by \(\check{\mathbf{L}}^{*}\), and \(\mathcal{T}\) replaced by \(\check{\mathcal{T}}^{*}\)).
Since Properties (a), (b), and (c) of (2.3) hold a.s., it follows that \(\eta^{*}\in\mathcal{U}(\check{\mathbf{L}}^{*}(0))\) a.s. Furthermore, \(\check{\mathbf{L}}^{*}\) solves \(\mathcal{U}(\check{\mathbf{L}}^{*}(0),\eta^{*})\) a.s.
Note, from (4.24), and parts (d) and (f) of Lemma 4.4, that
\[(\gamma^{*},\beta^{*},\theta^{*})=\left(\exp(-s)\check{\Lambda}^{*}(\cdot \mid s)ds,\;\exp(-s)\check{\eta}^{*}(\cdot\mid s)ds,\;\exp(-s)\check{\Lambda}^ {*}(\cdot\mid s)\otimes G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(\cdot, \cdot)ds\right). \tag{4.26}\]
For \(s\in\mathbb{R}_{+}\), disintegrate
\[\eta^{*}(x,y\mid s)=\eta^{*}_{(1)}(x\mid s)\eta^{*}_{2|1}(y\mid s,x),\;\;(x,y )\in\Delta^{o}\times\Delta^{o}. \tag{4.27}\]
Then, using (4.23),
\[\varepsilon+\liminf_{n\to\infty}-n^{-1}\log E\exp[-nF(L^{n+1})]\\ \geq\liminf_{n\to\infty}E\left[F(\check{\mathbf{L}}^{n}(0))+R(\beta^{n}\|\theta^{n})\right]\geq\left[F(\check{\mathbf{L}}^{*}(0))+R(\beta^{*}\|\theta^{*})\right]\\ =\left[F(\check{\mathbf{L}}^{*}(0))+R\left(\exp(-s)\check{\eta}^{*}(\cdot\mid s)ds\,\big\|\,\exp(-s)\check{\Lambda}^{*}(\cdot\mid s)\otimes G^{\mathcal{V}}(\check{\mathbf{L}}^{*}(s))(\cdot,\cdot)ds\right)\right]\\ =\left[F(\check{\mathbf{L}}^{*}(0))+\int_{0}^{\infty}\exp(-s)R\left(\eta^{*}(\cdot\mid s)\,\big\|\,\eta^{*}_{(1)}(\cdot\mid s)\otimes G(\check{\mathbf{L}}^{*}(s))(\cdot,\cdot)\right)ds\right]\\ =\left[F(\check{\mathbf{L}}^{*}(0))+\int_{0}^{\infty}\exp(-s)\sum_{x\in\Delta^{o}}\eta^{*}_{(1)}(x\mid s)R\left(\eta^{*}_{2|1}(\cdot\mid s,x)\,\big\|\,G(\check{\mathbf{L}}^{*}(s))(x,\cdot)\right)ds\right]\\ \geq\left[F(\check{\mathbf{L}}^{*}(0))+I(\check{\mathbf{L}}^{*}(0))\right]\geq\inf_{m\in\mathcal{P}(\Delta^{o})}\left[F(m)+I(m)\right],\]
where the second inequality uses Fatou's lemma and the lower semicontinuity of relative entropy, the third line uses (4.26), the fourth line uses the chain rule for relative entropies (see [14, Corollary 2.7]) (observe that the relative entropies in lines three and four are on different spaces), the relationship between \(\eta^{*}\), \(\check{\eta}^{*}\), and \(\check{\Lambda}^{*}\), and the relationship between \(G\) and \(G^{\mathcal{V}}\), the fifth line again uses the chain rule for relative entropies and the disintegration in (4.27), and the last line uses the fact that \(\check{\mathbf{L}}^{*}\) solves \(\mathcal{U}(\check{\mathbf{L}}^{*}(0),\eta^{*})\), and the expression for the rate function \(I\) given in (2.4). The result follows on letting \(\varepsilon\to 0\).
## 5 Laplace Lower Bound
We now proceed to the large deviation lower bound. The main result in this direction is the following.
**Theorem 5.1**.: _Suppose that Assumption 2.2 is satisfied. Then, for every \(F\in C_{b}(\mathcal{P}(\Delta^{o}))\),_
\[\limsup_{n\to\infty}-n^{-1}\log E\exp[-nF(L^{n+1})]\leq\inf_{m\in\mathcal{P}( \Delta^{o})}[F(m)+I(m)],\]
_where \(I\) is defined using the matrix \(A\) in Assumption 2.2._
The proof of the above theorem is completed in this section and the next, and in both of these sections Assumption 2.2 is assumed to hold throughout and the matrix \(A\) is as in this assumption.
We now proceed to construct a suitable near-optimal path. We begin in the following subsection with the selection of a near-optimal control and trajectory; then, in successive subsections, by a series of approximations, we suitably modify these quantities, culminating in Section 5.4 with the final simple-form controls and paths that will be used in the proof of the lower bound.
### Near-Optimal Control
Fix \(F\in C_{b}(\mathcal{P}(\Delta^{o}))\) and \(\varepsilon\in(0,1)\). In order to prove Theorem 5.1 we can assume without loss of generality that \(F\) is Lipschitz (see [14, Corollary 1.10]), i.e., for some \(F_{\text{\tiny lip}}\in(0,\infty)\),
\[|F(m)-F(\tilde{m})|\leq F_{\text{\tiny lip}}\|m-\tilde{m}\|,\;m,\tilde{m}\in \mathcal{P}(\Delta^{o}).\]
Choose \(m^{0}\in\mathcal{P}(\Delta^{o})\) such that
\[F(m^{0})+I(m^{0})\leq\inf_{m\in\mathcal{P}(\Delta^{o})}[F(m)+I(m)]+\varepsilon. \tag{5.1}\]
Recall the definition of the rate function from (2.4) given in terms of \(\eta\in\mathcal{U}\). In proofs it will sometimes be convenient to work with analogues of \(\eta\) that are probability measures on \(\mathcal{V}^{d}\times\mathcal{V}^{d}\). In particular, for \(\eta\in\mathcal{U}\), we define \(\eta^{\mathcal{V}}:\mathbb{R}_{+}\to\mathcal{P}(\mathcal{V}^{d}\times\mathcal{V}^{d})\) as
\[\eta^{\mathcal{V}}(s)(e_{x},e_{y})\doteq\eta(x,y\mid s),\;x,y\in\Delta^{o}. \tag{5.2}\]
Observe that the map defined in (2.4) can be rewritten as
\[I(m)=\inf_{\eta\in\mathcal{U}(m)}\int_{0}^{\infty}\exp(-s)\sum_{x\in\Delta^{o }}\eta^{\mathcal{V}}_{(1)}(e_{x}\mid s)R\left(\eta^{\mathcal{V}}_{2|1}(\cdot \mid s,e_{x})\|G^{\mathcal{V}}(M(s))(e_{x},\cdot)\right)ds, \tag{5.3}\]
where \(\eta^{\mathcal{V}}(s)(e_{x},e_{y})=\eta^{\mathcal{V}}_{(1)}(e_{x}\mid s)\eta^{\mathcal{V}}_{2|1}(e_{y}\mid s,e_{x})\). Also note that the relative entropy in (2.4) is computed for probability measures on \(\Delta^{o}\) while the relative entropy in (5.3) is computed for probability measures on \(\mathcal{V}^{d}\).
We choose \(\eta^{0}\in\mathcal{U}(m^{0})\) such that, with \(\eta^{0,\mathcal{V}}\) defined by the right side of (5.2) (with \(\eta\) replaced with \(\eta^{0}\)),
\[\int_{0}^{\infty}\exp(-s)\sum_{x\in\Delta^{o}}\eta^{0,\mathcal{V}}_{(1)}(e_{x}\mid s)R\left(\eta^{0,\mathcal{V}}_{2|1}(\cdot\mid s,e_{x})\|G^{\mathcal{V}}(M^{0}(s))(e_{x},\cdot)\right)ds\leq I(m^{0})+\varepsilon, \tag{5.4}\]
where \(M^{0}\) solves \(\mathcal{U}(m^{0},\eta^{0})\). From the definition of \(\mathcal{U}(m^{0})\), there is a \(\mathcal{T}^{0}\in C([0,\infty):\mathcal{P}(\Delta^{o}\times\Delta^{o}))\) such that, for each \(t\in\mathbb{R}_{+}\), \((\mathcal{T}^{0}(t))_{(1)}=(\mathcal{T}^{0}(t))_{(2)}=M^{0}(t)\) and \(\mathcal{T}^{0}(t)(x,y)=0\) whenever \((x,y)\in(A_{+})^{c}\).
We now modify \(M^{0}\) and \(\eta^{0}\) to construct a more tractable near-optimal trajectory.
### 5.2 Step 1: Ensuring Nondegeneracy
Our first approximation step ensures that the probability measure that appears in the second argument of the relative entropy terms of the form in (5.4) are suitably nondegenerate. Let, for each \(z\in\Delta^{o}\), \(\Delta_{+}(z)\doteq\{y\in\Delta^{o}:A_{z,y}>0\}\), and recall the constant \(\delta^{A}_{0}\in(0,\infty)\) from Assumption 2.2(2b). Note that from Assumption 2.2(3), there is a \(\pi^{*}\in\mathcal{P}_{+}(\Delta^{o})\) satisfying
\[\sum_{x\in\Delta^{o}}\pi^{*}_{x}G(\pi^{*})_{x,y}=\pi^{*}_{y},\;y\in\Delta^{o}.\]
It will be helpful to consider the measure \(\pi^{\mathcal{V},*}\in\mathcal{P}(\mathcal{V}^{d})\) defined by \(\pi^{\mathcal{V},*}_{e_{x}}\doteq\pi^{*}_{x}\), \(x\in\Delta^{o}\), so that
\[\sum_{v\in\mathcal{V}^{d}}\pi^{\mathcal{V},*}_{v}G^{\mathcal{V}}(\pi^{*})_{v,u }=\pi^{\mathcal{V},*}_{u},\;u\in\mathcal{V}^{d}.\]
Let
\[\delta^{\pi^{*}}_{0}\doteq\inf_{x\in\Delta^{o}}\pi^{*}_{x},\;\;\delta^{G,\pi^{ *}}_{0}\doteq\inf_{(x,y)\in A_{+}}\pi^{*}_{x}G(\pi^{*})_{x,y},\]
and note that
\[\delta_{0}^{G,\pi^{*}}\geq\delta_{0}^{A}(\delta_{0}^{\pi^{*}})^{2}>0. \tag{5.5}\]
Let, for \(x,y\in\Delta^{o}\) and \(s\in\mathbb{R}_{+}\),
\[M^{*}(s)\doteq\pi^{*},\quad\eta^{*}(x,y\mid s)\doteq\pi^{*}_{x}G(\pi^{*})_{x, y},\]
and observe that \(\eta^{*}\in\mathcal{U}(\pi^{*})\) and \(M^{*}\) solves \(\mathcal{U}(\pi^{*},\eta^{*})\). Define, for \(\kappa\in(0,1)\) and \(t\in\mathbb{R}_{+}\),
\[M^{\kappa}(t)\doteq(1-\kappa)M^{0}(t)+\kappa M^{*}(t),\;\eta^{\kappa}(\cdot \mid t)\doteq(1-\kappa)\eta^{0}(\cdot\mid t)+\kappa\eta^{*}(\cdot\mid t),\;m^ {\kappa}\doteq(1-\kappa)m^{0}+\kappa\pi^{*},\]
and observe that with
\[\mathcal{T}^{*}(t)(x,y)\doteq\pi^{*}_{x}G(\pi^{*})_{x,y},\;\;\mathcal{T}^{\kappa}(t)\doteq(1-\kappa)\mathcal{T}^{0}(t)+\kappa\mathcal{T}^{*}(t),\;\;x,y\in\Delta^{o},\,t\in\mathbb{R}_{+},\]
we have that \((\mathcal{T}^{\kappa}(t))_{(1)}=(\mathcal{T}^{\kappa}(t))_{(2)}=M^{\kappa}(t)\). Also note that, since \(M^{0}(t)\in\mathcal{P}(\Delta^{o})\) and \(M^{*}(t)=\pi^{*}\in\mathcal{P}_{+}(\Delta^{o})\), we have that \(M^{\kappa}(t)\in\mathcal{P}_{+}(\Delta^{o})\) for every \(t\in\mathbb{R}_{+}\); in fact, \(\operatorname{supp}(\mathcal{T}^{\kappa}(t))=A_{+}\) for each \(t\in\mathbb{R}_{+}\). From these observations we see that \(\eta^{\kappa}\in\mathcal{U}(m^{\kappa})\) and \(M^{\kappa}\) solves \(\mathcal{U}(m^{\kappa},\eta^{\kappa})\).
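The effect of mixing with the stationary pair can be seen in a small numerical sketch, given below for illustration only. It reuses the toy affine model for \(G\) from the earlier sketch, approximates \(\pi^{*}\) by iterating \(\pi\mapsto\pi G(\pi)\) (the convergence of this iteration is an assumption that happens to hold for this toy choice of \(G\)), and checks that \(m^{\kappa}\) and \(\eta^{\kappa}\) are bounded away from zero.

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
def G(m):                                  # toy affine model, as before
    return 0.5 * P + 0.5 * np.tile(m, (3, 1))

# Approximate pi* with  sum_x pi*_x G(pi*)_{x,y} = pi*_y  by iterating pi -> pi G(pi).
pi = np.full(3, 1.0 / 3)
for _ in range(5_000):
    pi = pi @ G(pi)
print("pi* ~", pi, "  fixed point residual:", np.abs(pi @ G(pi) - pi).max())

# Degenerate starting data: m0 with a zero entry, eta0 supported on a few pairs only.
m0 = np.array([0.0, 0.5, 0.5])
eta0 = np.array([[0.0, 0.00, 0.00],
                 [0.0, 0.25, 0.25],
                 [0.0, 0.25, 0.25]])

kappa = 0.1
eta_star = pi[:, None] * G(pi)             # eta*(x, y) = pi*_x G(pi*)_{x,y}
m_kappa = (1 - kappa) * m0 + kappa * pi
eta_kappa = (1 - kappa) * eta0 + kappa * eta_star
# For this toy example every entry of A is positive, so A_+ is all of Delta^o x Delta^o.
print("min entry of m^kappa  :", m_kappa.min())
print("min entry of eta^kappa:", eta_kappa.min())
```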
For each \(t\in\mathbb{R}_{+}\) and \(x,y\in\Delta^{o}\), define
\[\tau^{\kappa}(x,y\mid t)=\frac{\kappa(1-\kappa)\eta^{0}(x,y\mid t)+\kappa\pi^ {*}_{x}G(\pi^{*})_{x,y}}{2\kappa(1-\kappa)+\kappa^{2}}, \tag{5.6}\]
and note that for each \(x,y\in\Delta^{o}\) and \(t\in\mathbb{R}_{+}\),
\[\begin{split}\eta^{\kappa}(x,y\mid t)&=(1-\kappa) \eta^{0}(x,y\mid t)+\kappa\pi^{*}_{x}G(\pi^{*})_{x,y}\\ &=(1-\kappa)^{2}\eta^{0}(x,y\mid t)+(2\kappa(1-\kappa)+\kappa^{2 })\tau^{\kappa}(x,y\mid t).\end{split} \tag{5.7}\]
Also, observe that
\[\eta^{\kappa}_{(1)}(x\mid s)=(1-\kappa)\eta^{0}_{(1)}(x\mid s)+\kappa\pi^{*}_ {x}, \tag{5.8}\]
and, by Assumption 2.2(1),
\[G(M^{\kappa}(s))_{x,y}=G((1-\kappa)M^{0}(s)+\kappa\pi^{*})_{x,y}=(1-\kappa)G( M^{0}(s))_{x,y}+\kappa G(\pi^{*})_{x,y}. \tag{5.9}\]
Thus, from the previous two displays,
\[\begin{split}\eta^{\kappa}_{(1)}(x\mid s)G(M^{\kappa}(s))_{x,y}& =(1-\kappa)^{2}\eta^{0}_{(1)}(x\mid s)G(M^{0}(s))_{x,y}+\kappa(1- \kappa)\eta^{0}_{(1)}(x\mid s)G(\pi^{*})_{x,y}\\ &\quad+\kappa(1-\kappa)\pi^{*}_{x}G(M^{0}(s))_{x,y}+\kappa^{2}\pi ^{*}_{x}G(\pi^{*})_{x,y}\\ &=(1-\kappa)^{2}\eta^{0}_{(1)}(x\mid s)G(M^{0}(s))_{x,y}+\left(2 \kappa(1-\kappa)+\kappa^{2}\right)\sigma^{\kappa}(x,y\mid s),\end{split} \tag{5.10}\]
where for \(x,y\in\Delta^{o}\) and \(s\in\mathbb{R}_{+}\),
\[\sigma^{\kappa}(x,y\mid s)\doteq\frac{\kappa(1-\kappa)\eta^{0}_{(1)}(x\mid s)G(\pi^{*})_{x,y}+\kappa(1-\kappa)\pi^{*}_{x}G(M^{0}(s))_{x,y}+\kappa^{2}\pi^{*}_{x}G(\pi^{*})_{x,y}}{2\kappa(1-\kappa)+\kappa^{2}}. \tag{5.11}\]
Using the convexity of relative entropy and combining (5.7) and (5.10), we obtain
\[\begin{split}&\int_{0}^{\infty}\exp(-s)\sum_{x\in\Delta^{o}}\eta^{ \kappa}_{(1)}(x\mid s)R\left(\eta^{\kappa}_{2|1}(\cdot\mid s,x)\|G(M^{\kappa}( s))(x,\cdot)\right)ds\\ &\quad=\int_{0}^{\infty}\exp(-s)R\left(\eta^{\kappa}(\cdot\mid s) \|\eta^{\kappa}_{(1)}(\cdot\mid s)\otimes G(M^{\kappa}(s))(\cdot,\cdot)\right)ds \\ &\quad\leq(1-\kappa)^{2}\int_{0}^{\infty}\exp(-s)R\left(\eta^{0}( \cdot\mid s)\|\eta^{0}_{(1)}(\cdot\mid s)\otimes G(M^{0}(s))(\cdot,\cdot) \right)ds\\ &\quad\quad+\left(2\kappa(1-\kappa)+\kappa^{2}\right)\int_{0}^{ \infty}\exp(-s)R\left(\tau^{\kappa}(\cdot\mid s)\|\sigma^{\kappa}(\cdot\mid s) \right)ds.\end{split} \tag{5.12}\]
Observe from Assumption 2.2(2a) that, for each \(s\in\mathbb{R}_{+}\), \(\operatorname{supp}(\sigma^{\kappa}(\cdot\mid s))=A_{+}\), and for all \(t\in\mathbb{R}_{+}\) and \((x,y)\in A_{+}\),
\[|\log\sigma^{\kappa}(x,y\mid t)|\leq\left|\log\left(\frac{\kappa^{2}}{2\kappa(1 -\kappa)+\kappa^{2}}\pi_{x}^{*}G(\pi^{*})_{x,y}\right)\right|.\]
Combining the observation in the previous display with (5.6) and (5.11), it follows that, for \(s\in\mathbb{R}_{+}\) and \(\kappa\in(0,1/2)\),
\[R\left(\tau^{\kappa}(\cdot\mid s)\|\sigma^{\kappa}(\cdot\mid s)\right) \leq\sum_{(x,y)\in A_{+}}\tau^{\kappa}(x,y\mid s)|\log\sigma^{ \kappa}(x,y\mid s)|\] \[\leq d\left|\log\left(\frac{\kappa^{2}}{2\kappa(1-\kappa)+\kappa^ {2}}\right)\right|+\sum_{(x,y)\in A_{+}}\tau^{\kappa}(x,y\mid s)|\log(\pi_{x} ^{*}G(\pi^{*})_{x,y})|\] \[\leq d\left(\left|\log\left(\frac{\kappa}{2}\right)\right|+\left| \log\delta_{0}^{G,\pi^{*}}\right|\right).\]
Thus,
\[\left(2\kappa(1-\kappa)+\kappa^{2}\right)\int_{0}^{\infty}\exp(-s)R\left(\tau ^{\kappa}(\cdot\mid s)\|\sigma^{\kappa}(\cdot\mid s)\right)ds\leq\left(2 \kappa(1-\kappa)+\kappa^{2}\right)d\left(\left|\log\left(\frac{\kappa}{2} \right)\right|+\left|\log\delta_{0}^{G,\pi^{*}}\right|\right). \tag{5.13}\]
Choose \(\kappa_{1}\in(0,1/2)\) such that
\[\|m^{0}-m^{\kappa_{1}}\|\leq\frac{\min\{(F_{\text{\tiny{lip}}})^{-1},1\} \varepsilon}{2},\ \ \left(2\kappa_{1}(1-\kappa_{1})+(\kappa_{1})^{2}\right)d\left(\left|\log \left(\frac{\kappa_{1}}{2}\right)\right|+\left|\log\delta_{0}^{G,\pi^{*}} \right|\right)\leq\varepsilon/2. \tag{5.14}\]
For convenience, write \((m^{1},\eta^{1},M^{1},\mathcal{T}^{1})\doteq(m^{\kappa_{1}},\eta^{\kappa_{1}}, M^{\kappa_{1}},\mathcal{T}^{\kappa_{1}})\). Then,
\[F(m^{1})+\int_{0}^{\infty}\exp(-s)\sum_{x\in\Delta^{o}}\eta_{(1)}^{1}(x\mid s)R\left(\eta_{2|1}^{1}(\cdot\mid s,x)\|G(M^{1}(s))(x,\cdot)\right)ds\\ \leq F(m^{0})+\int_{0}^{\infty}\exp(-s)\sum_{x\in\Delta^{o}}\eta_{(1)}^{0}(x\mid s)R\left(\eta_{2|1}^{0}(\cdot\mid s,x)\|G(M^{0}(s))(x,\cdot)\right)ds+\varepsilon\\ \leq\inf_{m\in\mathcal{P}(\Delta^{o})}[F(m)+I(m)]+3\varepsilon, \tag{5.15}\]
where we have used the Lipschitz property of \(F\), (5.12), (5.13), and (5.14) for the first inequality, and the displays in (5.1) and (5.4) for the second inequality.
Note that, for each \(s\in\mathbb{R}_{+}\), \(\operatorname{supp}(\eta_{(1)}^{1}(\cdot\mid s)\otimes G(M^{1}(s))(\cdot, \cdot))=A_{+}\), and that, from (5.5), (5.8), and (5.9), with \(\delta_{0}^{M_{1}}\doteq\kappa_{1}^{2}\delta_{0}^{G,\pi^{*}}\),
\[\inf_{s\in\mathbb{R}_{+}\;(x,y)\in A_{+}}G(M^{1}(s))(x,y)\geq\inf_{s\in \mathbb{R}_{+}\;(x,y)\in A_{+}}\eta_{(1)}^{1}(x\mid s)G(M^{1}(s))(x,y)\geq \delta_{0}^{M_{1}}>0,\]
which implies that for each \(s\in\mathbb{R}_{+}\),
\[R\left(\eta^{1}(\cdot\mid s)\|\eta_{(1)}^{1}(\cdot\mid s)\otimes G(M^{1}(s))( \cdot,\cdot)\right)\leq\left|\log\delta_{0}^{M_{1}}\right|. \tag{5.16}\]
Also, from (5.5) it follows that \(\kappa_{1}\delta_{0}^{G,\pi^{*}}\leq\kappa_{1}\delta_{0}^{\pi^{*}}\) and so, for each \(s\in\mathbb{R}_{+}\), with \(\delta\doteq\kappa_{1}\delta_{0}^{G,\pi^{*}}\),
\[\inf_{x\in\Delta^{o}}M^{1}(s)(x)\geq\delta,\quad\inf_{(x,y)\in A_{+}}\eta^{1}(x,y\mid s)\geq\delta,\ \text{ and }\operatorname{supp}(\eta^{1}(\cdot\mid s))=A_{+}. \tag{5.17}\]
Recall from the proof of the upper bound that the trajectories in the variational problem in the Laplace upper bound are related to the limit controlled trajectories by time reversal (see e.g., (3.20) and last
display in Section 4.2). Thus, we now introduce a time reversal of \(M^{1}\), which, after further approximations, will be used to construct suitable controlled processes in the proof of the lower bound. Fix \(T\in(0,\infty)\) large enough so that
\[\exp(-T+1)\max\left\{\left|\log\left(\frac{\delta\delta_{0}^{A}}{8}\right)\right|,\left|\log\delta_{0}^{M_{1}}\right|\right\}\leq\varepsilon \tag{5.18}\]
Throughout this section and the next, this \(T\) is fixed. Define, for \(t\in[0,T]\),
\[\hat{M}^{1}(t)\doteq M^{1}(T-t),\ \hat{\eta}^{1}(t)=\hat{\eta}^{1}(\cdot\mid t )\doteq\eta^{1}(\cdot\mid T-t),\]
and note that, since \(M^{1}\) solves \(\mathcal{U}(m^{1},\eta^{1})\),
\[\hat{M}^{1}(t)=M^{1}(T)+\int_{0}^{t}\hat{\eta}^{1}_{(1)}(s)ds-\int_{0}^{t}\hat {M}^{1}(s)ds,\ t\in[0,T], \tag{5.19}\]
where \(\hat{\eta}^{1}_{(1)}(s)=\eta^{1}_{(1)}(T-s)\). Recalling the non-negativity of relative entropy, note that,
\[\int_{0}^{\infty}\exp(-s)\sum_{x\in\Delta^{o}}\eta^{1}_{(1)}(x\mid s )R\left(\eta^{1}_{2|1}(\cdot\mid s,x)\|G(M^{1}(s))(x,\cdot)\right)ds\\ \geq\exp(-T)\int_{0}^{T}\exp(s)\sum_{x\in\Delta^{o}}\hat{\eta}^{1 }_{(1)}(x\mid s)R\left(\hat{\eta}^{1}_{2|1}(\cdot\mid s,x)\big{\|}G(\hat{M}^{1 }(s))(x,\cdot)\right)ds, \tag{5.20}\]
where \(\hat{\eta}^{1}_{2|1}(\cdot\mid s,x)=\eta^{1}_{2|1}(\cdot\mid T-s,x)\).
Finally, disintegrating \(\mathcal{T}^{1}(T)\) as \(\mathcal{T}^{1}(T)(x,y)=M^{1}(T)(x)Q(x,y)\), \(x,y\in\Delta^{o}\), we have that \(M^{1}(T)\) is a stationary distribution of the Markov chain with transition probability kernel \(Q\). Also, on recalling that \(\operatorname{supp}(M^{1}(T))=\Delta^{o}\) and \(\operatorname{supp}(\mathcal{T}^{1}(T))=A_{+}\), we see that \(Q(x,y)=0\) if and only if \((x,y)\in(A_{+})^{c}\), from which it follows that the kernel \(Q\) is irreducible and has unique stationary distribution \(M^{1}(T)\).
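The disintegration used in the last paragraph is elementary; the following sketch, with made-up numbers and purely for illustration, disintegrates a joint law with equal marginals and confirms that the first marginal is stationary for the resulting kernel.

```python
import numpy as np

# A toy joint law T on pairs with equal first and second marginals, standing in
# for T^1(T); the numbers are made up for illustration.
T = np.array([[0.10, 0.05, 0.05],
              [0.05, 0.15, 0.10],
              [0.05, 0.10, 0.35]])
M = T.sum(axis=1)                              # first marginal
assert np.allclose(M, T.sum(axis=0))           # marginals agree

Q = T / M[:, None]                             # disintegration T(x, y) = M(x) Q(x, y)
print("rows of Q are probabilities:", np.allclose(Q.sum(axis=1), 1.0))
print("M is stationary for Q      :", np.allclose(M @ Q, M))
```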
### 5.3 Step 2: Continuity of Control
Our next step mollifies the control \(\hat{\eta}^{1}\) in a suitable manner so that it can be discretized at a later step. For \(\kappa>0\), define
\[\hat{\eta}^{1,\kappa}(s)\doteq\kappa^{-1}\int_{s}^{\kappa+s}\hat{\eta}^{1}(u )du,\ s\in[0,T],\]
where \(\hat{\eta}^{1}(u)\doteq\hat{\eta}^{1}(T)\) for \(u\geq T\). Also, define for \(t\in[0,T]\),
\[\hat{M}^{1,\kappa}(t)\doteq M^{1}(T)+\int_{0}^{t}\hat{\eta}^{1,\kappa}_{(1)}(s)ds-\int_{0}^{t}\hat{M}^{1,\kappa}(s)ds. \tag{5.21}\]
Note that there is a unique \(\hat{M}^{1,\kappa}\in C([0,T]:\mathbb{R}^{d})\) that solves (5.21), and that this \(\hat{M}^{1,\kappa}\) satisfies, for each \(s\in[0,T]\), \(\sum\limits_{x\in\Delta^{o}}\hat{M}^{1,\kappa}(s)(x)=1\). We now show that for \(\kappa\) sufficiently small we have \(\inf_{s\in[0,T]}\inf_{x\in\Delta^{o}}\hat{M}^{1,\kappa}(s)(x)>0\), namely that the solution to (5.21) in fact belongs to \(C([0,T]:\mathcal{P}_{+}(\Delta^{o}))\). We can write, for \(t\in[0,T]\),
\[\hat{M}^{1,\kappa}(t)=M^{1}(T)+\int_{0}^{t}\hat{\eta}^{1}_{(1)}(s)ds-\int_{0}^{t}\hat{M}^{1,\kappa}(s)ds+\mathcal{R}^{\kappa}_{1}(t), \tag{5.22}\]
where
\[\mathcal{R}^{\kappa}_{1}(t)\doteq\int_{0}^{t+\kappa}\hat{\eta}^{1}_{(1)}(u) \kappa^{-1}\int_{(u-\kappa)^{+}}^{u\wedge t}dsdu-\int_{0}^{t}\hat{\eta}^{1}_{( 1)}(u)du.\]
Observe that, for each \(t\in[0,T]\), \(\|\mathcal{R}_{1}^{\kappa}(t)\|\leq 3\kappa\). Combining this estimate with (5.19) and (5.22), we have, for \(t\in[0,T]\),
\[\|\hat{M}^{1,\kappa}(t)-\hat{M}^{1}(t)\|\leq 3\kappa+\int_{0}^{t}\|\hat{M}^{1, \kappa}(s)-\hat{M}^{1}(s)\|ds,\]
from which we see, by an application of Gronwall's lemma, that
\[\sup_{t\in[0,T]}\|\hat{M}^{1,\kappa}(t)-\hat{M}^{1}(t)\|\leq 3\kappa\exp(T). \tag{5.23}\]
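The mollification step and the estimate (5.23) can be visualized by a short numerical experiment. In the sketch below, which is illustrative only, the control marginal, the horizon, and the Euler step are arbitrary assumptions; the experiment simply compares the solutions of the linear equations (5.19) and (5.21).

```python
import numpy as np

T, kappa, dt = 1.0, 0.05, 1e-3
grid = np.arange(0.0, T, dt)

def eta1(s):
    # an illustrative control marginal s -> eta^1_(1)(s) on three states
    w = np.array([1.0 + np.sin(4 * s), 1.0, 1.0 + np.cos(3 * s)])
    return w / w.sum()

def eta1_moll(s):
    # Riemann approximation of kappa^{-1} * integral_s^{s+kappa} eta^1_(1)(u) du,
    # with the control frozen at its value at T beyond T
    us = np.linspace(s, s + kappa, 20)
    return np.array([eta1(min(u, T)) for u in us]).mean(axis=0)

def solve(ctrl, M0):
    # Euler scheme for  M'(t) = ctrl(t) - M(t),  M(0) = M0  (cf. (5.19) and (5.21))
    M = [np.array(M0, dtype=float)]
    for t in grid:
        M.append(M[-1] + dt * (ctrl(t) - M[-1]))
    return np.array(M)

M0 = np.array([0.3, 0.3, 0.4])
gap = np.abs(solve(eta1, M0) - solve(eta1_moll, M0)).sum(axis=1).max()
print("sup-distance between the two trajectories:", gap)
print("bound 3*kappa*e^T from (5.23)            :", 3 * kappa * np.exp(T))
```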
Recall the definition of \(\delta\) from above (5.17), and let
\[c_{1}\doteq 2\left(\delta^{2}\delta_{0}^{A}\right)^{-1}. \tag{5.24}\]
Assume that \(\kappa\) is small enough so that
\[3\kappa\exp(T)\leq\min\left\{\frac{\varepsilon}{2}\min\{1,F_{\rm lip}^{-1}\}, \frac{\varepsilon}{2c_{1}},\frac{\delta}{2}\right\}, \tag{5.25}\]
and
\[2c_{1}L_{G}\kappa+\kappa(e^{1-T}+1)\left|\log\delta_{0}^{M_{1}}\right|<\frac{ \varepsilon}{2}. \tag{5.26}\]
This, in particular, in view of (5.17) and (5.23), ensures that \(\hat{M}^{1,\kappa}\in C([0,T]:\mathcal{P}_{+}(\Delta^{o}))\), and in fact
\[\inf_{s\in[0,T]}\inf_{x\in\Delta^{o}}\hat{M}^{1,\kappa}(s)(x)\geq\delta/2. \tag{5.27}\]
Next, we write
\[\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{1,\kappa}(\cdot\mid s)\big{\|} \hat{\eta}^{1,\kappa}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{1,\kappa}(s))( \cdot,\cdot)\right)ds\] \[=\int_{0}^{T}\exp(s)R\left(\kappa^{-1}\int_{s}^{\kappa+s}\hat{ \eta}^{1}(\cdot\mid u)du\big{\|}\kappa^{-1}\int_{s}^{\kappa+s}\hat{\eta}^{1}_{ (1)}(\cdot\mid u)\otimes G(\hat{M}^{1}(u))(\cdot,\cdot)du\right)ds+\mathcal{R} _{1}, \tag{5.28}\]
where
\[\mathcal{R}_{1}=\int_{0}^{T}\exp(s)R\left(\kappa^{-1}\int_{s}^{ \kappa+s}\hat{\eta}^{1}(\cdot\mid u)du\big{\|}\kappa^{-1}\int_{s}^{\kappa+s} \hat{\eta}^{1}_{(1)}(\cdot\mid u)\otimes G(\hat{M}^{1,\kappa}(s))(\cdot,\cdot )du\right)ds\] \[\qquad-\int_{0}^{T}\exp(s)R\left(\kappa^{-1}\int_{s}^{\kappa+s} \hat{\eta}^{1}(\cdot\mid u)du\big{\|}\kappa^{-1}\int_{s}^{\kappa+s}\hat{\eta} ^{1}_{(1)}(\cdot\mid u)\otimes G(\hat{M}^{1}(u))(\cdot,\cdot)du\right)ds. \tag{5.29}\]
Observe from (5.17), (5.24) and (5.27) that, for \(s\in[0,T]\) and \((x,y)\in A_{+}\),
\[\left|\log\left(\kappa^{-1}\int_{s}^{\kappa+s}\hat{\eta}^{1}_{(1) }(x\mid u)G(\hat{M}^{1,\kappa}(s))(x,y)du\right)-\log\left(\kappa^{-1}\int_{s }^{\kappa+s}\hat{\eta}^{1}_{(1)}(x\mid u)G(\hat{M}^{1}(u))(x,y)du\right)\right|\] \[\leq c_{1}\kappa^{-1}\left|\int_{s}^{\kappa+s}\hat{\eta}^{1}_{(1) }(x\mid u)G(\hat{M}^{1,\kappa}(s))(x,y)du-\int_{s}^{\kappa+s}\hat{\eta}^{1}_{ (1)}(x\mid u)G(\hat{M}^{1}(u))(x,y)du\right|. \tag{5.30}\]
Also, for each \(s\in[0,T]\),
\[\kappa^{-1}\sum_{(x,y)\in A_{+}}\left|\int_{s}^{\kappa+s}\hat{ \eta}^{1}_{(1)}(x\mid u)G(\hat{M}^{1,\kappa}(s))(x,y)du-\int_{s}^{\kappa+s} \hat{\eta}^{1}_{(1)}(x\mid u)G(\hat{M}^{1}(u))(x,y)du\right|\] \[\qquad\leq L_{G}\left(\|\hat{M}^{1,\kappa}(s)-\hat{M}^{1}(s)\|+ \kappa^{-1}\int_{s}^{\kappa+s}\|\hat{M}^{1}(s)-\hat{M}^{1}(u)\|du\right)\ \leq L_{G}\left(\frac{ \varepsilon}{2c_{1}}+2\kappa\right), \tag{5.31}\]
where the first inequality follows from Assumption 2.1 and the last inequality follows from (5.19), (5.23), and (5.25). Combining (5.29), (5.30), and (5.31) we see that
\[e^{-T}\mathcal{R}_{1}\leq e^{-T}\int_{0}^{T}L_{G}\exp(s)\left(\frac{\varepsilon} {2}+2\kappa c_{1}\right)ds\leq L_{G}\left(\frac{\varepsilon}{2}+2c_{1}\kappa \right). \tag{5.32}\]
For \(u\in\mathbb{R}_{+}\), let
\[\mathcal{R}_{2}(u)\doteq R\left(\hat{\eta}^{1}(\cdot\mid u)\big{\|}\hat{\eta} ^{1}_{(1)}(\cdot\mid u)\otimes G(\hat{M}^{1}(u))(\cdot,\cdot)\right). \tag{5.33}\]
Using (5.16), we see that, for each \(u\in\mathbb{R}_{+}\), \(\mathcal{R}_{2}(u)\leq\left|\log\delta_{0}^{M_{1}}\right|.\) Using the convexity of relative entropy and (5.33), we now have that
\[\int_{0}^{T}\exp(s)R\left(\kappa^{-1}\int_{s}^{\kappa+s}\hat{ \eta}^{1}(\cdot\mid u)du\big{\|}\kappa^{-1}\int_{s}^{\kappa+s}\hat{\eta}^{1}_{ (1)}(\cdot\mid u)\otimes G(\hat{M}^{1}(u))(\cdot,\cdot)du\right)ds\] \[\quad\leq\int_{0}^{T}\exp(s)\kappa^{-1}\int_{s}^{\kappa+s}R\left( \hat{\eta}^{1}(\cdot\mid u)\big{\|}\hat{\eta}^{1}_{(1)}(\cdot\mid u)\otimes G (\hat{M}^{1}(u))(\cdot,\cdot)\right)du\,ds\] \[\quad=\int_{0}^{T}\exp(s)\kappa^{-1}\int_{s}^{\kappa+s}\mathcal{R }_{2}(u)du\,ds, \tag{5.34}\]
Next, on recalling that \(\kappa^{-1}(1-e^{-\kappa})\leq 1\), it is easily checked that
\[\int_{0}^{T}\exp(s)\kappa^{-1}\int_{s}^{\kappa+s}\mathcal{R}_{2} (u)du\,ds=\kappa^{-1}\int_{0}^{T+\kappa}\mathcal{R}_{2}(u)\int_{(u-\kappa)^{+ }}^{u\wedge T}\exp(s)ds\,du\\ \leq\kappa^{-1}\int_{0}^{\kappa}(\exp(u)-1)\mathcal{R}_{2}(u)\,du +\kappa^{-1}e^{T}(1-e^{-\kappa})\int_{T}^{T+\kappa}\mathcal{R}_{2}(u)du+ \kappa^{-1}(1-e^{-\kappa})\int_{\kappa}^{T}\exp(u)\mathcal{R}_{2}(u)du\\ \leq\kappa(e+e^{T})\left|\log\delta_{0}^{M_{1}}\right|+\int_{0}^ {T}\exp(u)R\left(\hat{\eta}^{1}(\cdot\mid u)\big{\|}\hat{\eta}^{1}_{(1)}(\cdot \mid u)\otimes G(\hat{M}^{1}(u))(\cdot,\cdot)\right)du. \tag{5.35}\]
Combining the estimates in (5.28), (5.32), (5.34), and (5.35), we have
\[e^{-T}\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{1,\kappa}(\cdot\mid s)\big{\|}\hat{\eta}^{1,\kappa}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{1,\kappa}(s))(\cdot,\cdot)\right)ds\\ \leq L_{G}\left(\frac{\varepsilon}{2}+2c_{1}\kappa\right)+\kappa(e^{1-T}+1)\left|\log\delta_{0}^{M_{1}}\right|\\ +e^{-T}\int_{0}^{T}\exp(u)R\left(\hat{\eta}^{1}(\cdot\mid u)\big{\|}\hat{\eta}^{1}_{(1)}(\cdot\mid u)\otimes G(\hat{M}^{1}(u))(\cdot,\cdot)\right)du.\]
Now denote by \(\kappa_{2}\) the constant \(\kappa\) that satisfies (5.25) and (5.26). Let
\[(\hat{\eta}^{2},\hat{M}^{2},m^{2})\doteq(\hat{\eta}^{1,\kappa_{2}},\hat{M}^{1, \kappa_{2}},\hat{M}^{1,\kappa_{2}}(T)).\]
Note from (5.23) and (5.25) that
\[|F(m^{2})-F(m^{1})|=|F(\hat{M}^{1,\kappa_{2}}(T))-F(\hat{M}^{1}(T))|\leq \varepsilon/2.\]
Thus,
\[F(m^{2})+e^{-T}\int_{0}^{T}\exp(s)\sum_{x\in\Delta^{o}}\hat{\eta}^{2}_{(1)}(x\mid s)R\left(\hat{\eta}^{2}_{2|1}(\cdot\mid s,x)\big{\|}G(\hat{M}^{2}(s))(x,\cdot)\right)ds\\ \leq L_{G}\varepsilon/2+\varepsilon/2+\varepsilon/2+F(m^{1})+e^{-T}\int_{0}^{T}\exp(u)R\left(\hat{\eta}^{1}(\cdot\mid u)\big{\|}\hat{\eta}^{1}_{(1)}(\cdot\mid u)\otimes G(\hat{M}^{1}(u))(\cdot,\cdot)\right)du\\ \leq\inf_{m\in\mathcal{P}(\Delta^{o})}[F(m)+I(m)]+(4+L_{G})\varepsilon, \tag{5.36}\]
where for the last inequality we have used (5.15) and (5.20). Furthermore, from (5.21),
\[\hat{M}^{2}(t)=M^{1}(T)+\int_{0}^{t}\hat{\eta}^{2}_{(1)}(s)ds-\int_{0}^{t}\hat{M}^{2}(s)ds,\ t\in[0,T],\ \ \hat{M}^{2}(T)=m^{2}. \tag{5.37}\]
By construction, \(\hat{\eta}^{2}\in\mathcal{C}(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}\times \Delta^{o}))\), and we can find \(C_{1}\doteq C_{1}(\kappa_{2})\in(0,\infty)\) such that
\[\|\hat{\eta}^{2}(s)-\hat{\eta}^{2}(t)\|\leq C_{1}|s-t|,\ \ s,t\in[0,T], \tag{5.38}\]
and, on recalling (5.17) and (5.27), observe that
\[\inf_{s\in[0,T]}\inf_{(x,y)\in A_{+}}\hat{\eta}^{2}(x,y\mid s)\geq\delta,\ \inf_{s\in[0,T],x\in\Delta^{o}}\hat{M}^{2}(s)(x)\geq\delta/2. \tag{5.39}\]
### 5.4 Step 3: Piecewise Constant Approximation
Now we carry out the last step in the approximation, which is to replace continuous controls by piecewise constant controls. For \(\kappa>0\), define \(\hat{\eta}^{2,\kappa}\) as
\[\hat{\eta}^{2,\kappa}(\cdot\mid s)\doteq\sum_{j=0}^{\lfloor T\kappa^{-1} \rfloor-1}\hat{\eta}^{2}(\cdot\mid j\kappa)\mathbf{1}_{[j\kappa,(j+1)\kappa)} (s)+\hat{\eta}^{2}(\cdot\mid\lfloor T\kappa^{-1}\rfloor\kappa)\mathbf{1}_{[ \lfloor T\kappa^{-1}\rfloor\kappa,T]}(s),\ \ s\in[0,T].\]
Let \(\hat{M}^{2,\kappa}\) solve the equation
\[\hat{M}^{2,\kappa}(t)=M^{1}(T)+\int_{0}^{t}\hat{\eta}^{2,\kappa}_{(1)}(s)ds- \int_{0}^{t}\hat{M}^{2,\kappa}(s)ds,\ t\geq 0. \tag{5.40}\]
Then, with \(\mathcal{R}^{\kappa}_{2}(t)\doteq\int_{0}^{t}\hat{\eta}^{2,\kappa}_{(1)}(s)ds -\int_{0}^{t}\hat{\eta}^{2}_{(1)}(s)ds\), we have that, for \(t\in[0,T]\),
\[\hat{M}^{2,\kappa}(t)=M^{1}(T)+\int_{0}^{t}\hat{\eta}^{2}_{(1)}(s)ds-\int_{0}^{t}\hat{M}^{2,\kappa}(s)ds+\mathcal{R}^{\kappa}_{2}(t). \tag{5.41}\]
From (5.38) and the definition of \(\hat{\eta}^{2,\kappa}\),
\[\sup_{t\in[0,T]}\|\mathcal{R}^{\kappa}_{2}(t)\|\leq T\left(\sup_{s,t\in[0,T],|s-t|\leq\kappa}\|\hat{\eta}^{2}(s)-\hat{\eta}^{2}(t)\|\right)\leq C_{1}\kappa T.\]
Combining the last estimate, (5.37), and (5.41), we have from Gronwall's lemma that
\[\sup_{t\in[0,T]}\|\hat{M}^{2,\kappa}(t)-\hat{M}^{2}(t)\|\leq C_{1}\kappa T \exp(T). \tag{5.42}\]
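A numerical comparison analogous to the one given for the mollification step illustrates the piecewise constant approximation and the bound (5.42); the control marginal and the step sizes in the sketch below are again arbitrary assumptions, and the sketch is illustrative only.

```python
import numpy as np

T, kappa, dt = 1.0, 0.05, 1e-3
grid = np.arange(0.0, T, dt)

def eta1(s):                               # an illustrative Lipschitz control marginal
    w = np.array([1.0 + np.sin(4 * s), 1.0, 1.0 + np.cos(3 * s)])
    return w / w.sum()

def eta1_pc(s):                            # frozen on [j*kappa, (j+1)*kappa), as for eta^{2,kappa}
    return eta1(np.floor(s / kappa) * kappa)

def solve(ctrl, M0):                       # Euler scheme for  M' = ctrl - M  (cf. (5.40))
    M = [np.array(M0, dtype=float)]
    for t in grid:
        M.append(M[-1] + dt * (ctrl(t) - M[-1]))
    return np.array(M)

M0 = np.array([0.3, 0.3, 0.4])
gap = np.abs(solve(eta1, M0) - solve(eta1_pc, M0)).sum(axis=1).max()
print("sup-distance:", gap, "  (compare with the scale C1*kappa*T*e^T in (5.42))")
```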
Assume that \(\kappa\) is sufficiently small so that
\[C_{1}\kappa T\exp(T)\leq\min\left\{\frac{\varepsilon}{2}\min\{1,F_{\rm lip}^{- 1}\},\frac{\varepsilon}{4c_{1}},\frac{\delta}{4}\right\}, \tag{5.43}\]
and
\[2\kappa(2L_{G}+C_{1})(2|\log c_{1}|+c_{1})+4\kappa c_{1}L_{G}\leq\varepsilon, \tag{5.44}\]
where \(c_{1}\) is defined in (5.24). Then, it follows from (5.42) and (5.39) that for this choice of \(\kappa\),
\[\inf_{s\in[0,T]}\inf_{x\in\Delta^{o}}\hat{M}^{2,\kappa}(s)(x)\geq\frac{\delta }{4},\ \ \inf_{s\in[0,T]}\inf_{(x,y)\in A_{+}}\hat{\eta}^{2,\kappa}(x,y\mid s)\geq\delta. \tag{5.45}\]
This shows that \(\hat{M}^{2,\kappa}\) belongs to \(C([0,T]:\mathcal{P}_{+}(\Delta^{o}))\). For \(t\in[0,T]\), let \(\alpha_{\kappa}(t)\doteq\lfloor t\kappa^{-1}\rfloor\kappa\) and write
\[\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{2,\kappa}(\cdot\mid s) \big{\|}\hat{\eta}^{2,\kappa}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{2,\kappa}(s ))(\cdot,\cdot)\right)ds\] \[\quad=\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{2,\kappa}(\cdot\mid s )\big{\|}\hat{\eta}^{2,\kappa}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{2}( \alpha_{\kappa}(s)))(\cdot,\cdot)\right)ds+\mathcal{R}_{3}, \tag{5.46}\]
where
\[\mathcal{R}_{3} \doteq\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{2,\kappa}(\cdot \mid s)\big{\|}\hat{\eta}^{2,\kappa}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{2, \kappa}(s))(\cdot,\cdot)\right)ds\] \[\quad-\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{2,\kappa}(\cdot \mid s)\big{\|}\hat{\eta}^{2,\kappa}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{2}( \alpha_{\kappa}(s)))(\cdot,\cdot)\right)ds. \tag{5.47}\]
Recalling the definition of \(c_{1}\) from (5.24), we have, from Assumption 2.1 and (5.45), for \(s\in[0,T]\),
\[\sum_{(x,y)\in A_{+}}\left|\log\left(\hat{\eta}^{2,\kappa}_{(1)}( x\mid s)G(\hat{M}^{2,\kappa}(s))(x,y)\right)-\log\left(\hat{\eta}^{2,\kappa}_{(1)} (x\mid s)G(\hat{M}^{2}(\alpha_{\kappa}(s)))(x,y)\right)\right|\] \[\quad\leq 2c_{1}\sum_{(x,y)\in A_{+}}\left|\hat{\eta}^{2,\kappa}_{( 1)}(x\mid s)G(\hat{M}^{2,\kappa}(s))(x,y)-\hat{\eta}^{2,\kappa}_{(1)}(x\mid s )G(\hat{M}^{2}(\alpha_{\kappa}(s)))(x,y)\right|\] \[\quad\leq 2L_{G}c_{1}\left(\|\hat{M}^{2,\kappa}(s)-\hat{M}^{2}(s) \|+\|\hat{M}^{2}(s)-\hat{M}^{2}(\alpha_{\kappa}(s))\|\right)\leq 2L_{G}c_{1} \left(\frac{\varepsilon}{4c_{1}}+2\kappa\right)=L_{G}\left(\frac{\varepsilon} {2}+4\kappa c_{1}\right), \tag{5.48}\]
where the last inequality is due to (5.37), (5.42), and (5.43). Combining (5.47) and (5.48), we have that
\[e^{-T}\mathcal{R}_{3}\leq e^{-T}\int_{0}^{T}L_{G}\exp(s)\left(\frac{\varepsilon }{2}+4\kappa c_{1}\right)ds\leq L_{G}\left(\frac{\varepsilon}{2}+4\kappa c_{1} \right). \tag{5.49}\]
Next, using Assumption 2.1, (5.37), and (5.38), note that, for \(s,u\in[0,T]\) such that \(|s-u|\leq\kappa\),
\[\sum_{(x,y)\in A_{+}}\left|\hat{\eta}^{2}_{(1)}(x\mid s)G(\hat{M} ^{2}(s))(x,y)-\hat{\eta}^{2}_{(1)}(x\mid u)G(\hat{M}^{2}(u))(x,y)\right|\] \[\quad\leq L_{G}\|\hat{M}^{2}(s)-\hat{M}^{2}(u)\|+\|\hat{\eta}^{2} _{(1)}(\cdot\mid s)-\hat{\eta}^{2}_{(1)}(\cdot\mid u)\|\leq\kappa(2L_{G}+C_{1}), \tag{5.50}\]
and, using (5.45), observe that
\[\inf_{s\in[0,T]}\inf_{(x,y)\in A_{+}}\hat{\eta}^{2}_{(1)}(x\mid s)G(\hat{M}^{ 2}(s))(x,y)\geq\frac{\delta^{2}\delta^{A}_{0}}{4}=\frac{c_{1}^{-1}}{2}. \tag{5.51}\]
Moreover, note that if, for some \(\gamma\in(0,1)\), \(a,\tilde{a},b,\tilde{b}\in(\gamma,1]\), then
\[\left|a\log(a/b)-\tilde{a}\log(\tilde{a}/\tilde{b})\right|\leq\left(2|\log \gamma|+\gamma^{-1}\right)\left(|a-\tilde{a}|+|b-\tilde{b}|\right). \tag{5.52}\]
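This inequality follows from the mean value theorem applied to \((a,b)\mapsto a\log(a/b)\) on \((\gamma,1]^{2}\). The following quick random search, illustrative only, is a numerical sanity check of the stated constant.

```python
import numpy as np

# A quick numerical sanity check (not a proof) of (5.52): for a, a', b, b' in (gamma, 1],
#   | a log(a/b) - a' log(a'/b') |  <=  (2|log gamma| + 1/gamma) ( |a - a'| + |b - b'| ).
rng = np.random.default_rng(0)
gamma = 0.1
C = 2 * abs(np.log(gamma)) + 1.0 / gamma
a, a2, b, b2 = rng.uniform(gamma, 1.0, size=(4, 200_000))
lhs = np.abs(a * np.log(a / b) - a2 * np.log(a2 / b2))
rhs = C * (np.abs(a - a2) + np.abs(b - b2))
print("largest observed lhs/rhs:", float(np.max(lhs / np.maximum(rhs, 1e-12))), "(should be <= 1)")
```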
Using (5.50), (5.51), and (5.52), we have
\[\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{2,\kappa}(\cdot\mid s) \big{\|}\hat{\eta}^{2,\kappa}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{2}(\alpha_{ \kappa}(s)))(\cdot,\cdot)\right)ds\\ \leq\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{2}(\cdot\mid s)\big{\|} \hat{\eta}^{2}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{2}(s))(\cdot,\cdot) \right)ds\\ +2\kappa\left(2L_{G}+C_{1}\right)\left(2|\log c_{1}|+c_{1}\right) \int_{0}^{T}\exp(s)ds.\]
Combining the estimate in the last display with (5.46) and (5.49), we have
\[e^{-T}\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{2,\kappa}(\cdot\mid s) \big{\|}\hat{\eta}^{2,\kappa}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{2,\kappa}(s) )(\cdot,\cdot)\right)ds\\ \leq e^{-T}\int_{0}^{T}\exp(s)R\left(\hat{\eta}^{2}(\cdot\mid s) \big{\|}\hat{\eta}^{2}_{(1)}(\cdot\mid s)\otimes G(\hat{M}^{2}(s))(\cdot, \cdot)\right)ds\\ +2\kappa(2L_{G}+C_{1})(2|\log c_{1}|+c_{1})+L_{G}\left(\varepsilon /2+4\kappa c_{1}\right). \tag{5.53}\]
Now, denote by \(\kappa_{3}\) the constant \(\kappa\) that satisfies (5.43) and (5.44) and let
\[(\hat{\eta}^{3},\hat{M}^{3},m^{3})\doteq(\hat{\eta}^{2,\kappa_{3}},\hat{M}^{2,\kappa_{3}},\hat{M}^{2,\kappa_{3}}(T)).\]
From (5.42) and (5.43) it follows that
\[\big{|}F(m^{3})-F(m^{2})\big{|}=\Big{|}F(\hat{M}^{2,\kappa_{3}}(T))-F(\hat{M}^ {2}(T))\Big{|}\leq\varepsilon/2.\]
Combining the last display with the estimate in (5.53) and recalling our choice of \(\kappa_{3}\), we have
\[F(m^{3})+e^{-T}\int_{0}^{T}\exp(s)\sum_{x\in\Delta^{o}}\hat{\eta}^{3}_{(1)}(x\mid s)R\left(\hat{\eta}^{3}_{2|1}(\cdot\mid s,x)\big{\|}G(\hat{M}^{3}(s))(x,\cdot)\right)ds\\ \leq L_{G}\varepsilon/2+\varepsilon/2+\varepsilon+F(m^{2})+e^{-T}\int_{0}^{T}\exp(u)R\left(\hat{\eta}^{2}(\cdot\mid u)\big{\|}\hat{\eta}^{2}_{(1)}(\cdot\mid u)\otimes G(\hat{M}^{2}(u))(\cdot,\cdot)\right)du\\ \leq\inf_{m\in\mathcal{P}(\Delta^{o})}[F(m)+I(m)]+(6+2L_{G})\varepsilon, \tag{5.54}\]
where for the last inequality we have used (5.36). Furthermore, from (5.40),
\[\hat{M}^{3}(t)=M^{1}(T)+\int_{0}^{t}\hat{\eta}^{3}_{(1)}(s)ds-\int_{0}^{t}\hat{M}^{3}(s)ds,\;t\in[0,T],\;\;\hat{M}^{3}(T)=m^{3}. \tag{5.55}\]
By construction, for each \(j\in\{0,1,\ldots,\lfloor T\kappa_{3}^{-1}\rfloor-1\}\), the map \(t\mapsto\hat{\eta}^{3}(\cdot\mid t)\) is constant over the interval \([j\kappa_{3},(j+1)\kappa_{3})\), as well as over the interval \([\lfloor T\kappa_{3}^{-1}\rfloor\kappa_{3},T]\), and, from (5.45),
\[\inf_{s\in[0,T]}\inf_{(x,y)\in A_{+}}\hat{\eta}^{3}(x,y\mid s)\geq\delta,\;\;\inf_{s\in[0,T]}\inf_{x\in\Delta^{o}}\hat{M}^{3}(s)(x)\geq\delta/4. \tag{5.56}\]
## 6 Proof of Laplace Lower Bound
In this section we will prove Theorem 5.1 by constructing a suitable sequence of controlled processes based on the quantities \(\hat{\eta}^{3},\hat{M}^{3}\) from Section 5.4.
### 6.1 Construction of Suitable Controls
To simplify notation, write
\[\hat{\eta}\doteq\hat{\eta}^{3},\;\;\hat{M}\doteq\hat{M}^{3},\;\;c\doteq \kappa_{3},\;\;q\doteq M^{1}(T),\]
where \(M^{1}\) is as in Section 5.2 and \(\hat{\eta}^{3},\hat{M}^{3}\), and \(\kappa_{3}\) are as in Section 5.4.
The precise construction of the controlled processes will be given in a recursive fashion in Construction 6.1. An informal outline of this construction is as follows:
* Use the original uncontrolled dynamics until the empirical measure charges each point in \(\Delta^{o}\). This is needed to ensure that the relative entropy costs are well controlled and it can be done due to Assumption 2.2(4). Note that this incurs zero cost since no control is exercised.
* Once the empirical measure of the original uncontrolled dynamics has charged each point in \(\Delta^{o}\), proceed as follows. Recall the irreducible transition probability kernel \(Q\) as given at the end of Section 5.2 and note that \(q\) is the unique stationary distribution for \(Q\). Until time step \(m(t_{n}-T)-1\), the controlled chain will use the kernel \(Q\). Note that this approximately corresponds to evolving according to \(Q\) until the interpolated continuous time instant \(t_{n}-T\). By the ergodic theorem, at this time instant the empirical measure will, with high probability, be very close to \(q\).
* From time step \(m(t_{n}-T)\) onwards, the controlled chain uses, over the portion of the continuous time interpolation (as described in Section 3) that corresponds to the interval \([lc,(l+1)c]\) after \(t_{n}-T\), the transition probability kernel \(\hat{\eta}(\cdot\mid lc)\), for \(l=0,1,\ldots,l_{0}\). Using the ergodic theorem again, the empirical measures that are formed using this construction will be close to the trajectory \(\hat{M}\) with high probability.
* Over the small probability events where deviations from the ergodic limits occur we modify controls so that we do not expend too much control cost.
The reader may want to keep the above rough outline in mind in what follows.
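A highly simplified simulation of this outline may help fix ideas. In the sketch below everything is an illustrative assumption: the toy affine model for \(G\), the stand-ins for the kernel \(Q\) and for the kernels \(\beta^{j}=\hat{\eta}(\cdot\mid jc)\), the interpolation clock, and all numerical parameters. Moreover, the careful treatment of the 'low probability' events in Construction 6.1 below is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
def G(m):                                   # toy affine model, an illustrative assumption
    return 0.5 * P + 0.5 * np.tile(m, (d, 1))

Q = np.array([[0.1, 0.6, 0.3],              # stand-in for the kernel Q of Section 5.2
              [0.5, 0.2, 0.3],
              [0.2, 0.4, 0.4]])
c, l0 = 0.25, 3                             # block length and number of blocks, T = (l0+1)*c
betas = [np.array([[0.3, 0.4, 0.3],         # stand-ins for beta^j = eta_hat(.| jc), j = 0,...,l0
                   [0.3, 0.3, 0.4],
                   [0.4, 0.3, 0.3]]) for _ in range(l0 + 1)]

n = 5_000
t = np.cumsum(1.0 / np.arange(2, n + 2))    # a toy interpolation clock, t_k of order log k
T = (l0 + 1) * c
X = np.zeros(n, dtype=int)
L = np.zeros(d); L[X[0]] = 1.0              # running empirical measure

for k in range(1, n):
    if L.min() == 0.0:
        kernel = G(L)                       # phase 1: uncontrolled until every state is charged
    elif t[k] < t[n - 1] - T:
        kernel = Q                          # phase 2: drive the empirical measure towards q
    else:
        j = min(int((t[k] - (t[n - 1] - T)) // c), l0)
        kernel = betas[j]                   # phase 3: follow the near-optimal control on block j
    X[k] = rng.choice(d, p=kernel[X[k - 1]])
    L = L + (np.eye(d)[X[k]] - L) / (k + 1)

print("final empirical measure L^n:", L)
```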
Recall the constants \(\delta_{0}^{A}\) and \(\delta\) defined in Part 2 of Assumption 2.2 and above (5.17), respectively. Let \(l_{0}\doteq\lfloor Tc^{-1}\rfloor\) and define
\[\begin{split}& b_{1}\doteq 4+c,\;\;d_{1}\doteq e^{c}(12+c),\;\;d_{2} =6,\;\;d_{3}\doteq d_{1}+l_{0}b_{1}e^{c},\;\;d_{4}\doteq 2^{l_{0}}(3+d_{2}),\; \;\delta_{1}\doteq\delta\delta_{0}^{A}/8,\\ & A_{1}\doteq|\log\delta_{1}|+(2+d_{3})\delta_{1}^{-1},\;\;B_{1} \doteq 2d_{4}\delta_{1}^{-1},\;\;C_{1}\doteq|\log\delta_{1}|(l_{0}+3)^{2}. \end{split} \tag{6.1}\]
Fix \(\varepsilon_{0},\varepsilon_{1}>0\) sufficiently small so that
\[\varepsilon_{0}<\min\{c,\delta/16\},\,F_{\mbox{\tiny{\rm lip}}}(d_{3} \varepsilon_{0}+2d_{4}\varepsilon_{1})\leq\varepsilon,\;C_{1}\varepsilon_{1} +(l_{0}+1)(A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1})\leq\varepsilon. \tag{6.2}\]
We now proceed to the first item in the above outline. Let \(\{Z_{i},\;i\in\mathbb{N}_{0}\}\) be a sequence of \(\Delta^{o}\)-valued random variables defined recursively as follows. Recall \(x_{0}\in\Delta^{o}\) as fixed in Section 2.1 and let \(Z_{0}\doteq x_{0}\). Having defined \(Z_{0},\ldots,Z_{n}\) for some \(n\in\mathbb{N}_{0}\), define the conditional law of \(Z_{n+1}\) given \(Z_{0},\ldots,Z_{n}\) by
\[P\left(Z_{n+1}=x\mid\sigma\{Z_{0},\ldots Z_{n}\}\right)\doteq G(L^{n+1,Z})(Z_{ n},x),\;x\in\Delta^{o},\]
where \(\{L^{n,Z},\;n\in\mathbb{N}\}\) is defined by \(L^{n,Z}\doteq\frac{1}{n}\sum_{i=0}^{n-1}\delta_{Z_{i}}\), \(n\in\mathbb{N}\). From Assumption 2.2(4) and Lemma A.1 in the Appendix, it follows that there is an \(a^{*}>0\) and \(r_{1}\in\mathbb{N}\), such that with
\[N_{1}(\omega)\doteq\inf\{k\in\mathbb{N}:L^{k,Z}(\omega)(x)>a^{*}\mbox{ for all }x\in\Delta^{o}\}, \tag{6.3}\]
we have
\[P(N_{1}>r_{1})\leq\varepsilon_{1}. \tag{6.4}\]
We will use this sequence later in the construction to build the piece in the first item above.
Next, we proceed to the second item in the outline given at the start of the section. Recall \(q=M^{1}(T)\) and the irreducible transition probability kernel \(Q\) as given at the end of Section 5.2 which has \(q\) as the unique stationary distribution. Also recall that
\[\left(A_{+}\right)^{c}=\{(x,y)\in\Delta^{o}\times\Delta^{o}:Q(x,y)=0\}. \tag{6.5}\]
Let \(Q^{\mathcal{V}}\in\mathcal{K}(\mathcal{V}^{d})\) be defined as
\[Q^{\mathcal{V}}(e_{x},e_{y})\doteq Q(x,y),\;\;x,y\in\Delta^{o}.\]
Now, let \(\{Y_{i}(x),\ x\in\Delta^{o},i\in\mathbb{N}_{0}\}\) be iid \(\Delta^{o}\)-valued random vectors, independent of the sequence \(\{Z_{i},\ i\in\mathbb{N}_{0}\}\), such that
\[P(Y_{i}(x)=y)=Q(x,y),\ x,y\in\Delta^{o},\ i\in\mathbb{N}_{0}.\]
For each \(j\in\mathbb{N}_{0}\), let
\[\mathcal{G}_{j}\doteq\sigma\left\{Z_{l},\ l\in\mathbb{N}_{0}\right\}\vee \sigma\left\{Y_{i}(x),\ x\in\Delta^{o},0\leq i\leq j\right\},\]
and, for each \(y\in\Delta^{o}\), define the sequence \(\{\bar{Y}_{i}^{y},\ i\in\mathbb{N}_{0}\}\) of \(\Delta^{o}\)-valued random variables as \(\bar{Y}_{0}^{y}\doteq y\), and
\[P\left(\bar{Y}_{i+1}^{y}=z\mid\mathcal{G}_{i+1}\vee\sigma\{\bar{Y}_{j}^{y},0 \leq j\leq i\}\right)\doteq Q(\bar{Y}_{i}^{y},z),\ \ z\in\Delta^{o},i\in\mathbb{N}_{0}.\]
By using the ergodic theorem for the transition probability matrix \(Q\), we can find \(k_{0}\in\mathbb{N}\) such that, with
\[A_{\varepsilon_{0}}\doteq\left\{\omega:\max_{y\in\Delta^{o}}\sup_{m\geq k_{0}} \left\|\frac{1}{m}\sum_{i=0}^{m-1}\delta_{\bar{Y}_{i}^{y}(\omega)}-q\right\| \geq\varepsilon_{0}\right\},\]
we have
\[P(A_{\varepsilon_{0}})\leq\varepsilon_{1},\ \text{and}\ k_{0}>4\varepsilon_{0} ^{-1}. \tag{6.6}\]
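The constant \(k_{0}\) is a purely qualitative consequence of the ergodic theorem. The sketch below, illustrative only, eyeballs such a constant for a toy kernel \(Q\) by simulating a single trajectory from each starting state; it does not, of course, reproduce the probabilistic guarantee in (6.6).

```python
import numpy as np

rng = np.random.default_rng(2)
Q = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.2, 0.4, 0.4]])
# stationary distribution q of Q, solving q Q = q with sum q = 1
w, v = np.linalg.eig(Q.T)
q = np.real(v[:, np.argmin(np.abs(w - 1.0))]); q = q / q.sum()

eps0, n_steps = 0.05, 10_000
worst_k0 = 0
for y in range(3):                          # one trajectory from each starting point, cf. bar{Y}^y
    x, counts = y, np.zeros(3)
    dists = np.zeros(n_steps)
    for m in range(n_steps):
        counts[x] += 1
        dists[m] = np.abs(counts / (m + 1) - q).sum()
        x = rng.choice(3, p=Q[x])
    far = np.nonzero(dists >= eps0)[0]      # last index at which the empirical measure is still far
    worst_k0 = max(worst_k0, (far[-1] + 1) if far.size else 0)
print("an empirical stand-in for k0:", worst_k0)
```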
We now go on to the third item in the outline given at the start of the section. For that we introduce some notation that is useful in describing the construction. For each \(n\in\mathbb{N}\) let
\[m_{j}^{n}\doteq m(t_{n}-T+jc),\ \ j=0,1,\ldots,l_{0},\]
where \(l_{0}\) is defined above (6.1). For each \(n\in\mathbb{N}\) and \(j=0,\ldots,l_{0}\), let
\[\mathcal{I}^{n,j}\doteq\left\{i\in\mathbb{N}_{0}:t_{i}\in[t_{m_{j}^{n}+1},t_{m _{j+1}^{n}})\right\}=\{m_{j}^{n}+1,m_{j}^{n}+2,\ldots,m_{j}^{n}+l^{n,j}\}, \tag{6.7}\]
where \(l^{n,j}\doteq|\mathcal{I}^{n,j}|\) denotes the cardinality of each of these sets. Note that, for all \(n\in\mathbb{N}\),
\[m_{j}^{n}\doteq m_{j-1}^{n}+l^{n,j-1}+1,\ \ j=1,\ldots,l_{0}.\]
For \(j=0,1,\ldots,l_{0}\), we define \(\beta^{j}\in\mathcal{K}(\Delta^{o})\) as
\[\beta^{j}(x,y)\doteq\hat{\eta}(x,y\mid cj),\ x,y\in\Delta^{o}.\]
Such a \(\beta^{j}\) can be disintegrated as
\[\beta^{j}(x,y)=\beta^{j}_{(1)}(x)\beta^{j}_{2|1}(y\mid x),\ x,y\in\Delta^{o}.\]
Recall from (5.56) (and the fact that \(A\) is irreducible from Part 2 of Assumption 2.2) that
\[\inf_{j=0,\ldots,l_{0}}\inf_{x\in\Delta^{o}}\beta^{j}_{(1)}(x)\geq\delta,\ \inf_{j=0,\ldots,l_{0}}\inf_{(x,y)\in A_{+}}\beta^{j}_{2|1}(y\mid x)\geq\delta. \tag{6.8}\]
Also, by our construction of \(\hat{\eta}\), for each \(j=0,1,\ldots,l_{0}\),
\[\sum_{x\in\Delta^{o}}\beta^{j}_{(1)}(x)\beta^{j}_{2|1}(y\mid x)=\beta^{j}_{(1) }(y),\ \ y\in\Delta^{o}.\]
This is a consequence of the fact that \(\eta^{0}\) introduced in Section 5.1 belongs to \(\mathcal{U}(m^{0})\) (see Property (P2) in Section 2.2). The above identity, together with (6.8), says that \(\beta^{j}_{(1)}\) is the unique stationary distribution of the Markov chain with an irreducible transition probability function \(\beta^{j}_{2|1}(\cdot\mid\cdot)\).
Let, for \(j=0,1,\ldots,l_{0}\), \(\{U^{j}_{i},\ i\in\mathbb{N}_{0}\}\) be sequences of \(\Delta^{o}\)-valued random variables that are mutually independent of one another for different \(j\), independent of \(\{Z_{j},\bar{Y}_{j}^{y},\ j\in\mathbb{N}_{0},y\in\Delta^{o}\}\), and are distributed according to \(U^{j}_{0}\sim\beta^{j}_{(1)}\), and
\[P(U^{j}_{i}=y\mid\sigma\{U^{j}_{m},\ 0\leq m\leq i-1\})=\beta^{j}_{2|1}(y\mid U^{j} _{i-1}),\ \ i\in\mathbb{N}.\]
Then, using the ergodic theorem, we can find \(k^{*}\in\mathbb{N}\) such that
\[P\left(\max_{j=0,\ldots,l_{0}}\,\sup_{m\geq k^{*}}\left\|\frac{1}{m+1}\sum_{i=0}^{ m}\delta_{U_{i}^{j}}-\beta_{(1)}^{j}\right\|\geq\varepsilon_{0}\right)\leq \varepsilon_{1} \tag{6.9}\]
and
\[\max_{j=0,\ldots,l_{0}}\max_{x\in\Delta^{o}}E\left(\left\|\frac{1}{k^{*}+1}\sum _{i=0}^{k^{*}}\delta_{U_{i}^{j}}-\beta_{(1)}^{j}\right\|\ \left|U_{0}^{j}=x\right)\leq \varepsilon_{0}. \tag{6.10}\]
Now, fix \(n_{0}\in\mathbb{N}\) large enough so that, for all \(n\geq n_{0}\),
\[m_{0}^{n}>k_{0}+r_{1}+\lfloor 4(r_{1}+1)/\varepsilon_{0}\rfloor+1,\ \ 2(m_{0}^{n}+2)^{-1}\leq \varepsilon_{0},\ m_{0}^{n}c>k^{*}. \tag{6.11}\]
Now, we piece the above main ingredients together to construct the controlled collection \(\{\bar{\nu}^{n,k},\bar{L}^{n,k},\bar{\mu}^{n,k},\ k\in\mathbb{N}_{0},\,n\in\mathbb{N}\}\) as follows.
**Construction 6.1**.: _Fix \(n\geq n_{0}\)._
1. _Let_ \(x_{0}\in\Delta^{o}\) _be as fixed in Section_ 2.1_. Let_ \(\bar{X}_{0}^{n}\doteq x_{0}\)_,_ \(\bar{\nu}^{n,0}\doteq e_{x_{0}}\)_,_ \(\bar{L}^{n,1}\doteq\delta_{x_{0}}\)_. Define, for_ \(k\in\{1,\ldots,N_{1}(\omega)\wedge r_{1}\}\)_,_ \[\bar{X}_{k}^{n}(\omega)\doteq Z_{k}(\omega),\ \bar{\nu}^{n,k}(\omega)\doteq e_{\bar{X}_{k}^{n}( \omega)},\ \bar{L}^{n,k+1}(\omega)\doteq L^{k+1,Z}(\omega).\] _Also, set_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq G^{\mathcal{V}}(\bar{L}^{n,k}(\omega))(e _{\bar{X}_{k-1}^{n}(\omega)},e_{y}),\ y\in\Delta^{o}.\] _On the 'low probability' set_ \(\{r_{1}<N_{1}(\omega)\}\)_, we once more define_ \(\bar{X}_{k}^{n},\bar{\nu}^{n,k},\bar{L}^{n,k+1},\bar{\mu}^{n,k}\) _by the above formulas for all_ \(k>r_{1}\)_._
2. _On the 'high probability' set_ \(\{N_{1}(\omega)\leq r_{1}\}\) _the construction proceeds as follows. Let_ \(k_{0}\) _be as introduced above (_6.6_) and let_ \(k_{1}\doteq k_{0}+\lfloor 4(r_{1}+1)/\varepsilon_{0}\rfloor+1\)_. For_ \(k\in\{N_{1}(\omega)+1,\ldots,N_{1}(\omega)+k_{1}\}\)_, define_ \(\bar{X}_{k}^{n}(\omega)\doteq\bar{Y}_{k-N_{1}(\omega)}^{\bar{X}_{N_{1}(\omega) }(\omega)}(\omega)\)_, and let_ \[\bar{\nu}^{n,k}(\omega)\doteq e_{\bar{X}_{k}^{n}(\omega)},\ \bar{L}^{n,k+1}(\omega)\doteq\frac{1}{k+1}\sum_{j=0}^{k}\delta_{\bar{X}_{j}^{n} (\omega)},\] (6.12) _and set_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq Q(\bar{X}_{k-1}^{n}(\omega),y),\ y\in \Delta^{o}.\] (6.13)
3. _Again, on the set_ \(\{N_{1}(\omega)\leq r_{1}\}\)_, let_ \(k_{2}(\omega)\doteq N_{1}(\omega)+k_{1}\)_, and define_ \[\tau^{n}(\omega)\doteq\inf\{k\geq k_{2}(\omega):\|\bar{L}^{n,k+1}(\omega)-q\|>2 \varepsilon_{0}\},\ \ k_{3}(\omega)\doteq\tau^{n}(\omega)\wedge(m_{0}^{n}-1).\] _For_ \(k\in\{k_{2}(\omega)+1,\ldots,k_{3}(\omega)\}\)_, let_ \(\bar{X}_{k}^{n}(\omega)\doteq\bar{Y}_{k-k_{2}(\omega)}^{\bar{X}_{k_{2}(\omega) }^{n}(\omega)}(\omega)\)_, and define_ \(\bar{\nu}^{n,k},\bar{L}^{n,k+1},\bar{\mu}^{n,k}\) _by (_6.12_) and (_6.13_). Let_ \(\mathcal{J}_{0}^{n}(\omega)\doteq\mathbf{1}_{\{\tau^{n}(\omega)\leq m_{0}^{n}- 1\}}\)_. Define_ \[\mathcal{D}_{0}^{n}\doteq\{N_{1}(\omega)\leq r_{1}\}\cap\{\mathcal{J}_{0}^{n}( \omega)=1\},\] (6.14) _which, in view of (_6.6_), is again a 'low probability set'. On_ \(\mathcal{D}_{0}^{n}\)_, for_ \(k\geq k_{3}(\omega)\) _let_ \(\bar{\mu}^{n,k}\) _and_ \(\bar{X}_{k}^{n}\) _be defined so that_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq G^{\mathcal{V}}(\bar{L}^{n,k}(\omega))(e_{ \bar{X}_{k-1}^{n}(\omega)},e_{y}),\ \ y\in\Delta^{o},\] _and_ \[P(\bar{X}_{k}^{n}=y\mid\bar{\mathcal{F}}^{n,k})=\bar{\mu}^{n,k}(e_{y}),\ \ y \in\Delta^{o},\] _where, as in Section_ 3_,_ \(\bar{\mathcal{F}}^{n,k}=\sigma\{\bar{L}^{n,i},\ 1\leq i\leq k\}\)_, and_ \(\bar{\nu}^{n,k},\bar{L}^{n,k+1}\) _are defined by (_6.12_). This ensures that no cost is incurred on this low probability event._
* _Now we give the construction on the 'high probability' set_ \(\mathcal{E}^{n}_{0}\doteq\{N_{1}(\omega)\leq r_{1}\}\cap\{\mathcal{J}^{n}_{0}( \omega)=0\}\)_._
* _For_ \(k=m^{n}_{0}\)_, let_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq\beta^{0}_{(1)}(y),\,\,\,y\in\Delta^{o},\] _define_ \(\bar{X}^{n}_{k}\doteq U^{0}_{0}\)_, and note that, a.e. on_ \(\mathcal{E}^{n}_{0}\)_,_ \[P(\bar{X}^{n}_{k}=y\mid\bar{\mathcal{F}}^{n,k})=\beta^{0}_{(1)}(y),\,\,\,y\in\Delta^{o},\] _where_ \(\bar{\mathcal{F}}^{n,k}=\sigma\{\bar{L}^{n,j},1\leq j\leq k\}\)_, and_ \(\bar{\nu}^{n,k},\bar{L}^{n,k+1}\) _are defined by (6.12)._
* _Recall the set_ \(\mathcal{I}^{n,0}\) _introduced in (6.7). Also, recall that we have chosen_ \(k^{*}\) _so that (6.9) and (6.10) hold, and let_ \[\tau^{n,0}(\omega)\doteq\inf\left\{m\geq k^{*}:\left\|\frac{1}{m+1}\sum_{i=0}^{m}\delta_{U^{0}_{i}(\omega)}-\beta^{0}_{(1)}\right\|>\varepsilon_{0}\right\}.\] (6.15) _For_ \(k\in\{m^{n}_{0}+1,\ldots,m^{n}_{0}+(l^{n,0}\wedge\tau^{n,0}(\omega))\}\)_, let_ \(\bar{X}^{n}_{k}\doteq U^{0}_{k-m^{n}_{0}}\)_, and note that, a.e. on_ \(\mathcal{E}^{n}_{0}\)_,_ \[P(\bar{X}^{n}_{k}=y\mid\bar{\mathcal{F}}^{n,k})=\beta^{0}_{2|1}(y\mid\bar{X}^{n}_{k-1}),\,\,\,y\in\Delta^{o},\] _where_ \(\bar{\mathcal{F}}^{n,k}\) _is defined as above. Also, for_ \(k\in\{m^{n}_{0}+1,\ldots,m^{n}_{0}+(l^{n,0}\wedge\tau^{n,0}(\omega))\}\)_, define_ \(\bar{\nu}^{n,k},\bar{L}^{n,k+1}\) _by (6.12), and_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq\beta^{0}_{2|1}(y\mid\bar{X}^{n}_{k-1}(\omega)),\,\,\,y\in\Delta^{o}.\] _If_ \(\{\tau^{n,0}(\omega)\leq l^{n,0}\}\) _occurs, then let, for_ \(k\geq m^{n}_{0}+\tau^{n,0}(\omega)+1\)_,_ \(\bar{\mu}^{n,k}(\omega)\) _and_ \(\bar{X}^{n}_{k}(\omega)\) _be defined so that_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq G^{\mathcal{V}}(\bar{L}^{n,k}(\omega))(e_{\bar{X}^{n}_{k-1}(\omega)},e_{y}),\,\,\,y\in\Delta^{o},\] _and_ \[P(\bar{X}^{n}_{k}=y\mid\bar{\mathcal{F}}^{n,k})=\bar{\mu}^{n,k}(e_{y}),\,\,\,y\in\Delta^{o},\] _define_ \(\bar{\nu}^{n,k},\bar{L}^{n,k+1}\) _by (6.12), and let_ \(\mathcal{J}^{n}_{1}(\omega)=\mathbf{1}_{\{\tau^{n,0}(\omega)\leq l^{n,0}\}}\)_. Note that, by (6.9),_ \(\{\tau^{n,0}(\omega)\leq l^{n,0}\}\) _is a 'low probability' event for large_ \(n\) _and so once more we are using uncontrolled (zero cost) dynamics on this event._
* _We now recursively extend the construction. Towards this end, suppose that, for some_ \(l\in\{0,\ldots,l_{0}-1\}\)_, we have defined the quantities_ \[\left\{\bar{X}^{n}_{k}(\omega),\bar{\nu}^{n,k}(\omega),\bar{L}^{n,k}(\omega), \bar{\mu}^{n,k}(\omega),\,k\in\cup_{i=0}^{l}\left(\{m^{n}_{i}\}\cup\mathcal{I }^{n,i}\right)\right\}\] _and_ \(\{\mathcal{J}^{n}_{i},\,0\leq i\leq l+1\}\)_. Let_ \[\mathcal{E}^{n}_{l+1}\doteq\mathcal{E}^{n}_{l}\cap\{\mathcal{J}^{n}_{l+1}( \omega)=0\}=\{N_{1}(\omega)\leq r_{1}\}\cap\left(\cap_{i=0}^{l+1}\{\mathcal{J }^{n}_{i}(\omega)=0\}\right).\] (6.16) _Then on the 'high probability' set_ \(\mathcal{E}^{n}_{l+1}\) _we proceed as follows._
* _For_ \(k=m^{n}_{l+1}\)_, let_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq\beta^{l+1}_{(1)}(y),\,\,\,y\in\Delta^{o},\] _define_ \(\bar{X}^{n}_{k}\doteq U^{l+1}_{0}\)_, and note that, a.e. on_ \(\mathcal{E}^{n}_{l+1}\)_,_ \[P(\bar{X}^{n}_{k}=y\mid\bar{\mathcal{F}}^{n,k})=\beta^{l+1}_{(1)}(y),\,\,\,y \in\Delta^{o}.\] _Also, define_ \(\bar{\nu}^{n,k},\bar{L}^{n,k+1}\) _by_ \((\ref{eq:11})\)_._
* _We now consider_ \(k\in\mathcal{I}^{n,l+1}\)_. Let_ \[\tau^{n,l+1}(\omega)\doteq\inf\left\{m\geq k^{*}:\left\|\frac{1}{m+1}\sum_{i=0}^ {m}\delta_{U_{i}^{l+1}(\omega)}-\beta_{(1)}^{l+1}\right\|>\varepsilon_{0}\right\}.\] _For_ \(k\in\{m_{l+1}^{n}+1,\ldots,m_{l+1}^{n}+(l^{n,l+1}\wedge\tau^{n,l+1}(\omega))\}\)_, let_ \(\bar{X}_{k}^{n}=U_{k-m_{l+1}^{n}}^{l+1}\)_, and note that, a.e. on_ \(\mathcal{E}_{l+1}^{n}\)_,_ \[P(\bar{X}_{k}^{n}=y\mid\bar{\mathcal{F}}^{n,k})=\beta_{2|1}^{l+1}(y\mid\bar{X }_{k-1}^{n}),\;\;y\in\Delta^{o},\] _where_ \(\bar{\mathcal{F}}^{n,k}\) _is defined as above. Also, for_ \(k\in\{m_{l+1}^{n}+1,\ldots,m_{l+1}^{n}+(l^{n,l+1}\wedge\tau^{n,l+1}(\omega))\}\)_, define_ \(\bar{\nu}^{n,k},\bar{L}^{n,k+1}\) _by (_6.12_), and_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq\beta_{2|1}^{l+1}(y\mid\bar{X}_{k-1}^{n} (\omega)),\;\;y\in\Delta^{o}.\] _If_ \(\{\tau^{n,l+1}(\omega)\leq l^{n,l+1}\}\) _occurs, then let, for_ \(k\geq m_{l+1}^{n}+\tau^{n,l+1}(\omega)+1\)_,_ \(\bar{\mu}^{n,k}(\omega)\) _and_ \(\bar{X}_{k}^{n}(\omega)\) _be defined so that_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq G^{\mathcal{V}}(\bar{L}^{n,k}(\omega))(e _{\bar{X}_{k-1}^{n}(\omega)},e_{y}),\;\;y\in\Delta^{o},\] _and_ \[P(\bar{X}_{k}^{n}=y\mid\bar{\mathcal{F}}^{n,k})=\bar{\mu}^{n,k}(e_{y}),\;\;y \in\Delta^{o},\] _define_ \(\bar{\nu}^{n,k},\bar{L}^{n,k+1}\) _by (_6.12_), and let_ \(\mathcal{J}_{l+2}^{n}(\omega)\doteq\mathbf{1}_{\{\tau^{n,l+1}(\omega)\leq l^{ n,l+1}\}}\)_._
* _On the event_ \(\{l^{n,l_{0}}<\tau^{n,l_{0}}(\omega)\}\) _and for_ \(k\geq m_{l_{0}}^{n}+\tau^{n,l_{0}}(\omega)+1\)_, define_ \(\bar{\mu}^{n,k}(\omega)\) _and_ \(\bar{X}_{k}^{n}(\omega)\) _by_ \[\bar{\mu}^{n,k}(\omega)(e_{y})\doteq G^{\mathcal{V}}(\bar{L}^{n,k}(\omega))(e _{\bar{X}_{k-1}^{n}(\omega)},e_{y}),\;\;y\in\Delta^{o},\] _and_ \[P(\bar{X}_{k}^{n}=y\mid\bar{\mathcal{F}}^{n,k})=\bar{\mu}^{n,k}(e_{y}),\;\;y \in\Delta^{o},\] _and define_ \(\bar{\nu}^{n,k},\bar{L}^{n,k+1}\) _by (_6.12_)_
### 6.2 Convergence of Controlled Processes
Let, for \(n\in\mathbb{N}\), \(\{\bar{L}^{n,k+1},\bar{\mu}^{n,k+1},\bar{\nu}^{n,k},\;k\in\mathbb{N}_{0}\}\) be defined as in Construction 6.1. Using these, for \(n\in\mathbb{N}\) and \(t\in[0,T]\), define \(\bar{L}^{n}(t)\) as in (3.6), \(\{\bar{\Lambda}^{n},\bar{\xi}^{n},\bar{\zeta}^{n}\}\) as in (3.7) - (3.9), (3.12), and \(\{\gamma^{n},\beta^{n},\theta^{n}\}\) as in (3.16). Also, define \(\bar{\mathbf{L}}^{n}\) and \(\bar{\Lambda}^{n}\) as in (3.20) and (3.21), and for \(t\in[0,T]\), define \(\sigma_{n,t}\doteq t_{n}-T+t\),
\[\hat{M}^{n}(t)\doteq\bar{L}^{n}(\sigma_{n,t})=\bar{L}^{n}(t_{n}-T+t). \tag{6.17}\]
For notational convenience, write \(\mathcal{E}^{n}\doteq\mathcal{E}_{l_{0}+1}^{n}\), where \(\mathcal{E}_{l_{0}+1}^{n}\) is defined as in (6.16). Also, note that, for each \(n\in\mathbb{N}\),
\[B^{n}\doteq\cup_{l=1}^{l_{0}+1}\{\mathcal{J}_{l}^{n}=1\}\subseteq\left\{\max_ {j=0,\ldots,l_{0}}\;\sup_{m\geq k^{*}}\left\|\frac{1}{m+1}\sum_{i=0}^{m}\delta_ {U_{i}^{j}}-\beta_{(1)}^{j}\right\|>\varepsilon_{0}\right\},\]
and thus, from (6.9), for all \(n\in\mathbb{N}\),
\[P(B^{n})\leq\varepsilon_{1}. \tag{6.18}\]
The following lemma gives an estimate on the distance between \(\hat{M}^{n}(0)\) and \(q\).
**Lemma 6.2**.: _For all \(n\geq n_{0}\),_
\[P(\|\hat{M}^{n}(0)-q\|\geq 3\varepsilon_{0})\leq 3\varepsilon_{1}.\]
Proof.: Fix \(n\geq n_{0}\) and note that
\[\hat{M}^{n}(0)=\bar{L}^{n}(t_{n}-T)=\bar{L}^{n}(a(t_{n}-T))+\mathcal{R}^{n},\]
where, from (4.21) and our choice of \(n_{0}\) above (6.11),
\[\|\mathcal{R}^{n}\|=\|\bar{L}^{n}(t_{n}-T)-\bar{L}^{n}(a(t_{n}-T))\|\leq 2(m(t_{n }-T)+2)^{-1}=2(m_{0}^{n}+2)^{-1}\leq\varepsilon_{0}.\]
Thus, on recalling the definition of \(\{a(s),\ s\in\mathbb{R}_{+}\}\) from (3.11), we have
\[P(\|\hat{M}^{n}(0)-q\|>3\varepsilon_{0})\leq P(\|\bar{L}^{n}(a(t_{n}-T))-q\|>2 \varepsilon_{0})=P(\|\bar{L}^{n,m_{0}^{n}}-q\|>2\varepsilon_{0}). \tag{6.19}\]
Also,
\[P(\|\bar{L}^{n,m_{0}^{n}}-q\|>2\varepsilon_{0})\leq P\left((\mathcal{E}^{n})^ {c}\right)+P(\mathcal{E}^{n},\|\bar{L}^{n,m_{0}^{n}}-q\|>2\varepsilon_{0}), \tag{6.20}\]
and
\[\{\omega:\mathcal{J}_{0}^{n}(\omega)=1\}\subseteq\left\{\omega:\sup_{k_{2}(\omega)\leq k\leq m_{0}^{n}-1}\|\bar{L}^{n,k+1}(\omega)-q\|>2\varepsilon_{0}\right\}. \tag{6.21}\]
Also, for all \(k\geq k_{2}(\omega)\), we have
\[\bar{L}^{n,k+1}(\omega)=\frac{N_{1}(\omega)+1}{k+1}\bar{L}^{n,N_{1}(\omega)+1 }(\omega)+\frac{k-N_{1}(\omega)}{k+1}\frac{1}{k-N_{1}(\omega)}\sum_{j=N_{1}( \omega)+1}^{k}\delta_{\tilde{X}_{j}^{n}(\omega)}.\]
From this, using the fact that, on \(\{N_{1}\leq r_{1}\}\), from the definitions of \(k_{1}\) and \(k_{2}(\omega)\),
\[\frac{2(N_{1}(\omega)+1)}{k+1}\leq\frac{2(r_{1}+1)}{N_{1}+k_{0}+\lfloor 4(r_{1}+1)/\varepsilon_{0}\rfloor+1}\leq\frac{\varepsilon_{0}}{2},\ \text{for all}\ k\geq k_{2}(\omega),\]
Figure 6.1: An algorithmic representation of Construction 6.1 of the controls. From the root of the tree to the bottom: red edges (resp. blue, green) correspond to ‘low probability’ (resp. ‘high probability’) events and lead to uncontrolled dynamics based on \(G\) (resp. the controls based on the kernel \(Q\), the kernel \(\beta_{2|1}\)).
we have, on \(\mathcal{D}_{0}^{n}\) (recall the definition of \(\mathcal{D}_{0}^{n}\) from (6.14)) that, for all \(k\in\{k_{2}(\omega),k_{2}(\omega)+1,\ldots,k_{3}(\omega)\}\),
\[\begin{split}\|\bar{L}^{n,k+1}(\omega)-q\|&\leq\frac{\varepsilon_{0}}{2}+\frac{k-N_{1}(\omega)}{k+1}\left\|\frac{1}{k-N_{1}(\omega)}\sum_{j=N_{1}(\omega)+1}^{k}\delta_{\bar{X}_{j}^{n}(\omega)}-q\right\|\\ &\leq\frac{\varepsilon_{0}}{2}+\max_{y\in\Delta^{o}}\left\|\frac{1}{k-N_{1}(\omega)}\sum_{j=N_{1}(\omega)+1}^{k}\delta_{\bar{Y}_{j-N_{1}(\omega)}^{y}(\omega)}-q\right\|\\ &\leq\frac{\varepsilon_{0}}{2}+\max_{y\in\Delta^{o}}\left\|\frac{1}{k-N_{1}(\omega)+1}\sum_{j=0}^{k-N_{1}(\omega)}\delta_{\bar{Y}_{j}^{y}(\omega)}-q\right\|+\frac{2}{k-N_{1}(\omega)+1}\\ &\leq\varepsilon_{0}+\max_{y\in\Delta^{o}}\left\|\frac{1}{k-N_{1}(\omega)+1}\sum_{j=0}^{k-N_{1}(\omega)}\delta_{\bar{Y}_{j}^{y}(\omega)}-q\right\|,\end{split} \tag{6.22}\]
where the last inequality follows on recalling (6.6) and noting that \(k_{2}(\omega)-N_{1}(\omega)>k_{0}\) on \(\{N_{1}\leq r_{1}\}\). Once more using the fact that \(k_{2}(\omega)-N_{1}(\omega)>k_{0}\) on \(\{N_{1}\leq r_{1}\}\), we have, due to (6.6), (6.21), and (6.22), that
\[\begin{split}P(\mathcal{D}_{0}^{n})&\leq P\left(\left\{\omega:N_{1}(\omega)\leq r_{1},\ \sup_{k_{2}(\omega)\leq k\leq m_{0}^{n}-1}\|\bar{L}^{n,k+1}(\omega)-q\|>2\varepsilon_{0}\right\}\right)\\ &\leq P\left(\left\{\omega:N_{1}(\omega)\leq r_{1},\ \max_{y\in\Delta^{o}}\sup_{k\geq k_{2}(\omega)}\left\|\frac{1}{k-N_{1}(\omega)+1}\sum_{j=0}^{k-N_{1}(\omega)}\delta_{\bar{Y}_{j}^{y}(\omega)}-q\right\|>\varepsilon_{0}\right\}\right)\\ &\leq P(A_{\varepsilon_{0}})\leq\varepsilon_{1},\end{split} \tag{6.23}\]
where the next to last inequality uses \(k_{2}(\omega)-N_{1}(\omega)=k_{1}>k_{0}\). This, together with (6.4) and (6.18), shows that
\[P\left((\mathcal{E}^{n})^{c}\right)\leq P(N_{1}>r_{1})+P(\mathcal{D}_{0}^{n})+ P\left(\cup_{i=1}^{l_{0}+1}\{\mathcal{J}_{i}^{n}=1\}\right)\leq\varepsilon_{1}+ \varepsilon_{1}+\varepsilon_{1}=3\varepsilon_{1}.\]
Also, on \(\mathcal{E}^{n}\), we have \(\tau^{n}>m_{0}^{n}-1\), and therefore
\[P(\mathcal{E}^{n},\,\|\bar{L}^{n,m_{0}^{n}}-q\|>2\varepsilon_{0})=0.\]
The result follows on using the above estimate together with (6.19), (6.20), and (6.23).
For \(n\in\mathbb{N}\) and \(t\in[0,c]\), let
\[p_{0}^{n}\doteq m(t_{n}-T+\varepsilon_{0})-m_{0}^{n},\,\,\,p_{0}^{n}(t) \doteq m(t_{n}-T+t)-m_{0}^{n},\]
and note that \(p_{0}^{n}(\varepsilon_{0})=p_{0}^{n}\). Recall the definitions of \(k^{*}\) and \(n_{0}\) from above (6.9) and (6.11), respectively, and let \(n_{1}\geq n_{0}\) be such that, for each \(n\geq n_{1}\) and all \(i=0,\ldots,l_{0}\),
\[p_{0}^{n}>k^{*},\,\,\,\frac{2\max\{k^{*},2c,1\}}{m_{0}^{n}}\leq\frac{\varepsilon_{0}}{2},\,\,\,\left|\frac{m(t_{n}-T+(i+1)c)+1}{m(t_{n}-T+ic+\varepsilon_{0})}-e^{c-\varepsilon_{0}}\right|<\varepsilon_{0}. \tag{6.24}\]
Note that the last estimate in the previous display is possible due to Lemma 4.2 and that it implies that
\[\frac{m(t_{n}-T+(i+1)c)+1}{m(t_{n}-T+ic+\varepsilon_{0})}\leq e^{c-\varepsilon _{0}}+\varepsilon_{0}\leq 2+c,\,\,\,i=0,\ldots,l_{0}, \tag{6.25}\]
since \(c-\varepsilon_{0}\in(0,1)\). Also, note that the first inequality in (6.24) implies that
\[p_{i}^{n}=m(t_{n}-T+ic+\varepsilon_{0})-m_{i}^{n}\geq k^{*},\ \ i=0,\ldots,l_{0},\]
which is used in Lemma 6.4. Further, recall from (6.1) that \(d_{1}\doteq e^{c}(12+c)\) and \(d_{2}=6\).
The following lemma shows that the controlled state process has the correct asymptotic behavior over the time interval \([0,c]\).
**Lemma 6.3**.: _For \(n\geq n_{1}\),_
\[P\left(\sup_{t\in[0,c]}\|\hat{M}^{n}(t)-\hat{M}(t)\|\geq d_{1}\varepsilon_{0} \right)\leq d_{2}\varepsilon_{1}.\]
Proof.: Fix \(n\geq n_{1}\). Using (3.13) and (6.17), note that for each \(t\in[0,c]\),
\[\begin{split}\hat{M}^{n}(t)&=\hat{M}^{n}(0)+\int_{t_{n}-T}^{t_{n}-T+t}\sum_{v\in\mathcal{V}^{d}}(v-\bar{L}^{n}(a(s)))\bar{\Lambda}^{n}(v\mid s)ds\\ &=q+t\beta^{0}_{(1)}-\int_{0}^{t}\hat{M}^{n}(s)ds+\bar{\mathcal{R}}^{n}_{1}+\bar{\mathcal{R}}^{n}_{2}(t)+\bar{\mathcal{R}}^{n}_{3}(t),\end{split} \tag{6.26}\]
where
\[\bar{\mathcal{R}}^{n}_{1}\doteq\hat{M}^{n}(0)-q,\ \ \bar{\mathcal{R}}^{n}_{2}(t) \doteq-\int_{t_{n}-T}^{t_{n}-T+t}\left(\bar{L}^{n}(a(s))-\bar{L}^{n}(s)\right)ds,\]
and
\[\bar{\mathcal{R}}^{n}_{3}(t)\doteq\int_{t_{n}-T}^{t_{n}-T+t}\left(\sum_{v\in\mathcal{V}^{d}}v\bar{\Lambda}^{n}(v\mid s)-\beta^{0}_{(1)}\right)ds.\]
We begin by considering \(\bar{\mathcal{R}}^{n}_{1}\) and \(\bar{\mathcal{R}}^{n}_{2}\). From Lemma 6.2 we see that
\[P(\|\bar{\mathcal{R}}^{n}_{1}\|\geq 3\varepsilon_{0})\leq 3\varepsilon_{1}, \tag{6.27}\]
while (4.21) and (6.24) ensure that
\[\sup_{t\in[0,c]}\|\bar{\mathcal{R}}^{n}_{2}(t)\|\leq\frac{2c}{m(t_{n}-T)+2} \leq\varepsilon_{0}. \tag{6.28}\]
We now consider \(\bar{\mathcal{R}}^{n}_{3}\). First, observe that
\[\sup_{t\in[0,\varepsilon_{0}]}\|\bar{\mathcal{R}}^{n}_{3}(t)\|\leq 2\varepsilon _{0}, \tag{6.29}\]
which ensures that, for \(t\in(\varepsilon_{0},c]\),
\[\|\bar{\mathcal{R}}^{n}_{3}(t)\|\leq 2\varepsilon_{0}+\|\bar{\mathcal{R}}^{n}_{ 4}\|+\|\bar{\mathcal{R}}^{n}_{5}(t)\|+\|\bar{\mathcal{R}}^{n}_{6}(t)\|, \tag{6.30}\]
where, for \(t\in(\varepsilon_{0},c]\),
\[\bar{\mathcal{R}}^{n}_{4}\doteq\int_{t_{n}-T+\varepsilon_{0}}^{t_{m(t_{n}-T+\varepsilon_{0})+1}}\left(\sum_{v\in\mathcal{V}^{d}}v\bar{\Lambda}^{n}(v\mid s)-\beta^{0}_{(1)}\right)ds,\ \ \bar{\mathcal{R}}^{n}_{5}(t)\doteq\int_{t_{m(t_{n}-T+\varepsilon_{0})+1}}^{t_{m(t_{n}-T+t)}}\left(\sum_{v\in\mathcal{V}^{d}}v\bar{\Lambda}^{n}(v\mid s)-\beta^{0}_{(1)}\right)ds,\]
and
\[\bar{\mathcal{R}}^{n}_{6}(t)\doteq\int_{t_{m(t_{n}-T+t)}}^{t_{n}-T+t}\left(\sum_{v\in\mathcal{V}^{d}}v\bar{\Lambda}^{n}(v\mid s)-\beta^{0}_{(1)}\right)ds.\]
Using (6.24) we see that
\[\|\bar{\mathcal{R}}_{4}^{n}\|\leq\varepsilon_{0},\ \ \sup_{t\in(\varepsilon_{0},c]}\| \bar{\mathcal{R}}_{6}^{n}(t)\|\leq\varepsilon_{0}. \tag{6.31}\]
Observe that, for \(t\in(\varepsilon_{0},c]\),
\[\bar{\mathcal{R}}_{5}^{n}(t)=\sum_{k=m_{0}^{n}+p_{0}^{n}+2}^{m_{0}^{n}+p_{0}^{n}(t)}\frac{1}{k+1}(\bar{v}^{n,k}-\beta_{(1)}^{0}). \tag{6.32}\]
Define
\[V^{n,r}\doteq\frac{1}{m_{0}^{n}+p_{0}^{n}+r+2}\sum_{k=m_{0}^{n}}^{m_{0}^{n}+p_ {0}^{n}+r+1}\left(\bar{v}^{n,k}-\beta_{(1)}^{0}\right),\ \ r\in\mathbb{N}_{0}.\]
It is easy to verify using an induction argument that
\[V^{n,r}=V^{n,0}+\sum_{k=1}^{r}\frac{1}{m_{0}^{n}+p_{0}^{n}+k+2}\left(\left( \bar{v}^{n,m_{0}^{n}+p_{0}^{n}+k+1}-\beta_{(1)}^{0}\right)-V^{n,k-1}\right). \tag{6.33}\]
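The identity (6.33) is the usual running-average recursion; the following quick numerical cross-check, with random one-hot vectors standing in for the \(\bar{v}^{n,k}\) and purely for illustration, confirms it.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m0, p0, R = 3, 50, 20, 15
beta0 = np.array([0.2, 0.3, 0.5])
# illustrative one-hot vectors standing in for nu-bar^{n,k}, k = 0, ..., m0 + p0 + R + 1
nu = np.eye(d)[rng.integers(0, d, size=m0 + p0 + R + 2)]

def V_direct(r):
    # V^{n,r} computed directly from its definition
    num = (nu[m0:m0 + p0 + r + 2] - beta0).sum(axis=0)
    return num / (m0 + p0 + r + 2)

# the recursion (6.33)
V = V_direct(0)
for k in range(1, R + 1):
    V = V + ((nu[m0 + p0 + k + 1] - beta0) - V) / (m0 + p0 + k + 2)
print("recursion matches direct average:", np.allclose(V, V_direct(R)))
```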
From (6.32) and (6.33), we have that, for each \(t\in(\varepsilon_{0},c]\),
\[\begin{split}\bar{\mathcal{R}}_{5}^{n}(t)&=\sum_{k= 1}^{p_{0}^{n}(t)-p_{0}^{n}-1}\frac{1}{k+m_{0}^{n}+p_{0}^{n}+2}\left(\bar{v}^{n,k+m_{0}^{n}+p_{0}^{n}+1}-\beta_{(1)}^{0}\right)\\ &=V^{n,p_{0}^{n}(t)-p_{0}^{n}-1}-V^{n,0}+\sum_{k=1}^{p_{0}^{n}(t) -p_{0}^{n}-1}\frac{1}{k+m_{0}^{n}+p_{0}^{n}+2}V^{n,k-1}.\end{split} \tag{6.34}\]
Using (6.9) and (6.24), note that, for \(t\in(\varepsilon_{0},c]\) and \(r\in\{0,\ldots,p_{0}^{n}(t)-p_{0}^{n}-1\}\), on \((B^{n})^{c}\cap\mathcal{E}_{0}^{n}\),
\[\begin{split}\|V^{n,r}\|&=\frac{p_{0}^{n}+r+2}{m_{0}^{n}+p_{0}^{n}+r+2}\left\|\frac{1}{p_{0}^{n}+r+2}\sum_{k=m_{0}^{n}}^{m_{0}^{n}+p_{0}^{n}+r+1}\bar{v}^{n,k}-\beta_{(1)}^{0}\right\|\\ &\leq\left\|\frac{1}{p_{0}^{n}+r+2}\sum_{k=m_{0}^{n}}^{m_{0}^{n}+p_{0}^{n}+r+1}\bar{v}^{n,k}-\beta_{(1)}^{0}\right\|\\ &=\left\|\frac{1}{p_{0}^{n}+r+2}\sum_{k=0}^{p_{0}^{n}+r+1}\delta_{U_{k}^{0}}-\beta_{(1)}^{0}\right\|\leq\varepsilon_{0}.\end{split} \tag{6.35}\]
From (6.25), (6.34), and (6.35) we see that for each \(t\in(\varepsilon_{0},c]\), on \((B^{n})^{c}\cap\mathcal{E}_{0}^{n}\),
\[\|\bar{\mathcal{R}}_{5}^{n}(t)\|\leq\varepsilon_{0}+\varepsilon_{0}+\varepsilon _{0}\left(\sum_{k=1}^{p_{0}^{n}(t)-p_{0}^{n}+1}\frac{1}{k+m_{0}^{n}+p_{0}^{n}+ 2}\right)\leq\varepsilon_{0}\left(2+\frac{m(t_{n}-T+c)+1}{m(t_{n}-T+\varepsilon _{0})}\right)\leq(4+c)\varepsilon_{0}. \tag{6.36}\]
Combining (6.26), (6.28), (6.29), (6.30), (6.31), and applying Gronwall's lemma we see that
\[\sup_{t\in[0,c]}\|\hat{M}^{n}(t)-\hat{M}(t)\|\leq e^{c}\left(5\varepsilon_{0} +\|\bar{\mathcal{R}}_{1}^{n}\|+\sup_{t\in(\varepsilon_{0},c]}\|\bar{\mathcal{R }}_{5}^{n}(t)\|\right).\]
From (6.4), (6.18), (6.23), (6.27), (6.36), the last estimate, and the result stated in Lemma 6.2, it follows that
\[P\left(\sup_{t\in[0,c]}\|\hat{M}^{n}(t)-\hat{M}(t)\|\geq d_{1} \varepsilon_{0}\right)\] \[\quad\leq P(\|\mathcal{R}_{1}^{n}\|\geq 3\varepsilon_{0})+P\left( \sup_{t\in(\varepsilon_{0},c]}\|\bar{\mathcal{R}}_{5}^{n}(t)\|>(4+c)\varepsilon _{0}\right)\] \[\quad\leq 3\varepsilon_{1}+P(N_{1}>r_{1})+P(\mathcal{D}_{0}^{n})+ P(B^{n})+P\left((B^{n})^{c},\mathcal{E}_{0}^{n},\sup_{t\in(\varepsilon_{0},c]}\| \bar{\mathcal{R}}_{5}^{n}(t)\|>(4+c)\varepsilon_{0}\right)\] \[\quad\leq 3\varepsilon_{1}+\varepsilon_{1}+\varepsilon_{1}+ \varepsilon_{1}+0=6\varepsilon_{1}.\]
The result follows.
We now give a recursion estimate that will allow us to replace \(\sup_{0\leq t\leq c}\) in Lemma 6.3 with \(\sup_{0\leq t\leq T}\). Recall \(n_{1}\) introduced above (6.24).
**Lemma 6.4**.: _Fix \(n\geq n_{1}\). Suppose that for some \(1\leq i\leq l_{0}\) and \(a_{1},a_{2}>0\),_
\[P\left(\sup_{0\leq t\leq ic}\|\hat{M}^{n}(t)-\hat{M}(t)\|\geq a_{1}e^{c} \varepsilon_{0}\right)\leq a_{2}\varepsilon_{1}. \tag{6.37}\]
_Then_
\[P\left(\sup_{0\leq t\leq(i+1)c\wedge T}\|\hat{M}^{n}(t)-\hat{M}(t)\|\geq(a_{1} +b_{1})e^{c}\varepsilon_{0}\right)\leq(2a_{2}+3)\varepsilon_{1}\]
_where \(b_{1}=4+c\) is as in (6.1)._
Proof.: We will only consider the case where \(i<l_{0}\). The case \(i=l_{0}\) is treated similarly. Note that, for \(t\in[ic,(i+1)c]\),
\[\hat{M}^{n}(t) =\hat{M}^{n}(ic)+\int_{t_{n}-T+ic}^{t_{n}-T+t}\sum_{v\in\mathcal{ V}^{d}}(v-\bar{L}^{n}(a(s)))\bar{\Lambda}^{n}(v\mid s)ds\] \[=\hat{M}(ic)+(t-ic)\beta_{(1)}^{i}-\int_{ic}^{t}\hat{M}^{n}(s)ds+ \bar{\mathcal{R}}_{1}^{n}+\bar{\mathcal{R}}_{2}^{n}(t)+\bar{\mathcal{R}}_{3}^ {n}(t), \tag{6.38}\]
where
\[\bar{\mathcal{R}}_{1}^{n}\doteq\hat{M}^{n}(ic)-\hat{M}(ic),\ \ \bar{\mathcal{R}}_{2}^{n}(t)\doteq-\int_{t_{n}-T+ic}^{t_{n}-T+t}\left(\bar{L}^{ n}(a(s))-\bar{L}^{n}(s)\right)ds,\]
and
\[\bar{\mathcal{R}}_{3}^{n}(t)\doteq\int_{t_{n}-T+ic}^{t_{n}-T+t}\left(\sum_{v \in\mathcal{V}^{d}}v\bar{\Lambda}^{n}(v\mid s)-\beta_{(1)}^{i}\right)ds.\]
We begin by considering \(\bar{\mathcal{R}}_{1}^{n}\) and \(\bar{\mathcal{R}}_{2}^{n}\). From (4.21), (6.24), and the assumption stated in (6.37) we see that
\[P(\|\bar{\mathcal{R}}_{1}^{n}\|\geq a_{1}e^{c}\varepsilon_{0})\leq a_{2} \varepsilon_{1},\ \sup_{t\in[0,c]}\|\bar{\mathcal{R}}_{2}^{n}(t)\|\leq\frac{2c}{m(t_{n}-T)+2} \leq\varepsilon_{0}. \tag{6.39}\]
We now consider \(\bar{\mathcal{R}}_{3}^{n}\). As in the proof of Lemma 6.3, we can write, for \(t\in(\varepsilon_{0},c]\),
\[\|\bar{\mathcal{R}}_{3}^{n}(t)\|\leq 2\varepsilon_{0}+\|\bar{\mathcal{R}}_{4}^{n} \|+\|\bar{\mathcal{R}}_{5}^{n}(t)\|+\|\bar{\mathcal{R}}_{6}^{n}(t)\|, \tag{6.40}\]
where,
\[\|\bar{\mathcal{R}}_{4}^{n}\|\leq\varepsilon_{0},\ \ \sup_{t\in(\varepsilon_{0},c]}\| \bar{\mathcal{R}}_{6}^{n}(t)\|\leq\varepsilon_{0},\ \ \ \text{and, on }(B^{n})^{c}\cap\mathcal{E}_{0}^{n},\ \sup_{t\in( \varepsilon_{0},c]}\|\bar{\mathcal{R}}_{5}^{n}(t)\|\leq(4+c)\varepsilon_{0}. \tag{6.41}\]
Combining (6.38), (6.39), (6.40), (6.41), and applying Gronwall's lemma we see that
\[\sup_{t\in[0,c]}\|\hat{M}^{n}(ic+t)-\hat{M}(ic+t)\|\leq e^{c}\left(5\varepsilon _{0}+\|\bar{\mathcal{R}}_{1}^{n}\|+\sup_{t\in(\varepsilon_{0},c]}\|\bar{ \mathcal{R}}_{5}^{n}(t)\|\right).\]
From (6.39), the last estimate, and the assumption stated in (6.37), it follows that
\[P\left(\sup_{t\in[0,(i+1)c]}\|\hat{M}^{n}(t)-\hat{M}(t)\|\geq(a _{1}+b_{1})e^{c}\varepsilon_{0}\right)\] \[\quad\leq P\left(\sup_{t\in[0,ic]}\|\hat{M}^{n}(t)-\hat{M}(t)\| \geq(a_{1}+b_{1})e^{c}\varepsilon_{0}\right)+P\left(\sup_{t\in[ic,(i+1)c]}\| \hat{M}^{n}(t)-\hat{M}(t)\|\geq(a_{1}+b_{1})e^{c}\varepsilon_{0}\right)\] \[\quad\leq a_{2}\varepsilon_{1}+P(\|\mathcal{R}_{1}^{n}\|\geq a_{1 }e^{c}\varepsilon_{0})+P\left(\sup_{t\in(\varepsilon_{0},c]}\|\bar{\mathcal{R }}_{5}^{n}(t)\|>(4+c)\varepsilon_{0}\right)\] \[\quad\leq 2a_{2}\varepsilon_{1}+P(N_{1}>r_{1})+P(\mathcal{D}_{0} ^{n})+P(B^{n})+P\left((B^{n})^{c},\mathcal{E}_{0}^{n},\sup_{t\in(\varepsilon_ {0},c]}\|\bar{\mathcal{R}}_{5}^{n}(t)\|>(4+c)\varepsilon_{0}\right)\] \[\quad\leq 2a_{2}\varepsilon_{1}+\varepsilon_{1}+\varepsilon_{1}+ \varepsilon_{1}+0=(2a_{2}+3)\varepsilon_{1}.\]
The result follows.
As an immediate consequence of the previous two lemmas we have the following corollary. Recall the constants \(d_{3},d_{4}\) defined in (6.1).
**Corollary 6.5**.: _For all \(n\geq n_{1}\)_
\[P\left(\sup_{0\leq t\leq T}\|\hat{M}^{n}(t)-\hat{M}(t)\|\geq d_{3}\varepsilon_ {0}\right)\leq d_{4}\varepsilon_{1}.\]
### Convergence of Costs of Controls
Recall the constants \(A_{1},B_{1},C_{1}\) defined in (6.1), and recall \(\hat{\eta}\), \(\hat{M}\) introduced at the start of Section 6.1.
The following lemma estimates the cost of the constructed controls.
**Lemma 6.6**.: _Let the collection \(\{\bar{\nu}^{n,k},\bar{\mu}^{n,k+1},\bar{L}^{n,k+1},\ k\in\mathbb{N}_{0},n\geq n _{1}\}\) be given by Construction 6.1. Then,_
\[\limsup_{n\to\infty}E\left(n^{-1}\sum_{k=0}^{n-1}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\right)\\ \leq e^{-T}\int_{0}^{T}e^{u}R\left(\hat{\eta}(\cdot\mid u)\big{\|}\hat{\eta}_{(1)}(\cdot\mid u)\otimes G(\hat{M}(u))(\cdot,\cdot)\right)du+C_{1}\varepsilon_{1}+\varepsilon+(l_{0}+1)(A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1}).\]
Proof.: For notational simplicity, denote
\[R^{n,k}\doteq R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta _{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k}, \cdot)\right),\ \ k\in\mathbb{N}_{0},n\geq n_{1},\]
and fix \(n\geq n_{1}\), where \(n_{1}\) is as introduced above (6.24). We begin with the following observations.
* By construction, \[\frac{1}{n}\sum_{k=0}^{N_{1}-1}R^{n,k}=0,\ \ \mathbf{1}_{\{r_{1}<N_{1}\}}\frac{1}{n} \sum_{k=0}^{n-1}R^{n,k}=0,\ \ n\in\mathbb{N}.\]
* On \(\{N_{1}\leq r_{1}\}\), \(\min_{x\in\Delta^{o}}\bar{L}^{n,N_{1}}(x)>a^{*}\), which says that, with \(a_{1}^{*}\doteq\frac{a^{*}}{k_{1}+2}\), we have \[\min_{x\in\Delta^{o}}\inf_{N_{1}\leq k\leq k_{2}}\bar{L}^{n,k+1}(x)\geq a_{1}^ {*}.\] This in turn, from Assumption 2.2(2b) implies that \[\min_{(\bar{v}^{n,k},y)\in A_{+}}\inf_{N_{1}\leq k\leq k_{2}}G^{\mathcal{V}}( \bar{L}^{n,k+1})(\bar{\nu}^{n,k},e_{y})\geq a_{1}^{*}\delta_{0}^{A},\] (6.42) Together, (6.5) and (6.42) imply that \[\frac{1}{n}\sum_{k=N_{1}}^{k_{2}-1}R^{n,k}=\frac{1}{n}\sum_{k=N_{1}}^{k_{2}-1 }R\left(\delta_{\bar{\nu}^{n,k}}\otimes Q^{\mathcal{V}}(\bar{\nu}^{n,k},\cdot) \|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{ n,k},\cdot)\right)\leq\left|\log\left(a_{1}^{*}\delta_{0}^{A}\right)\right| \frac{k_{1}}{n}.\] (6.43)
* Recall from (6.14) that \(\mathcal{D}_{0}^{n}=\{N_{1}\leq r_{1}\}\cap\{\mathcal{J}_{0}^{n}=1\}\), and note that, as in (6.43), if \(\mathcal{D}_{0}^{n}\) occurs, then from Construction 6.1 (iii) \[\frac{1}{n}\sum_{k=0}^{n-1}R^{n,k}=\frac{1}{n}\sum_{k=N_{1}}^{k_{3}-1}R^{n,k}= \frac{1}{n}\sum_{k=N_{1}}^{k_{2}-1}R^{n,k}+\frac{1}{n}\sum_{k=k_{2}}^{k_{3}-1 }R^{n,k}\leq\left|\log\left(a_{1}^{*}\delta_{0}^{A}\right)\right|\frac{k_{1}} {n}+\frac{1}{n}\sum_{k=k_{2}}^{k_{3}-1}R^{n,k}.\] (6.44) Additionally, from (5.56), \(\min_{x\in\Delta^{o}}q_{x}>\delta/4\), and, from (6.2), \(\varepsilon_{0}<\delta/16\), so, on recalling the definition of \(k_{3}\), we see that \[\min_{x\in\Delta^{o}}\inf_{k_{2}\leq k\leq k_{3}-1}\bar{L}^{n,k+1}(x)\geq \delta/8.\] Thus, for each \(k\in\{k_{2},\ldots,k_{3}-1\}\), with \(\delta_{1}\) defined in (6.1), \[R^{n,k}=R\left(\delta_{\bar{\nu}^{n,k}}\otimes Q^{\mathcal{V}}(\bar{\nu}^{n,k},\cdot)\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar {\nu}^{n,k},\cdot)\right)\leq|\log\delta_{1}|.\] (6.45) From (6.44) and (6.45) we obtain \[\mathbf{1}_{\mathcal{D}_{0}^{n}}\frac{1}{n}\sum_{k=0}^{n-1}R^{n,k}\leq\left| \log\left(a_{1}^{*}\delta_{0}^{A}\right)\right|\frac{k_{1}}{n}+|\log\delta_{1} |\frac{m_{0}^{n}}{n}.\] (6.46) Using Lemma 4.2 along with (6.23) and (6.46) we see that \[\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{D}_{0}^{n}}\frac{1}{n}\sum_{k=0 }^{n-1}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{ \nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right) \right)\leq|\log\delta_{1}|\varepsilon_{1}.\] (6.47)
* On \(\mathcal{E}_{0}^{n}\), a calculation similar to the one above shows that \[\frac{1}{n}\sum_{k=0}^{m_{0}^{n}-1}R^{n,k} =\frac{1}{n}\sum_{k=0}^{N_{1}-1}R^{n,k}+\frac{1}{n}\sum_{k=N_{1} }^{k_{2}-1}R^{n,k}+\frac{1}{n}\sum_{k=k_{2}}^{m_{0}^{n}-1}R^{n,k}\] \[\leq\left|\log\left(a_{1}^{*}\delta_{0}^{A}\right)\right|\frac{k_ {1}}{n}+|\log\delta_{1}|\frac{m_{0}^{n}}{n}.\] (6.48)
* Next, on \(\mathcal{D}_{1}^{n}\doteq\mathcal{E}_{0}^{n}\cap\{\mathcal{J}_{1}^{n}=1\}\), since \(\min_{x\in\Delta^{\alpha}}\bar{L}^{n,m_{0}^{n}-1}(x)>\delta/8\), we have \[\inf_{x\in\Delta^{\alpha}}\inf_{0\leq k\leq k^{*}}\bar{L}^{n,m_{0}^{n}+k}(x) \geq\frac{\delta}{8(k^{*}+2)}.\] Thus, using (6.48) we see that, since \(\tau^{n,0}\geq k^{*}\) and \(\mathbf{1}_{\mathcal{D}_{1}^{n}}R^{n,k}=0\) for all \(k\in\{m_{0}^{n}+\tau^{n,0},\ldots,n-1\}\), we have, on \(\mathcal{D}_{1}^{n}\), that \[\frac{1}{n}\sum_{k=0}^{n-1}R^{n,k} =\frac{1}{n}\sum_{k=0}^{m_{0}^{n}-1}R^{n,k}+\frac{1}{n}\sum_{k=m _{0}^{n}}^{m_{0}^{n}+k^{*}-1}R^{n,k}+\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{n-1} R^{n,k}+\frac{1}{n}\sum_{k=m_{0}^{n}+\tau^{n,0}}^{n-1}R^{n,k}\] \[\leq\left|\log\left(a_{1}^{*}\delta_{0}^{A}\right)\right|\frac{ k_{1}}{n}+|\log\delta_{1}|\frac{m_{0}^{n}}{n}+\left|\log\left(\frac{\delta_{1}}{ k^{*}+2}\right)\right|\frac{k^{*}}{n}+\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n} +\tau^{n,0}-1}R^{n,k}.\] Now, recall from (6.8) that \(\inf_{x\in\Delta^{\alpha}}\beta_{(1)}^{0}(x)\geq\delta\), and from (5.56) that \(\inf_{x\in\Delta^{\alpha}}q(x)\geq\delta/4\), which, from the definition of \(\tau^{n,0}\) in (6.15) and the fact that \(\varepsilon_{0}\leq\delta/16\), says that \[\inf_{x\in\Delta^{\alpha}}\inf_{m_{0}^{n}+k^{*}\leq k\leq m_{0}^{n}+\tau^{n,0 }-1}\bar{L}^{n,k+1}(x)\geq\delta/8.\] (6.49) It then follows that \[\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+\tau^{n,0}-1}R^{n,k}\leq|\log \delta_{1}|,\quad R^{n,m_{0}^{n}+\tau^{n,0}}\leq\left|\log\left(\frac{\delta_ {1}}{2}\right)\right|\] and consequently, on \(\mathcal{D}_{1}^{n}\), \[\frac{1}{n}\sum_{k=0}^{n-1}R^{n,k}\leq\left|\log\left(a_{1}^{*}\delta_{0}^{A} \right)\right|\frac{k_{1}}{n}+|\log\delta_{1}|\frac{m_{0}^{n}}{n}+\left|\log \left(\frac{\delta_{1}}{k^{*}+2}\right)\right|\frac{k^{*}}{n}+|\log\delta_{1} |+\frac{\left|\log\left(\frac{\delta_{1}}{2}\right)\right|}{n}.\] Thus, on using (6.9), we see that \[\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{D}_{1}^{n}}\frac{1}{n}\sum_{k= 0}^{n-1}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{ \nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot) \right)\right)\leq 2|\log\delta_{1}|\varepsilon_{1}.\] (6.50)
* By a similar calculation, on \(\mathcal{E}_{1}^{n}\), \[\frac{1}{n}\sum_{k=0}^{m_{0}^{n}+l^{n,0}-1}R^{n,k} =\frac{1}{n}\sum_{k=0}^{m_{0}^{n}-1}R^{n,k}+\frac{1}{n}\sum_{k=m_{0}^{n}}^{m_{ 0}^{n}+k^{*}-1}R^{n,k}+\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}- 1}R^{n,k}\] \[\leq\left|\log\left(a_{1}^{*}\delta_{0}^{A}\right)\right|\frac{k_{ 1}}{n}+|\log\delta_{1}|\frac{m_{0}^{n}}{n}+\left|\log\left(\frac{\delta_{1}}{ k^{*}+2}\right)\right|\frac{k^{*}}{n}+\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n} +l^{n,0}-1}R^{n,k}.\] (6.51) Next, recalling the relationship between \(\{\bar{X}^{n,k},\;k\in\mathbb{N}_{0}\}\) and \(\{\bar{v}^{n,k},\;k\in\mathbb{N}_{0}\}\), note that \[\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}R^{n,k} =\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}R\left( \delta_{\bar{X}^{n,k}}\otimes\beta_{2|1}^{0}(\cdot\mid\bar{X}^{n,k})\|\delta_{ \bar{X}^{n,k}}\otimes G(\bar{L}^{n,k+1})(\bar{X}^{n,k},\cdot)\right)\] \[=\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum_{x \in\Delta^{\alpha}}\mathbf{1}_{\{\bar{X}^{n,k}=x\}}h_{x}(\bar{L}^{n,k+1})\] (6.52)
where
\[h_{x}(m)\doteq R\left(\beta^{0}_{2|1}(\cdot\mid x)\|G(m)(x,\cdot)\right),\ \ x\in\Delta^{o},m\in\mathcal{P}(\Delta^{o}).\]
Note that for \(m,m^{\prime}\in\mathcal{M}_{\delta/8}\doteq\{\pi\in\mathcal{P}(\Delta^{o}): \inf_{x\in\Delta^{o}}\pi_{x}>\delta/8\}\),
\[|h_{x}(m)-h_{x}(m^{\prime})|\leq\delta_{1}^{-1}\|m-m^{\prime}\|. \tag{6.53}\]
Additionally, on \(\mathcal{E}_{1}^{n}\), for each \(k\in\{m_{0}^{n}+k^{*},\ldots,m_{0}^{n}+l^{n,0}-1\}\), we have that \(\bar{L}^{n,k+1}\in\mathcal{M}_{\delta/8}\), which, together with (6.53), ensures that
\[h_{x}(\bar{L}^{n,k+1}) =h_{x}(\bar{L}^{n}(t_{k}))=h_{x}(\bar{L}^{n}(t_{n}-T+(t_{k}-t_{n}+ T)))\] \[=h_{x}(\hat{M}^{n}(t_{k}-(t_{n}-T)))\leq h_{x}(\hat{M}(t_{k}-(t_{ n}-T)))+\delta_{1}^{-1}\sup_{t\in[0,T]}\|\hat{M}^{n}(t)-\hat{M}(t)\|,\]
for each such \(k\). Thus,
\[\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}R^{n,k}\leq\frac{1}{ n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum_{x\in\Delta^{o}}\mathbf{1}_ {\{\bar{X}^{n,k}=x\}}h_{x}(\hat{M}(t_{k}-(t_{n}-T)))+\delta_{1}^{-1}\sup_{t\in [0,T]}\|\hat{M}^{n}(t)-\hat{M}(t)\|. \tag{6.54}\]
Next, for \(k\in\{m_{0}^{n}+k^{*},\ldots,m_{0}^{n}+l^{n,0}-1\}\), let
\[H_{k,x}\doteq\frac{1}{k^{*}+1}\sum_{j=0}^{k^{*}}h_{x}(\hat{M}(t_{j+k}-(t_{n}- T))).\]
Then, using (5.55) and (6.24), we see that, for each \(k\in\{m_{0}^{n}+k^{*},\ldots,m_{0}^{n}+l^{n,0}-1\}\),
\[H_{k,n}^{*} \doteq\sup_{x\in\Delta^{o}}|H_{k,x}-h_{x}(\hat{M}(t_{k}-(t_{n}-T) ))|\] \[\leq\delta_{1}^{-1}\max_{0\leq j\leq k^{*}}\|\hat{M}(t_{j+k}-(t_{ n}-T))-\hat{M}(t_{k}-(t_{n}-T))\|\] \[\leq 2\delta_{1}^{-1}|t_{m_{0}^{n}+2k^{*}}-t_{m_{0}^{n}+k^{*}}| \leq\delta_{1}^{-1}\varepsilon_{0}.\]
Using this estimate we see that
\[\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum_{x \in\Delta^{o}}\mathbf{1}_{\{\bar{X}^{n,k}=x\}}h_{x}(\hat{M}(t_{k}-(t_{n}-T)))\] \[\leq\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum_ {x\in\Delta^{o}}\mathbf{1}_{\{\bar{X}^{n,k}=x\}}H_{k,x}+\frac{1}{n}\sum_{k=m_ {0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}H_{k,n}^{*} \tag{6.55}\] \[\leq\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum_ {x\in\Delta^{o}}\mathbf{1}_{\{\bar{X}^{n,k}=x\}}H_{k,x}+\delta_{1}^{-1} \varepsilon_{0}.\]
Furthermore, on recalling, from (5.56), that \(\sup_{x\in\Delta^{o}}\sup_{t\in[0,T]}h_{x}(\hat{M}(t))\leq|\log\delta_{1}|\), we see that
\[\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum_{x \in\Delta^{o}}\mathbf{1}_{\{\bar{X}^{n,k}=x\}}H_{k,x} \tag{6.56}\] \[\leq\frac{1}{n}\sum_{r=m_{0}^{n}+2k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum _{x\in\Delta^{o}}\left(\frac{1}{k^{*}+1}\sum_{j=0}^{k^{*}}\mathbf{1}_{\{\bar{X} ^{n,r-j}=x\}}\right)h_{x}(\hat{M}(t_{r}-(t_{n}-T)))+\frac{2k^{*}}{n}|\log \delta_{1}|,\]
and, from (6.10), we have that, for each \(r\in\{m_{0}^{n}+2k^{*},\ldots,m_{0}^{n}+l^{n,0}-1\}\),
\[\sum_{x\in\Delta^{o}}E\left|\frac{1}{k^{*}+1}\sum_{j=0}^{k^{*}}\mathbf{1}_{\{ \widehat{X}^{n,r-j}=x\}}-\beta_{(1)}^{0}(x)\right|\leq\varepsilon_{0}. \tag{6.57}\]
From (6.55), (6.56), and (6.57), we see that
\[\frac{1}{n}\sum_{k=m_{0}^{n}+k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum_{x \in\Delta^{o}}\mathbf{1}_{\{\widehat{X}^{n,k}=x\}}h_{x}(\hat{M}(t_{k}-(t_{n}-T )))\] \[\leq\frac{1}{n}\sum_{r=m_{0}^{n}+2k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum _{x\in\Delta^{o}}\beta_{(1)}^{0}(x)h_{x}(\hat{M}(t_{r}-(t_{n}-T)))+\mathcal{R }^{n} \tag{6.58}\]
where
\[E|\mathcal{R}^{n}|\leq\left(\varepsilon_{0}+\frac{2k^{*}}{n}\right)|\log \delta_{1}|+\varepsilon_{0}\delta_{1}^{-1}. \tag{6.59}\]
Next, letting \(u_{1}^{n}\doteq t_{m_{0}^{n}+2k^{*}}\), we have that
\[\frac{1}{n}\sum_{r=m_{0}^{n}+2k^{*}}^{m_{0}^{n}+l^{n,0}-1}\sum_{ x\in\Delta^{o}}\beta_{(1)}^{0}(x)h_{x}(\hat{M}(t_{r}-(t_{n}-T)))\] \[\quad\leq\frac{1}{n}\int_{u_{1}^{n}}^{t_{m_{1}^{n}}}\psi_{e}(s) \sum_{x\in\Delta^{o}}h_{x}(\hat{M}(a_{n}(s)-(t_{n}-T)))\beta_{(1)}^{0}(x)ds\] \[\quad\leq\frac{1}{n}\int_{u_{1}^{n}}^{t_{m_{1}^{n}}}\psi_{e}(s) \sum_{x\in\Delta^{o}}h_{x}(\hat{M}(s-(t_{n}-T)))\beta_{(1)}^{0}(x)ds+ \varepsilon_{0}\delta_{1}^{-1}\] \[\quad\leq\frac{1}{n}\int_{t_{n}-T}^{t_{n}-T+c}\psi_{e}(s)\sum_{x \in\Delta^{o}}h_{x}(\hat{M}(s-(t_{n}-T)))\beta_{(1)}^{0}(x)ds+\varepsilon_{0} \delta_{1}^{-1}\] \[\quad=\frac{1}{n}\int_{0}^{c}\psi_{e}(t_{n}-(T-s))\sum_{x\in \Delta^{o}}h_{x}(\hat{M}(s))\beta_{(1)}^{0}(x)ds+\varepsilon_{0}\delta_{1}^{- 1}, \tag{6.60}\]
where the second inequality uses (5.55), (6.11), and (6.53). Also, from Lemma 4.2 and recalling the definition of \(h_{x}\) and \(\beta^{0}\),
\[\lim_{n\to\infty}\frac{1}{n}\int_{0}^{c}\psi_{e}(t_{n}-(T-s))\sum_ {x\in\Delta^{o}}h_{x}(\hat{M}(s))\beta_{(1)}^{0}(x)ds\] \[\quad=e^{-T}\int_{0}^{c}e^{s}\sum_{x\in\Delta^{o}}\beta_{(1)}^{0} (x)R\left(\beta_{2|1}^{0}(\cdot\mid x)\|G(\hat{M}(s))(x,\cdot)\right)ds\] \[\quad=e^{-T}\int_{0}^{c}e^{s}R\left(\hat{\eta}(\cdot\mid s)\| \hat{\eta}_{(1)}(\cdot\mid s)\otimes G(\hat{M}(s))(\cdot,\cdot)\right)ds. \tag{6.61}\]
Finally, using the fact that on \(\mathcal{E}_{1}^{n}\), from (6.49), \(\bar{L}^{n,m_{0}^{n}+l^{n,0}}\in\mathcal{M}_{\delta/8}\), we see that
\[R^{n,m_{0}^{n}+l^{n,0}}\leq|\log(\delta_{1}/2)|. \tag{6.62}\]
* Combining the estimates in (6.51), (6.54), (6.58), (6.59), (6.60), and (6.62), we see that \[\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{E}_{1}^{n}}\frac{1} {n}\sum_{k=0}^{m_{0}^{n}+l^{n,0}}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu }^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})( \bar{\nu}^{n,k},\cdot)\right)\right)\] \[\quad\leq\limsup_{n\to\infty}\left(\left|\log\left(a_{1}^{*} \delta_{0}^{A}\right)\right|\frac{k_{1}}{n}+|\log\delta_{1}|\frac{m_{0}^{n}}{n }+\left|\log\left(\frac{\delta_{1}}{k^{*}+2}\right)\right|\left(\frac{k^{*}+1 }{n}\right)+\left|\log\left(\frac{\delta_{1}}{2}\right)\right|\frac{1}{n}\right.\] \[\quad\quad+\delta_{1}^{-1}E\sup_{t\in[0,T]}\|\hat{M}^{n}(t)-\hat {M}(t)\|+\left(\varepsilon_{0}+\frac{2k^{*}}{n}\right)|\log\delta_{1}|+ \varepsilon_{0}\delta_{1}^{-1}+\varepsilon_{0}\delta_{1}^{-1}\] \[\quad\quad+\frac{1}{n}\int_{0}^{c}\psi_{e}(t_{n}-(T-s))\sum_{x \in\Delta^{a}}h_{x}(\hat{M}(s))\beta_{(1)}^{0}(x)ds\Bigg{)}\] \[\leq|\log\delta_{1}|e^{-T}+\left(|\log\delta_{1}|+(2+d_{3})\delta_ {1}^{-1}\right)\varepsilon_{0}+2d_{4}\delta_{1}^{-1}\varepsilon_{1}\] \[\quad\quad+\limsup_{n\to\infty}\frac{1}{n}\int_{0}^{c}\psi_{e}(t_ {n}-(T-s))\sum_{x\in\Delta^{a}}h_{x}(\hat{M}(s))\beta_{(1)}^{0}(x)ds\] \[\leq\varepsilon+A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1}+e^{-T} \int_{0}^{c}\exp(s)R\left(\hat{\eta}(\cdot\mid s)\|\hat{\eta}_{(1)}(\cdot\mid s )\otimes G(\hat{M}(s))(\cdot,\cdot)\right)ds,\] (6.63) where the second inequality follows from Corollary 6.5, and the third inequality follows from Lemma 4.2, our choice of \(T\) in (5.18), and using (6.1), and (6.61).
* Letting, for \(l\in\{1,\ldots,l_{0}+1\}\), \[\mathcal{D}_{l}^{n}\doteq\mathcal{E}_{l-1}^{n}\cap\{\mathcal{J}_{l}^{n}=1\},\] we see exactly as in the proof of (6.47) and (6.50), that \[\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{D}_{l}^{n}}\frac{ 1}{n}\sum_{k=0}^{n-1}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\| \delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\right)\] \[\quad\leq|\log\delta_{1}|(l+2)\varepsilon_{1}\leq|\log\delta_{1}| (l_{0}+3)\varepsilon_{1}.\] (6.64)
* Next, we show that, for each \(l\in\{0,\ldots,l_{0}\}\), \[\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{E}_{l+1}^{n}} \frac{1}{n}\sum_{k=0}^{m_{l}^{n}+l^{n,l}}R\left(\delta_{\bar{\nu}^{n,k}} \otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}( \bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\right)\] \[\leq\varepsilon+(l+1)(A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1})+ e^{-T}\int_{0}^{(l+1)c\wedge T}e^{s}R\left(\hat{\eta}(\cdot,\cdot\mid s)\|\hat{\eta}_{(1)}( \cdot\mid s)\otimes G(\hat{M}(s))(\cdot,\cdot)\right)ds.\] (6.65) Note that by (6.63), the statement in (6.65) holds for \(l=0\). Now, suppose, for some \(r<l_{0}\), that the statement in (6.65) holds for all \(l\in\{0,1,\ldots r\}\). We argue that it also holds for \(l=r+1\). We only give the argument for \(r<l_{0}-1\), as the case when \(r=l_{0}-1\) is treated similarly. Note that,
by our inductive hypothesis,
\[\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{E}_{r+1}^{n}}\frac{1 }{n}\sum_{k=0}^{m_{r+1}^{n}+l^{n,r+1}}R^{n,k}\right)\\ \leq\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{E}_{r}^{n}} \frac{1}{n}\sum_{k=0}^{m_{r+1}^{n}+l^{n,r}}R^{n,k}+\mathbf{1}_{\mathcal{E}_{r+ 1}^{n}}\frac{1}{n}\sum_{k=m_{r+1}^{n}}^{m_{r+1}^{n}+l^{n,r+1}}R^{n,k}\right)\\ \leq\varepsilon+(r+1)(A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1}) +e^{-T}\int_{0}^{(r+1)c}e^{s}R\left(\hat{\eta}(\cdot\mid s)\|\hat{\eta}_{(1)}( \cdot\mid s)\otimes G(\hat{M}(s))(\cdot,\cdot)\right)ds\\ +\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{E}_{r+1}^{n}} \frac{1}{n}\sum_{k=m_{r+1}^{n}}^{m_{r+1}^{n}+l^{n,r+1}}R^{n,k}\right). \tag{6.66}\]
Now, an argument along the lines of the one used for (6.52) - (6.63) shows that, with
\[h_{x}^{r+1}(m)\doteq R\left(\beta_{2|1}^{r+1}(\cdot\mid x)\|G(m)(x,\cdot) \right),\ \ x\in\Delta^{o},m\in\mathcal{P}(\Delta^{o}),\]
we have
\[\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{E}_{r+1}^{n}} \frac{1}{n}\sum_{k=m_{r+1}^{n}}^{m_{r+1}^{n}+l^{n,r+1}}R^{n,k}\right)\\ \leq A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1}+\limsup_{n\to\infty }\frac{1}{n}\int_{t_{n}-T+(r+1)c}^{t_{n}-T+(r+2)c}\psi_{e}(s)\sum_{x\in\Delta ^{o}}h_{x}^{r+1}(\hat{M}(s-(t_{n}-T)))\beta_{(1)}^{r+1}(x)ds\] \[\leq A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1}+\limsup_{n\to \infty}\frac{1}{n}\int_{(r+1)c}^{(r+2)c}\psi_{e}(t_{n}-(T-s))\sum_{x\in\Delta ^{o}}h_{x}^{r+1}(\hat{M}(s))\beta_{(1)}^{r+1}(x)ds\] \[=A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1}+e^{-T}\int_{(r+1)c}^{( r+2)c}e^{s}\sum_{x\in\Delta^{o}}h_{x}^{r+1}(\hat{M}(s))\beta_{(1)}^{r+1}(x)ds\] \[=A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1}+e^{-T}\int_{(r+1)c}^{( r+2)c}e^{s}R\left(\hat{\eta}(\cdot\mid s)\|\hat{\eta}_{(1)}(\cdot\mid s)\otimes G (\hat{M}(s))(\cdot,\cdot)\right)ds. \tag{6.67}\]
Combining the estimates in (6.66) and(6.67) we have the inequality in (6.65) for \(l=r+1\), which proves the statement in (6.65) with \(l=r+1\).
* Finally, combining (6.47), (6.64) and (6.65) we see that \[\limsup_{n\to\infty}E\left(n^{-1}\sum_{k=0}^{n-1}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\right)\\ \leq\sum_{j=0}^{l_{0}+1}\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{D}_{j}^{n}}\frac{1}{n}\sum_{k=0}^{n-1}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\right)\\ +\limsup_{n\to\infty}E\left(\mathbf{1}_{\mathcal{E}_{l_{0}+1}^{n}}\frac{1}{n}\sum_{k=0}^{m_{l_{0}}^{n}+l^{n,l_{0}}}R\left(\delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\right)\\ \leq C_{1}\varepsilon_{1}+\varepsilon+(l_{0}+1)(A_{1}\varepsilon_{0}+B_{1}\varepsilon_{1})\\ +e^{-T}\int_{0}^{T}\exp(s)R\left(\hat{\eta}(\cdot,\cdot\mid s)\|\hat{\eta}_{(1)}(\cdot\mid s)\otimes G(\hat{M}(s))(\cdot,\cdot)\right)ds.\]
The result follows.
### Proof of Laplace Lower Bound
We now complete the proof of the Laplace lower bound. Recall the Lipschitz function \(F:\mathcal{P}(\Delta^{o})\to\mathbb{R}\) and \(\varepsilon\in(0,1)\) fixed in Section 5.1. Also recall the constant \(T\in(0,\infty)\) from (5.18) with \(M^{1}\) chosen as in Section 5.2. Let \(\hat{M}^{3}\doteq\hat{M}\) and \(\hat{\eta}^{3}\doteq\hat{\eta}\) be as constructed in (5.54) - (5.56). Also, recall the constants \(c\doteq\kappa_{3}\) and \(l_{0}\doteq\lfloor Tc^{-1}\rfloor\) associated with \(\hat{M}\) and \(\hat{\eta}\) defined in Section 6.1. Fix \(\varepsilon_{0},\varepsilon_{1}\) as in (6.2). Let the collection \(\{\bar{\nu}^{n,k},\bar{\mu}^{n,k+1},\bar{L}^{n,k+1},\,k\in\mathbb{N}_{0},\ n\geq n _{0}\}\) be given by Construction 6.1. Then, using (3.10),
\[-n^{-1}\log E\exp[-nF(L^{n+1})]\\ \leq E\left[F(\bar{L}^{n}(t_{n}))+n^{-1}\sum_{k=0}^{n-1}R\left( \delta_{\bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{\nu}^{n,k}} \otimes G^{\mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\right].\]
From Corollary 6.5,
\[E(F(\bar{L}^{n}(t_{n})))=E(F(\hat{M}^{n}(T)))\leq E(F(\hat{M}(T)))+F_{\mathrm{Lip}}(d_{3}\varepsilon_{0}+2d_{4}\varepsilon_{1})\leq F(m^{3})+\varepsilon,\]
where the second inequality follows on recalling that \(\hat{M}(T)=\hat{M}^{3}(T)=m^{3}\) and on using (6.2). Also, from Lemma 6.6,
\[\limsup_{n\to\infty}E\left[n^{-1}\sum_{k=0}^{n-1}R\left(\delta_{ \bar{\nu}^{n,k}}\otimes\bar{\mu}^{n,k+1}\|\delta_{\bar{\nu}^{n,k}}\otimes G^{ \mathcal{V}}(\bar{L}^{n,k+1})(\bar{\nu}^{n,k},\cdot)\right)\right]\\ \leq e^{-T}\int_{0}^{T}\exp(u)R\left(\hat{\eta}(\cdot\mid u)\big{\|} \hat{\eta}_{(1)}(\cdot\mid u)\otimes G(\hat{M}(u))(\cdot,\cdot)\right)du+C_{1 }\varepsilon_{1}+\varepsilon+(l_{0}+1)(A_{1}\varepsilon_{0}+B_{1}\varepsilon_ {1})\\ \leq e^{-T}\int_{0}^{T}\exp(u)R\left(\hat{\eta}(\cdot\mid u) \big{\|}\hat{\eta}_{(1)}(\cdot\mid u)\otimes G(\hat{M}(u))(\cdot,\cdot)\right) du+2\varepsilon,\]
where the last line follows from (6.2). Combining the last two estimates
\[\limsup_{n\to\infty}-n^{-1}\log E\exp[-nF(L^{n+1})]\\ \leq F(m^{3})+e^{-T}\int_{0}^{T}e^{u}R\left(\hat{\eta}(\cdot\mid u)\big{\|}\hat{\eta}_{(1)}(\cdot\mid u)\otimes G(\hat{M}(u))(\cdot,\cdot)\right)du+3\varepsilon\\ \leq\inf_{m\in\mathcal{P}(\Delta^{o})}[F(m)+I(m)]+(9+2L_{G})\varepsilon,\]
where the last line is from (5.54). Since \(\varepsilon>0\) is arbitrary, the result follows.
## 7 Compactness of level sets.
In this section we show that the function \(I\) defined in (2.4), for each fixed \(A\in\mathcal{A}\), is a rate function. For this we need to show that for any \(k\in(0,\infty)\), the set \(S_{k}=\{m\in\mathcal{P}(\Delta^{o}):I(m)\leq k\}\) is compact in \(\mathcal{P}(\Delta^{o})\). Let \(\{m_{n},\ n\in\mathbb{N}\}\) be a sequence in \(S_{k}\). Since \(\mathcal{P}(\Delta^{o})\) is compact, \(\{m_{n}\}\) converges along a subsequence to some limit point \(m\in\mathcal{P}(\Delta^{o})\). It suffices to show that \(m\in S_{k}\). Since \(m_{n}\in S_{k}\), for each \(n\in\mathbb{N}\) we can find \(\eta^{n}\in\mathcal{U}(m_{n})\) such that
\[\int_{0}^{\infty}\exp(-s)R\left(\eta^{n}(\cdot\mid s)\|\eta^{n}_{(1)}(\cdot \mid s)\otimes G(M^{n}(s))(\cdot,\cdot)\right)ds\leq I(m_{n})+n^{-1}\leq k+n^ {-1}, \tag{7.1}\]
where \(M^{n}\) solves \(\mathcal{U}(m^{n},\eta^{n})\). For each \(n\in\mathbb{N}\), define \(\hat{\gamma}^{n},\hat{\theta}^{n}\in\mathcal{P}(\Delta^{o}\times\Delta^{o} \times\mathbb{R}_{+})\) as, for \(t\in\mathbb{R}_{+}\) and \(x,y\in\Delta^{o}\),
\[\hat{\gamma}^{n}(\{x\}\times\{y\}\times[0,t]) =\int_{0}^{t}\exp(-s)\eta^{n}(x,y\mid s)ds\] \[\hat{\theta}^{n}(\{x\}\times\{y\}\times[0,t]) =\int_{0}^{t}\exp(-s)\eta^{n}_{(1)}(x\mid s)G(M^{n}(s))(x,y)ds.\]
Since \(\Delta^{o}\) is compact and \(\hat{\gamma}^{n}_{(3)}(ds)=\hat{\theta}^{n}_{(3)}(ds)=\exp(-s)ds\) for each \(n\in\mathbb{N}\), it follows that the sequences \(\{\hat{\gamma}^{n},\ n\in\mathbb{N}\}\), \(\{\hat{\theta}^{n},\ n\in\mathbb{N}\}\) are tight in \(\mathcal{P}(\Delta^{o}\times\Delta^{o}\times\mathbb{R}_{+})\). Consider a further subsequence (of the subsequence along which \(m^{n}\) converges) along which \(\hat{\gamma}^{n}\) and \(\hat{\theta}^{n}\) converge to \(\hat{\gamma}\) and \(\hat{\theta}\), respectively, and relabel this subsequence once more as \(\{n\}\). Note that, for each \(n\in\mathbb{N}\), since \(M^{n}\) solves \(\mathcal{U}(m^{n},\eta^{n})\), we have, for \(t\in\mathbb{R}_{+}\),
\[M^{n}(t)=m^{n}-\int_{0}^{t}\eta^{n}_{(1)}(s)ds+\int_{0}^{t}M^{n}(s)ds.\]
A straightforward calculation shows that, for each \(n\in\mathbb{N}\), \(\|M^{n}(t)-M^{n}(s)\|\leq 2(t-s)\) for all \(0\leq s\leq t<\infty\), from which it follows that \(\{M^{n},\ n\in\mathbb{N}\}\) is relatively compact in \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}))\). Assume without loss of generality (by selecting a further subsequence if needed) that \(M^{n}\to M\) in \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}))\) as \(n\to\infty\). Note that we can write, for \(t\in\mathbb{R}_{+}\) and \(x\in\Delta^{o}\),
\[M^{n}(t)(x)=m^{n}(x)-\int_{0}^{t}\exp(s)\hat{\gamma}^{n}_{(1,3)}(\{x\}\times ds )+\int_{0}^{t}M^{n}(s)(x)ds.\]
Sending \(n\to\infty\) in the previous display, we get, for each \(t\in\mathbb{R}_{+}\),
\[M(t)(x)=m(x)-\int_{0}^{t}\exp(s)\hat{\gamma}_{(1,3)}(\{x\}\times ds)+\int_{0}^ {t}M(s)(x)ds. \tag{7.2}\]
Furthermore, since \(\hat{\gamma}_{(3)}(ds)=\exp(-s)ds\), we can disintegrate \(\hat{\gamma}\) as
\[\hat{\gamma}(\cdot\times ds)=\hat{\eta}(\cdot\mid s)\exp(-s)ds, \tag{7.3}\]
where \(s\mapsto\hat{\eta}(s)\doteq\hat{\eta}(\cdot\mid s)\) is a measurable map from \(\mathbb{R}_{+}\) to \(\mathcal{P}(\Delta^{o}\times\Delta^{o})\). Also, for \(x,y\in\Delta^{o}\) and \(s\in\mathbb{R}_{+}\), we can disintegrate \(\hat{\eta}(s)(x,y)\) as \(\hat{\eta}_{(1)}(x\mid s)\hat{\eta}_{2\mid 1}(x,y\mid s)\). With this observation and (7.2), we have, for \(t\in\mathbb{R}_{+}\),
\[M(t)=m-\int_{0}^{t}\hat{\eta}_{(1)}(s)ds+\int_{0}^{t}M(s)ds.\]
Since \(\eta^{n}\in\mathcal{U}(m^{n})\), we have that, with
\[\hat{\beta}^{n}(\{x\}\times\{y\}\times[0,t])\doteq\int_{0}^{t}\eta^{n}(x,y \mid s)ds,\ x,y\in\Delta^{o},t\in\mathbb{R}_{+},\]
(P1) holds with \(\beta\) replaced with \(\hat{\beta}^{n}\) for all \(n\in\mathbb{N}\). Letting
\[\hat{\beta}(\{x\}\times\{y\}\times[0,t])\doteq\int_{0}^{t}\hat{\eta}(x,y\mid s )ds,\ x,y\in\Delta^{o},t\in\mathbb{R}_{+},\]
we have on sending \(n\to\infty\), and recalling the convergence \(\hat{\gamma}^{n}\to\hat{\gamma}\), that (P1) holds with \(\beta\) replaced with \(\hat{\beta}\). Consequently, Property (a) of (2.3) holds.
Next, since \(\hat{\eta}^{n}\in\mathcal{U}(m^{n})\), we have that, for each \(n\in\mathbb{N}\), (P2) holds with \(\eta\) replaced with \(\eta^{n}\). This says that, for each \(n\in\mathbb{N}\),
\[\hat{\gamma}^{n}(\{x\}\times\Delta^{o}\times[0,t])=\hat{\gamma}^{n}(\Delta^{o} \times\{x\}\times[0,t]),\ \ t\in\mathbb{R}_{+},x\in\Delta^{o}.\]
Sending \(n\to\infty\), recalling the convergence \(\hat{\gamma}^{n}\to\hat{\gamma}\), and the definition of \(\hat{\eta}\), we now see that (P2) holds with \(\eta\) replaced with \(\hat{\eta}\) as well, thereby ensuring that Property (b) of (2.3) holds.
Next note that, since \(\eta^{n}\in\mathcal{U}(m^{n})\), for each \(n\in\mathbb{N}\), there is some \(\mathcal{T}^{n}\in C([0,\infty):\mathcal{P}(\Delta^{o}\times\Delta^{o}))\) such that Property (c) of (2.3) holds with \((\mathcal{T},M)\) replaced with \((\mathcal{T}^{n},M^{n})\). Note that this in particular says that \(\{\mathcal{T}^{n},\ n\in\mathbb{N}\}\) is tight in \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}\times\Delta^{o}))\). Thus, by considering a further subsequence if needed, we can assume without loss of generality that \(\mathcal{T}^{n}\) converges to \(\mathcal{T}\) as \(n\to\infty\) in \(C(\mathbb{R}_{+}:\mathcal{P}(\Delta^{o}\times\Delta^{o}))\). It is easily checked that Property (c) of (2.3) holds for \((\mathcal{T},M)\).
Together, the above observations say that \(\hat{\eta}\) satisfies Properties (a), (b), and (c) of (2.3), showing that
\[\hat{\eta}\in\mathcal{U}(m)\ \text{and}\ M\ \text{solves}\ \mathcal{U}(m,\hat{ \eta}). \tag{7.4}\]
Next, from the definition of \(\{(\hat{\theta}^{n},\hat{\gamma}^{n}),\ n\in\mathbb{N}\}\) and (7.1), we see that \(R(\hat{\gamma}^{n}\|\hat{\theta}^{n})\leq k+1/n\) for each \(n\in\mathbb{N}\). Using the fact that \((\hat{\gamma}^{n},\hat{\theta}^{n})\to(\hat{\gamma},\hat{\theta})\) as \(n\to\infty\) and the lower semicontinuity of relative entropy, we have on sending \(n\to\infty\) that \(R(\hat{\gamma}\|\hat{\theta})\leq k\). We now claim that
\[\hat{\theta}(\{x\}\times\{y\}\times[0,t])=\int_{0}^{t}\exp(-s)\hat{\eta}_{(1) }(x\mid s)G(M(s))(x,y)ds,\ t\in\mathbb{R}_{+},\ x,y\in\Delta^{o}. \tag{7.5}\]
Fix \(t\in\mathbb{R}_{+}\) and \(x,y\in\Delta^{o}\). Since \(\hat{\gamma}^{n}_{(1,3)}\to\hat{\gamma}_{(1,3)}\) as \(n\to\infty\), we have on using the continuity of \(G\) and \(M\) that, as \(n\to\infty\),
\[\int_{0}^{t}\exp(-s)\hat{\eta}^{n}_{(1)}(x\mid s)G(M(s))(x,y)ds\to\int_{0}^{t} \exp(-s)\hat{\eta}_{(1)}(x\mid s)G(M(s))(x,y)ds.\]
Also,
\[\int_{0}^{t}\exp(-s)\hat{\eta}^{n}_{(1)}(x\mid s)|G(M^{n}(s))(x,y)-G(M(s))(x,y )|ds\leq L_{G}\sup_{0\leq s\leq t}\|M^{n}(s)-M(s)\|\to 0,\ \ \text{as}\ n\to\infty.\]
Combining the last two observations we have that, as \(n\to\infty\),
\[\hat{\theta}^{n}(\{x\}\times\{y\}\times[0,t]) =\int_{0}^{t}\exp(-s)\eta^{n}_{(1)}(x\mid s)G(M^{n}(s))(x,y)ds\] \[\to\int_{0}^{t}\exp(-s)\eta_{(1)}(x\mid s)G(M(s))(x,y)ds.\]
Combining this with the fact that \(\hat{\theta}^{n}\to\hat{\theta}\) as \(n\to\infty\), we now have (7.5). Finally, on combining (7.3) with (7.5) and using the chain rule for relative entropies, we have
\[k\geq R(\hat{\gamma}\|\hat{\theta})=\int_{0}^{\infty}\exp(-s)R\left(\hat{\eta }(\cdot\mid s)\|\hat{\eta}_{(1)}(\cdot\mid s)\otimes G(M(s))(\cdot,\cdot) \right)ds.\]
Combining this with (7.4) we now see that \(I(m)\leq k\). The result follows.
## 8 Examples
In Section 2.2 (see Example 2.6) we presented one important setting where the conditions of Theorem 2.4 are met. In this section we provide several other examples for which Theorem 2.4 holds.
1. Suppose that \(P^{o}\in\mathcal{K}(\Delta^{o})\) is irreducible and \(G(m)=P^{o}\) for all \(m\in\Delta^{o}\). Clearly this \(G\) satisfies Assumption 2.2 with \(A\) defined as \(A(x,y)=\mathbf{1}_{\{P^{o}(x,y)>0\}}\). Theorem 2.4 in this case is exactly the large deviation principle for empirical measures of irreducible finite state Markov chains (cf. [19, 20]). To see that the rate function \(I\) given in (2.4) coincides with the well known formula
(1.1), we note the following. The inequality \(I(m)\leq\tilde{I}(m)\) was argued in Remark 2.5. Conversely, suppose that \(m\in\mathcal{P}(\Delta^{o})\) is such that \(I(m)<\infty\) and that we are given a \(\eta\in\mathcal{U}(m)\). Define \(\gamma\in\mathcal{P}(\Delta^{o}\times\Delta^{o})\) as \(\gamma\doteq\int_{0}^{\infty}\exp(-s)\eta(s)ds\) and observe that \(\gamma_{(1)}=\gamma_{(2)}\). Also, if \(M\) solves \(\mathcal{U}(m,\eta)\), then it is easily checked by multiplying both sides of (P3) by \(\exp(-t)\) and integrating over \([0,\infty)\) that \(m=\int_{0}^{\infty}\exp(-s)\eta_{(1)}(s)ds=\gamma_{(1)}\), namely \(\gamma\in\mathcal{I}(m)\), where \(\mathcal{I}(m)\) is defined below (1.1). Finally, from the convexity of relative entropy \[\int_{0}^{\infty}\exp(-s)R(\eta(\cdot\mid s)\|\eta_{(1)}(\cdot \mid s)\otimes P^{o}(\cdot,\cdot))ds\] \[\quad\geq R\left(\int_{0}^{\infty}\exp(-s)\eta(\cdot\mid s)ds\| \int_{0}^{\infty}\exp(-s)\eta_{(1)}(\cdot\mid s)\otimes P^{o}(\cdot,\cdot)ds\right)\] \[\quad=R(\gamma\|m\otimes P^{o})\] which shows that \(\tilde{I}(m)\leq I(m)\). This proves that \(I=\tilde{I}\). Note that when \(P^{o}\) is replaced with \(G(\cdot)\) (with a general \(G\)), one cannot carry out a similar convexity argument.
2. Let \(A\) be an irreducible adjacency matrix. Then we have \(\sum_{j=1}^{d}A^{j}>0\). For each \(z\in\Delta^{o}\), let \(M^{z}\in\mathcal{K}(\Delta^{o})\) be such that \(M^{z}(x,y)>0\) if and only if \((x,y)\in A_{+}\). Define \(G:\mathcal{P}(\Delta^{o})\to\mathcal{K}(\Delta^{o})\) as \[G(m)(x,y)\doteq\sum_{z\in\Delta^{o}}m(z)M^{z}(x,y),\ \ x,y\in\Delta^{o}.\] Clearly, Assumption 2.2 part 1 and part 2(a) are satisfied. Assumption 2.2 part 2(b) is also satisfied with \(\delta_{0}^{A}\doteq\min_{(x,y)\in A_{+}}\min_{z\in\Delta^{o}}M^{z}(x,y)\). Also, since \(G(m)\) is irreducible for every \(m\in\mathcal{P}(\Delta^{o})\), from Remark 2.3(3) we see that Assumption 2.2 part 3 holds. Finally, since \(\sum_{j=1}^{d}A^{j}>0\) it follows that for all \(m_{1},\ldots m_{d}\in\mathcal{P}(\Delta^{o})\), \(\sum_{j=1}^{d}G(m_{1})\cdots G(m_{j})>0\). Using Remark 2.3(3) again, we see that Assumption 2.2 part 4 holds as well. Thus, this family of models satisfies all the conditions of Theorem 2.4. This model can be viewed as a **generalized Polya urn** in the following manner. Consider an urn that contains balls of \(d\) different colors. Initially there is a single ball in the urn which is of color \(x_{0}\). At each time instant a ball is selected from the urn, and then that ball, together with a new ball (of possibly different color), is added back to the urn. Given that the ball drawn at time instant \(n\) is of color \(z\) and the new ball added at time instant \(n-1\) was of color \(x\), we return the drawn ball to the urn (namely the ball with color \(z\)) and add a new ball to the urn of color \(y\) with probability \(M^{z}(x,y)\).
3. Let \(M\in\mathcal{K}(\Delta^{o})\) be such that \(M\) is irreducible. Define \(G:\mathcal{P}(\Delta^{o})\to\mathcal{K}(\Delta^{o})\) as \[G(m)(x,y)=\sum_{z\in\Delta^{o}}m(z)M(z,y)=(mM)(y),\ x,y\in\Delta^{o}.\] Under the condition \(M(x,y)>0\) for all \(x,y\in\Delta^{o}\), a large deviation principle of the form in Theorem 2.4 was recently established in [16]. The current work shows that the above strict positivity condition can be relaxed to simply the requirement that \(M\) is irreducible. To see this, it suffices to verify Assumption 2.2. Clearly, part 1 of the assumption holds. Also take \(A\) to be the \(d\times d\) matrix with all entries \(1\). Then, part 2(a) of the assumption holds (vacuously). Also, part 2(b) holds with \(\delta_{0}^{A}\doteq\min_{(x,y)\in M_{+}}M(x,y)\), where \(M_{+}=\{(x,y)\in\Delta^{o}\times\Delta^{o}:M(x,y)>0\}\). The fixed point equation in part 3 in this case reduces to the equation \(\pi^{*}M=\pi^{*}\), which, since \(M\) is irreducible, has a unique solution in \(\mathcal{P}_{+}(\Delta^{o})\). Finally for part 4, note that from the irreducibility of \(M\), \(\inf_{x,y\in\Delta^{o}}\sum_{k=1}^{d}M^{k}(x,y)\doteq\alpha>0\). Now for \(k\in\mathbb{N}\), by a straightforward conditioning argument it follows that, for every \(x\in\Delta^{o}\) \[P(L^{k(d+1)}(x)=0)=P(L^{k(d+1)}(x)=0,L^{kd}(x)=0)\leq(1-\alpha)P(L^{kd}(x)=0).\]
So by Borel Cantelli lemma \(P(L^{k(d+1)}(x)=0\) for infinitely many \(k)=0\), which verifies part 4 of the assumption.
4. For each \(x\in\Delta^{o}\) let \(M^{x}\in\mathcal{K}(\Delta^{o})\) be irreducible, and let \(P\) and \(P^{o}\) be as in Example 2.6. Define \(G:\mathcal{P}(\Delta^{o})\to\mathcal{K}(\Delta^{o})\) as \[G(m)_{x,y}=P_{x,y}+P_{x,0}(mM^{x})_{y},\ \ x,y\in\Delta^{o},m\in\mathcal{P}(\Delta^{o}).\] (8.1) Let \(A\) be as introduced in Example 2.6. Assumption 2.2 part 1 and part 2(a) are clearly satisfied. Also, Assumption 2.2 part 2(b) holds with \[\delta_{0}^{A}\doteq\left(\inf_{(x,y)\in A_{+}}(P_{x,y}+P_{x,0})\right)\inf_{( x,y)\in\Delta^{o}\times\Delta^{o}}\sum_{z\in\Delta^{o}}M^{x}(z,y)\] which is clearly positive from the definition of \(A_{+}\) and the irreducibility assumption on each \(M^{x}\). From the irreducibility of \(P^{o}\) it follows that \(\sum_{k=1}^{d}(P^{o})^{k}\) is strictly positive. This shows that the condition (2.2) in Remark 2.3(3) is satisfied, which, in view of the discussion in the same remark, shows that Assumption 2.2 parts 3 and 4 hold as well. Thus, Theorem 2.4 holds with \(G\) defined as above under the assumed conditions on \(P\) and \(\{M^{x},\ x\in\Delta^{o}\}\). One family of models that fits the above setting is a variant of the Personalized PageRank (PPR) algorithm, see e.g., [11, 36] and the references therein. Consider an individual performing a random walk on the graph of webpages. Denote by \(\Delta^{o}\) the set of webpages and, for each \(x,y\in\Delta^{o}\), let \(\mathcal{V}(x,y)\) denote the number of links from webpage \(x\) to webpage \(y\). Let, for each \(x\in\Delta^{o}\), \(\mathcal{V}(x)\doteq\{y\in\Delta^{o}:\mathcal{V}(x,y)>0\}\) denote the set of webpages that are linked to by webpage \(x\), and assume that \(\mathcal{V}(x)\) is nonempty for each \(x\in\Delta^{o}\). For each \(x\in\Delta^{o}\), let \(D_{x}^{+}\doteq\sum_{y\in\mathcal{V}(x)}\mathcal{V}(x,y)\) denote the out-degree of webpage \(x\). Consider the transition kernel \(Q\) on \(\Delta^{o}\) defined as \(Q_{x,y}\doteq\frac{\mathcal{V}(x,y)}{D_{x}^{+}}\), \(x,y\in\Delta^{o}\). For \(x\in\Delta^{o}\), fix a _damping factor_\(\alpha_{x}\in(0,1)\) and define \(G:\mathcal{P}(\Delta^{o})\to\mathcal{K}(\Delta^{o})\) as \[G(m)(x,y)\doteq(1-\alpha_{x})Q_{x,y}+\alpha_{x}L(m)(x,y),\quad x,y\in\Delta^{o},\] where \(L:\mathcal{P}(\Delta^{o})\to\mathcal{K}(\Delta^{o})\) is defined as \[L(m)=\theta q+(1-\theta)\sum_{z\in\Delta^{o}}m_{z}M^{x}(z,y)\] for some \(\theta\in(0,1]\), \(q\in\mathcal{P}_{+}(\Delta^{o})\), and \(M^{z}\in\mathcal{K}(\Delta^{o})\) for \(z\in\Delta^{o}\). The self-interacting chain defined using the map \(G\) as above, in the special case where \(\alpha_{x}=\alpha\in(0,1)\), \(\theta=1\), and \(q_{y}=\frac{1}{|\Delta^{o}|}\) for all \(y\in\Delta^{o}\), is the well-known PageRank (PR) Markov chain. A limitation of classical PR is that it does not take into consideration the user's preferences. For that reason, variants of the PR algorithm have been proposed that account for personal preferences, see, e.g. [36]. Such variants can be captured by a \(G\) of the above form that reflects an individual's browsing history in determining transition probabilities. It is easy to verify that the above \(G\) can be expressed in the form (8.1) with \(P_{x,y}=(1-\alpha_{x})Q_{xy}+\theta\alpha_{x}q_{y}\) for \(x,y\in\Delta^{o}\) and \(P_{x,0}=\alpha_{x}(1-\theta)\), and that, under the assumption that \(M^{x}\) is irreducible for every \(x\in\Delta^{o}\), Assumption 2.2 holds.
5. As noted in Example (2.6), our assumptions cover certain types of vertex reinforced random walks. We now give an example that shows that certain variants of edge reinforced random walks are also covered by our assumptions. Suppose that \(\mathcal{G}\) is a connected undirected graph on the vertex set \(\mathcal{V}=\{1,\ldots,\ell\}\), with the edge set denoted as \(\mathcal{E}\). For \(x\in\mathcal{V}\), we denote by \(d(x)\) the degree of vertex \(x\). Let \(\tilde{A}\) be the incidence matrix of the graph, namely it is the \(\ell\times\ell\) matrix with entries \(0\) or \(1\) such that \(\tilde{A}(u,v)=\tilde{A}(v,u)=1\) if and only if \(\{u,v\}\in\mathcal{E}\). For simplicity of presentation, we assume that the graph has no self-loops, namely the diagonal entries of \(\tilde{A}\) are \(0\)
Let \(\Delta^{o}=\{(x,y)\in\mathcal{V}\times\mathcal{V}:\{x,y\}\in\mathcal{E}\}\). For \(z\in\Delta^{o}\), \(z_{i}\), \(i=1,2\), will denote the \(i\)-th coordinate of \(z\). Fix \(\{x_{0},y_{0}\}\in\mathcal{E}\) so that \(\tilde{A}(x_{0},y_{0})=1\), and let \(\delta\in(0,1)\). The latter parameter will control the strength of the reinforcement mechanism.
We now define a sequence \(\{X_{n},\ n\in\mathbb{N}_{0}\}\) of \(\mathcal{V}\)-valued random variables, recursively, as follows. Let \(X_{0}=x_{0}\) and \(X_{1}=y_{0}\), and set \(Z_{0}=(X_{0},X_{1})\). Having defined \(\{X_{i},\ 0\leq i\leq n\}\) and \(\{Z_{i}\doteq(X_{i},X_{i+1}),\ 0\leq i\leq n-1\}\), we now define \(X_{n+1}\) according to the following conditional law:
\[P(X_{n+1}=y\mid X_{0},\ldots X_{n})\\ \doteq\tilde{A}(X_{n},y)\left[\delta\hat{L}^{n-1}[(X_{n},y)]+\frac{1}{d(X_{n})}\left(1-\delta\sum_{\bar{z}\in\Delta^{o}}\hat{L}^{n-1}(\bar{z})\tilde{A}(X_{n},\bar{z}_{2})\mathbf{1}_{\{X_{n}=\bar{z}_{1}\}}\right)\right], \tag{8.2}\]
where, denoting by \(L^{n-1}\doteq\frac{1}{n}\sum_{i=0}^{n-1}\delta_{Z_{i}}\), and for \(z=(z_{1},z_{2})\in\Delta^{o}\), \(z^{r}=(z_{2},z_{1})\),
\[\hat{L}^{n-1}(z)=\frac{1}{2n}\sum_{i=0}^{n-1}[\delta_{Z_{i}}(z)+\delta_{Z_{i}} (z^{r})]=\frac{1}{2}(L^{n-1}(z)+L^{n-1}(z^{r})),\ z\in\Delta^{o}.\]
Now set \(Z_{n}=(X_{n},X_{n+1})\). The above conditional law can be interpreted as follows. At each time instant \(n\geq 2\), for each neighboring site \(y\), the walker jumps to site \(y\) with probability \(\frac{\delta}{2}\) times the fraction of time the edge connecting with that site has been traversed (in either direction) by the walker by time \(n-1\); and with the remaining probability it selects one of the neighboring sites at random. Thus, the first term on the right side of (8.2) captures the edge-reinforcement mechanism. It is convenient to directly describe the evolution of the sequence \(\{Z_{n},\ n\in\mathbb{N}_{0}\}\). With \(d\doteq|\Delta^{o}|\), define the \(d\times d\) dimensional incidence matrix \(A\) as \(A(z,\tilde{z})=1\) if and only if \(z_{2}=\tilde{z}_{1}\) and \(\tilde{A}(\tilde{z}_{1},\tilde{z}_{2})=1\). Since the graph is connected, \(A\) is irreducible. Then, in terms of \(A\), the conditional law of \(Z_{n}\) can be written as
\[P(Z_{n}=\tilde{z}\mid Z_{0},\ldots,Z_{n-1}=z)=G(L^{n})(z,\tilde{z}),\ z,\tilde {z}\in\Delta^{o},\]
where for \(m\in\mathcal{P}(\Delta^{o})\),
\[G(m)(z,\tilde{z})\doteq A(z,\tilde{z})\left[\delta\hat{m}(\tilde{z})+\frac{1}{d(z_{2})}\left(1-\delta\sum_{\bar{z}\in\Delta^{o}}\hat{m}(\bar{z})A(z,\bar{z})\right)\right]\]
and \(\hat{m}(z)=\frac{1}{2}(m(z)+m(z^{r}))\).We now verify that Assumption 2.2 holds. Part 1 and Part 2(a) of the assumption clearly hold with the above definition of \(A\). Also, since \(d(z_{2})\leq\ell\) and \(\sum_{\tilde{z}\in\Delta^{o}}\hat{m}(\bar{z})A(z,\bar{z})\leq 1\), Part 2(b) holds with \(\delta_{0}^{A}=(1-\delta)/\ell\). This observation, together with the fact that \(A\) is irreducible, also shows that \(G(m)\) is irreducible for every \(m\in\mathcal{P}(\Delta^{o})\) and in fact, for every \(j\in\mathbb{N}\) and all \(m_{1},\ldots,m_{j}\in\mathcal{P}(\Delta^{o})\), \(G(m_{1})G(m_{2})\cdots G(m_{j})\geq(\delta_{0}^{A})^{j}A^{j}\), coordinate wise. These observations, in view of Remark 2.3 (3) show that parts 3 and 4 of the assumption are satisfied as well. Thus, Theorem 2.4 holds with \(G\) defined as above for the sequence \(\{Z_{n},\ n\in\mathbb{N}_{0}\}\). Note that the empirical measure \(L^{n,X}\doteq\frac{1}{n+1}\sum_{i=0}^{n}\delta_{X_{i}}\) can be obtained from \(L^{n}\) using the relation \(L^{n,X}(x)=\sum_{y\in\Delta^{o}}L^{n}(x,y)\), \(x\in\Delta^{o}\), and so, by using the contraction principle, one also obtains a large deviation principle for \(\{L^{n,X},\ n\in\mathbb{N}\}\).
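The examples above share a common simulation template: at each step the chain samples its next state from the kernel \(G\) evaluated at the current empirical measure. The following minimal sketch (not taken from the paper; the function names, the array layout of the kernels \(M^{z}\) and the handling of the initial step are our own choices, and off-by-one conventions for the empirical measure are glossed over) illustrates this for the generalized Polya urn of Example 2; the PageRank and edge-reinforced variants are obtained by swapping in the corresponding map \(G\).

```python
import numpy as np

def simulate_self_interacting_chain(G, x0, d, n_steps, rng=None):
    """Simulate a chain with P(X_{k+1} = y | past) = G(L^k)(X_k, y),
    where L^k is the empirical measure of the states visited so far."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.zeros(d)
    counts[x0] = 1.0                      # L^0 = delta_{x0}
    x, traj = x0, [x0]
    for _ in range(n_steps):
        m = counts / counts.sum()         # current empirical measure
        x = rng.choice(d, p=G(m)[x])      # sample from the row G(m)(x, .)
        counts[x] += 1.0
        traj.append(int(x))
    return np.array(traj), counts / counts.sum()

def polya_urn_kernel(M):
    """Example 2: G(m)(x, y) = sum_z m(z) M^z(x, y), where M is a (d, d, d)
    array whose slice M[z] is the row-stochastic kernel M^z."""
    return lambda m: np.einsum('z,zxy->xy', m, M)

# usage (hypothetical kernels): traj, L_n = simulate_self_interacting_chain(
#     polya_urn_kernel(M), x0=0, d=M.shape[0], n_steps=10_000)
```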
## Appendix A Some Auxiliary Results
**Lemma A.1**.: _Let \(\{L^{k,Z},k\in\mathbb{N}\}\) be the sequence introduced in Section 6.1 and \(\varepsilon_{1}\) be as fixed in (6.2). Then, under Assumption 2.2, there is an \(a^{*}>0\) and \(r_{1}\in\mathbb{N}\) such that \(P(N_{1}>r_{1})\leq\varepsilon_{1}\), where \(N_{1}\) is as defined in (6.3)._
Proof.: From Assumption 2.2(4) it follows that, with \(M(\omega)\doteq\inf\{n\in\mathbb{N}:\inf_{x\in\Delta^{o}}L^{n,Z}(\omega)(x)>0\}\), we have \(P(\omega:M(\omega)<\infty)=1\). For \(a\in\mathbb{R}_{+}\), let \(M^{a}(\omega)\doteq\inf\{n\in\mathbb{N}:\inf_{x\in\Delta^{o}}L^{n,Z}(\omega)(x) >a\}\). Note that \(\{M<\infty\}=\cup_{k=1}^{\infty}\{M^{1/k}<\infty\}\). Thus, there exists an \(a^{*}>0\) such that \(P(M^{a^{*}}<\infty)>1-\varepsilon_{1}/2\). Since \(\cup_{m=1}^{\infty}\{M^{a^{*}}<m\}=\{M^{a^{*}}<\infty\}\), we can find an \(r_{1}\in\mathbb{N}\) such that \(P(M^{a^{*}}\leq r_{1})>1-\varepsilon_{1}\). The result follows on noting that \(N_{1}=M^{a^{*}}\).
**Lemma A.2**.: _Suppose that \(G\) satisfies Assumption 2.1 and for some \(K\in\mathbb{N}\) and all \(m_{1},\ldots,m_{K}\in\mathcal{P}(\Delta^{o})\), and \(x,y\in\Delta^{o}\), \(\sum_{j=1}^{K}[G(m_{1})G(m_{2})\cdots G(m_{j})]_{x,y}>0\). Then Assumption 2.2(4) is satisfied._
Proof.: By continuity of \(G\) and compactness of \(\mathcal{P}(\Delta^{o})\)
\[\inf_{x,y\in\Delta^{o}}\inf_{m_{1},\ldots m_{K}\in\mathcal{P}(\Delta^{o})}\sum _{j=1}^{K}[G(m_{1})G(m_{2})\cdots G(m_{j})]_{x,y}\doteq\gamma>0.\]
Then by a straightforward conditioning argument it follows that, for any \(x\in\Delta^{o}\), and \(n>1\),
\[P(L^{nK,X}(x)=0)=P(L^{(n-1)K,X}(x)=0,L^{nK,X}(x)=0)\leq(1-\gamma)P(L^{(n-1)K, X}(x)=0).\]
Thus, \(P(L^{nK,X}(x)=0)\leq(1-\gamma)^{n-1}\) and so the result follows from the Borel-Cantelli lemma.
2303.12439 | Recent Cross-Section Measurements of Top-Quark Pair Production in Association with Gauge Bosons | This article reviews recent cross-section measurements of top-quark pair production in association with a photon, W or Z boson at the Large Hadron Collider (LHC). All measurements reviewed use proton-proton (pp) datasets collected by the ATLAS and CMS experiments between 2016 and 2018 from collisions at a centre-of-mass energy of 13 TeV during the LHC Run 2. Differential and inclusive cross-section measurements are discussed along with the constraints on the effective field theory operators accessible through each process. Finally, we discuss the potential for measurements of these processes at future colliders. | Joshuha Thomas-Wilsker | 2023-03-22T10:20:34Z | http://arxiv.org/abs/2303.12439v1

# Recent Cross-Section Measurements of Top-Quark Pair Production in Association with Gauge Bosons
###### Abstract
This article reviews recent cross-section measurements of \(\mathrm{t\bar{t}}\) production in association with a photon, \(W\) or \(Z\) boson at the Large Hadron Collider (LHC). All measurements reviewed use proton-proton (pp) datasets collected by the ATLAS and CMS experiments between 2016 and 2018 from collisions at a centre-of-mass energy of \(13\,\mathrm{TeV}\) during the LHC Run 2. Differential and inclusive cross-section measurements are discussed along with the constraints on the effective field theory operators accessible through each process. Finally, we discuss the potential for measurements of these processes at future colliders.
top quark; pair production; cross-section; EFT; \(\mathrm{t\bar{t}}\); LHC; CMS; ATLAS
## 1 Introduction
The top quark has several unique features that distinguish it from other Standard Model (SM) particles. With its electroweak (EW) scale mass of approximately \(172\,\mathrm{GeV}\) it is by far the most massive of the fundamental SM particles. This mass, along with an associated Yukawa coupling value close to unity, suggests it may have a special role in the EW symmetry-breaking mechanism. It also has a uniquely short lifetime of \(\mathcal{O}(10^{-25})\) seconds which prevents it from hadronising before it decays[1], making it the only quark for which it is possible to study bare quark properties via its decay products.
This unconventional particle provides us with a tool with which we can scrutinise predictions of SM parameters and test a plethora of Beyond the Standard Model (BSM) hypotheses. Several model-dependent searches for BSM physics look for deviations in top-pair production rates and could verify theoretical models that predict the existence of top super-partners, vector-like quarks or even Dark Matter. There are also many model-independent searches that use an effective field theory framework to search for anomalous couplings. Additionally, there are many measurements at the LHC for which SM top-quark processes are important backgrounds and therefore also benefit from improved measurements in the top sector.
In proton-proton collisions at the LHC, the dominant top-quark production mechanism produces top quarks in pairs via the QCD process \(\mathrm{gg}\to\mathrm{t\bar{t}}\). Due to the CKM matrix element \(|V_{tb}|\) being so large, the top-quark decays almost exclusively via the process \(\mathrm{t}\to\mathrm{bW}\). Thus, most top-quark pairs are produced via the interaction \(\mathrm{gg}\to\mathrm{t\bar{t}}\to\mathrm{bW^{+}bW^{-}}\). The \(\mathrm{t\bar{t}}\) process is often categorised according to the decay of the two \(W\) bosons. These categories are referred to as dileptonic, semi-leptonic or full hadronic, and are often studied independently due to the varying backgrounds and final state signatures.
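For orientation (these numbers are not taken from this article; they follow from the approximate W branching fractions \(\mathcal{B}(W\to\ell\nu)\approx 0.33\), summed over \(e\), \(\mu\) and \(\tau\), and \(\mathcal{B}(W\to q\bar{q}^{\prime})\approx 0.67\)), the three channels occur in roughly the proportions

\[\mathcal{B}_{\rm dilepton}\approx(0.33)^{2}\approx 0.11,\qquad\mathcal{B}_{\rm semi\text{-}leptonic}\approx 2\times 0.33\times 0.67\approx 0.44,\qquad\mathcal{B}_{\rm full\ hadronic}\approx(0.67)^{2}\approx 0.45.\]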
The focus of this article is on \(\mathrm{t\bar{t}}\) production in association with an additional gauge boson (\(\mathrm{t\bar{t}X}\)), as exemplified in Figure 1. More explicitly, the latest ATLAS and CMS cross-section measurements of \(\mathrm{t\bar{t}}\) production in association with either a photon (\(\gamma\)), \(W\) or \(Z\) boson. These measurements typically assume SM-like processes to obtain inclusive and differential cross-sections; however, several of them also provide interpretations using the Standard Model Effective Field Theory (SMEFT) framework [2; 3]. These processes provide a deep insight into the nature of the couplings in the top-quark interactions with gauge boson. The publications
discussed focus on cross-section measurements performed using datasets collected during the LHC Run 2, where high-energy (13 TeV) collisions and large datasets (approximately 140 fb\({}^{-1}\) integrated luminosity) make it possible to investigate these rare \(\mathrm{t\bar{t}}\) processes in more detail than ever before. The future of these measurements is also discussed, focusing on their potential at the HL-LHC and the main future collider candidates.
## 2 \(\mathrm{t\bar{t}Z}\) Measurements
Inclusive and differential measurements of the \(\mathrm{t\bar{t}Z}\) cross-section are interesting because they directly probe the coupling between the top quark and the neutral EW Z boson, also known as the t-Z coupling. Furthermore, several BSM theories [4; 5] also predict anomalous neutral EW top-quark couplings that can drastically change the amplitude and subsequently the measured cross-section. Such couplings have also been interpreted using an effective field theory (EFT) approach [6]. The attraction here is that the EFT approach provides a model-independent way to interpret possible deviations in a cross-section measurement from its SM value.
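In this approach (written here in its standard form rather than as it appears in any particular measurement), possible new-physics effects are parameterised by higher-dimensional operators suppressed by powers of a scale \(\Lambda\),

\[\mathcal{L}_{\rm SMEFT}=\mathcal{L}_{\rm SM}+\sum_{i}\frac{c_{i}}{\Lambda^{2}}\mathcal{O}_{i}^{(6)}+\ldots,\]

so that a deviation of the measured \(\mathrm{t\bar{t}Z}\) cross-section from its SM value can be translated into constraints on the Wilson coefficients \(c_{i}\) of the operators that modify the t-Z coupling.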
This process is also an important background for several SM measurements, for example single-top production in association with a Z boson, \(\mathrm{t\bar{t}H}\) and many BSM searches [7]. A precise measurement of the process is therefore beneficial to analyses looking to minimise the uncertainties associated with this process.
The first measurements of \(\mathrm{t\bar{t}Z}\) were performed by ATLAS and CMS at 8 TeV. Both collaborations have since measured this process using partial Run 2 datasets of 36.1 fb\({}^{-1}\) and 35.9 fb\({}^{-1}\), respectively, from 13 TeV collisions, where the production rate increases by approximately a factor of 4 [8; 9].
Events were selected with two or more leptons to simultaneously extract the \(\mathrm{t\bar{t}Z}\) and \(\mathrm{t\bar{t}W}\) production cross-sections. The 3 and 4 lepton categories are the most sensitive to the \(\mathrm{t\bar{t}Z}\) process. Observed and expected significance values in both experiments for the \(\mathrm{t\bar{t}Z}\) process are well above 5\(\sigma\) in these measurements. ATLAS measured the cross-section to be \(\sigma_{\mathrm{t\bar{t}Z}}=0.95\pm 0.08(\mathrm{stat})\pm 0.10(\mathrm{syst})\) pb while CMS measured a value of \(\sigma_{\mathrm{t\bar{t}Z}}=0.99^{+0.09}_{-0.08}(\mathrm{stat})^{+0.12}_{-0.10}(\mathrm{syst})\) pb. One can see that, due to the large dataset, the statistical uncertainty is dramatically reduced and the systematic uncertainty on this result is already of a similar size. CMS also provide the first limits on anomalous t-Z couplings with \(\mathrm{t\bar{t}Z}\) data using an effective field theory (EFT) framework. Typically, this process provides the tightest constraints on this coupling.
Figure 1: Leading-order Feynman diagram for gluon-gluon top-pair production (\(\mathrm{gg}\to\mathrm{t\bar{t}}\)) in association with a boson (\(\mathrm{X}\)).

Both collaborations [10; 11] now measure \(\mathrm{t\bar{t}Z}\) separately from \(\mathrm{t\bar{t}W}\) using Run 2 datasets of 139 fb\({}^{-1}\) and 77.5 fb\({}^{-1}\) for ATLAS and CMS, respectively. In both analyses, events with 3 or 4 isolated leptons (electrons or muons) are selected, targeting processes where one or both top quarks decay leptonically along with leptonic decays of the Z boson. Event and object quality requirements ensure the leptons are isolated and consistent with either the decay of a W boson (from the top-quark decay) or a Z boson. B-tagging algorithms are used to distinguish jets that originate from the hadronisation of b-quarks from those originating from light (up, down, strange or charm) quarks or gluons. Events are then further categorised according to the flavour and multiplicity of the jets in the event.
The ATLAS analysis selects events at detector level (using objects reconstructed from detector signals) with a minimum of two jets along with the aforementioned 3 or 4 lepton signature. Further signal region requirements are applied to maximise the sensitivity to ttZ production while ensuring enough signal events are retained to prevent the statistical uncertainty from becoming too large in the differential measurement. Additionally, control regions are defined to estimate background contributions from processes with prompt leptons from EW boson decays. Control region definitions can be found in Figure 2 where WZ/ZZ plus light jet processes dominate. The event yields from control regions are constrained by the observed data yields in these regions, which are then extrapolated to predict their contribution in the signal regions. WZ/ZZ plus b-jet production is not included in this method and are instead predicted directly using simulated templates which are included in the signal extraction procedure.
Another significant background contribution comes from processes where the selected lepton does not come from the prompt decay of a vector boson (a non-prompt or 'fake' lepton). This background mostly stems from dilepton processes where additional non-prompt leptons can originate from leptonically decaying heavy-flavour hadrons and/or jets that 'fake' a leptonic signature and are subsequently misidentified as leptons. The contribution from this background is estimated using the matrix method [12; 13], which relies on the different probabilities that prompt and fake leptons pass the identification, isolation and impact parameter requirements. All other background processes are estimated from simulation, normalised to the latest theoretical cross-section predictions [14; 15; 16].
In comparison, the latest CMS inclusive cross-section measurement employs a very similar detector-level event selection. The measurement selects events with 3 or 4 lepton signatures and at least one jet. Events are then categorised according to the number of leptons, light (up, down, strange and gluon) flavour jets and heavy (bottom) flavour jets. The background processes are the same and are grouped in a mostly identical manner. All background processes with prompt leptons are modelled using state-of-the-art simulation and normalised to the latest cross-section calculations. The normalisation of the WZ/ZZ plus jets processes is not extracted in the fit; instead, uncertainties are assigned to cover the difference between data and simulation in a dedicated control region. Backgrounds with fake/non-prompt leptons are estimated using the "fake rate" method, in which the rate at which fake leptons pass the lepton selection requirements is measured in control regions and then extrapolated to the signal regions.
Both analyses extract the inclusive cross-section through a simultaneous maximum likelihood fit of the predicted yields of the signal and background processes to data in the signal regions. The signal strength (\(\mu=\frac{\sigma^{best\,fit}}{\sigma^{SM}}\)) is a free parameter in the fit and uncertainties are included in the fit as nuisance parameters constrained by Gaussian functions. The ATLAS analysis simultaneously fits the data in the control regions, with the normalisations of the WZ/ZZ plus light-jets backgrounds treated as free parameters in the fit. The yields for the fitted simulation and data in the signal regions for both analyses can be seen in Figures 3 and 4.
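The essence of such a signal-strength extraction can be sketched with a simple binned Poisson likelihood. The yields below are invented for illustration, and the nuisance-parameter treatment and control-region terms of the real fits are omitted.

```python
import numpy as np
from scipy.optimize import minimize_scalar

signal_sm  = np.array([12.0, 30.0, 18.0, 6.0])   # SM signal prediction per bin (illustrative)
background = np.array([40.0, 25.0, 10.0, 4.0])   # total background per bin (illustrative)
observed   = np.array([55, 58, 29, 11])          # observed event counts (illustrative)

def neg_log_likelihood(mu):
    """Negative log of the product of Poisson probabilities over bins,
    with the signal scaled by the signal strength mu = sigma_fit / sigma_SM."""
    expected = mu * signal_sm + background
    return -np.sum(observed * np.log(expected) - expected)  # log(n!) terms dropped

result = minimize_scalar(neg_log_likelihood, bounds=(0.0, 5.0), method="bounded")
print(f"best-fit signal strength mu = {result.x:.2f}")
```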
The inclusive cross-section measured by ATLAS [11] from the combined fit in the 3 and 4 lepton signal regions, corresponding to a fiducial volume in which the Z-boson invariant mass lies between 70 and 110 GeV, is found to be
\[\sigma^{\rm pp\to t\bar{t}Z}_{\rm ATLAS}=0.99\pm 0.05(\rm stat.)\pm 0.08(\rm syst.)\,pb \tag{1}\]
where the dominant systematic uncertainties originate from the \(\rm t\bar{t}Z\) parton shower modelling, \(\rm tWZ\) background modelling and the identification.
Figure 3: ATLAS signal regions [11].
Figure 2: ATLAS control regions [11].
The CMS cross-section measurement [10] yielded a value of
\[\sigma_{\rm CMS}^{\rm pp\to t\bar{t}Z}=1.00^{+0.06}_{-0.05}({\rm stat.})^{+0.07}_{-0.06}({\rm syst.})\,{\rm pb} \tag{2}\]
The results are evidently in excellent agreement with one another and reasonable agreement with the SM theoretical prediction [15, 16] of
\[\sigma_{\rm theo.}^{\rm pp\to t\bar{t}Z}=0.88^{+0.09}_{-0.15}\,{\rm pb} \tag{3}\]
Several differential cross-section measurements investigate the kinematics of the t\(\bar{t}\)Z system. In general, these measurements are performed by first subtracting background estimates from the data and then implementing an unfolding procedure which removes detector effects from the data so it can be compared with theoretical predictions. Migration matrices are constructed as part of the method to ensure that resolution and acceptance effects are accounted for. The ATLAS measurement applies an iterative Bayesian unfolding to distributions defined using either particle- or parton-level objects. Particle-level objects are defined using the collection of stable particles from the full matrix element plus parton shower simulation, i.e., baryons and mesons. Parton-level objects are defined using the unstable particles before any hadronisation effects have been simulated, i.e., quarks and gluons.
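A minimal sketch of the iterative Bayesian (D'Agostini-style) unfolding idea is shown below. The migration matrix and counts are illustrative, and no treatment of uncertainties or of the choice of when to stop iterating is included.

```python
import numpy as np

def iterative_bayesian_unfold(measured, response, n_iter=4):
    """Sketch of D'Agostini-style iterative Bayesian unfolding.
    response[i, j] = P(reconstructed in bin i | true in bin j) (migration matrix).
    measured[i]    = background-subtracted detector-level counts."""
    n_true = response.shape[1]
    efficiency = response.sum(axis=0)                  # P(reconstructed | true bin j)
    truth = np.full(n_true, measured.sum() / n_true)   # flat prior as starting point
    for _ in range(n_iter):
        folded = response @ truth                      # expected detector-level spectrum
        posterior = response * truth / folded[:, None] # Bayes: P(true j | reco i)
        truth = (posterior * measured[:, None]).sum(axis=0) / efficiency
    return truth

# Toy 3-bin example with 10% bin-to-bin migrations
R = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.8]])
data = np.array([120.0, 200.0, 90.0])
print(iterative_bayesian_unfold(data, R))
```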
The fiducial volumes in which the measurements are made are defined using particle and parton-level objects, respectively, with a selection designed to be as close to the selection used in the inclusive measurement as possible. The background contributions are estimated in the same way as for the inclusive cross-section measurement. The \(\rm WZ/ZZ\) plus jets background normalisation is corrected using normalisation factors obtained in a fit of the inclusive cross-section, based on the 3 and 4 lepton regions. All backgrounds are subsequently subtracted from the data. Several observables are measured, with most resulting in agreement between the background subtracted, unfolded data and the NLO simulation with which it is compared. Figure 5 shows the agreement between the unfolded particle-level data distribution of the \(Z\)-boson transverse momentum and the four theoretical predictions.
Figure 4: CMS signal regions [10].
The differential cross-section measurement from CMS is performed in the same fiducial volume as defined for the inclusive measurement. Data are unfolded to parton level using the TUnfold package [21], which implements a least-squares fit with Tikhonov regularisation. The unfolded distribution of the Z-boson transverse momentum is shown in Figure 6 along with the prediction from the MadGraph5_aMC@NLO Monte Carlo simulation.
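The regularised least-squares idea behind such an unfolding can be sketched in closed form as follows. This is not the TUnfold implementation itself, and the response matrix, data and regularisation strength are illustrative.

```python
import numpy as np

def tikhonov_unfold(measured, response, tau=0.1):
    """Least-squares unfolding with Tikhonov (curvature) regularisation:
    minimise ||R t - m||^2 + tau^2 ||L t||^2, solved in closed form."""
    n = response.shape[1]
    # Second-derivative (curvature) operator used as the regularisation term
    L = np.zeros((n - 2, n))
    for k in range(n - 2):
        L[k, k:k + 3] = [1.0, -2.0, 1.0]
    A = response.T @ response + tau**2 * (L.T @ L)
    b = response.T @ measured
    return np.linalg.solve(A, b)

R = np.array([[0.8, 0.1, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.1, 0.8]])
data = np.array([150.0, 240.0, 180.0, 70.0])
print(tikhonov_unfold(data, R))
```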
CMS also provide an interpretation of the results in the context of the Standard Model Effective Field Theory (SMEFT) in the Warsaw basis. Anomalous couplings are parametrised by 59 independent Wilson coefficients (WCs) of mass dimension 6, of which 15 are relevant for top-quark interactions. Of these 15, processes involving t-Z interactions can provide competitive constraints on four Wilson coefficients: \(c_{tZ}\), \(c_{tZ}^{[I]}\), \(c_{\Phi t}\) and \(c_{\Phi Q}^{-}\). The first two can
Figure 5: Comparison of normalised unfolded particle- and parton-level distribution of the transverse momentum of the Z boson in observed data from ATLAS [11] with theoretical expectations obtained from different generators: the Sherpa 2.2.1 [17] generator at NLO QCD accuracy using either multi-leg or inclusive setups and MadGraph5_aMC@NLO [18] at NLO QCD accuracy interfaced with either the Pythia [19] or Herwig [20] parton shower models.
Figure 6: Comparison of normalised unfolded parton-level distribution of the transverse momentum of the Z boson in observed data from CMS [10] with theoretical expectations obtained from different generators: the Sherpa 2.2.1 generator at NLO QCD accuracy using either multi-leg or inclusive setups and MadGraph5_aMC@NLO at NLO QCD accuracy interfaced with either the Pythia or Herwig parton shower models.
induce anomalous EW dipole moments while the second two can induce anomalous neutral-current couplings. The values of these parameters affect the kinematics and normalisation of processes with such vertices and can therefore be probed using differential distributions of the ttZ process. Signal yield predictions for non-zero (and zero, i.e., the SM point) values of the anomalous couplings are simulated in an independent sample at LO accuracy. Ratios of the BSM and SM points in a two-dimensional parton-level plane of the \(p_{T}(Z)\) and \(\cos\theta_{Z}^{*}\) distributions are used to re-weight the nominal SM NLO ttZ sample. To validate this procedure, the distributions from the reweighted NLO SM sample and the dedicated LO BSM sample are compared at various points in the WC parameter space after the full event reconstruction and are found to be in agreement.
A binned likelihood function \(\mathcal{L}(\theta)\) is constructed from the product of Poisson probabilities and nuisance-parameter constraint terms for the bins in the differential distribution. The likelihood is maximised over the nuisance parameters at each point in the BSM parameter plane to find the point with the maximum likelihood. The test statistic is defined as
\[q=-2\log\left(\frac{\mathcal{L}(\hat{\theta})}{\mathcal{L}(\hat{\theta}_{max})}\right)\]
where \(\mathcal{L}(\hat{\theta})\) is the likelihood maximised over the nuisance parameters at a given BSM point and \(\mathcal{L}(\hat{\theta}_{max})\) is the maximised likelihood at the BSM point with the overall maximum likelihood. The test statistic \(q\) is shown for 1- and 2-dimensional scans of the WCs in Figure 7. For the 1-dimensional scan, all other WCs are fixed to their SM values. All results agree with the SM.
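The scan of such a test statistic can be illustrated as follows. The profiling of nuisance parameters is omitted for brevity, and the yields, whose dependence on a single Wilson coefficient is taken to be quadratic (an interference term linear in the coefficient plus a pure-BSM term), are purely illustrative.

```python
import numpy as np

def expected(c):
    """Illustrative yields: SM signal + linear interference + quadratic BSM term + background."""
    s0 = np.array([30., 20., 10.])
    s1 = np.array([4., 6., 8.])
    s2 = np.array([1., 2., 4.])
    background = np.array([50., 30., 15.])
    return s0 + c * s1 + c**2 * s2 + background

observed = np.array([82, 53, 27])

def nll(c):
    mu = expected(c)
    return -np.sum(observed * np.log(mu) - mu)   # Poisson log-likelihood, log(n!) dropped

scan = np.linspace(-2.0, 2.0, 201)
nll_values = np.array([nll(c) for c in scan])
q = 2.0 * (nll_values - nll_values.min())        # q = -2 log(L(c) / L(c_best))

# Approximate 68% / 95% CL intervals from q < 1 and q < 3.84 (1 degree of freedom)
print("68% CL:", scan[q < 1.0].min(), "to", scan[q < 1.0].max())
print("95% CL:", scan[q < 3.84].min(), "to", scan[q < 3.84].max())
```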
## 3 Simultaneous ttZ and tZq Measurements using Machine Learning Techniques
To probe the t-Z interaction even further, CMS has constructed a novel analysis [22] in which EFT effects on t-Z sensitive processes are targeted using bespoke machine learning algorithms. The analysis targets ttZ, tZq and tWZ processes with at least three leptons and uses multivariate algorithms to exploit the EFT effects in a multi-observable phase-space, creating observables which are optimally sensitive to the effects of EFT operators.
As with the aforementioned measurements, the focus of the measurement is on operators that can affect the couplings between third-generation quarks and EW vector bosons. Thus, the same operators are studied, but excluding the imaginary component of the complex Wilson coefficient \(c_{tZ}^{[I]}\) as it does not conserve CP. Two additional operators are studied, however: \(c_{tW}\), probing the t-W EW dipole moment, and \(c_{\Phi Q}^{3}\), which probes the left-handed SU(2) triplet current operator.
A multi-classifier is trained to discriminate between the signals and major backgrounds. Separate binary classifiers are trained to discriminate between events generated under the SM and BSM (non-zero WC values) hypotheses. Training datasets are constructed from events randomly sampled from the SM scenario (labelled as background) and BSM scenario (labelled as signal). A novel approach that parameterises the event weights as a 2nd-order polynomial in the Wilson coefficients is used [23]. This makes it possible to smoothly interpolate predictions of the yields in bins of kinematic distributions between the multitude of different combinations of WC values representing different EFT scenarios. It also allows the interference between EFT operator amplitudes and either other EFT or SM amplitudes to be taken into account in the simulation, making it possible to exploit the kinematic differences of the various scenarios using a neural network. Separate networks are trained for \(\mathrm{t\bar{t}Z}\) and tZq due to their largely different kinematics (tWZ is not explicitly targeted due to its smaller cross-section and similar kinematics to \(\mathrm{t\bar{t}Z}\)). Training is also performed separately for each operator, along with one training which targets all five operators simultaneously, allowing for a more global EFT interpretation. Post-fit distributions of the 1D and 5D EFT classifiers are shown in Figure 8. It is important to note that for larger WC values, the impact on the yield in the more signal-like bins grows stronger, demonstrating how effective these discriminators can be.
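The event-weight parameterisation can be sketched as follows for a single Wilson coefficient. This is not the implementation of Ref. [23]; the reference points and per-event weights are invented for illustration, and the key point is simply that three stored weights fix the quadratic polynomial exactly.

```python
import numpy as np

ref_points = np.array([0.0, 1.0, 2.0])            # c values at which weights are stored
# weights for 5 illustrative events at the three reference points (rows = events)
ref_weights = np.array([[1.00, 1.20, 1.80],
                        [1.00, 0.95, 1.10],
                        [1.00, 1.50, 2.60],
                        [1.00, 1.05, 1.30],
                        [1.00, 0.90, 1.00]])

# Solve the per-event Vandermonde system for (w0, w1, w2) in w(c) = w0 + w1*c + w2*c^2
V = np.vander(ref_points, N=3, increasing=True)   # columns: 1, c, c^2
coeffs = np.linalg.solve(V, ref_weights.T).T      # shape (n_events, 3)

def weights_at(c):
    """Smoothly interpolated event weights at an arbitrary value of c."""
    return coeffs @ np.array([1.0, c, c**2])

print(weights_at(0.5))   # reweight the sample to the EFT point c = 0.5
```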
The distributions of these NNs are fit to data in a maximum likelihood fit, where the likelihood is constructed in the same manner as was described in Section 2, to establish 68% and 95% confidence level (CL) intervals on the values of the WCs. Five 1D scans (one for each operator) of the likelihood are performed, maximising the likelihood in steps of the WC value while fixing the other WCs to zero. Two-dimensional and five-dimensional scans are also performed; however, the fit in these cases uses the NN trained on distributions sampled from simultaneous variations of the five WCs. The 95% CL intervals for the 1D and 5D fits are shown in Figure 9.
Figure 8: Post-fit distributions of the EFT neural networks in the ttZ and tZq signal regions from [22]. The top row shows the 5D discriminant while the bottom row shows the discriminant trained to target the effects of the \(c_{tZ}\) operator. The middle ratio plot demonstrates the data/MC agreement, while the lower ratio demonstrates the increasing impact on the yields in each bin from larger WC values.
Results of the 2D scans to compare with Figure 7 are shown in Figure 10. One can see very competitive results are obtained for common operators. All reported WC values agree with their expected SM value.
## 4 t\(\bar{\mathrm{t}}\)W Measurements
The t\(\bar{\mathrm{t}}\)W process is unique among processes in which the t\(\bar{\mathrm{t}}\) system is produced with an associated boson. At leading order the W boson can only be produced in the initial state, as is shown in Figure 11. The dominant contribution to the total amplitude is from quark-initiated processes. The W boson in fact polarises the incoming quarks and subsequently the top-quark pair, leading to an enhancement in the decay product asymmetry at LO, exemplifying the need to take special care of spin correlations in any simulation [24]. Furthermore, the dominance of the quark-initiated production also leads to the t\(\bar{\mathrm{t}}\)W\(\pm\) asymmetry, in which t\(\bar{\mathrm{t}}\)W\(+\) production dominates over t\(\bar{\mathrm{t}}\)W\(-\), and is sensitive to the parton density function (PDF) of the proton.
Fixed order calculations of t\(\bar{\mathrm{t}}\)W at NLO in QCD (\(\alpha_{S}^{3}\alpha\)) have existed for a long time [25] and have been matched to parton shower [26, 27], with NLO EW corrections (\(\alpha_{S}^{2}\alpha^{2}\)) coming later [16].
Persistent tensions between the measurements and predictions of the t\(\bar{\mathrm{t}}\)W cross-section have driven a lot of recent activity in the theory community. Calculations have become increasingly more sophisticated despite the many difficulties that arise when calculating the higher-order corrections for this process.
t\(\bar{\mathrm{t}}\)W production with an additional parton (e.g., t\(\bar{\mathrm{t}}\)Wj and t\(\bar{\mathrm{t}}\)Wjj) gives large contributions to the total cross-section, with large NLO corrections, as it introduces gluon-initiated production processes [28]. To merge the matrix elements of these processes with the parton shower machinery, dedicated studies have been performed, including an improved multi-leg matching scheme [29].
Figure 10: 95% CL confidence intervals for the 2D fits in [22].
Figure 9: 95% CL confidence intervals for the 1D and 5D fits in [22].
Calculations at NLO in QCD that account for the next-to-next-to-leading logarithmic (NNLL) [30] effects are now available as well as NLO QCD with NNLL effects with NLO EWK corrections [31; 32]. Sub-leading EW corrections (\(\alpha^{3}\alpha_{S}\)) to \(\mathrm{t\bar{t}W}\) have in fact been found to have a larger effect than expected (approximately 10%) [33; 34; 35], primarily due to contributions from amplitudes represented by top-W-boson scattering diagrams.
Recent work has also included calculations of the full NLO cross-section including fixed-order corrections and full LO spin correlations of decay products using POWHEG [35]. Some emphasis has also been put on the need for off-shell calculations, which culminated in full off-shell calculations at NLO in QCD [36; 37; 38], off-shell calculations incorporating NLO EWK corrections [39] and finally the development of procedures to incorporate off-shell effects into NLO+PS procedures [40].
As mentioned in Section 2, \(\mathrm{t\bar{t}W}\) inclusive cross-section measurements have in the past been extracted simultaneously with the \(\mathrm{t\bar{t}Z}\) cross-section due to the difficulties in disentangling these two rare processes. The previous measurements from CMS used data collected in 2016, selecting events with two or more leptons. Events selected with two leptons of the same-sign charge provide the most sensitivity to the \(\mathrm{t\bar{t}W}\) process. The inclusive cross-section was measured to be \(\sigma_{\mathrm{t\bar{t}W}}=0.77^{+0.12}_{-0.11}(\mathrm{stat.})^{+0.13}_{-0.12}(\mathrm{syst.})\) pb with an observed (expected) significance of 5.3 (4.5) standard deviations [9]. ATLAS made a similar measurement, extracting a cross-section value of \(\sigma_{\mathrm{t\bar{t}W}}=0.87\pm 0.13(\mathrm{stat.})\pm 0.14(\mathrm{syst.})\) pb and an observed (expected) significance of 4.3 (3.4) standard deviations [8].
With the full Run 2 dataset available CMS has performed a new analysis that independently measures the inclusive \(\mathrm{t\bar{t}W}\) cross-section in the two lepton (same-sign) and three or more lepton channels. Although the much larger dataset significantly reduces the statistical uncertainty, new techniques have been developed to reduce the systematic uncertainty from 16% in the 2016 measurement to 6%. One of the key developments was a new multivariate analysis (MVA) algorithm designed to distinguish between leptons from the decays of W bosons (prompt leptons) and leptons originating in either the decay of heavy quarks (b or c quarks) or misidentified hadronic jets (non-prompt leptons). Although non-prompt leptons are generally easy to distinguish from prompt leptons, when background processes are large enough, they will still produce many objects with lepton-like signatures, such that further steps are needed to reduce their contribution to a signal region. The non-prompt background in this analysis primarily stems from the \(\mathrm{t\bar{t}}\) process. The new MVA algorithm brings a large improvement in
Figure 11: Leading-order (top left) and next-to-leading-order (top right and bottom) Feynman diagram for the \(\mathrm{t\bar{t}W}\) process. The last diagram is an example of the sub-leading electroweak corrections.
the signal efficiency of the analysis compared with the cut-based identification method used in the previous iteration.
In the same-sign dilepton category, a multi-class deep neural network (DNN) is used to discriminate between signal and background using kinematic distributions of the jets and leptons in the event. The network is trained to distinguish between four processes: \(\mathrm{t\bar{t}W}\), non-prompt lepton backgrounds (modelled using \(\mathrm{t\bar{t}}\) simulation), \(\mathrm{t\bar{t}Z}\) or \(\mathrm{t\bar{t}H}\), and \(\mathrm{t\bar{t}\gamma}\). The distribution of the \(\mathrm{t\bar{t}W}\) output node provides an optimally discriminating variable.
A likelihood function is built from the Poisson probabilities to obtain the observed yields in bins of the discriminating variables in several event categories, with terms incorporating the various uncertainties and the correlations. A binned profile likelihood fit to the observed data is then performed using predicted signal and background distributions simultaneously in all event categories.
In the dilepton channel, events are categorised according to the selected leptons' flavour and charge. The DNN \(\mathrm{t\bar{t}W}\) output node is the discriminating observable that is then used in the fit. In the tri-lepton category, events are categorised according to their number of jets, medium b-tags and the charge of the selected leptons, and the tri-lepton mass \(\mathrm{m}(3\ell)\) is used as the fit observable.
The inclusive \(\mathrm{t\bar{t}W}\) production cross-section is measured to be \(\sigma_{\mathrm{t\bar{t}W}}=868\pm 40(\mathrm{stat})\)\({}^{+52}_{-50}(\mathrm{syst})\) fb [41], which is the most precise measurement to date. A breakdown of the cross-section measurement in the different channels is found in Figure 12, where it is compared with two theory predictions. The SM prediction at NLO+NNLL accuracy with \(\mathrm{FxFx}\) jet merging represents the latest theory prediction [29], giving a cross-section of \(\sigma_{\mathrm{t\bar{t}W}}^{\mathrm{theo}}=592^{+155}_{-96}(\mathrm{scale})\pm 12(\mathrm{PDF})\,\mathrm{fb}\). Measured and predicted cross-sections are within two standard deviations of one another. The central value of the measurement in data is approximately 1.5 times larger than the comparative theory prediction.
The dominant systematic uncertainties originate from the uncertainty on the luminosity determination, the background estimation of the electron charge misidentification rate and the b-jet identification. All these uncertainties have significantly reduced with respect to the last iteration.
A simultaneous measurement of the \(\mathrm{t\bar{t}W^{+}}\) and \(\mathrm{t\bar{t}W^{-}}\) cross-sections is performed. The results in Figure 13 show that the measured cross-sections are significantly lower than the theoretical prediction. A measurement of the ratio of these two cross-sections is also performed, as there are partial correlations between the systematic uncertainties of the two cross-sections that are reduced when measuring the ratio directly. This measurement, shown in Figure 14, is also lower than the theoretical prediction, but in agreement within the uncertainties.
Figure 12: Measurements of the inclusive \(\mathrm{t\bar{t}W}\) cross-section [41]. The combined result is shown with a breakdown of the measurement obtained in the different dilepton and tri-lepton channels, as well as the measurement obtained in the different lepton flavour categories of the dilepton channel. The black inner error bar indicates the statistical uncertainty, while the green outer error bar represents the full systematic plus statistical uncertainty. The measurements are compared with two SM predictions. The prediction shown by the black line is from Ref. [31] while the prediction represented by the red line comes from Ref. [29] and includes FxFx predictions. The central value of each prediction is represented by the vertical line, while the band represents the combined uncertainty from the scale and PDF theory variations in the calculation.
## 5 \(\mathrm{t\bar{t}\gamma}\) Measurements
Studies of the \(\mathrm{t\bar{t}\gamma}\) production process probe the behaviour of the \(\mathrm{t\gamma}\) electroweak coupling. The cross-section is sensitive to new physics that can occur via anomalous dipole moments of the top quark. Differential measurements provide additional sensitivity to such modifications, which may affect the spectra more or less strongly in a particular kinematic regime. Such measurements typically compare state-of-the-art theory predictions with data to stress-test the SM, and can be used to probe for BSM physics in a model-independent way.
The \(\mathrm{t\bar{t}\gamma}\) process is the rarest of the processes discussed in this review. Despite the small production cross-section, the associated production of a photon creates a very distinctive
Figure 14: Negative log-likelihood scan for values of the ratio of \(\mathrm{t\bar{t}W^{+}}\) and \(\mathrm{t\bar{t}W^{-}}\) cross-sections. The best fit value is found at the minimum of the curve, while the dashed horizontal lines represent the CL limits [41]. The red line and hatched band represent the central value and total uncertainty of the theory prediction without the FxFx merging in Ref. [31].
Figure 13: Contours showing the 68% and 95% CL intervals from the likelihood fit in which the \(\mathrm{t\bar{t}W^{+}}\) and \(\mathrm{t\bar{t}W^{-}}\) processes are measured simultaneously as independent parameters [41]. The best fit value of the fit is indicated by the black cross, with the theory prediction from Ref. [31] shown by the red cross. The theory prediction included is without the FxFx jet merging.
signature that manifests as an isolated energy deposit in the electromagnetic calorimeter without any associated tracks in the silicon tracker. This, along with several jets and leptons, facilitates a high purity event selection. As a result, evidence of this process was first seen by the CDF Collaboration in \(\sqrt{s}=1.96\,\mathrm{TeV}\) collisions [42]. It was subsequently observed at the LHC by the ATLAS Collaboration in \(\sqrt{s}=7\,\mathrm{TeV}\) proton-proton collisions [43] and has been measured by both ATLAS and CMS in \(\sqrt{s}=8\,\mathrm{TeV}\)[44; 45].
Both collaborations have now also measured this process using 13 TeV pp collisions. The first measurement at this energy scale was performed by the ATLAS collaboration in leptonic final states [46] using a luminosity of 36.1 fb\({}^{-1}\), which accounts for a subset of the full Run 2 dataset. Subsequently, measurements using the full Run 2 dataset of 138 fb\({}^{-1}\) were performed by CMS in the single-lepton [47] and dilepton [48] final states. Similarly, ATLAS uses the full Run 2 dataset of 139 fb\({}^{-1}\), targeting the dilepton (e\(\mu\)) [49] final state only.
The targeted signal in all analyses includes the processes shown in Figure 15, in which the photon originates not only from the top-quark decay but also from the charged fermions among the decay products of the top quark, and from the incoming partons. No attempt to differentiate between the sources is made, but requirements on the photon kinematics are implemented to suppress photons from the top-quark decay products.
ATLAS performed its latest dilepton measurement in the e\(\mu\) channel only, due to the clean final state and small background contribution. This enables an analysis strategy without having to implement complicated MVAs to discriminate signal from background thus simplifying the subsequent comparison with theoretical calculations. In particular, the analysis targets a comparison with the pp \(\to\) bWbW\(\gamma\) calculation in reference [50; 51]. The calculation includes all resonant and non-resonant diagrams, interference and off-shell effects of the top quarks and W bosons, meaning the signal considered combines both resonant t\(\bar{\mathrm{t}}\gamma\) and non-resonant tW\(\gamma\) production as demonstrated in Figure 16.
Figure 16: Leading-order Feynman diagram for the tW\(\gamma\) process. Red gauge boson lines represent W bosons while blue gauge boson lines represent photons [49].
Figure 15: Leading-order Feynman diagram for the t\(\bar{\mathrm{t}}\gamma\) process. Each diagram demonstrates a different production mechanism for the high energy photon in the process.
Each analysis defines its own signal region at the detector level where events are selected with exactly one photon, at least one b-tagged jet, and a channel-dependent number of leptons (electrons or muons) and jets. After the full event selection, the remaining backgrounds can be broadly categorised as coming from four sources, three of which originate from events in which the photon or the lepton has been misidentified. Each measurement then defines a fiducial volume using particle-level objects, except for the ATLAS dilepton e\(\mu\) measurement, which uses parton-level objects. A summary of the different fiducial volumes is shown in Table 1.
Events in which the selected photon candidate originates from a misidentified jet or non-prompt photon from the decay of a hadron make up the hadronic-fake background. The main process contributing to this background is \(\mathrm{t\overline{t}}\) where one of the jets in the final state is misidentified as a photon. All analyses use data-driven methods to derive scale factors in regions enriched with the hadronic-fake background which are then applied to the simulated hadronic-fake background prediction in the signal region.
Events in which the selected photon candidate originates from an electron make up the electron-fake background. This is the dominant background source in the dilepton channels. Electron-to-photon fake rates are measured using the tag-and-probe method in control regions using the \(\mathrm{Z}\to\mathrm{ee}\) process. The fake rate scale factors are determined by taking the ratio between the fake rate measured in the data and simulation in bins of \(p_{T}\) and \(\eta\).
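A minimal sketch of how such binned fake-rate scale factors could be derived and looked up is shown below. The rates and bin edges are illustrative, not those of the analyses.

```python
import numpy as np

# Illustrative electron-to-photon fake rates, binned in photon pT (rows) and |eta| (columns)
fake_rate_data = np.array([[0.020, 0.035],
                           [0.015, 0.030],
                           [0.012, 0.025]])
fake_rate_sim  = np.array([[0.018, 0.030],
                           [0.014, 0.026],
                           [0.011, 0.021]])

# Scale factors = ratio of the fake rate measured in data to that in simulation,
# applied per (pT, |eta|) bin to the simulated electron-fake background.
scale_factors = fake_rate_data / fake_rate_sim

# Illustrative bin edges used for the lookup
pt_edges  = np.array([20.0, 50.0, 100.0, np.inf])
eta_edges = np.array([0.0, 1.37, 2.37])

def scale_factor(pt, abseta):
    i = np.searchsorted(pt_edges, pt, side="right") - 1
    j = np.searchsorted(eta_edges, abseta, side="right") - 1
    return scale_factors[i, j]

print(scale_factor(35.0, 0.5))
```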
Additionally, the backgrounds in which one or more leptons result from either a jet or a non-prompt lepton from heavy-flavour decays (fake-lepton) are estimated directly from data, contributing mainly to the single-lepton channel. The main contribution to this background comes from SM processes in which jets are produced uniquely through the strong interaction i.e., QCD events. The photon in such events can be either prompt or fake. The background contributions from events with a prompt photon, excluding signal events and fake-lepton
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Experiment** & **Final State** & **Photon** & **Leptons (e/\(\mu\))** & **Jets** \\ \hline
ATLAS [46] & \(N_{\ell}=1\,(=2)\), \(N_{\gamma}=1\), \(N_{\mathrm{j}}\geq 4\,(\geq 2)\), \(N_{\mathrm{b}}\geq 1\) & \(p_{T}>20\,\mathrm{GeV}\) & \(p_{T}>25\,\mathrm{GeV}\), \(|\eta|<2.5\) & \(p_{T}>25\,\mathrm{GeV}\), \(|\eta|<2.5\) \\
ATLAS [49] (parton level) & \(N_{e}=1\), \(N_{\mu}=1\), \(N_{\gamma}=1\), \(N_{\mathrm{b}}=1\) & \(|\eta|<2.37\) & \(p_{T}>25\,\mathrm{GeV}\), \(|\eta|<2.5\) & \(p_{T}>25\,\mathrm{GeV}\), \(|\eta|<2.5\) \\
CMS [47] & \(N_{\ell}=1\), \(N_{\gamma}=1\), \(N_{\mathrm{j}}\geq 3\), \(N_{\mathrm{b}}\geq 1\) & \(p_{T}>20\,\mathrm{GeV}\), \(|\eta|<1.44\) & \(p_{T}>35\,\mathrm{GeV}\), \(|\eta|<2.37\) & \(p_{T}>30\,\mathrm{GeV}\), \(|\eta|<2.4\) \\
CMS [48] & \(N_{\ell}=2\,(\mathrm{OS})\), \(N_{\gamma}=1\), \(N_{\mathrm{j}}\geq 1\), \(N_{\mathrm{b}}\geq 1\) & \(p_{T}>20\,\mathrm{GeV}\), \(|\eta|<1.44\) & \(p_{T}^{1}>25\,\mathrm{GeV}\), \(|\eta|<2.4\) & \(p_{T}>30\,\mathrm{GeV}\), \(|\eta|<2.4\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Table outlining the fiducial selections made in each analysis. All selections are made on particle-level objects except for the ATLAS dilepton measurement. Additional requirements ensure leptons (in all cases, only electrons and muons are considered) are isolated and that the lepton energy incorporates that of radiated photons. Additional photon requirements also ensure isolation and that it does not originate from hadronic decay. Additional vetoes are applied to events in which leptons and photons are produced in close proximity. In particle-level selections, b-jets are defined using ghost-matching [52]. Leptons in the parton-level definition are required to come from the W-boson decay. Superscripts 1 and 2 refer to objects ordered by transverse momentum from highest to lowest.
backgrounds with a prompt photon, are estimated using simulated samples. These include \(\mathrm{W}\gamma\), \(Z\gamma\), single-top+\(\gamma\), diboson, and \(\mathrm{t\bar{t}V}\).
All the analyses discussed report inclusive and differential cross-sections measured in fiducial volumes defined according to the kinematics of the final state particles. Differential distributions of certain variables provide information on specific aspects of the \(\mathrm{t\bar{t}}\gamma\) process. Photon kinematics such as its \(p_{T}\) and \(\eta\) are sensitive to the coupling between the top quark and the photon. Distributions of the angular separation between the photon and the top quarks' decay products are sensitive to the origin of the photon. Furthermore, studying observables that do not involve the photon provides information on the \(\mathrm{t\bar{t}}\) system itself.
### Inclusive Cross-Section Measurements
The latest ATLAS measurement of the inclusive cross-section in the single-lepton channel also includes a simultaneous measurement in the dilepton channel [46]. This measurement was performed using a dataset of 36.1 fb\({}^{-1}\) collected in 2016, smaller than that used in the more recent ATLAS dilepton e\(\mu\) measurement [49] described later. A neural network is used to discriminate the \(\mathrm{t\bar{t}}\gamma\) signal from the backgrounds at detector level, and its output distribution is used as the input to a profile likelihood fit from which the fiducial cross-section is extracted. Several fits are performed, either fitting to the data in each channel independently or fitting to the data in both channels simultaneously. A correction factor for the signal efficiency and event migration into the fiducial region is also used when quoting the results. The measured inclusive fiducial cross-sections from [46] are found to be
\[\sigma_{fid}^{SL} =521\pm 9(\mathrm{stat})\pm 41(\mathrm{sys})\,\mathrm{fb}\] \[\sigma_{fid}^{DL} =69\pm 3(\mathrm{stat})\pm 4(\mathrm{sys})\,\mathrm{fb}\]
A breakdown of the results, normalised to their corresponding NLO SM predictions, can be seen in Figure 17. In the single-lepton channel, the dominant uncertainties are related to the jet energy scale and resolution as well as the background modelling, which is dominated by the \(\mathrm{t\bar{t}}\) modelling used to model the hadronic-fake and electron-fake backgrounds. In the dilepton channel the uncertainty is still dominated by the statistical uncertainty of the data, with the largest systematic uncertainty coming from the signal and background modelling, which is dominated by the \(Z\gamma\) modelling.
CMS performs a similar measurement of the inclusive cross-section in the single-lepton channel. The fiducial phase-space is defined at particle level and can be found in Table 1; it is the same for both the inclusive and differential measurements. Signal regions are defined at detector level and are designed to be as close as possible to the fiducial volume. Additionally, orthogonal control regions enriched in the major backgrounds are defined and used in a fit to data to constrain the associated uncertainties. The observed and expected yields in the signal and control regions, along with the systematic uncertainties, are used to construct a binned likelihood function. The likelihood fit used to extract the inclusive fiducial cross-section is performed separately from the one for the differential measurement. For the inclusive measurement, events in the signal and control regions are first categorised according to the flavour of the lepton. In the control regions, events are further categorised according to the photon transverse momentum, whereas in the signal regions the \(M_{3}\) variable is used. The \(M_{3}\) variable is the invariant mass of the three-jet combination with the largest vector \(p_{T}\) sum. Nuisance parameters are assigned to account for the normalisation of the misidentified-electron, \(Z\gamma\) and \(W\gamma\) backgrounds. The resulting fiducial inclusive cross-section [47] is found to be
\[\sigma(\mathrm{t\bar{t}}\gamma)=798\pm 7(\mathrm{stat})\pm 48(\mathrm{syst})\,\mathrm{fb}\]
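The \(M_{3}\) observable used above to categorise signal-region events can be computed with a short standalone function; the jet four-vectors below are illustrative.

```python
import itertools
import numpy as np

def m3(jets):
    """M3: invariant mass of the three-jet combination whose vector pT sum is
    largest. Each jet is given as (pt, eta, phi, mass); a standalone sketch."""
    def four_vector(pt, eta, phi, m):
        px, py = pt * np.cos(phi), pt * np.sin(phi)
        pz = pt * np.sinh(eta)
        e = np.sqrt(px**2 + py**2 + pz**2 + m**2)
        return np.array([e, px, py, pz])

    best_pt2, best_mass = -1.0, 0.0
    for trio in itertools.combinations(jets, 3):
        p = sum(four_vector(*j) for j in trio)
        pt2 = p[1]**2 + p[2]**2                               # (vector pT sum)^2
        mass2 = max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0)
        if pt2 > best_pt2:
            best_pt2, best_mass = pt2, np.sqrt(mass2)
    return best_mass

jets = [(95.0, 0.3, 0.1, 10.0), (70.0, -0.8, 2.5, 8.0),
        (45.0, 1.2, -1.9, 6.0), (30.0, 0.1, 2.9, 5.0)]      # illustrative jets (GeV)
print(f"M3 = {m3(jets):.1f} GeV")
```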
A breakdown of the inclusive measurement in the different channels can be seen in Figure 18. The leading systematic uncertainties according to their post-fit impact on the measured cross-section come from the normalisation of the \(W\gamma\) background, the non-prompt background estimation and the integrated luminosity estimation.
Figure 17: Inclusive \(\mathrm{t\bar{t}}\gamma\) production cross-section measurements by ATLAS in leptonic channels [46]. The NLO prediction from theory is shown in the dashed vertical line, with the uncertainty shown in the beige band. The measured values in data are represented by the black points, where the associated total and statistical uncertainties are shown in the red and blue lines, respectively. Results in each of the different lepton flavour channels are also shown.
To extract the inclusive \(\mathrm{t\bar{t}}\gamma\) fiducial cross-section in the dilepton channel, the CMS measurement uses a very similar strategy to the single-lepton case, making the two measurements easier to combine. The fiducial phase-space is defined at particle level, for which the full definition can be found in Table 1. A profile likelihood fit to the photon \(p_{T}\) distribution in data across the three data taking periods of Run 2 is performed. The resulting inclusive fiducial cross-section is found to be [48]
\[\sigma_{fid}=175.2\pm 2.5(\mathrm{stat})\pm 6.3(\mathrm{syst})\,\mathrm{fb}\]
This agrees with the predicted inclusive cross-section of
\[\sigma_{\mathrm{SM}}=155\pm 27\,fb\]
The predicted inclusive cross-section is about 12% (0.7 standard deviations) lower than the measurement. This is shown in Figure 19 along with the breakdown of the fit in the individual channels. However, the large theory uncertainties that impact the prediction from Madgraph make it difficult to draw strong conclusions on the agreement between the prediction and the unfolded data. The predicted cross-section is scaled to the NLO \(2\to 3\,\mathrm{pp}\to\mathrm{t}\bar{\mathrm{t}}\gamma\) process, but does not include processes in which the photon is radiated from the final state decay products of the top quark. This is one potential cause of the discrepancy between the results.
Figure 18: Inclusive \(\mathrm{t\bar{t}}\gamma\) production cross-section measurements by CMS in the single-lepton channel [47]. Results are also shown for the individual lepton flavour channels.
ATLAS measures dilepton fiducial cross-sections in the \(\mathrm{e}\mu\) final state using a profile likelihood fit to the \(S_{T}\) distribution (scalar sum of all transverse momenta in the event) in data. This variable provides good separation between the signal and backgrounds. The fiducial volume is defined in Table 1 and is the same for both the inclusive and differential cross-section measurements. The selection mimics that of the theory calculation with which the experimental results are compared [50; 51]. The inclusive cross-section is measured to be
\[\sigma_{fid}=39.6\pm 0.8\,(\mathrm{stat})^{+2.6}_{-2.2}\,(\mathrm{syst})\, \mathrm{fb}\]
Ref. [49]. This agrees with the dedicated theoretical calculation, which predicts a value of
\[\sigma_{fid}=38.50^{+0.56}_{-2.18}\,(\mathrm{scale})^{+1.04}_{-1.18}\,(\mathrm{ PDF})\,\mathrm{fb}\]
Refs. [50; 51]. As shown in Table 2, the cross-section measurements all agree with the predicted values at NLO within uncertainties when the branching ratios are taken into consideration. Differences in the fiducial cross-sections between the experiments stem from the differences in the fiducial volumes outlined in Table 1. In particular, the CMS single-lepton fiducial cross-section is measured to be much higher than that of ATLAS due to the inclusion of events with three jets and a looser \(\Delta R\) selection.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Experiment** & \(\mathrm{t}\bar{\mathrm{t}}\)**Decay Channel** & \(\sigma_{fid}^{\mathrm{t}\bar{\mathrm{t}}\gamma}\)**(fb)** \\ \hline CMS [47] & Single lepton & 798 \\ ATLAS [46] & Single lepton & 521 \\ CMS [48] & Dilepton & 175 \\ ATLAS [49] & Dilepton (\(\mathrm{e}\mu\)) & 39.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Table of the inclusive \(\mathrm{t}\bar{\mathrm{t}}\gamma\) production fiducial cross-section measurements from ATLAS and CMS.
Figure 19: Inclusive \(\mathrm{t\bar{t}}\gamma\) production cross-section measurements by CMS in the dilepton channel [48]. Results are also shown for both the combined measurement and the breakdown for the individual dilepton channels.
### Differential Cross-Section Measurements
CMS has reported differential \(\mathrm{t}\bar{\mathrm{t}}\gamma\) fiducial cross-sections in both the single-lepton [47] and dilepton [48] channels. The single-lepton publication reports differential fiducial cross-section measurements as a function of the photon \(p_{T}\), \(|\eta|\) and the angular separation between the lepton and the photon (\(\Delta R(\ell,\gamma)\)). Results were obtained simultaneously for the 3 and 4 jet regions, the lepton flavour channels, and the different data taking periods. The same control regions are used as in the inclusive measurement. After the profile likelihood fit, backgrounds are subtracted from the observable distribution in data, which is subsequently unfolded to particle level. The unfolded differential cross-section is defined in the same fiducial phase-space as the inclusive cross-section. Distributions of the unfolded observables are shown in Figure 20, where a comparison with simulations obtained using MadGraph5_aMC@NLO interfaced with three different parton shower algorithms is shown. In the bulk of the distribution, the dominant uncertainties are similar to those in the inclusive cross-section measurement. For \(p_{T}(\gamma)>120\) GeV, the uncertainties in the jet energy scale, photon identification efficiency and colour re-connection modelling are the largest sources of uncertainty.
In the dilepton channel, differential cross-sections are reported with respect to 12 observables that are unfolded to particle level in the same fiducial volume as the inclusive cross-section measurement. These are compared with two predictions using the MadGraph5_aMC@NLO event generator interfaced with two parton shower simulations: Pythia8 with the CP5 tune [53] and Herwig [54] v7.14 with the CH3 tune [55]. An example of the unfolded distribution of the transverse momentum of the photon at particle level in the dilepton channel is shown in Figure 21. No significant deviation between the measured distribution and either of the predictions is observed, but due to the size of the theory uncertainties it is once again difficult to come to a conclusion regarding their agreement.
Figure 20: Differential \(\mathrm{t}\bar{\mathrm{t}}\gamma\) production cross-section measurements by CMS in the single-lepton channel [47]. Results are also shown as a function of the transverse momentum of the photon at particle level.
ATLAS has reported differential cross-section measurements in both the leptonic [46] and dilepton (e\(\mu\)) [49] channels. To extract the distributions, no fit to data is performed. The major backgrounds are subtracted from the data using the estimates outlined earlier, after which detector effects are removed using an unfolding procedure applied to the observed detector-level distributions to obtain the true distribution of the signal at particle or parton level. The differential cross-section is normalised to unity, resulting in the distributions shown in Figure 22 [46]. Absolute differential distributions are also provided and can be found in the paper.
In the case of the dilepton (e\(\mu\)) channel, ATLAS measures differential cross-sections as a function of a similar set of variables described in the CMS dilepton measurement. Distributions are unfolded to parton level and can therefore be directly compared with the aforementioned theory prediction via both normalised and absolute differential cross-sections. Additionally, a comparison is made with two leading-order simulations using Madgraph interfaced with Pythia or Herwig. A comparison of the parton-level cross-section as a function of the photon \(p_{T}\) in simulation and the unfolded data are shown in Figure 23. In general, all distributions agree well; however, one trend that was recognised was that the NLO prediction tends to describe most distributions better than the LO prediction.
Figure 21: Distribution of the absolute production cross-section of \(\mathrm{t\bar{t}}\gamma\) in the dilepton channel as a function of the \(p_{T}\) of the photon, as measured by the CMS experiment [48]. Observed data unfolded to particle level is compared with the predicted distribution from the Madgraph generator with two different parton shower models. Theoretical uncertainties evaluated using the Pythia 8 prediction are shown in the shaded grey bands.
### EFT Interpretations
The CMS measurements also provide limits on Wilson coefficients that induce electroweak dipole moments
\[c_{tZ} = \mathrm{Re}\left(-\sin\theta_{W}\,C_{uB}^{33}+\cos\theta_{W}\,C_{uW}^{33}\right)\] \[c_{tZ}^{I} = \mathrm{Im}\left(-\sin\theta_{W}\,C_{uB}^{33}+\cos\theta_{W}\,C_{uW}^{33}\right)\]
Figure 23: Distribution of the absolute production cross-section of \(\mathrm{t\bar{t}}\gamma\) in the e\(\mu\) channel as a function of the \(p_{T}\) of the photon, as measured by the ATLAS experiment [49]. Observed data unfolded to parton level is compared with the predicted distribution from the theoretical prediction from [50; 51]. The systematic and statistical uncertainties are shown in the grey bands.
Figure 22: Normalised differential cross-section as a function of the photon transverse momentum [46]. Unfolded distributions are compared with predictions using MadGraph5_aMC@NLO + Pythia8 together with up and down variations of the Pythia8 A14 tune parameters, MadGraph5_aMC@NLO + Herwig7 and POWHEG + Pythia8 \(\mathrm{t\bar{t}}\), where the photon radiation is modelled in the parton shower.
A maximum likelihood fit using the \(\mathrm{p_{T}}(\gamma)\) spectrum, which is sensitive to such modifications, is performed to obtain 68% and 95% CL intervals on the targeted coefficients. The fit is performed in the signal regions only. The intervals for a given Wilson coefficient are obtained by either fixing the other WC to its SM value (1D), or simultaneously profiling the two WCs (2D). The results of both tests are shown in Figure 24. No deviation from the SM values is observed. The 1D scans show more stringent intervals than the \(\mathrm{t\bar{t}Z}\) measurements. This is partially because models with non-zero WC values predict a harder \(\mathrm{p_{T}}(\gamma)\) spectrum, which is not observed in the tails of the data distribution. The precision with which CMS can reconstruct photon kinematics is a major contributing factor to this measurement's ability to improve upon the latest limits.
The CMS dilepton measurement performs a profile likelihood fit in the same way as the inclusive measurement, to obtain the best fit values for the Wilson coefficients probed. A combined profile likelihood fit is also performed with the single-lepton analysis. Although the dilepton channel benefits from a higher purity of signal, the single-lepton channel profits from a higher number of signal events with a high \(p_{T}\) photon, making it sensitive to modifications in the kinematics of the photon caused by anomalous Wilson coefficient values. The 1D and 2D scans of the Wilson coefficients in both the dilepton and combined fits can be found in Figure 25. No sign of anomalous couplings is observed. A comparison with the constraints from other measurements is also shown in Figure 25. The results in this publication provide the best limits to date on the \(c_{tZ}\) and \(c_{tZ}^{I}\) Wilson coefficients, as shown in Figure 26.
Figure 24: Best fit values for the explored EFT Wilson coefficients by CMS in the single-lepton channel [47]. Both the 1D and 2D scans are shown.
Figure 25: Distributions of the observed (solid line) and expected (dashed line) negative log-likelihood difference from the best fit value for the one-dimensional and two-dimensional scans of the studied Wilson coefficients. The results are obtained from the fit to data using the photon \(p_{T}\) distribution. The plots shown here are from the combination of the single lepton and dilepton analyses [48].
## 6 Measurements at the HL-LHC and Future Colliders
Cross-section measurements of rare \(\mathrm{t}\overline{\mathrm{t}}+X\) processes are incredibly useful probes of top-quark couplings to gauge bosons and are therefore a key ingredient to furthering our knowledge at the high energy frontier. Anomalous couplings are predicted by several BSM theories including composite Higgs models, models with extra dimensions and those predicting vector-like quarks [4; 56; 57].
The LHC has already produced a sizeable sample of rare top-quark processes, which has been used to measure the cross-sections for several \(\mathrm{t}\overline{\mathrm{t}}+X\) processes with an uncertainty that is considered to be on the cusp of what is commonly referred to as a 'precision measurement'. The large dataset from the HL-LHC will cement these measurements in the precision regime and allow more precise probes of anomalous couplings affecting these amplitudes. Extrapolations of current measurements to future datasets and accelerators provide estimates of what might be achieved, help to establish physics goals and highlight the improvements required to achieve them.
Although the HL-LHC will provide a huge rare top dataset, enhancing the boosted regime in particular, it is also interesting to look towards machines planned even further in the future. Several machines fall into this category and are typically designed to push the precision frontier through the clean environment provided via lepton collisions, or the energy frontier through the high energies achievable at large circular hadron colliders. Results from future lepton colliders are particularly interesting in the context of this article as ultra-precise measurements of top-quark EW interactions will be achievable. Future colliders, of both hadrons and leptons, at or above the energy frontier (\(\geq\)10 TeV) have the potential to improve the sensitivity of
Figure 26: Comparison of observed 95% CL intervals for the two Wilson coefficients, \(c_{tZ}\) (**upper panel**) and \(c_{tZ}^{I}\) (**lower panel**), from CMS measurements of \(\mathrm{t}\overline{\mathrm{t}}Z\), \(\mathrm{t}\overline{\mathrm{t}}\gamma\) single lepton and \(\mathrm{t}\overline{\mathrm{t}}\gamma\) dilepton. The results are shown from the one-dimensional scans, i.e., all other Wilson coefficients have values set to zero. The dashed lines indicate the results from the combination with the single-lepton channel. In the case of the global fit and the \(\mathrm{t}\overline{\mathrm{t}}Z+tZq\) fit, the solid lines represent the result where all Wilson coefficients are fixed to zero, whereas the dashed lines show the results from the marginalised limits. The tightest constraint to date on these Wilson coefficients comes from the combination of the \(\mathrm{t}\overline{\mathrm{t}}\gamma\) single-lepton and dilepton channels [48].
Standard Model EFT (SMEFT) fits to new physics, particularly to four-fermion operators for which there is a strong increase in sensitivity at higher energies.
To fully harness the power of precision measurements in a truly model-independent search for new physics, it is best to take a global approach to SMEFT fits [58]. This requires combining the broadest possible dataset in a high-dimensional fit that takes many operators affecting several SM processes into account. Several of these operators are of particular interest given the scope of this article, namely the operators affecting top EW couplings. So far, this article has only discussed measurements of SMEFT parameters using \(\mathrm{t\bar{t}}Z/\gamma\); however, operators can affect several SM processes in many different ways, and hence a global fit of these operators using many processes can provide important constraints.
The outlook for measurements of EFT parameters affecting top EW couplings has in fact been studied and reported in several publications. A comparison of the expected 95% confidence intervals for several EFT operators, using the LHC Run 2 dataset and the extrapolated values using the HL-LHC dataset, is shown in Figure 27. The figure shows the results from a global EFT fit performed in Ref. [59]. A full list of analyses included in the global fit can be found in Table 3. It should be noted that, although the HL-LHC data are shown to bring an improvement to the global fits of almost all of the operators in question (Figure 27), the individual 95% confidence intervals on the operators \(C_{\varphi Q}^{-}\) and \(C_{\varphi Q}^{3}\) are not enhanced. This is due to their reliance on the legacy \(e^{+}e^{-}\to\mathrm{b}\overline{\mathrm{b}}\) measurements of \(R_{b}\) and \(A_{FB,LR}^{bb}\) at the Z-pole from LEP and SLC. The inclusion of the Tevatron s-channel single-top measurement provides complementary constraining power on these operators; at the time of writing this article, it remains the most sensitive measurement of this process, including in comparison with measurements at the LHC.
Not all processes used in the global fit are relevant for this article; however, the plot highlights the importance of \(\mathrm{t\bar{t}}+X\) measurements at present and in the future. All projections (including those for the lepton colliders discussed later) are based on approximations similar to the 'S2' scenario used in projections of Higgs boson measurements [60], where many statistical and experimental uncertainties scale as \(\frac{1}{\sqrt{L_{int}}}\), with \(L_{int}\) representing the integrated luminosity. With respect to the uncertainties at the end of LHC Run 2, the complete HL-LHC programme assumes that experimental uncertainties will be reduced by a factor of five, while theory and modelling uncertainties are reduced by a factor of two. The reduction in theory uncertainties assumes that N\({}^{2}\)LO calculations will be achieved for the rare top processes and that large steps forward in Monte Carlo modelling are made in the next 10 years, ready for when the new colliders are expected to start running.
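The luminosity scaling behind such projections amounts to simple arithmetic, sketched below with illustrative uncertainty values rather than those of the actual projection in Ref. [59].

```python
import numpy as np

lumi_run2, lumi_hllhc = 140.0, 3000.0    # integrated luminosities in fb^-1
# Illustrative relative uncertainties at the end of Run 2
stat_run2, syst_exp_run2, syst_theo_run2 = 0.05, 0.06, 0.08

scale = np.sqrt(lumi_run2 / lumi_hllhc)   # 1/sqrt(L_int) scaling of statistical terms
stat_hllhc = stat_run2 * scale
syst_exp_hllhc = syst_exp_run2 / 5.0      # experimental systematics reduced by ~5
syst_theo_hllhc = syst_theo_run2 / 2.0    # theory/modelling uncertainties halved

total = np.sqrt(stat_hllhc**2 + syst_exp_hllhc**2 + syst_theo_hllhc**2)
print(f"projected total relative uncertainty: {total:.3f}")
```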
This study highlights the need for further advances in theoretical calculations and modelling for HL-LHC measurements where, according to the current 'S2' model for projections, theory uncertainties will for the first time dominate over experimental and statistical sources. The current state of the art for the theory predictions of the relevant processes, as well as the improvements desired by the experimental community, is summarised here.
The latest \(\mathrm{t\bar{t}}+W\) calculations have been discussed at length in the relevant section of this article, as the area is particularly active. To summarise, the latest calculations have been performed using perturbative matrix-element calculations with precision up to NLO in QCD and include additional next-to-next-to-leading log (NNLL) effects [30], as well as predictions at NLO+NNLL in QCD with NLO EW corrections. Full off-shell calculations up to NLO in QCD [36; 37; 38] have also recently been developed, now with the possibility to combine NLO EW and QCD corrections to off-shell \(\mathrm{t\bar{t}W}\)[39] and a procedure to apply the full off-shell corrections within the NLO+PS setup [40]. Future NNLO calculations could bring a factor of two improvement in the precision of the calculation.
NLO QCD calculations of \(\mathrm{t\bar{t}\gamma}\) have been available for a while [50]. The most recent NLO calculation was in fact for the process \(\mathrm{t\bar{t}\gamma}+tW\gamma\)[51] in the \(\mathrm{e\mu}\) final state. The inclusion of NNLO QCD corrections in a full \(\mathrm{t\bar{t}\gamma}\) calculation will become necessary if the full potential of the data at the HL-LHC is to be exploited.
The latest \(\mathrm{t\bar{t}Z}\) cross-section calculation is of NLO QCD+EW precision. This not only takes into account the \(Z/\gamma\) interference, but also includes the off-shell \(\mathrm{t\bar{t}\gamma^{*}}\) contributions. The theory uncertainty in this calculation is \({}^{+0.09}_{-0.10}\) [14; 15; 16]. This is mainly a result of uncertainties in the proton PDF, the QCD scales and \(\alpha_{S}\). The measurements in Section 2 show that the total systematic uncertainty of the inclusive and differential cross-section measurements is already very close to this. A more precise theory calculation in the future would have a great impact on the achievable precision of future EFT measurements sensitive to effects from the \(\mathcal{O}_{tZ}\) operator.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Process** & **Observable** & \(\sqrt{s}\) & **Luminosity** & **Experiment** \\ \hline \(\mathrm{pp}\to\mathrm{t\bar{t}}\) & \(\frac{d\sigma}{dm_{\mathrm{t}}}\) & 13 TeV & 140 fb\({}^{-1}\) & CMS \\ \(\mathrm{pp}\to\mathrm{t\bar{t}}\) & \(\frac{d\Lambda_{S}}{dm_{\mathrm{t}}}\) & 13 TeV & 140 fb\({}^{-1}\) & ATLAS \\ \(\mathrm{pp}\to\mathrm{t\bar{t}}H+tHq\) & \(\sigma\) & 13 TeV & 140 fb\({}^{-1}\) & ATLAS \\ \(\mathrm{pp}\to\mathrm{t\bar{t}Z}\) & \(\frac{d\sigma}{d\bar{t}^{\prime}}\) & 13 TeV & 140 fb\({}^{-1}\) & ATLAS \\ \(\mathrm{pp}\to\mathrm{t\bar{t}\gamma}\) & \(\frac{d\sigma}{d\bar{t}^{\prime}}\) & 13 TeV & 140 fb\({}^{-1}\) & ATLAS \\ \(\mathrm{pp}\to tZq\) & \(\sigma\) & 13 TeV & 77.4 fb\({}^{-1}\) & CMS \\ \(\mathrm{pp}\to\mathrm{t\gamma q}\) & \(\sigma\) & 13 TeV & 36 fb\({}^{-1}\) & CMS \\ \(\mathrm{pp}\to\mathrm{t\bar{t}W}\) & \(\sigma\) & 13 TeV & 36 fb\({}^{-1}\) & CMS \\ \(\mathrm{pp}\to\mathrm{t\bar{b}}\) (s-chan) & \(\sigma\) & 8 TeV & 20 fb\({}^{-1}\) & LHC \\ \(\mathrm{pp}\to tW\) & \(\sigma\) & 8 TeV & 20 fb\({}^{-1}\) & LHC \\ \(\mathrm{pp}\to\mathrm{tq}\) (t-chan) & \(\sigma\) & 8 TeV & 20 fb\({}^{-1}\) & LHC \\ \(\mathrm{pp}\to Wb\) & \(F_{0},F_{L}\) & 8 TeV & 20 fb\({}^{-1}\) & LHC \\ \(\mathrm{p\bar{p}\to\mathrm{t\bar{b}}}\) (s-chan) & \(\sigma\) & 1.96 TeV & 9.7 fb\({}^{-1}\) & Tevatron \\ \(e^{+}e^{-}\to\mathrm{b\bar{b}}\) & \(R_{b},A^{nb}_{FBLR}\) & 91 GeV & 202.1 pb\({}^{-1}\) & LEP/SLC \\ \hline \hline \end{tabular}
\end{table}
Table 3: Measurements included in the top-quark EW sector EFT fit [59]. The table includes the process, observable, centre-of-mass energy, integrated luminosity and experiment for each measurement. Where the experiment is cited as LHC, a combination of ATLAS and CMS measurements were used. Where Tevatron is cited, a combination of CDF and D0 results were used. LEP/SLD refers to different experiments from these two accelerators.
Figure 27 shows the Wilson coefficients for several EFT operators along the x-axis. \(\mathrm{t\bar{t}}+X\) processes are sensitive to the first six couplings from the left. The remaining couplings often affect top-pair production via QCD mechanisms and can be investigated more precisely using other \(\mathrm{t\bar{t}}\) processes. Differential measurements of \(\mathrm{t\bar{t}}Z\) and \(\mathrm{t\bar{t}}\gamma\) as a function of the Z boson or photon transverse momentum, respectively, are essential probes of the effects of the \(\mathcal{O}_{tZ}\) operator. With increasing statistics, several rare top processes could be measured to much greater precision. Notably, precise differential measurements of \(\mathrm{t\bar{t}}W\) would provide essential information on this key background to measurements of \(\mathrm{t\bar{t}}H\) and four-top production in multi-lepton final states, to name but a few.
Across all selected operators, an improvement by a factor of two to four over the current Run 2 limits is expected with the HL-LHC dataset, both for the individual and the marginalised bounds. The exceptions to this are the individual bounds on \(C_{\varphi Q}^{-}\) and \(C_{\varphi Q}^{3}\), which depend strongly on the bounds from the \(\mathrm{Zb\bar{b}}\) measurements at the Z-pole. Sensitivity to operators affecting EW couplings could be dramatically improved in the future through the collection and analysis of large datasets in the boosted regime [61]. A further interesting insight from this reference is that, although they are not included in the fits performed in that document, the two-quark-two-lepton (\(\mathcal{O}_{qq\ell}\)) operators can be probed at the LHC and beyond, and by including analyses targeting, for instance, the off-Z-peak dilepton invariant-mass region in \(\mathrm{t\bar{t}}\ell^{+}\ell^{-}\), the sensitivity of EFT fits can be enhanced.
Although the HL-LHC provides a much larger dataset with which to study EW couplings, the processes that provide the most sensitivity remain \(\mathrm{pp\to t\bar{t}Z}\) and \(\mathrm{pp\to t\bar{t}\gamma}\). Future lepton colliders provide an excellent opportunity to perform high-precision tests for anomalous EW couplings affecting top-quark pair processes. One of the benefits of \(e^{+}e^{-}\) machines is that once the centre-of-mass energy exceeds twice the top mass, the dominant \(\mathrm{t\bar{t}}\) production mechanism becomes \(e^{+}e^{-}\to Z/\gamma\to\mathrm{t\bar{t}}\), providing direct access to the top-quark EW couplings in a very clean environment. Furthermore, lepton colliders can distinguish the coupling between
Figure 27: Comparison of expected 95% confidence intervals on Wilson coefficients for dimension-six operators affecting top-quark production and decay measurements using the LHC Run 2 dataset and the HL-LHC dataset [59]. Only linear terms proportional to \(\Lambda^{-2}\) are accounted for in the dependence of the observables on the Wilson coefficients. The solid bars show the constraints from the single-parameter fits, while the translucent bars show the marginalised constraints from the global fit.
the top quark and photon from the top-quark coupling with a \(Z\) boson. At circular lepton colliders, this is facilitated via a measurement of the final-state polarisation in semileptonic top-quark decays, whereas at a linear collider this can be done using different beam polarisation configurations [62; 63; 64; 65].
Figure 28 compares expected limits on the different EFT operator coefficients using the HL-LHC dataset combined with data taken in the final stages of four different future lepton colliders: the CEPC, FCC, ILC and CLIC [59]. Important information on the different working configurations of each future machine is shown in Table 4. Though not as important for the processes and operators discussed here, it is worth noting that the different runs have different centre-of-mass energies above the top-quark pair-production threshold, which can be used to disentangle the four-fermion \(e^{+}e^{-}\mathrm{t}\overline{\mathrm{t}}\) operator coefficients from the two-fermion operator coefficients. This is because the contributions of the four-fermion operators scale quadratically with energy, whereas those of the two-fermion operators either remain constant or grow linearly. Given that the energies above threshold in the circular-collider scenarios are very close to one another, this disentanglement is more difficult with such machines.
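As a toy numerical illustration of this disentanglement argument (not taken from Ref. [59]; the sensitivities, energies and coefficient values below are invented purely for demonstration), one can model the fractional cross-section deviation as a constant two-fermion piece plus a four-fermion piece growing quadratically with the centre-of-mass energy, and check that two well-separated energies yield a well-conditioned linear system, whereas two nearly equal energies do not:

```python
import numpy as np

# Toy model: fractional cross-section deviation at centre-of-mass energy E,
#   delta(E) = a2 * c2 + a4 * (E / E0)**2 * c4,
# where c2 is a two-fermion coefficient (flat in energy) and c4 a
# four-fermion coefficient whose effect grows quadratically with energy.
E0 = 0.5               # reference energy in TeV (hypothetical normalisation)
a2, a4 = 0.10, 0.05    # hypothetical sensitivities

def deviation(E_TeV, c2, c4):
    return a2 * c2 + a4 * (E_TeV / E0) ** 2 * c4

# Pretend we "measure" deviations at two well-separated linear-collider energies.
true_c2, true_c4 = 0.8, -0.3
energies = np.array([0.5, 1.0])
measured = deviation(energies, true_c2, true_c4)

# Two measurements at well-separated energies give a well-conditioned system.
design = np.column_stack([np.full_like(energies, a2),
                          a4 * (energies / E0) ** 2])
c2_fit, c4_fit = np.linalg.solve(design, measured)
print(f"recovered c2={c2_fit:.2f}, c4={c4_fit:.2f}")

# With two nearly equal energies (circular-collider-like), the system is
# close to degenerate and the two coefficients can no longer be separated cleanly.
energies_close = np.array([0.350, 0.365])
design_close = np.column_stack([np.full_like(energies_close, a2),
                                a4 * (energies_close / E0) ** 2])
print("condition numbers:", np.linalg.cond(design), np.linalg.cond(design_close))
```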
The data from circular colliders (FCC-ee and CEPC), operating at centre-of-mass energies equal to and slightly above the threshold, are expected to improve the HL-LHC constraints on the bottom and top operators by a factor of 2 to 5 for several two-fermion operators. The constraining power on four-fermion operators is limited by the energy reach. The data for the linear colliders (ILC and CLIC) were simulated at two centre-of-mass energies above the threshold and provide impressive constraints on all operators. As mentioned, it is due to these different collision energies that even the bounds on the four-fermion operators become competitive once the centre-of-mass energy surpasses 1 TeV.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Machine** & **Polarisation** & **Energy** & **Luminosity** \\ \hline ILC [66] & P(\(e^{+},e^{-}\)): (\(\pm 30\%\),\(\mp 80\%\)) & 250 GeV & \(2ab^{-1}\) \\ & & 500 GeV & \(4ab^{-1}\) \\ & & 1 TeV & \(8ab^{-1}\) \\ CLIC [67] & P(\(e^{+},e^{-}\)): (\(\pm 30\%\),\(\mp 80\%\)) & 380 GeV & \(1ab^{-1}\) \\ & & 1.4 TeV & \(2.5ab^{-1}\) \\ & & 3 TeV & \(5ab^{-1}\) \\ FCC-ee [68] & & Z-pole & \(150ab^{-1}\) \\ & Unpolarised & 240 GeV & \(5ab^{-1}\) \\ & & 350 GeV & \(0.2ab^{-1}\) \\ & & 365 GeV & \(1.5ab^{-1}\) \\ CEPC [68] & & Z-pole & \(57.5ab^{-1}\) \\ & Unpolarised & 240 GeV & \(20ab^{-1}\) \\ & & 350 GeV & \(0.2ab^{-1}\) \\ & & 360 GeV & \(1ab^{-1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Table of the working configurations for several future \(e^{+}e^{-}\) colliders from Ref. [59]. The machines listed in the table are: the International Linear Collider (ILC), the Circular Electron–Positron Collider (CEPC), the Compact Linear Collider (CLIC) and the Future Circular Lepton Collider (FCC-ee). The polarisation, energy and luminosity for 3 to 4 different running stages are listed along with references to the relevant documentation.
Looking further ahead, collisions at higher centre-of-mass energies (beyond 10 TeV) could be achieved with, for example, a 100 km hadron collider, a linear electron-positron collider or a compact circular muon collider [69; 70; 71]. As was alluded to earlier, the energy-growing sensitivity of the global SMEFT fits to new physics, especially through four-fermion operators, makes measurements at such machines invaluable. Given the absence of new physics signals, model-independent searches such as this provide one of the best chances of finding deviations from the SM and guiding the future of HEP.
## 7 Conclusions
The top quark is a unique particle in the known universe and, while there are many priorities for high energy physics research, its distinctive features suggest it may have a special role in the SM. Therefore, understanding the top quark with absolute clarity remains a top priority for high energy physics experiments. The absence of new resonant particles has driven the development of novel methods to detect the presence of new physics, including indirect searches looking for anomalous couplings involving SM particles using EFTs. Such measurements require immense precision and a wealth of data. This has been the case for several years regarding the dominant QCD top-pair production mechanism. However, over the coming years, several rare top-pair processes will enter this regime, providing essential probes of anomalous couplings and new insights into where to look for this elusive new physics.
\(\mathrm{t\bar{t}}Z/\gamma\) measurements have had sub-10% precision for some time and have provided differential measurements and constraints on the relevant EFT operators, while the most recent \(\mathrm{t\bar{t}W}\) measurements have a precision of around 7%, though unfortunately no differential or EFT measurement had been performed at the time of writing this article. With an influx of more data from Run 3 and beyond, the collected dataset of all the rare top-pair processes will be large enough to perform both differential and EFT measurements. Additionally, as we increase
Figure 28: Comparison of expected 95% confidence intervals combining data from the HL-LHC with data from several proposed lepton collider experiments [59]. \(q\overline{q}\)tt and \(C_{LG}\) coefficients are not shown in the figure as \(e^{+}e^{-}\) collider measurements provide no additional sensitivity; however, all operators are included in the global fit. The solid bars show the constraints from the single-parameter fits, while the translucent bars show the marginalised constraints from the global fit. N.B. the label HL-LHC+CC refers to the addition of FCC results.
the dataset size, the boosted regime will become more populated and, due to energy-growing effects in certain EFT operators, such regimes will become much more important and provide complementary constraints.
EFT measurements are becoming increasingly important going forward, allowing us to scrutinise the SM and to use the power of precision measurements across diverse datasets to probe a wide range of operators in a model-independent manner, performing comprehensive searches for new physics. It is clear from the projections that, as we look towards the HL-LHC, the achievable constraints on EFT parameters in the top EW sector become 2-4 times stronger. However, these constraints will grow even stronger at future lepton colliders, which show further improvements of a factor of 2-5.
This research received no external funding. The authors declare no conflict of interest.
|
2308.08713 | Decoding Emotions: A comprehensive Multilingual Study of Speech Models
for Speech Emotion Recognition | Recent advancements in transformer-based speech representation models have
greatly transformed speech processing. However, there has been limited research
conducted on evaluating these models for speech emotion recognition (SER)
across multiple languages and examining their internal representations. This
article addresses these gaps by presenting a comprehensive benchmark for SER
with eight speech representation models and six different languages. We
conducted probing experiments to gain insights into inner workings of these
models for SER. We find that using features from a single optimal layer of a
speech model reduces the error rate by 32\% on average across seven datasets
when compared to systems where features from all layers of speech models are
used. We also achieve state-of-the-art results for German and Persian
languages. Our probing results indicate that the middle layers of speech models
capture the most important emotional information for speech emotion
recognition. | Anant Singh, Akshat Gupta | 2023-08-17T00:30:56Z | http://arxiv.org/abs/2308.08713v1 | # Decoding Emotions: A Comprehensive Multilingual Study of Speech Models for Speech Emotion Recognition
###### Abstract
Recent advancements in transformer-based speech representation models have greatly transformed speech processing. However, there has been limited research conducted on evaluating these models for speech emotion recognition (SER) across multiple languages and examining their internal representations. This article addresses these gaps by presenting a comprehensive benchmark for SER with eight speech representation models and six different languages. We conducted probing experiments to gain insights into the inner workings of these models for SER. We find that using features from a single optimal layer of a speech model reduces the error rate by 32% on average across seven datasets when compared to systems where features from all layers of speech models are used. We also achieve state-of-the-art results for German and Persian languages. Our probing results indicate that the middle layers of speech models capture the most important emotional information for speech emotion recognition. GitHub1
Footnote 1: [https://github.com/95anantsingh/Decoding-Emotions](https://github.com/95anantsingh/Decoding-Emotions)
Anant Singh
Akshat Gupta
Speech Emotion Recognition, HuBERT, wav2vec2, Edge Probing, Feature Extraction.
## 1 Introduction
In recent years, several transformer-based speech representation models, trained on massive amounts of speech data, have been introduced [1][2][3][4]. These models have demonstrated remarkable improvements in performance across various downstream tasks. Although several benchmarks exist for evaluating their performance [5][6], insufficient attention has been given to evaluating these models across multiple languages. Previous large-scale works on speech emotion recognition either do not evaluate the performance of the latest speech representation models [7] or they only evaluate a subset of models or languages [8][9][10]. Consequently, there is a gap in the literature when it comes to comprehensive evaluations of these models for speech emotion recognition across various languages.
Another critical aspect that has received limited attention is analyzing the inner workings of these models using probing techniques, which can provide valuable insights into how these models process and encode various linguistic and acoustic features. By examining the responses of these models to specific linguistic or emotional cues, we can develop a deeper understanding of the strengths and limitations of these models. Recognizing emotion in speech requires an understanding of both phonetic and the prosodic content in the spoken utterance [11]. While previous work has explored probing techniques to understand the phonetic content of speech representation models [12][13][14], we haven't seen studies doing the same for tasks that have a higher dependence on prosodic content.
This paper has a dual focus. Firstly, we present a comprehensive benchmark of multiple speech representation models for speech emotion recognition across a range of languages. This benchmark ensures that the models are just as applicable and relevant in diverse cultural and linguistic contexts. One of the problems in SER is the lack of standardized training and testing splits which allows different papers to report different performances for the task. To counter this, we adopt a standardized train-dev-test split of Scheidwasser et al. (2022)[7] to facilitate consistent comparisons. By providing a standardized evaluation framework, we enable effective comparisons and assessments of the performance of different speech emotion recognition models.
Secondly, we conduct probing experiments to gain insights into the underlying mechanisms of speech emotion recognition across multiple languages. Unlike [11], we find that the most important layers for speech emotion recognition are the center layers, as can be seen in Figure 1. This observation inspired us to utilize a single layer instead of the final layer or an aggregation of all layers [8] for SER. Surprisingly, we find that the best performance is achieved when features are extracted from the optimal layer, which is usually one of the center layers of the model. This is contrary to previous work, where conventionally either the final-layer features or features aggregated from different layers [8] have led to the best results. As a result, we report state-of-the-art results for German and Persian where only speech input is used for SER. Our findings challenge the prevailing notion and underscore the importance of selecting the appropriate layer to maximize the efficacy of speech representation models in SER tasks.
## 2 Background
Previous research on probing speech representation models has primarily focused on studying how speech models understand phonetic content [12][13][14]. In order to probe each layer, these studies adopt a technique of obtaining the time-average of feature vectors from the target layer and then passing them through a linear layer for downstream tasks. The idea is to use the least complex model for classification so that the descriptive power of the representations can be studied. Notably, these investigations reveal that wav2vec2 exhibits an auto-encoder-like behavior, with the initial and final layers resembling the input, while the intermediate layers generate higher-level representations that encapsulate maximum contextual information. In a study conducted by Lin et al. (2023) regarding the utilization of speech representation models for prosodic tasks, it was discovered that only the initial layers encoded prosodic information. We find that speech emotion recognition is a task done best by the center layers of the model, producing state-of-the-art results with a very simple linear model when applied to the right layer.
A large number of research articles exist in the literature on building systems for speech emotion recognition. In this paper, we focus on performing speech emotion recognition with transformer-based speech representation models. We specifically focus on using three pre-trained speech models - wav2vec2 [1], XLSR [2] and HuBERT [3]. We do not fine-tune the weights of the speech representation models [15][9][16][17] and only use them as feature extractors, as we believe such systems are more practical in a real-world setting - where a single feature extractor is used to extract speech features and multiple systems perform different tasks using the same set of features. Unlike pre-trained language models in natural language processing, where the final-layer features are used for most tasks, Pepino et al. [8] showed that the best way to use speech models for downstream tasks is not to use the final-layer representations. They took a weighted average of the features across all layers of wav2vec2 and then used an LSTM model to achieve state-of-the-art results for the IEMOCAP (English) dataset. In our paper, we show that better results can be achieved using information from the optimal layer.
In this paper, we also study speech emotion recognition across multiple languages. Most studies on speech emotion are centred around English [8][18][19] or use one or two additional languages [20][10][21]. Our work also presents a benchmark for speech emotion recognition across multiple languages using pre-trained speech representation models. Previous multilingual SER works either don't evaluate these representation models [7] or only use one such model for multiple languages [9]. While such benchmarks exist for low-resourced languages [22][23][24] and for other tasks [24][6], we do not yet have a comprehensive benchmark for speech emotion recognition.
## 3 Datasets
This research explores seven speech emotion classification tasks conducted on six distinct language datasets. The study focuses on English, French, German, Greek, Italian, and Persian. Specifically, the English datasets used in this study consist of two widely recognized collections, namely IEMOCAP (IEM4) [25] and RAVDESS [26]. Additionally, the other languages are represented by the CaFE (French) [27], EmoDB (German) [28], AESDD (Greek) [29], EMOVO (Italian) [30], and ShEMO (Persian) [31] datasets.
The datasets exhibit variations in several aspects, including size (number of utterances), number of speakers, class distribution, and number of classes as described in Table 1. While emotions such as anger, happiness, and sadness are present across all datasets, additional emotions such as disgust, fear, neutral emotion, surprise, calm, and boredom appear in at least one of the datasets. Each dataset consists of speech samples characterized by three crucial attributes: audio data represented as raw single channel waveforms, speaker identification, and emotion labels encompassing various emotions such as anger, happiness, and sadness. It is worth noting that all the datasets exhibit similar average utterance durations, which range from 2.5 to 4.5 seconds.
The selection of benchmark datasets for this study was primarily based on two key factors: dataset popularity and language diversity. The chosen benchmark datasets, including
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline Dataset & Language & Classes & Utterances & Speakers & Average Duration (s) & Total Duration (h) \\ \hline AESDD & Greek & 5 & 604 & 6 & 4.2 & 0.7 \\ CaFE & French & 7 & 864 & 12 & 4.5 & 1.1 \\ EmoDB & German & 7 & 535 & 10 & 2.8 & 0.4 \\ EMOVO & Italian & 7 & 588 & 6 & 3.1 & 0.5 \\ IEMOCAP & English & 4 & 5,531 & 10 & 3.4 & 7.0 \\ RAVDESS & English & 8 & 1,440 & 24 & 3.7 & 1.5 \\ ShEMO & Persian & 6 & 3,000 & 87 & 4.0 & 3.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Key Attributes of Datasets Used: Average Duration, Language, Number of Classes, Utterances, Speakers, and Total Duration of All Samples
EmoDB [28], IEMOCAP [25], and RAVDESS [26], are widely used in the field of speech emotion recognition. To address class imbalance, a subset of the IEMOCAP [25] dataset, specifically the four emotion classes (IEM4), was used. For the remaining tasks, all samples and classes from the original datasets were retained. To represent Italic languages, CaFE [27] and EMOVO [30] were chosen, while AESDD [29] and ShEMO [31] represented the Hellenic and Indo-Iranian branches of the Indo-European family, respectively. The majority of the benchmark datasets primarily comprise scripted and acted speech, with IEM4, RAVDESS [26], and ShEMO [31] also incorporating spontaneous utterances.
To train, optimize, and evaluate language-specific speech emotion classifiers, each dataset, following Scheidwasser et al. [7], was divided into training, validation, and testing sets. The standard split employed for most datasets involved allocating 60% of the data for training, 20% for validation, and 20% for testing purposes. Speaker independence was carefully maintained in each partition, ensuring that the sets of speakers in each partition were mutually exclusive. This fixed data split design facilitated the assessment of the performance of experimental setups using different amounts of data, considering the variations in dataset sizes within the benchmark.
## 4 Models
In this paper, we work with three pre-trained speech models - wav2vec2 [1], XLSR [2] and HuBERT [3]. We use these models as feature extractors. A summary of these models can be found in Table 2, presenting a concise overview of their key characteristics. For classification, we use two different heads on top of the features extracted from these speech representation models, as described in Section 4.2.
### Feature Extractors
We selected feature extraction models that vary in terms of pre-training data and the number of languages involved. By incorporating models with distinct pre-training data and linguistic coverage, we aimed to enhance the comprehensiveness and robustness of our research findings.
#### 4.1.1 wav2vec2
A total of 3 versions of wav2vec2 were studied. Among them, two versions are pre-trained models, namely Wav2vec2 Base and Wav2vec2 Large. Additionally, there is one version of wav2vec2 finetuned for ASR called wav2vec2 ASR Large. These models differ in several aspects, including the number of layers in the transformer encoder and the training data hours, as well as the number of languages they were pre-trained on. These statistics are shown in Table 2.
\begin{table}
\begin{tabular}{l r r} \hline \hline Model & Layers & Training Data \\ \hline wav2vec2 Base & 12 & 960 (1) \\ wav2vec2 Large & 24 & 960 (1) \\ wav2vec2 XLSR 53 & 24 & 56,000 (53) \\ wav2vec2 XLSR 300M & 24 & 436,000 (128) \\ wav2vec2 ASR Large & 24 & 960 + 960 (1) \\ HuBERT Base & 12 & 960 (1) \\ HuBERT Large & 24 & 60,000 (1) \\ HuBERT ASR Large & 24 & 60,000 + 960 (1) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of Feature Extractor Models with Number of Layers and Training Data. Dataset sizes are in hours, the number of languages is given in parentheses, and '+' indicates additional fine-tuning data hours.
Figure 1: Dataset-wise accuracy for Probing using a _Linear_ classifier with various feature extractors. The presented data is an average of five runs conducted on the test set.
The pre-training of wav2vec2 base, wav2vec2 large, and wav2vec2 ASR large was performed using the LibriSpeech dataset, which consists of English speech data [32].
#### 4.1.2 XLS-R
We studied two versions of the XLS-R model: wav2vec2 XLSR 53 and wav2vec2 XLSR 300M. These models share the same architecture, with 24 encoder layers and 300M parameters.
Wav2vec2 XLSR 53 was pre-trained on a diverse dataset consisting of 53 languages. On the other hand, wav2vec2 XLSR 300M was pre-trained on an even more extensive dataset, comprising 128 languages. The inclusion of these two models allowed us to explore the impact of different language coverage on the performance and capabilities of the XLS-R model in our research.
#### 4.1.3 HuBERT
In our research experiments, we explored HuBERT by utilizing two pretrained models and one version fine-tuned for ASR. The pretrained versions of HuBERT include HuBERT Base, which has 12 encoder layers, and HuBERT Large, which has 24 encoder layers. HuBERT Base was pretrained on 960 hours of the LibriSpeech English dataset [32]. On the other hand, HuBERT Large was pretrained using a significantly larger dataset, specifically 60,000 hours of the Libri-Light English dataset [33]. We also employed a version of HuBERT fine-tuned for ASR, called HuBERT ASR Large. This model consists of 24 layers in its encoder and was pretrained on 60,000 hours of the Libri-Light dataset and then further fine-tuned on 960 hours of the LibriSpeech dataset.
### Classification Heads
A classification head is a model that takes the learned features or representations of the input speech produced by the preceding feature extractor and makes predictions about the target class or emotion to which a given input belongs. We used two classification heads, which are described below.
#### 4.2.1 Linear
The _Linear_ classification head consists of two linear layers. The first layer takes the input features averaged over the time dimension and maps them to a hidden dimension of 128, to which a rectified linear unit (ReLU) activation is applied. The resulting tensor is then passed through the second layer, which produces the final logits for each label. This classification head is based on the probing models used in [12][13][14]. The idea here is to use the simplest feed-forward neural network so that the speech representation power of the pre-trained models can be studied.
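A minimal PyTorch sketch of such a probing head is shown below; the feature dimension, frame count and class count are placeholders chosen for a wav2vec2 Base / IEM4 setting, and the exact training details of the original head may differ.

```python
import torch
import torch.nn as nn

class LinearHead(nn.Module):
    """Probing head: time-average the layer features, then two linear layers."""
    def __init__(self, feat_dim: int, n_classes: int, hidden_dim: int = 128):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim) features from one encoder layer
        x = x.mean(dim=1)                # average over the time axis
        x = torch.relu(self.fc1(x))      # hidden dimension 128 + ReLU
        return self.fc2(x)               # logits for each emotion class

# e.g. wav2vec2 Base features (768-dim) probed for the 4-class IEM4 task
head = LinearHead(feat_dim=768, n_classes=4)
logits = head(torch.randn(8, 49, 768))   # roughly 1 s of speech features
```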
#### 4.2.2 Dense
The _Dense_ classification head applies point-wise linear layers at every time step, i.e. CNN layers with kernel size 1. Two such layers with a hidden dimension of 256 units are used. These features are then averaged across time and passed through a final classification layer. This head is used with both single-layer and multi-layer features. When using features from multiple layers, we aggregate the features following [8] before applying the dense layers.
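A corresponding PyTorch sketch of the _Dense_ head is given below; the dimensions and the ReLU activations are assumptions for illustration, since the text does not spell them out explicitly.

```python
import torch
import torch.nn as nn

class DenseHead(nn.Module):
    """Two point-wise (kernel-size-1) conv layers per time step, then average."""
    def __init__(self, feat_dim: int, n_classes: int, hidden_dim: int = 256):
        super().__init__()
        self.conv1 = nn.Conv1d(feat_dim, hidden_dim, kernel_size=1)
        self.conv2 = nn.Conv1d(hidden_dim, hidden_dim, kernel_size=1)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim)
        x = x.transpose(1, 2)            # -> (batch, feat_dim, time) for Conv1d
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x = x.mean(dim=2)                # average across time
        return self.out(x)               # final classification layer

head = DenseHead(feat_dim=768, n_classes=4)
logits = head(torch.randn(8, 49, 768))
```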
## 5 Experiments
We conducted two fundamental experiments by integrating all the feature extractors and classification heads. The first experiment involved a straightforward aggregation model, while the second experiment focused on edge probing to gain insights into the internal encoding schema of emotions in speech representation models. As part of our research, we treated the CNN layer that precedes the transformer layer as a separate layer, which we have denoted as the "zeroth" layer [8]. To enhance the statistical validity and ensure the consistency of our findings, we conducted each individual trial five times.
Figure 3: Error reduction percentage from Aggregate _Dense_ classifier to Probing with _Linear_ and _Dense_ classifier.
Figure 2: Maximum accuracies for all the classifier models and datasets.
### Edge Probing - Linear
Edge probing involves conducting targeted analyses and experiments to investigate how the model's internal representations and attention mechanisms respond when an input is sent through the model. For a given model with L layers, we attached a separate classification head to each layer and trained them independently to predict a target label. Each classifier relies solely on the input from a single layer. We conducted this experiment using both a _Linear_ and a _Dense_ classification head. Consequently, the classifier relies heavily on the encoder to provide meaningful information regarding the relationship between spans and their respective roles within a sentence. This approach allowed us to evaluate the encoder's proficiency in capturing and conveying emotional features from the high-density speech input data.
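As a sketch of how per-layer features can be obtained for such probes, the Hugging Face transformers API exposes all hidden states of a pre-trained wav2vec2 encoder; the model name, layer index and random input below are placeholders, and the original experiments may have used a different feature-extraction pipeline.

```python
import torch
from transformers import Wav2Vec2Model

# Load a frozen wav2vec2 Base encoder and request all hidden states so that
# every transformer layer (plus the pre-transformer "zeroth" layer) can be probed.
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()

waveform = torch.randn(1, 16000)          # 1 s of 16 kHz audio (placeholder)
with torch.no_grad():
    out = model(waveform, output_hidden_states=True)

# hidden_states is a tuple of (n_layers + 1) tensors of shape (batch, time, dim):
# index 0 roughly corresponds to the projected CNN feature-extractor output
# ("zeroth" layer), indices 1..n are the transformer encoder layers.
features_per_layer = out.hidden_states
layer_7 = features_per_layer[7]            # features from a single target layer
print(len(features_per_layer), layer_7.shape)
```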
The results of the probing experiments for five models are shown in Figure 1. We find that the initial and final layers perform worst for the task of speech emotion recognition. The initial layers are unable to create a representation of speech rich enough to classify emotions, since classifying emotions requires an understanding of both phonetic and prosodic content. The final layers of these models, as shown in [12][13][14], are focused on reconstructing the input and sacrifice the rich contextual representations needed for emotion recognition in order to provide enough phonetic content for speech reconstruction. This can also be seen in Figure 1, where the performance of the final layers of the ASR models is always worse than that of the non-ASR models, showing that models trained to convert speech to text lose much more prosodic information than their non-ASR counterparts. The center layers contain the richest contextual features, with enough phonetic and prosodic content to perform speech emotion recognition. This observation holds across six different languages.
### Aggregation
The aggregation experiments follow the work in [8], where feature representations from all the layers are used to train an SER model. To combine features from different layers, we use a weighted average over the layers as done in [8], where the weights are learnable parameters. We then use the dense head on these weighted multi-layer features. Upon evaluating this model, we found that the aggregation-based models perform worse than the linear edge-probing models. This can also be seen in Figure 2, where the Agg. Dense performance is always worse than the Probing with _Linear_ classifier performance, except for the IEMOCAP (English) dataset. This motivated us to push the performance using a single layer of the pre-trained models with the same classification head.
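A plausible implementation of the learnable weighted average is sketched below; the softmax parameterisation of the layer weights is an assumption in the spirit of [8], not necessarily the exact formulation used there.

```python
import torch
import torch.nn as nn

class LayerWeightedSum(nn.Module):
    """Aggregate features from all encoder layers with learnable scalar weights."""
    def __init__(self, n_layers: int):
        super().__init__()
        self.raw_weights = nn.Parameter(torch.zeros(n_layers))

    def forward(self, hidden_states):
        # hidden_states: list/tuple of n_layers tensors, each (batch, time, dim)
        stacked = torch.stack(list(hidden_states), dim=0)   # (n_layers, B, T, D)
        w = torch.softmax(self.raw_weights, dim=0)          # normalised layer weights
        return (w[:, None, None, None] * stacked).sum(dim=0)

agg = LayerWeightedSum(n_layers=13)                         # 12 layers + "zeroth"
fused = agg([torch.randn(2, 49, 768) for _ in range(13)])
print(fused.shape)                                          # (2, 49, 768)
```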
### Edge Probing - Dense
To extract even more out of the edge-probing models, we use the dense classification head on the features extracted from a single layer. We find that the dense probing models outperform both the linear probing and the dense aggregate models, as shown in Figure 2. This means that features extracted from a single layer are enough to achieve the best performance on SER, surpassing features extracted from all layers even when those are combined using learnable weights.
The improvement in performance over the aggregation experiments is highlighted further in Figure 3. The plots show the error reduction percentage over the aggregation models when using the linear and dense probing models. We observed that for 5 out of 7 datasets, the linear probing model performs better than the aggregation models, and the dense probing model is better than the aggregation model for all datasets. We also noticed that the dense probing model closes the error of the aggregation model by 5-100%, with an average error reduction of about 32% across 7 datasets and 6 languages.
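For classification accuracies, the error reduction percentage referred to above can be computed as sketched below (the analogous formula with raw regression errors applies for regression tasks); the numbers in the example are hypothetical.

```python
def error_reduction_pct(acc_aggregate: float, acc_probe: float) -> float:
    """Percentage of the aggregation model's error closed by the probing model."""
    err_agg, err_probe = 100.0 - acc_aggregate, 100.0 - acc_probe
    return 100.0 * (err_agg - err_probe) / err_agg

# e.g. hypothetical accuracies of 92% (aggregate) vs 96% (dense probing)
print(error_reduction_pct(92.0, 96.0))   # 50.0 -> half of the error removed
```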
Figure 3 also shows that the maximum improvements with dense probing models happen for the smallest datasets. This means that in low-resourced scenarios, where the training dataset is small, the middle layers become even more crucial for accurately classifying emotions, and aggregation models as proposed in [8] require a larger amount of data to achieve optimal
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**Models**}} & \multicolumn{7}{c}{**Datasets**} \\ \cline{2-8} & **AESDD** & **CaFE** & **EmoDB** & **EMOVO** & **IEMOCAP** & **RAVDESS** & **ShEMO** \\ \hline wav2vec 2 Base & 69.6 [5] & 70.9 [2] & 91.7 [3] & 44.4 [5] & 66.8 [6] & 65.3 [8] & 92.2 [3] \\ wav2vec 2 Large & 79.7 [15] & 79.9 [8] & 98.8 [9] & 50.5 [16] & 67.7 [7] & 73.7 [12] & 91.9 [5] \\ wav2vec 2 ASR LARGE & 79.2 [8] & 75.2 [12] & 97.6 [9] & 56.1 [9] & 69.0 [13] & 70.3 [11] & 92.2 [4] \\ wav2vec 2 XLSR 53 & **85.0 [14]** & **84.2 [7]** & **100 [13]** & 61.7 [9] & 71.7 [10] & 77.7 [9] & **95.2 [6]** \\ wav2vec 2 XLSR 300 & 83.1 [16] & 83.3 [13] & **100 [7]** & **69.9 [14]** & **73.9 [13]** & **79.0 [18]** & 94.6 [17] \\ HuBERT Base & 72.9 [7] & 73.1 [7] & 96.4 [4] & 48.5 [6] & 69.2 [4] & 65.7 [7] & 91.7 [1] \\ HuBERT Large & 80.2 [14] & 83.8 [13] & **100 [15]** & 62.8 [12] & 72.5 [22] & 76.3 [11] & 93.5 [5] \\ HuBERT ASR LARGE & 82.6 [13] & 82.5 [10] & **100 [10]** & 67.3 [12] & 71.4 [10] & 75.3 [12] & 93.8 [8] \\ \hline \hline \end{tabular}
\end{table}
Table 3: Maximum accuracies for Probing using the _Dense_ classifier with various feature extraction models. The corresponding layer of the encoder in the feature extractor is denoted within square brackets.
performance. The exact values of the classification accuracies for the dense probing head, with the corresponding best layer for each model, are shown in Table 3. Unsurprisingly, we find that the XLSR models, trained on large multilingual data, perform best for speech emotion recognition, including SER for English.
## 6 Conclusion
We conducted a comprehensive evaluation of transformer-based speech representation models for speech emotion recognition (SER) across multiple languages. Our findings challenge prevailing notions and provide valuable insights for optimizing SER models in multilingual and low-resourced scenarios. Our probing experiments show that the middle layers of the speech representation models capture the most important features for SER. This is contrary to previous work [11], which shows that the early layers of speech models capture prosodic information, highlighting the importance of both phonetic and prosodic information for recognizing emotion in speech.
We also discovered that single-layer probing models consistently outperformed aggregation models in terms of accuracy and variability. The findings of this study challenge the previous work by Pepino et al. [8], as they demonstrate that utilizing features from a single layer of a speech representation model outperforms the approach of aggregating features from all layers of the model. The optimal layers were usually the center layers of the model, further highlighting the importance of the center layers for the task of speech emotion recognition. The optimal single layer, however, seems to differ from task to task and dataset to dataset, and finding the optimal layer is part of our future investigations.
Additionally, dataset size played a crucial role, with aggregation models requiring more data to perform well. Furthermore, we observed that models trained on a larger number of languages exhibited better encoding of emotions, emphasizing the importance of linguistic diversity in pre-training.
|
2302.12710 | Electrode Clustering and Bandpass Analysis of EEG Data for Gaze
Estimation | In this study, we validate the findings of previously published papers,
showing the feasibility of an Electroencephalography (EEG) based gaze
estimation. Moreover, we extend previous research by demonstrating that with
only a slight drop in model performance, we can significantly reduce the number
of electrodes, indicating that a high-density, expensive EEG cap is not
necessary for the purposes of EEG-based eye tracking. Using data-driven
approaches, we establish which electrode clusters impact gaze estimation and
how the different types of EEG data preprocessing affect the models'
performance. Finally, we also inspect which recorded frequencies are most
important for the defined tasks. | Ard Kastrati, Martyna Beata Plomecka, Joël Küchler, Nicolas Langer, Roger Wattenhofer | 2023-02-19T18:42:57Z | http://arxiv.org/abs/2302.12710v1 | # Electrode Clustering and Bandpass Analysis of EEG Data for Gaze Estimation
###### Abstract
In this study, we validate the findings of previously published papers, showing the feasibility of an Electroencephalography (EEG) based gaze estimation. Moreover, we extend previous research by demonstrating that with only a slight drop in model performance, we can significantly reduce the number of electrodes, indicating that a high-density, expensive EEG cap is not necessary for the purposes of EEG-based eye tracking. Using data-driven approaches, we establish which electrode clusters impact gaze estimation and how the different types of EEG data preprocessing affect the models' performance. Finally, we also inspect which recorded frequencies are most important for the defined tasks.
EEG, Clustering, Deep Learning, Gaze Estimation, Bandpassing
## 1 Introduction
The ability to track eye movement patterns offers insights into the cognitive processes underlying a wide variety of human behaviour. Eye tracking allows researchers to recognize and quantify visual attention, fatigue and performance in various scientific studies (Eriksson and Papanikotopoulos (1997); Holmqvist et al. (2011); Liu and Heynderickx (2011)). Nowadays, infrared video-based eye trackers are the most common approach in research labs (Cornelissen et al. (2002)). This eye tracking technique uses infrared light to create a dark pupil and a corneal reflection to provide contrast in locating the center of the pupil (Holmqvist et al. (2011)). Although accurate, there are various limitations (Holmqvist et al. (2012)). Examples include individual differences in the contrast of the pupil and iris, time-consuming setup and calibration for each scanning session (Carter and Luke (2020)). Moreover, installing such a system involves setting up an optical path to align infrared light to the cornea without interference from the visual paradigm display.
Another line of research demonstrated the feasibility of Electroencephalography (EEG) and Electrooculography (EOG) signal decoding for gaze estimation purposes. EOG is often concurrently measured with EEG by using electrode pairs placed horizontally and/or vertically around the eye to record changes in electric potentials that originate from movements of the eye muscles (Martinez-Cervero et al. (2020)). However, gaze estimation based on Electroencephalography (EEG) data was mainly overlooked in practice (Manabe et al.
(2015); Hladek et al. (2018)), even though the solution has been known for years. One potential reason is the necessity to obtain a rich enough set of data with concurrently recorded EEG data and infrared video-based eye tracking data serving as ground truth. In addition, this solution requires equipment and expertise for both EEG acquisition and eye tracking (Dimigen et al. (2011)). Moreover, the noisiness of EEG data poses an additional challenge leaving the research and development in the EEG-based gaze estimation area behind (Grech et al. (2008)). Nonetheless, with the rapid progress in Machine Learning and the increase in the collection of datasets, the EEG modality became more approachable and easier to process (Kastrati et al. (2021)).
The primary goal of this study is to investigate which spatio-spectral brain signal components are most relevant for decoding eye movement and understanding what is the minimal and best placement of the electrodes. More importantly, we show that good-performing models with high accuracy can be achieved even when the number of electrodes is significantly reduced compared to a high-density, 128-electrodes EEG cap. Additionally, we demonstrate that when using a standard pipeline for EEG preprocessing that includes independent component analysis (ICA) (Pion-Tonachini et al. (2019)), one can infer gaze direction also from electrodes placed in the occipital part of the head, albeit with much lower accuracy. Finally, we include an experimental analysis of the importance of different frequency intervals of the EEG signals for gaze direction.
## 2 Related Work
To date, only a few EEG studies attempted to estimate the actual eye gaze position on a computer screen, and those resulted in high inaccuracies (estimation error \(15^{\circ}\)) (Borji and Itti (2012)), and complicated analyses (Manabe et al. (2015)). Moreover, there are a number of limitations associated with EEG and specifically EOG electrodes, including a variety of metabolic activities over time and potential drifts (Martinsen and Grimnes (2011)). There has been some investigation into the traditional methods of supervised machine learning; for instance, (Bulling et al. (2010)) categorized the various directions and durations of saccades with a mean accuracy of 76.1%. Another study (Vidal et al. (2011)) developed an EOG feature-based approach that discriminated between saccades, smooth pursuits, and vestibulo-ocular reflex movement achieving quite good results, all of them being above 80%. The latest approaches in machine learning (ML) have shown promise for the development of more precise EEG/EOG-based eye tracking systems. For example, the EEGEyeNet (Kastrati et al. (2021)) benchmark, along with the rich dataset of simultaneous Electroencephalography and eye tracking recordings, has demonstrated promising results for further development of EEG/EOG-based gaze estimation using deep learning frameworks. Recently, (Wolf et al. (2022)) proposed a novel framework for time-series segmentation, creating ocular event detectors that rely solely on EEG data. This solution achieved state-of-the-art performance in ocular event detection across diverse eye tracking experiment paradigms, showing the high potential of EEG-based eye tracking solutions, used not only as a complementary modality but also when classic eye tracker hardware is unavailable.
## 3 Methods
### Dataset, Benchmarking Tasks and Models
We use the EEGEyeNet dataset and benchmark (Kastrati et al. (2021)) to run our studies. EEGEyeNet consists of synchronized EEG and eye tracking data collected from three different experimental paradigms. It also establishes a benchmark consisting of three tasks with an increasing level of difficulty:
* Left-Right task (LR): This is a binary classification task, where the goal is determining the direction of the subject's gaze along the horizontal axis.
* Direction task: The task is to regress the two target values, i.e., angle and amplitude of the relative change of the gaze position during the saccade.
* Absolute position task: The goal of the task is determining the absolute position of the subject's gaze on the screen, described in terms of XY-coordinates.
In each task, a window of 1 second (at a sampling rate of 500 Hz) of EEG data is given as an input sample, and the goal is to classify or regress (depending on the task) the target value. The data acquisition, models used, preprocessing methods, and the whole pipeline are explained in much more detail in the original work (Kastrati et al. (2021)).
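As a rough illustration of the sample format (the shapes and target values below are assumptions chosen for illustration, not the exact benchmark tensors), each input is a short multi-channel EEG window with a task-specific target:

```python
import numpy as np

# One hypothetical benchmark sample: 1 s of EEG sampled at 500 Hz
# from a 128-electrode cap, laid out as (time, channels).
eeg_window = np.zeros((500, 128), dtype=np.float32)

# Corresponding targets for the three benchmark tasks (illustrative values):
y_left_right = 1                        # binary label (0 = left, 1 = right)
y_direction  = np.array([120.0, 0.9])   # saccade amplitude and angle
y_position   = np.array([400.0, 300.0]) # absolute (x, y) gaze position on screen
```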
### EEG data preprocessing
In this manuscript, we use two types of EEG data preprocessing. The first one, from now on referred to as "Minimal Preprocessing", includes algorithms implemented as EEGLAB plugins: bad-channel identification (clean_rawdata1), noise reduction (Zapline) and bandpass filtering (0.5-40 Hz). Detected bad channels were automatically removed and later interpolated using a spherical spline interpolation. The details of the preprocessing pipeline can be found in the EEGEyeNet paper (Kastrati et al. (2021)). The second preprocessing type, i.e. "Maximal Preprocessing", is the state-of-the-art preprocessing used for neuroscientific applications. In addition to the "Minimal Preprocessing" pipeline, it includes the removal of non-brain artifactual source components based on the automatic classification provided by the Independent Component Label (ICLabel) algorithm (Pion-Tonachini et al. (2019)).
Footnote 1: [http://sccn.ucsd.edu/wiki/Plugin_list_process](http://sccn.ucsd.edu/wiki/Plugin_list_process)
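The pipeline above relies on EEGLAB plugins (clean_rawdata, Zapline, ICLabel). Purely as an illustrative approximation in Python, a comparable sequence of steps could be sketched with MNE as below; the file name, bad channels and excluded ICA components are placeholders, and the EEGLAB-specific algorithms are not reproduced here.

```python
import mne

# Hypothetical recording file; the original pipeline uses EEGLAB, so this is
# only an approximate MNE-Python rendering of the same steps.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)

# "Minimal preprocessing": band-pass filter 0.5-40 Hz, then mark and
# interpolate bad channels (assumed here to have been identified beforehand).
raw.filter(l_freq=0.5, h_freq=40.0)
raw.info["bads"] = ["E17", "E125"]            # placeholder bad channels
raw.interpolate_bads(reset_bads=True)

# "Maximal preprocessing" additionally removes artifactual ICA components
# (ICLabel-style classification is not reproduced; components are assumed
# to have been flagged by such a tool).
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 3]                          # placeholder ocular/muscle components
raw_clean = ica.apply(raw.copy())
```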
### Gradient-Based Feature Importance
In order to indicate whether a feature (in our case an electrode) contributes significantly to the prediction, the gradient with respect to the input can be computed (Simonyan et al. (2013)). This shows how much the model relies on the feature. In our case, we are interested in aggregating this score over an electrode to obtain its importance. First, the resulting gradients are normalized for each input separately. Next, all absolute gradient values of an electrode are summed up. If an electrode is left with a comparatively higher score, this indicates that it makes a more significant contribution to the prediction.
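A compact PyTorch sketch of this electrode-importance score is given below; the per-sample normalisation by the maximum absolute gradient is an assumption, since the exact normalisation is not specified above.

```python
import torch

def electrode_importance(model, x: torch.Tensor) -> torch.Tensor:
    """Aggregate absolute input gradients per electrode (saliency-style score).

    x: (batch, time, n_electrodes) EEG windows; returns (n_electrodes,) scores.
    """
    x = x.detach().clone().requires_grad_(True)
    out = model(x)
    # For regression heads sum all outputs; for classification one could
    # instead back-propagate the logit of the predicted class.
    out.sum().backward()
    grads = x.grad.abs()                                  # (batch, time, n_electrodes)
    # Normalise each input sample separately, then sum over batch and time.
    grads = grads / grads.amax(dim=(1, 2), keepdim=True).clamp_min(1e-12)
    return grads.sum(dim=(0, 1))
```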
## 4 Electrode Clustering on Minimally Preprocessed Data
A dense, 128-electrode EEG cap is often infeasible to use in practice. In this section, we show that most of the important information for eye movement is highly concentrated in the frontal electrodes.
### Finding Important Electrodes
We use gradient-based methods to rank the most important electrodes. From the spatial distributions (topoplots) in Figure 1 we can see a clear symmetrical structure and that most of the information used for decoding eye movement is in the frontal electrodes. Interestingly, the topoplots show that the central electrode (Cz), which was the recording reference electrode, also carries some relevant information. This can be explained by the fact that the data were offline re-referenced to the average reference and thus Cz was interpolated from all other electrodes. To produce the topoplots we used the average performance across all models and for each task presented in the benchmark (Kastrati et al. (2021)).
Figure 1: Topoplots for (a) LR Task, (b) Position Task, (c) Amplitude Task and (d) Angular Task. Colors of higher intensity represent a higher importance. The colors are shown on a logarithmic scale. The score is averaged over all five deep learning architectures presented in (Kastrati et al. (2021)).
### Choice of Electrodes
Due to the symmetrical structure of the topoplots, the most important electrodes are chosen as follows: first, all electrodes are ranked according to the gradient-based analysis (see Figure 2 for the importance of each electrode). The best electrode that is not yet in the cluster is then added together with its symmetrical counterpart. This method respects the brain's natural symmetry and focuses on localized head areas that we expect to be more resilient and insightful than single isolated electrodes. This procedure is repeated until the loss no longer improves. With this method, we converged to 23 electrodes, which are mainly on the frontal part (as can be observed in the topoplots and Figure 3). In order to compare models with each other and for convenience, we ensured that the same configurations are used across all tasks and models.
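The selection procedure can be summarised by the following sketch, where the stopping rule is interpreted as halting once adding another symmetric pair no longer reduces the loss; `evaluate_loss` and `symmetric_pair` are placeholders for the task-specific training routine and the electrode-mirroring map.

```python
def select_symmetric_electrodes(ranked, symmetric_pair, evaluate_loss, max_pairs=40):
    """Greedily grow an electrode cluster in symmetric pairs.

    ranked:         electrodes sorted by gradient-based importance (best first)
    symmetric_pair: maps an electrode to its left/right mirror (or itself on the midline)
    evaluate_loss:  trains/evaluates a model on the cluster and returns its loss
    """
    cluster, best_loss = [], float("inf")
    for electrode in ranked:
        if electrode in cluster:
            continue
        candidate = cluster + [electrode]
        mirror = symmetric_pair(electrode)
        if mirror not in candidate:
            candidate.append(mirror)
        loss = evaluate_loss(candidate)
        if loss >= best_loss:          # stop once adding a pair no longer helps
            break
        cluster, best_loss = candidate, loss
        if len(cluster) >= 2 * max_pairs:
            break
    return cluster
```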
Figure 2: Gradient Based Feature Importance for (a) LR Task, (b) Position Task, (c) Amplitude Task and (d) Angular Task. The normalized gradient results for each electrode and each task from the experiments on the minimally preprocessed data.
### Choice of Clusters
We observed that the best electrodes are of similar significance across all architectures and tasks. In addition, the choice of 23 electrodes achieves the same accuracy as all 128 electrodes. This high accuracy encouraged us to train models on an even smaller number of electrodes, and in this way we created several smaller clusters. We chose clusters of sizes 2, 3, 8, and 23. For all tasks and models in the EEGEyeNet benchmark, we converged to the same clusters shown below.
Figure 3: **Electrode Clustering Visualisation.** This figure shows the electrode placement. Colour-coded electrodes belong to a configuration. Pink nodes form the Top2 configuration. For Top3, the blue electrode is added. By combining Top3 with the teal electrodes, we get the Top8 configuration. The final 23 electrode composition consists of all coloured nodes.
### Evaluation
We evaluate all proposed electrode configurations on all deep learning models. The benchmark is run for all tasks separately on an NVIDIA GeForce GTX 1080 Ti. We see that, on average, equal or better results (in comparison with networks that have access to the full information) can be achieved by using just a fraction of the electrodes.
We can observe in Table 2 that, when running the benchmark with only 23 electrodes, the models perform equally well and sometimes even better than when trained with all 128 electrodes. This may be because many of the other electrodes do not add any useful information but only noise. This can be seen, for example, in the Left-Right task, where PyramidalCNN achieves a score of 98.46, which is better than any model trained with 128 electrodes. Similar behavior can also be seen in the amplitude task. We can also observe that decreasing the number of electrodes to 8 decreases the performance of the models on the position task; however, the performance on all the other tasks with 8 electrodes remains as good as with 128 electrodes. Decreasing the number of electrodes to 3 and 2 leads to a decrease in performance on all tasks except the left-right task, which can still be decoded with high accuracy with only 2 electrodes. Nonetheless, with only 2-3 electrodes the performance on the amplitude task and the position task decreases significantly.
## 5 Electrode Clustering on Maximally Preprocessed Data
In the previous section, we saw that the most important electrodes for decoding eye movement are the frontal electrodes. More specifically, if we choose 23 frontal electrodes then we achieve the same performance as with the full cap which consists of 128 electrodes.
In this section, we investigate how state-of-the-art preprocessing methods used for neuroscientific applications (maximal preprocessing) affect these results. Maximal preprocessing steps exclude ocular artifacts and ideally retain only neurophysiological information. This makes the estimation of the gaze position harder; however, it also reveals insights into how brain activity is related to eye movement.
Interestingly, the analysis of the maximally preprocessed data shows that the electrodes in the occipital part of the brain are also important for inferring gaze direction if one considers only the neurophysiological information. However, even after maximally preprocessing the data, the frontal electrodes can still be used for inferring gaze direction, indicating that the preprocessing might be suboptimal in removing ocular artifacts from the recorded EEG signal. Alternatively, one could speculate that the neuronal activity in the frontal electrodes (e.g. the frontal eye fields) actually carries information about the eye movement.
### Finding Important Electrodes
We again use gradient-based methods to find the important electrodes.
We observe in Figure 4 again a symmetry but, compared to the minimally preprocessed data, we can see more sparsity in the distribution of the important brain regions. In particular, in contrast to the minimally preprocessed data, we can identify an additional important region in the left-right and direction tasks. Most of the importance is still located in the frontal electrodes, but now a second region of interest around the occipital region can also be identified.
### Choice of Electrodes
Due to the symmetry in the topoplots, we use the same technique as for the minimally preprocessed data. That is, we rank the electrodes according to the gradient-based analysis (see Figure 5 for the importance of each electrode). With maximally preprocessed data, we identified the 40 most important electrodes, which achieve almost the same performance as the dense 128-electrode cap.
Figure 4: Topoplots for (a) LR Task, (b) Position Task, (c) Amplitude Task and (d) Angular Task. Colors of higher intensity represent a higher importance. The colors are shown on a logarithmic scale. The score is averaged over all five deep learning architectures presented in (Kastrati et al. (2021)).
### Choice of clusters
If we decrease the number of electrodes below 40, the performance starts decreasing. In this section, we investigate how the performance decreases for several smaller clusters. Motivated by the fact that there are two different important regions in the maximally preprocessed data, we distinguish between the following clusters: front, back, front extended, and back extended. The extended version includes more electrodes in each region, which results in more stable training. In Figure 6, we show all the main clusters for the best 40 electrodes: the most important front electrodes are marked in pink; the front extended cluster is marked in purple and is composed of less important front electrodes; the back cluster is marked in yellow and is extended with the green electrodes (back extended cluster).
Figure 5: Gradient Based Feature Importance for (a) LR Task, (b) Position Task, (c) Amplitude Task and (d) Angular Task. The normalized gradient results for each electrode and each task from the experiments on the maximally preprocessed data.
### Evaluation
We evaluate all proposed electrode configurations on all deep learning models for the maximally preprocessed data as well. In Table 3, we can see that with only 40 electrodes the models perform equally well and, except for the LR task, even better. For example, in the angle task, PyramidalCNN achieves a score of 0.68 radians when trained with 40 electrodes, compared to 0.76 radians when trained with 128 electrodes. Similar behavior can also be seen for the amplitude task. We can also observe that the models achieve performance competitive with models trained on the full cap even when they are trained with only the front (extended) cluster. For the angle task, we can also see that
Figure 6: **Electrode Clustering Visualisation. The best 40 electrodes according to our gradient feature importance method are marked. The pink-colored nodes correspond to the front part. For the extended version, the purple electrodes are added. The same is done for back electrodes. Its base consists of the yellow nodes, the extended version consists of the green ones.**
PyramidalCNN trained only on the front extended cluster achieves a score of 0.67 radians, which is better than all models trained on the full 128-electrode cap.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} Cluster & Model & LR. (\%) \(\uparrow\) & Amp. (mm) \(\downarrow\) & Ang. (rad) \(\downarrow\) & Pos. (mm) \(\downarrow\) \\ \hline \multirow{6}{*}{All} & InceptionTime & \(\mathbf{93.61}\pm 0.67\) & \(61.42\pm 3.55\) & \(0.78\pm 0.08\) & \(125.07\pm 3.43\) \\ & EEGNet & \(86.31\pm 1.47\) & \(60.42\pm 0.7\) & \(1.18\pm 0.05\) & \(\mathbf{114.06}\pm 0.38\) \\ & CNN & \(90.48\pm 1.87\) & \(64.31\pm 2.77\) & \(0.89\pm 0.14\) & \(122.59\pm 3.81\) \\ & PyramidalCNN & \(93.08\pm 0.49\) & \(\mathbf{57.73}\pm 1.44\) & \(\mathbf{0.76}\pm 0.09\) & \(133.59\pm 1.62\) \\ & Xception & \(89.36\pm 3.01\) & \(64.2\pm 1.41\) & \(0.9\pm 0.19\) & \(125.34\pm 2.79\) \\ \hline \multirow{6}{*}{Top40} & InceptionTime & \(\mathbf{93.59}\pm 1.05\) & \(60.42\pm 2.31\) & \(0.75\pm 0.06\) & \(122.48\pm 1.98\) \\ & EEGNet & \(87.95\pm 0.09\) & \(59.66\pm 0.58\) & \(1.09\pm 0.03\) & \(\mathbf{112.9}\pm 0.19\) \\ & CNN & \(90.64\pm 1.09\) & \(62.38\pm 1.4\) & \(0.83\pm 0.09\) & \(119.28\pm 0.62\) \\ & PyramidalCNN & \(92.01\pm 1.66\) & \(57.04\pm 0.82\) & \(\mathbf{0.68}\pm 0.02\) & \(134.6\pm 2.45\) \\ & Xception & \(92.12\pm 0.8\) & \(\mathbf{63.0}\pm 1.5\) & \(0.87\pm 0.13\) & \(125.71\pm 1.96\) \\ \hline \multirow{6}{*}{Front \& InceptionTime} & \(\mathbf{92.6}\pm 0.17\) & \(59.34\pm 3.23\) & \(0.86\pm 0.19\) & \(121.48\pm 2.17\) \\ & EEGNet & \(87.26\pm 0.55\) & \(60.52\pm 0.68\) & \(1.12\pm 0.02\) & \(\mathbf{112.91}\pm 0.38\) \\ & CNN & \(90.99\pm 0.8\) & \(61.86\pm 2.52\) & \(0.82\pm 0.03\) & \(118.94\pm 1.69\) \\ Back & PyramidalCNN & \(91.36\pm 1.0\) & \(\mathbf{58.63}\pm 0.47\) & \(\mathbf{0.75}\pm 0.03\) & \(134.0\pm 2.99\) \\ & Xception & \(90.07\pm 0.15\) & \(63.15\pm 3.3\) & \(0.96\pm 0.11\) & \(125.17\pm 1.59\) \\ \hline \multirow{6}{*}{Front \& PREMATELN} & \(\mathbf{92.47}\pm 0.81\) & \(\mathbf{60.72}\pm 1.78\) & \(0.8\pm 0.15\) & \(124.68\pm 2.12\) \\ & EEGNet & \(83.94\pm 1.41\) & \(63.35\pm 0.92\) & \(1.08\pm 0.01\) & \(\mathbf{113.83}\pm 0.36\) \\ & CNN & \(91.42\pm 0.48\) & \(69.73\pm 5.59\) & \(0.8\pm 0.03\) & \(119.23\pm 0.92\) \\ Ext. 
& PyramidalCNN & \(91.28\pm 1.69\) & \(60.94\pm 1.98\) & \(\mathbf{0.67}\pm 0.03\) & \(135.0\pm 2.27\) \\ & Xception & \(91.76\pm 0.25\) & \(66.74\pm 3.45\) & \(0.81\pm 0.09\) & \(124.06\pm 1.94\) \\ \hline \multirow{6}{*}{Front \& InceptionTime} & \(\mathbf{91.68}\pm 0.19\) & \(\mathbf{61.31}\pm 2.5\) & \(0.91\pm 0.15\) & \(123.05\pm 0.9\) \\ & EEGNet & \(83.33\pm 0.63\) & \(63.25\pm 0.27\) & \(1.12\pm 0.02\) & \(\mathbf{114.19}\pm 0.39\) \\ & CNN & \(90.57\pm 0.82\) & \(61.84\pm 2.28\) & \(0.85\pm 0.04\) & \(120.67\pm 3.23\) \\ & PyramidalCNN & \(89.54\pm 1.8\) & \(61.36\pm 1.2\) & \(\mathbf{0.78}\pm 0.02\) & \(135.15\pm 3.52\) \\ & Xception & \(91.04\pm 0.69\) & \(62.6\pm 2.98\) & \(0.85\pm 0.07\) & \(125.44\pm 2.74\) \\ \hline \multirow{6}{*}{Back \& InceptionTime} & \(71.42\pm 2.48\) & \(78.46\pm 3.53\) & \(\mathbf{1.52}\pm 0.08\) & \(130.57\pm 1.05\) \\ & EEGNet & \(\mathbf{74.32}\pm 0.43\) & \(\mathbf{67.93}\pm 0.19\) & \(1.81\pm 0.01\) & \(\mathbf{119.88}\pm 0.1\) \\ \cline{1-1} & CNN & \(71.39\pm 1.02\) & \(75.04\pm 1.81\) & \(1.54\pm 0.07\) & \(127.8\pm 1.55\) \\ \cline{1-1} & PyramidalCNN & \(70.91\pm 1.32\) & \(70.85\pm 0.58\) & \(1.56\pm 0.03\) & \(145.75\pm 2.04\) \\ \cline{1-1} & Xception & \(70.67\pm 1.29\) & \(81.64\pm 1.15\) & \(1.66\pm 0.06\) & \(132.16\pm 0.66\) \\ \hline \multirow{6}{*}{Back \& InceptionTime} & \(69.21\pm 1.85\) & \(75.45\pm 2.48\) & \(1.67\pm 0.08\) & \(127.42\pm 1.63\) \\ \cline{1-1} & EEGNet & \(\mathbf{72.98}\pm 0.49\) & \(\mathbf{68.72}\pm 0.37\) & \(1.78\pm 0.07\) & \(\mathbf{119.87}\pm 0.18\) \\ \cline{1-1} & CNN & \(70.02\pm 0.63\) & \(79.54\pm 8.81\) & \(\mathbf{1.54}\pm 0.04\) & \(126.45\pm 1.93\) \\ \cline{1-1} & PyramidalCNN & \(67.88\pm 3.94\) & \(73.03\pm 2.25\) & \(1.61\pm 0.02\) & \(145.6\pm 1.69\) \\ \cline{1-1} & Xception & \(68.19\pm 2.03\) & \(81.93\pm 2.57\) & \(1.7\pm 0.05\) & \(132.81\pm 2.23\) \\ \hline \end{tabular}
\end{table}
Table 3: **Benchmark.** The performance of all models in the EEGEyeNet benchmark for each task and each chosen cluster on the maximally preprocessed data. The left-right (LR) task is reported as accuracy, the amplitude and position tasks as pixel error, and the angle task as error in radians.
The position task for maximally preprocessed data is difficult for all the models even with the full cap, and, as stated in Kastrati et al. (2021), it is not clear how well this task can be solved with only neurophysiological data. Finally, compared to the minimally preprocessed data, we can also see that eye movement information can be decoded with only the electrodes on the occipital regions of the scalp; however, the performance decreases significantly. For instance, the best model trained with frontal electrodes achieves an accuracy of 92.58%, whereas the best performing model trained with the electrodes on the occipital part achieves an accuracy of 74.53%. If only the occipital electrodes are used, the performance on the other tasks also decreases significantly. This can be seen for the angle task, where the error of the models increases from 0.67 to 1.55 radians. This error is close to the naive baselines reported in Kastrati et al. (2021), showing the difficulty of learning eye movements more complex than left-right classification solely from occipital electrodes.
## 6 Bandpassing
The final analysis of the impact of preprocessing consists of bandpassing our data before training.
### Choice of frequency bands
We decided to focus our attention on a limited number of intervals, roughly based on historically pre-defined frequency bands (Newson and Thiagarajan (2019)). We define four frequency intervals: Delta: 1-4 Hz, Theta: 4-8 Hz, Alpha: 8-13 Hz and Beta: 13-32 Hz. The raw EEG measurements are bandpassed for each subject before preparation of the input data, to allow precise frequency selection before reducing the signals to 1-second intervals to feed our model. This bandpassing is performed on both the maximally and minimally preprocessed data, before training separately on each band.
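For concreteness, the following is a minimal sketch of this per-band filtering step (not the code used in this work), assuming the raw recording of one subject is available as a NumPy array `raw` of shape (channels, samples) and that the sampling rate `fs` is known; both names are illustrative.

```
# Sketch of the per-band preprocessing step described above (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 32)}

def bandpass(raw, fs, low, high, order=4):
    """Zero-phase Butterworth bandpass applied channel-wise."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, raw, axis=-1)

def split_into_bands(raw, fs):
    """Return one filtered copy of the recording per frequency band."""
    return {name: bandpass(raw, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
```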
### Results
We present the results below, including two additional frequency intervals obtained by merging the bands defined above. We report the average results of the EEGEyeNet benchmark for the left-right task. In Figure 7, we observe a substantial drop in accuracy for each of the bandpassed datasets, with expectedly higher results when bands are combined.
Interestingly, the maximally preprocessed dataset seems to contain most of its helpful information in the higher frequencies, namely the 13-32 Hz band, with an accuracy above that of the combination of the three other intervals. The opposite seems to hold for our minimally preprocessed dataset, which shows high accuracy for the 4-8 Hz band and for the 1-13 Hz band. We see no significant improvement from the inclusion of the 13-32 Hz interval. We hypothesize that the frontal electrodes, whose strong importance we observed during clustering on the minimally preprocessed data, carry mainly sub-13 Hz signals used for prediction. A potential reason for this is that saccades happen only every 200-300 ms. Additionally, the fact that the 13-32 Hz frequency band is important for the maximally preprocessed data indicates that actual neuronal activity is used for decoding.
## 7 Conclusion
The study's major finding is that, for minimally preprocessed data, most of the information relevant to the prediction task is contained in a limited number of electrodes located on the frontal part of the scalp. Therefore, by reducing the number of electrodes to one-quarter of the original, we decreased the input data size significantly and stabilized the prediction results, with special cases where the performance even increases. Moreover, the short fitting time on our data and the high stability of the results suggest the possibility of reducing the model's complexity without accuracy loss.
It is interesting to note that the frontal electrodes are also the most important for gaze prediction in the maximally preprocessed dataset. Since this dataset was treated with removal of non-brain artifactual source components, based on the automatic classification provided by Independent Component Analysis, we can speculate that the signal stems from the frontal eye field area, which plays an essential role in controlling visual attention and eye movements (Armstrong et al. (2009)). The oculomotor artefact removal in the maximally preprocessed dataset makes the gaze estimation position task more challenging. Nevertheless, this dataset contains mostly neurophysiological information and reveals other insights into how brain activity relates to eye movement. In particular, it is interesting to note how the second region of interest, located at the occipital part of the head, was important for inferring gaze direction. Since the electrodes located in this part of the head are most likely not influenced by any residual oculomotor noise, we can conclude that this signal contains information measured in the region of the visual cortex, revealing how neurophysiological brain activity is related to eye movement.
Finally, we analyzed the impact of bandpassing our data before training. As expected, the low frequencies were the most significant for the minimally preprocessed dataset, as they are related to ocular artefacts. However, the maximally preprocessed dataset revealed a different pattern, showing that the 13-32 Hz frequencies contained the most meaningful information for our tasks. A current limitation of this work is that the bandpass analysis is performed only on the left-right task and for a limited set of frequency bands. Extensions of this work call for a fine-grained spectral analysis and for studying how the performance changes across different eye movement patterns (other than left-right).

Figure 7: **Bandpass analysis. The relation between the frequency intervals and the performance of the models in the LR task for the (a) maximally preprocessed data and (b) minimally preprocessed data.**
|
2308.07420 | Multiple-Hypothesis Path Planning with Uncertain Object Detections | Path planning in obstacle-dense environments is a key challenge in robotics,
and depends on inferring scene attributes and associated uncertainties. We
present a multiple-hypothesis path planner designed to navigate complex
environments using obstacle detections. Path hypotheses are generated by
reasoning about uncertainty and range, as initial detections are typically at
far ranges with high uncertainty, before subsequent detections reduce this
uncertainty. Given estimated obstacles, we build a graph of pairwise
connections between objects based on the probability that the robot can safely
pass between the pair. The graph is updated in real time and pruned of unsafe
paths, providing probabilistic safety guarantees. The planner generates path
hypotheses over this graph, then trades between safety and path length to
intelligently optimize the best route. We evaluate our planner on randomly
generated simulated forests, and find that in the most challenging
environments, it increases the navigation success rate over an A* baseline from
20% to 75%. Results indicate that the use of evolving, range-based uncertainty
and multiple hypotheses are critical for navigating dense environments. | Brian H. Wang, Beatriz Asfora, Rachel Zheng, Aaron Peng, Jacopo Banfi, Mark Campbell | 2023-08-14T19:19:47Z | http://arxiv.org/abs/2308.07420v1 | # Multiple-Hypothesis Path Planning with Uncertain Object Detections
###### Abstract
Path planning in obstacle-dense environments is a key challenge in robotics, and depends on inferring scene attributes and associated uncertainties. We present a multiple-hypothesis path planner designed to navigate complex environments using obstacle detections. Path hypotheses are generated by reasoning about uncertainty and range, as initial detections are typically at far ranges with high uncertainty, before subsequent detections reduce this uncertainty. Given estimated obstacles, we build a graph of pairwise connections between objects based on the probability that the robot can safely pass between the pair. The graph is updated in real time and pruned of unsafe paths, providing probabilistic safety guarantees. The planner generates path hypotheses over this graph, then trades between safety and path length to intelligently optimize the best route. We evaluate our planner on randomly generated simulated forests, and find that in the most challenging environments, it increases the navigation success rate over an A* baseline from 20% to 75%. Results indicate that the use of evolving, range-based uncertainty and multiple hypotheses are critical for navigating dense environments.
Motion and Path Planning, Collision Avoidance, Probability and Statistical Methods, Path Planning Under Uncertainty
## I Introduction
Autonomous navigation in unknown, unstructured, and obstacle-dense environments requires a robot to recognize obstacles and plan a safe path around them, typically by using a 3D sensor such as a stereo camera or lidar. This problem becomes much more challenging as the desired navigation speed and complexity of the environment increase. The further ahead the robot can robustly perceive and react to upcoming obstacles, the more safely it will be able to navigate through increasingly challenging and obstacle-dense environments.
However, 3D sensors used in robotics are often affected by measurement errors that increase with distance from the sensor. Stereo camera depth errors grow quadratically with range, due to disparity estimation errors [1, 2]. Lidar sensors, while more precise, return very sparse points at longer ranges.
Current pipelines for autonomous navigation in unknown environments plan iteratively, by constructing a map of the robot's immediate surroundings (often as a 3D occupancy grid), optimizing a path to a global goal in this map, then recursively updating the local map and planned trajectory as the robot moves towards the goal and obtains new sensor measurements [3, 4]. These methods have demonstrated significant successes in autonomous path planning. However, due to increasing 3D sensor noise with range, existing methods typically plan only in a short range near the robot, where sensor noise is minimal and reliable map construction is possible. The robot discards noisier sensor measurements beyond this range entirely, ultimately limiting the robot's ability to quickly and effectively traverse complex environments.
Ryll _et al._[5] point out this constrained perception horizon as a key limitation on the speed of safe autonomous navigation in agile unmanned aerial vehicle (UAV) flight. With no knowledge of the environment past the short, minimal-noise sensor range, the robot must plan conservatively, always ready to come to a stop at the boundary of this sensor range in case new obstacles appear. Ideally, the robot should be able to reason about potential obstacles at far ranges, enabling preemptive avoidance of obstacle-dense areas.
Long-range sensor measurements, while noisy, still contain usable information for recognizing obstacles. Works such as [5, 6], and [7] presented planning methods that use camera image semantics to augment short-range 3D sensor measurements. Recent works in 3D object detection [8, 9, 10, 11, 1] have demonstrated accurate object detection at longer ranges than those at which reliable occupancy grid mapping is typically possible. These methods use machine learning to recognize patterns in 3D point clouds and detect objects even within noisy sensor data.
Fig. 1: Overview of the contributions of our multiple-hypothesis planner.
These object detectors provide a promising way to extend the amount of actionable information available to a robot, by enabling longer-range obstacle recognition. However, object detections still contain errors in measuring obstacle positions and sizes, as shown previously in [12]. The robot must therefore reason about this sensing uncertainty in the path planning algorithm, in order to effectively use these object detections.
As illustrated in Figure 1, failing to consider this uncertainty can result in unsafe planned paths, while using uncertainty naively can result in inefficient planning. In this paper, we present a multiple-hypothesis planner which uses uncertain object detections to help the robot navigate to its goal, while considering the length and safety of multiple candidate paths to determine the overall best path to the goal.
Our work is motivated by navigation through a forest while avoiding tree trunks, previously studied in [13, 14], and [15]. To model this problem, we consider a robot navigating through an environment densely populated with obstacles of a single class, but with different sizes and locations, that can be detected with noise that varies with range. We first estimate the positions and sizes of obstacles using these noisy object detections. We then construct a graph representation of the scene based on pairwise connections between obstacles, which stores the probability that the robot can safely navigate between each pair of nearby obstacles in the world. This graph can be updated in real-time based on new measurements, and pruned if edges do not provide safe paths, thus providing probabilistic guarantees. Using this probabilistic graph, we plan multiple path hypotheses between pairs of obstacles to the navigation goal. The planner then intelligently optimizes the best route to the goal by trading off between safety and expected path length.
Our contributions include the following:
* A high-level graph representation of forest environments, that represents and stores the probabilities of the robot being able to safely navigate between pairs of obstacles in the forest.
* A multiple-hypothesis path planning method which uses this probabilistic navigation graph to generate and evaluate multiple candidate paths to the goal, accounting for both expected path distance and safety.
* An experimental simulation study in randomly generated forest environments, demonstrating that our probabilistic model of long-range object detection, when compared to a baseline planner using a 2D A* search, allows a robot to more safely reach its goal. In the most challenging and obstacle-dense forests, while the A* baseline is able to reach the goal in only 20% of environments, our planner increases the navigation success rate to 75%.
* An open-source, publicly available implementation of our planner and simulation, available at **[https://github.com/brian-h-wang/multiple-hypothesis-planner](https://github.com/brian-h-wang/multiple-hypothesis-planner)**.
## II Related work
### _Path planning in unknown environments_
Traditionally, path planning methods take as input a known map of the robot's surroundings, then use this map to plan a collision-free path to a given goal state. Robot path planning methods include sampling-based approaches such as rapidly-exploring random trees (RRTs) [16, 17], as well as minimum cost search methods, often based on the A* algorithm [18]. Hybrid A* [19, 20] extends this algorithm by associating each discrete step of the A* search with a continuous vehicle state, ensuring that the final path is valid with respect to the robot kinematics.
In practice, a robot generally lacks a detailed map of the environment in advance and must construct one in real time using its onboard sensors. Therefore, typical path planning pipelines work iteratively, re-planning as the robot traverses the environment and updates its map with new sensor measurements. A commonly used map structure is a binary 3D occupancy grid, where cells are classified as either occupied or unoccupied. These occupancy grids can be constructed using 3D sensors such as stereo cameras [3] or lidar [4, 21, 22]. These works have significantly advanced the state of the art in 3D autonomous path planning, implementing the iterative re-planning approach on 3D grids.
A key limiting factor on autonomous navigation is the maximum perception horizon of the robot's sensors [5]. At long ranges, sensor noise grows significantly, with stereo cameras suffering from a quadratic increase in depth estimation error with range [1, 2], and lidar point measurements becoming sparse. Longer range planning carries computational costs as well; for example, the cost to maintain a 3D occupancy grid grows cubically with the size of the grid, forcing size- and weight-constrained robots to use a smaller grid that does not make use of all available sensor information [4].
Planning only at short range limits the speed of safe robotic navigation, as the robot must move slowly enough to be able to come to a stop at the boundary of the perceived space, in case new obstacles are detected. [23] address this issue by planning in the unknown space, allowing faster flight while maintaining a safe backup trajectory that exists entirely in the nearer-range known space. However, the planner in this work assumes the unknown space to always be free. Ideally, a robot should be able to reason about uncertainty in noisier, longer-range sensor measurements, and plan efficient paths that also account for the imperfect knowledge of distant regions.
Additionally, complex and cluttered environments make efficient and agile planning especially difficult. Banfi _et al._ [24] previously suggested that reasoning about multiple path hypotheses could be beneficial when planning in uncertain environments. However, the map model used in that work is based on a 3D occupancy grid, whose typical resolutions (15-30 cm, to enable real-time processing) might preclude the existence of safe paths in dense environments. The present work fills this gap.
Other previous works have specifically addressed navigation in forests, a common outdoor environment which demonstrates the challenges of unstructured and obstacle-dense environments. [25] and [14] develop theoretical guarantees on the speed of flight through a forest modelled as a Poisson point process, with the former additionally analyzing the effects of limiting the robot's sensing range. [26] and [27] develop methods for 2D localization in forests, addressing the challenges of obstacle-dense environments with limited semantic information differentiating tree trunk obstacles. [15] present a computationally efficient method for path planning in a forest, by constructing a lattice of candidate paths, then pruning it according to detected obstacle locations. [22] present a forest search and rescue system, mapping a UAV's surroundings using a hybrid approach including a 3D occupancy grid along with a data-efficient object-based representation of tree trunks, then planning paths using A* search. This paper builds upon previous works by reasoning about the _uncertainty_ in tree trunk obstacle estimates, when planning paths through the forest.
### _Perception for planning_
Robotics researchers have begun using machine learning-based perception to enable robots to more meaningfully understand their surroundings. [28, 29], and [30] use 2D bounding box object detections as measurements for landmark-based simultaneous localization and mapping (SLAM). In contrast to previous SLAM methods, which heuristically pick out landmarks from 2D image pixels or 3D point cloud scans, the object detector enables the robot to reason at a higher level, recognizing objects of interest, estimating their positions and sizes, and localizing with respect to them.
Researchers have also used learned perception to augment robotic path planning and navigation. [6] use a Bayesian neural network to identify potentially unsafe regions in stereo images (for example, distinguishing muddy terrain from a paved sidewalk), then use this information to plan safe paths that also decrease the robot's uncertainty about its surroundings, using a next-best view planning approach. [7] train a neural network to predict drivable versus unsafe terrain from sparse lidar scans. [31] extend A* search with semantic weights applied to the graph edges. Finally, [5] use semantic labels on RGBD camera images to identify from a distance regions where a UAV can safely fly in urban environments. Ryll _et al._[5] motivate their work with the specific goal of raising the maximum speed for safe UAV flight, by increasing the robot's perception range. The common theme throughout these works is that semantic information allows a robot to reason about further away regions in the environment, where traditional 3D sensors give sparser and/or noisier measurements. This capability increases the robot's effective perception range, allowing it to recognize potential unsafe areas at a further distance.
Similarly motivated, in this paper we explore the usage of 3D object detectors for path planning. Recent works in 3D object detection have shown impressive results for object detection using sensors such as lidar and stereo cameras. Lidar provides highly precise 3D point clouds which have been used successfully as inputs to 3D object detection methods [9, 10]. Recently, [1] and [8], motivated by the lower cost, weight, and power consumption of stereo cameras compared to lidar, proposed a pseudo-lidar data representation that significantly closed the accuracy gap between lidar- and stereo camera-based object detection methods.
One challenge with stereo camera 3D object detection is the fact that due to significant noise, long-range stereo point clouds cannot be annotated with 3D bounding boxes in a reliable and unbiased way. Previously, in [12], we addressed this problem by introducing a self-supervised data labeling and detector training pipeline. Our approach fuses a series of noisy point clouds into a cleaned global point cloud, clusters obstacles in the fused point cloud, then propagates obstacle locations back onto individual stereo point clouds, using the pose history of the camera.
## III Problem definition
### _Forest environment_
As an example unstructured and obstacle-dense environment for robotic path planning, we consider the planar forest model previously studied in [25, 14], and [15]. The robot's objective is to navigate through the forest to a 2D goal point \(\vec{x}_{\mathrm{G}}=(x_{\mathrm{G}},y_{\mathrm{G}})\) specified in a global coordinate frame, while avoiding densely distributed 2D obstacles representing tree trunks. We model tree trunks as circular obstacles, as seen from a top-down 2D perspective. The set
\[\boldsymbol{O}=\{\vec{O}_{i}=(x_{i},y_{i},d_{i}),\forall i\in[1,n_{\mathrm{ obs}}]\} \tag{1}\]
represents the actual obstacles in the environment, where \(x_{i}\) and \(y_{i}\) are the 2D coordinates of tree \(i\) in the global reference frame, and \(d_{i}\) is its diameter.
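For illustration, a minimal sketch of instantiating such a forest is shown below, following the Poisson-based sampling later described in Section V; the density value, diameter range, and helper name `sample_forest` are illustrative assumptions rather than the exact parameters used in our experiments.

```
# Illustrative generation of a planar forest: the tree count is Poisson-sampled
# from the desired density, diameters are uniform, and overlapping trees are rejected.
import numpy as np

def sample_forest(width, height, density, d_min=0.2, d_max=0.6, rng=None, max_tries=10000):
    rng = rng or np.random.default_rng()
    n_trees = rng.poisson(density * width * height)
    trees = []  # each tree is (x, y, diameter)
    for _ in range(max_tries):
        if len(trees) == n_trees:
            break
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        d = rng.uniform(d_min, d_max)
        # Accept only if the new trunk does not overlap any existing trunk.
        if all((x - tx) ** 2 + (y - ty) ** 2 > ((d + td) / 2) ** 2 for tx, ty, td in trees):
            trees.append((x, y, d))
    return np.array(trees)
```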
### _Obstacle position and size estimation_
The robot does not know the number of obstacles \(n_{\mathrm{obs}}\), their locations, or their sizes ahead of time, and estimates these variables using noisy sensor measurements as it navigates.
We assume the robot estimates the 2D positions and diameters of obstacles as Gaussian-distributed random variables. We define an obstacle estimate as
\[\hat{O}_{i}\sim\mathcal{N}\left[\vec{\mu}_{O_{i}},\Sigma_{O_{i}}\right], \tag{2}\]
where the estimate's mean state vector contains the expected 2D position and diameter of tree \(i\),
\[\vec{\mu}_{O_{i}}=\begin{bmatrix}\mu_{x,i}&\mu_{y,i}&\mu_{d,i}\end{bmatrix}, \tag{3}\]
and \(\Sigma_{O_{i}}\) is a 3D covariance matrix encoding the uncertainty in the estimate. The elements on the diagonal of \(\Sigma_{O_{i}}\), which we denote \(\sigma^{2}_{xx,i}\), \(\sigma^{2}_{yy,i}\), and \(\sigma^{2}_{dd,i}\), are the variances of the estimates of \(x_{i}\), \(y_{i}\), and \(d_{i}\) respectively. Gaussian estimates in this form can be produced by various estimation algorithms such as the Kalman filter and its variants, or factor graph SLAM [32].
These obstacle estimates are updated iteratively using newly received obstacle measurements. We assume that at each time step \(k\), the robot detects some subset of the trees in the forest-for example, all visible trees within a certain maximum range and angular field of view, except for any of those that register as false negatives due to detector errors. We denote the set of trees detected at time step \(k\) as \(\boldsymbol{D}_{k}\subseteq\boldsymbol{O}\).
For each tree \(i\) in \(\boldsymbol{D}_{k}\), the robot receives a range, bearing and diameter measurement:
\[\vec{z}_{i,k}=\begin{bmatrix}r_{i,k}&\phi_{i,k}&d_{i,k}\end{bmatrix}^{T}. \tag{4}\]
We model each measurement component as a Gaussian random variable,
\[r_{i,k}=\bar{r}_{i,k}+W_{i,k}^{r}\left(\bar{r}_{i,k}\right), \tag{5}\]
\[\phi_{i,k}=\bar{\phi}_{i,k}+W_{i,k}^{\phi}, \tag{6}\]
\[d_{i,k}=\bar{d}_{i,k}+W_{i,k}^{d}\left(\bar{r}_{i,k},\bar{d}_{i,k}\right), \tag{7}\]
where \(\bar{r}_{i,k},\ \bar{\phi}_{i,k}\) and \(\bar{d}_{i,k}\) are the measurement means, i.e. the true range and bearing from the robot to the obstacle, and the obstacle's true diameter. \(W_{i,k}^{r},W_{i,k}^{\phi}\) and \(W_{i,k}^{d}\) are zero mean, Gaussian distributed random variables modeling the sensor noise on these quantities. The range measurement noise depends on the range from the robot to the obstacle, and we assume generally that the measurement variance will increase with range, as studied in [2]. We also assume that the diameter measurement noise may vary with range and with the true size of the detected obstacle.
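As an illustration of this measurement model, the sketch below simulates one noisy detection according to Eqs. (5)-(7); the specific noise scales, the helper name `simulate_detection`, and the use of a global-frame bearing (ignoring the robot heading) are simplifying assumptions, not the values used in our experiments.

```
# Illustrative simulation of the range/bearing/diameter measurement model.
import numpy as np

def simulate_detection(obstacle_xy, obstacle_d, robot_xy, rng,
                       sigma_r0=0.1, k_r=0.05, sigma_phi=0.02, k_d=0.01):
    """Return a noisy (range, bearing, diameter) measurement of one obstacle."""
    delta = np.asarray(obstacle_xy) - np.asarray(robot_xy)
    r_true = np.linalg.norm(delta)
    phi_true = np.arctan2(delta[1], delta[0])
    # Range noise grows with range; diameter noise grows with range and true size.
    r_meas = r_true + rng.normal(0.0, sigma_r0 + k_r * r_true)
    phi_meas = phi_true + rng.normal(0.0, sigma_phi)
    d_meas = obstacle_d + rng.normal(0.0, k_d * r_true * obstacle_d)
    return np.array([r_meas, phi_meas, d_meas])

rng = np.random.default_rng(0)
z = simulate_detection(obstacle_xy=(5.0, 2.0), obstacle_d=0.4, robot_xy=(0.0, 0.0), rng=rng)
```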
Given these range-bearing measurements, we calculate the 2D obstacle estimates in (2) by solving a factor graph landmark SLAM problem, defining range and bearing factors between the detected trees at time step \(k\) and the corresponding robot pose factor. Additionally, we estimate the obstacle sizes from the diameter measurements.
### _Local planner definition_
Existing planning pipelines [3, 4, 23] generally include a _local planner_ which generates a dynamically feasible short-term trajectory for the robot. This local planner complements the global planner, which plans a high-level path that reaches all the way to the global goal, but does not necessarily account for the robot dynamics due to computational constraints.
We assume the local planner takes as input a local goal position \(\vec{x}_{\mathrm{L}}=\left(x_{\mathrm{L}},y_{\mathrm{L}}\right)\), the robot dynamics function, and a map representation, and outputs a series of 2D waypoints
\[W_{\mathrm{L}}=\left[\begin{array}{ccc}x_{0}&y_{0}\\ x_{1}&y_{1}\\ \vdots&\vdots\\ x_{n_{w}}&y_{n_{w}}\\ x_{L}&y_{L}\end{array}\right] \tag{8}\]
which form a path from the robot's current state \(\vec{q}_{0}=\begin{bmatrix}x_{0}&y_{0}&\theta_{0}\end{bmatrix}^{T}\) to the goal position \(\vec{x}_{\mathrm{L}}\) through \(n_{w}\) waypoints. In the following sections of this paper, we describe a modular high-level planner which augments the local planner by providing a local goal computed using long-range, uncertain object detections.
## IV Multiple-hypothesis planning under uncertainty
We now present the steps of our multiple-hypothesis path planning method, illustrated in Figure 2. Our planner takes as input the Gaussian obstacle estimates defined in section III-B, constructs a navigation graph that probabilistically represents possible routes through the forest, generates and evaluates multiple path hypotheses, and finally outputs a local goal point for the local planner described in section III-C.
### _Probability model for safe navigation between estimated obstacles_
At a high level, a path through the forest is defined by a series of pairs of obstacles, between which the robot passes as it moves towards the goal. In order to reason about the safety of a given path through the forest, we derive a model for calculating the probability that the robot will be able to safely move between a pair of obstacles whose positions and sizes are estimated assuming Gaussian uncertainty, as defined in section III-B. Figure 3 illustrates this problem.
We begin by calculating the probability that a robot is able to safely move between a pair of one-dimensional obstacles, then generalize this simpler calculation to the two-dimensional case.
#### Iv-A1 1D safety probability calculation
In one dimension, two obstacles \(\vec{O}_{i}\) and \(\vec{O}_{j}\) have scalar positions \(x_{i}\) and \(x_{j}\), and widths \(d_{i}\) and \(d_{j}\). Assuming that \(x_{i}<x_{j}\), if the gap between the right edge of obstacle \(\vec{O}_{i}\) and the left edge of obstacle \(\vec{O}_{j}\) is greater than the robot width, then the robot will be able to safely move between the obstacles. For clarity in the following derivations, we use the obstacle radii \(r_{i}=\frac{d_{i}}{2}\), \(r_{j}=\frac{d_{j}}{2}\). The right edge point of \(\vec{O}_{i}\) is then \(e_{i}=x_{i}+r_{i}\), and the left edge of \(\vec{O}_{j}\) is \(e_{j}=x_{j}-r_{j}\). Then, the width of the free space between these obstacles is
\[S =e_{j}-e_{i} \tag{9}\] \[=x_{j}-x_{i}-(r_{i}+r_{j}). \tag{10}\]
Figure 4 shows an example illustration of the free space \(S\), calculated based on the 1D positions and radii of a pair of obstacles.
Since we do not have perfect environment knowledge, we model the obstacle positions and sizes as Gaussian random variables, with the uncertain position and size of obstacle \(\hat{O}_{i}\) being
\[\hat{x}_{i} \sim\mathcal{N}\left(\mu_{x,i},\sigma_{x,i}^{2}\right), \tag{11}\] \[\hat{r}_{i} \sim\mathcal{N}\left(\mu_{r,i},\sigma_{r,i}^{2}\right), \tag{12}\]
and likewise for \(\hat{O}_{j}\).
The estimated width of the free space \(S\), as a linear combination of Gaussian random variables according to (9), is then also Gaussian distributed as
\[\hat{S}\sim\mathcal{N}\left(\mu_{S},\sigma_{S}^{2}\right), \tag{13}\]
where
\[\mu_{S}=\left(\mu_{x,j}-\mu_{r,j}\right)-\left(\mu_{x,i}+\mu_{r,i}\right), \tag{14}\]
\[\sigma_{S}^{2}=\sigma_{x,i}^{2}+\sigma_{r,i}^{2}+\sigma_{x,j}^{2}+\sigma_{r,j} ^{2}. \tag{15}\]
Let the known width of the robot be \(w_{R}\). Then, the probability that our robot can move between obstacles \(\hat{O}_{i}\) and \(\hat{O}_{j}\) is the probability that the space between them is enough to avoid collision, i.e. \(P(S>w_{R})\). We can exactly compute this probability as \(1\) minus the cumulative distribution function (CDF) of \(S\) evaluated at \(w_{R}\), giving us an expression for the safety probability using the Gaussian CDF:

\[P(S>w_{R})=\frac{1}{2}\left(1-\text{erf}\left(\frac{w_{R}-\mu_{S}}{\sigma_{S}\sqrt{2}}\right)\right), \tag{16}\]
where the mean and standard deviation of \(S\), respectively \(\mu_{S}\) and \(\sigma_{S}\), are given by (14) and (15). erf denotes the error function used to compute the Gaussian CDF.
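A minimal sketch of this calculation is given below, assuming each 1D obstacle is summarized by its position mean/variance and radius mean/variance; the function and variable names are illustrative.

```
# Sketch of the 1D safety probability in Eqs. (14)-(16).
import math

def safety_probability_1d(obs_i, obs_j, robot_width):
    """Each obstacle is (mu_x, var_x, mu_r, var_r); assumes obs_i lies left of obs_j."""
    mu_x_i, var_x_i, mu_r_i, var_r_i = obs_i
    mu_x_j, var_x_j, mu_r_j, var_r_j = obs_j
    mu_s = (mu_x_j - mu_r_j) - (mu_x_i + mu_r_i)                  # Eq. (14)
    sigma_s = math.sqrt(var_x_i + var_r_i + var_x_j + var_r_j)    # Eq. (15)
    # P(S > w_R): Gaussian tail probability at the robot width, Eq. (16).
    return 0.5 * (1.0 - math.erf((robot_width - mu_s) / (sigma_s * math.sqrt(2.0))))

# Example: centers 1.5 m apart, radii of roughly 0.2 m, robot 0.6 m wide.
p = safety_probability_1d((0.0, 0.05, 0.2, 0.01), (1.5, 0.05, 0.2, 0.01), 0.6)
```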
#### Iv-A2 2D safety probability calculation
We now solve for the probability that the robot can safely navigate between a pair of 2D circular obstacles specified by their positions and diameters, showing that the 2D problem can be transformed to the 1D case without loss of generality.
Given a pair of 2D obstacle estimates \(\hat{O}_{i}=\left(\vec{\mu}_{\hat{O},i},\Sigma_{\hat{O},i}\right)\) and \(\hat{O}_{j}=\left(\vec{\mu}_{\hat{O},j},\Sigma_{\hat{O},j}\right)\), as defined in (2), the angle between the world frame \(x\)-axis and the line passing through both position estimate means is
\[\theta_{ij}=\arctan\left(\frac{\mu_{y,j}-\mu_{y,i}}{\mu_{x,j}-\mu_{x,i}}\right). \tag{17}\]
We can then compute the rotation matrix \(R_{ij}\in SO(2)\), which rotates the obstacle mean positions into a reference frame where the obstacle centers lie along the transformed x-axis,
\[R_{ij}=\begin{bmatrix}\cos\left(\theta_{ij}\right)&\sin\left(\theta_{ij} \right)\\ -\sin\left(\theta_{ij}\right)&\cos\left(\theta_{ij}\right)\end{bmatrix}. \tag{18}\]
Note that the negative \(\sin\) term is located at the bottom-left of \(R_{ij}\), as this is a rotation by \(-\theta_{ij}\).
We define the mean vector and covariance matrix which relate to only the position of obstacle \(i\), omitting the obstacle diameter, as
\[\vec{\mu}_{xy,i}=\begin{bmatrix}\mu_{x,i}&\mu_{y,i}\end{bmatrix}^{T} \tag{19}\]
\[\Sigma_{xy,i}=\begin{bmatrix}\sigma^{2}_{xx,i}&\sigma^{2}_{xy,i}\\ \sigma^{2}_{xy,i}&\sigma^{2}_{yy,i}\end{bmatrix}, \tag{20}\]
where \(\Sigma_{xy,i}\) is the upper-left \(2\times 2\) submatrix of the full covariance matrix \(\Sigma_{i}\).
We transform the position mean and covariance using \(R_{ij}\), obtaining the transformed mean and transformed covariance matrix
\[\vec{\mu}^{\prime}_{xy,i}=R_{ij}\vec{\mu}_{xy,i} \tag{21}\]
\[\Sigma^{\prime}_{xy,i}=R_{ij}\Sigma_{xy,i}R^{T}_{ij}, \tag{22}\]
and likewise for the transformed mean and covariance of obstacle \(\hat{O}_{j}\).
We then marginalize over the transformed \(y\)-axis to find the position mean and variance along the transformed \(x\)-axis. This leaves us with the simpler 1D probability calculation, and we can then apply the equations for the 1D safe navigation probability along the transformed \(x\)-axis.
Defining the components of the transformed mean and covariance of obstacle \(i\) as
\[\vec{\mu}^{\prime}_{xy,i}=\begin{bmatrix}\mu^{\prime}_{x,i}&\mu^{\prime}_{y, i}\end{bmatrix}^{T} \tag{23}\]
\[\Sigma^{\prime}_{xy,i}=\begin{bmatrix}\sigma^{2^{\prime}}_{xx,i}&\sigma^{2^{ \prime}}_{xy,i}\\ \sigma^{2^{\prime}}_{xy,i}&\sigma^{2^{\prime}}_{yy,i}\end{bmatrix}, \tag{24}\]
we can then express the mean and variance of the free space \(S\) between the two obstacles, _in the case of two-dimensional circular obstacles_, as

\[\mu_{S}=\left(\mu_{x,j}^{\prime}-\mu_{r,j}\right)-\left(\mu_{x,i}^{\prime}+\mu_{r,i}\right), \tag{25}\] \[\sigma_{S}^{2}=\sigma_{xx,i}^{2^{\prime}}+\sigma_{r,i}^{2}+\sigma_{xx,j}^{2^{\prime}}+\sigma_{r,j}^{2}. \tag{26}\]

Using these terms, we can then compute the probability of safe navigation between the two obstacles using equation (16).

Fig. 4: Illustration of the width of the safe space between a pair of obstacles in 1D, which is used to calculate the probability that the robot can safely pass between the obstacles.

Fig. 3: Our approach models the probability that a robot of a given width is able to safely pass between a pair of circular obstacles with Gaussian distributed uncertainty in their 2D positions and radii.

Fig. 2: Illustration of the components of our high-level planning method, and their inputs and outputs.
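The following sketch combines the rotation, marginalization, and Gaussian tail probability steps above into a single function; it is a simplified reading of Eqs. (17)-(26) and (16) with illustrative names, not the implementation used in our experiments.

```
# Sketch of the 2D-to-1D reduction followed by the 1D safety probability.
import numpy as np
from math import erf, sqrt

def safety_probability_2d(mu_i, cov_i, mu_r_i, var_r_i,
                          mu_j, cov_j, mu_r_j, var_r_j, robot_width):
    """mu_* are 2D position means, cov_* their 2x2 covariances, r the radii."""
    theta = np.arctan2(mu_j[1] - mu_i[1], mu_j[0] - mu_i[0])     # Eq. (17)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])                              # Eq. (18): rotation by -theta
    # Rotate means and covariances so both obstacle centers lie on the new x-axis.
    x_i = (R @ np.asarray(mu_i))[0]
    x_j = (R @ np.asarray(mu_j))[0]
    var_x_i = (R @ np.asarray(cov_i) @ R.T)[0, 0]                # marginal x-variance
    var_x_j = (R @ np.asarray(cov_j) @ R.T)[0, 0]
    mu_s = (x_j - mu_r_j) - (x_i + mu_r_i)                       # Eq. (25)
    sigma_s = sqrt(var_x_i + var_r_i + var_x_j + var_r_j)        # Eq. (26)
    return 0.5 * (1.0 - erf((robot_width - mu_s) / (sigma_s * sqrt(2.0))))
```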
### _Navigation graph construction_
Given estimates of the obstacle positions and sizes, we construct a graph which represents the high-level paths the robot can take through the environment to reach its goal. Our graph encodes information on _path distance_ and _path safety_, while also taking into account the _uncertainty in obstacle estimates_. Figure 5 illustrates the graph construction, and Algorithm 1 lists the steps of the process.
#### Iv-B1 Delaunay triangulation cell decomposition
Previous works have used the Delaunay triangulation for localization in forests, due to its ability to qualitatively describe regions in the environment [26, 27]. Using the Delaunay triangulation algorithm [33], we divide the workspace into triangular cells with obstacle center points as their vertices. The Delaunay triangulation computes these cells such that each triangular cell, defined by a triplet of obstacles, contains no other obstacle center points within its boundaries. Figure 5.b) shows an example Delaunay triangulation.
The edges between the Delaunay triangulation cells, which we will refer to as _cell faces_ (to avoid confusion with the navigation graph edges which we will define later), represent thresholds where the robot passes _between_ a pair of obstacles, as described in [34].
The Delaunay triangulation is defined for our purposes as \(DT(\hat{\mathbf{O}})=(V_{\mathrm{DT}},E_{\mathrm{DT}},T_{\mathrm{DT}})\), where \(\hat{\mathbf{O}}\) is the set of estimated obstacles. \(V_{\mathrm{DT}}\) are the vertices of the triangulation, placed at the mean center positions of the estimated obstacles, so that \(V_{\mathrm{DT}}=\{\vec{\mu}_{xy,i}\mid\hat{O}_{i}\in\hat{\mathbf{O}}\}\). \(E_{\mathrm{DT}}\) is the set of Delaunay triangulation cell faces, while \(T_{\mathrm{DT}}\) is the set of obstacle triplets that form the cells; therefore, \(\exists(\hat{O}_{i},\hat{O}_{j},\hat{O}_{k})\in T_{\mathrm{DT}}\iff\exists e_{ ij},e_{jk},e_{ik}\in E_{\mathrm{DT}}\). The circle circumscribed around the vertices of the triangle formed by these obstacles contains no other obstacle center points \(\vec{\mu}_{\hat{O},l}\) such that \(l\notin\{i,j,k\}\).
For each Delaunay cell face between a pair of obstacles \(\hat{O}_{i}\) and \(\hat{O}_{j}\), we use Equation 16 to calculate the probability that the robot can safely pass between \(\hat{O}_{i}\) and \(\hat{O}_{j}\), transitioning from one Delaunay cell to the next. Since at a high level, a path through the forest can be thought of as a series of obstacle pairs between which the robot should pass, we can therefore equivalently define a path by a series of Delaunay triangulation cells. In the following sections of this paper, we define a graph structure over the Delaunay cells, over which we can search in order to determine possible paths to the goal.
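As a small illustration of this decomposition, the sketch below triangulates a handful of estimated obstacle centers with SciPy and enumerates the cell faces, i.e. the candidate obstacle pairs whose safety can then be scored with the model of Section IV-A; the example coordinates are arbitrary.

```
# Delaunay-triangulate obstacle centers and enumerate cell faces (obstacle pairs).
import numpy as np
from scipy.spatial import Delaunay

centers = np.array([[0.0, 0.0], [2.0, 0.5], [1.0, 2.0], [3.0, 2.5]])  # mean positions
tri = Delaunay(centers)

faces = set()
for ia, ib, ic in tri.simplices:            # each simplex is a triangle of obstacle indices
    for i, j in ((ia, ib), (ib, ic), (ia, ic)):
        faces.add(tuple(sorted((i, j))))    # shared faces appear only once

# Each face (i, j) is a threshold between Delaunay cells; its safety probability
# can now be computed from the pairwise Gaussian obstacle estimates.
```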
#### Iv-B2 Definitions of graph terms
Before introducing our navigation graph construction procedure, we define the following terms:
* \(p_{\mathrm{target}}\in[0,1)\), the _desired safety probability_, specified as a planner parameter. The planner will attempt to find a path whose safety (i.e. the probability that the robot will be able to reach the global goal without crashing into an obstacle by following this path) exceeds this threshold.
* \(G_{\mathrm{N}}=(V_{\mathrm{N}},E_{\mathrm{N}})\), the **navigation graph**, defined by its vertices and edges. The vertices of this graph are placed on the Delaunay triangulation cell faces, and are connected by edges to other vertices on adjacent cell faces. A search over this graph therefore finds a path passing between a series of obstacle pairs in the forest. Figure 5.h) illustrates an example navigation graph, showing the placement of vertices and edges.
Fig. 5: Steps of the graph construction process.
```
0: A method for computing the probability that the robot can safely pass between a pair of obstacles, \(P_{safe}(\hat{O}_{i},\hat{O}_{j})\), which returns a probability in the set \(\{p\in\mathcal{R}\mid 0\leq p\leq 1\}\).
0:\(\hat{O}_{i}=\left(\vec{\mu}_{O,i},\Sigma_{O,i}\right),i\in[1,n_{\mathrm{obs}}]\), obstacle estimates. \(\vec{q}_{start}\), the robot's starting state. \(\vec{x}_{\mathrm{G}}\), the global goal position. \(p_{\mathrm{target}}\in[0,1]\), the desired safety probability. \(r_{\mathrm{short}}\in\mathbb{R}_{\geq 0}\), the threshold between the short and long range zones. \(w_{robot}\in\mathbb{R}_{\geq 0}\), the robot width.
0:\(V_{\mathrm{N}},E_{\mathrm{N}}\), the navigation graph structure. \(c_{\mathrm{dist}}\), the edge distance costs. \(p_{\mathrm{safe}}\), the vertex safety probabilities. \(R\), the vertex range zones.
1:\(V_{\mathrm{DT}},E_{\mathrm{DT}}=DT(\hat{\mathbf{O}})\)// Delaunay triangulation.
2:\(V_{\mathrm{N}}\leftarrow\emptyset\)// Initialize navigation graph vertices.
3:\(E_{\mathrm{N}}\leftarrow\emptyset\)// Initialize navigation graph edges.
4:for all\(e_{DT,ij}\in E_{\mathrm{DT}}\)do// Delaunay edge between obstacles i, j.
5: // Compute distance from robot to obstacles.
6:\(r_{i}=compute\_distance(\vec{\mu}_{O,i},\vec{q}_{start})\)
7:\(r_{j}=compute\_distance(\vec{\mu}_{O,j},\vec{q}_{start})\)
8:if\(r_{i}>r_{\mathrm{short}}\)or\(r_{j}>r_{\mathrm{short}}\)then
9:\(r\gets long\)
10:else
11:\(r\gets short\)
12:endif
13:\(prob=P_{safe}(\hat{O}_{i},\hat{O}_{j})\)
14:if\(prob<p_{\mathrm{target}}\)then
15:if\(r=short\)then
16:\(V_{new}=\emptyset\)
17:else
18:\(V_{new}=\{midpoint(\vec{\mu}_{O,i},\vec{\mu}_{O,j})\}\)
19:endif
20:else
21:\(V_{new}=\{place\_points(\hat{O}_{i},\hat{O}_{j},w_{robot})\}\)
22:endif
23:for all\(v_{k}\in V_{new}\)do
24:\(p_{\mathrm{safe}}(v_{k})=prob\)
25:\(c_{\mathrm{safe}}(v_{k})=-\log(prob)\)
26:\(R(v_{k})=r\)
27:for all\(v_{l}\in V_{\mathrm{N}}\) that share a Delaunay cell with \(v_{k}\)do
28:\(E_{\mathrm{N}}\gets E_{\mathrm{N}}\cup\{e_{kl}\}\)
29:\(c_{\mathrm{dist}}(e_{kl})=distance(v_{k},v_{l})\)
30:endfor
31:endfor
32:\(V_{\mathrm{N}}\gets V_{\mathrm{N}}\cup V_{new}\)
33:endfor
34:return\((V_{\mathrm{N}},E_{\mathrm{N}},c_{\mathrm{dist}},p_{\mathrm{safe}},c_{ \mathrm{safe}},R)\)
```
**Algorithm 1** Constructing the navigation graph
* \(v_{i}\in V_{\mathrm{N}},e_{ij}\in E_{\mathrm{N}}\), the individual vertices and edges.
* \(c_{\mathrm{dist}}:e_{ij}\rightarrow\mathbb{R}_{\geq 0}\), a mapping which gives the Euclidean distance cost for each navigation graph edge. \(\mathbb{R}_{\geq 0}\) is the set of non-negative real numbers.
* \(p_{\mathrm{safe}}:v_{i}\rightarrow\{p\in\mathbb{R}\mid 0\leq p\leq 1\},\forall i \in V_{\mathrm{N}}\), a mapping which gives the safety probability for each graph vertex, i.e. the probability that the robot can safely move between the pair of obstacles associated with this vertex.
* \(c_{\mathrm{safe}}:v_{i}\rightarrow\mathbb{R}_{\geq 0}\), defined as the safety cost \(c_{\mathrm{safe}}(v_{i})=-\log(p_{\mathrm{safe}}(v_{i})),\forall i\in V_{ \mathrm{N}}\).
* \(\mathcal{R}=\{short,long\}\), the set of **range zones**. Our planner treats vertices differently depending on their range zone, due to differing amounts of sensor noise and estimation uncertainty at different sensor ranges.
* \(r_{\mathrm{short}}\in\mathbb{R}_{\geq 0}\), the range threshold defining the two range zones. The threshold is defined as a planner parameter.
* \(R:v_{i}\rightarrow\mathcal{R}\), a **range mapping** which maps a vertex to the range zone to which it belongs.
#### Iv-B3 Vertex range zones
Obstacle estimation uncertainty varies significantly with distance from the robot, due to increasing sensor measurement noise with range, and our graph structure therefore stores obstacle range information so that the planner can account for this variation.
As illustrated in Figure 5.b), we divide the estimated obstacles into two _range zones_: short and long, divided by the threshold \(r_{\mathrm{short}}\). Obstacles at distance less than \(r_{\mathrm{short}}\) are considered short range, and all others are considered long range. We assume that at short range, obstacle position and size estimates are fairly confident, having been updated recursively with multiple range-bearing measurements from detections. Beyond the short range, obstacle estimates contain significant uncertainty due to higher measurement noise at further ranges.
#### Iv-B4 Navigation graph vertex placement
The placement of vertices in the graph depends on obstacle estimate safety probabilities as well as their range zones. For safe Delaunay cell faces, i.e. \(p_{\mathrm{safe}}(e_{DT,ij})\geq p_{\mathrm{target}}\), we allow the robot to plan paths between the estimated obstacles \(\hat{O}_{i}\) and \(\hat{O}_{j}\), and we add corresponding vertices to the graph as shown in Figure 5.c).
For an unsafe Delaunay cell face, i.e. \(p_{\mathrm{safe}}(e_{DT,ij})<p_{\mathrm{target}}\), the decision on whether or not to add graph vertices along this cell face depends on the range zone of obstacles \(\hat{O}_{i}\) and \(\hat{O}_{j}\). At short range, we do not place graph vertices on unsafe Delaunay cell faces, as demonstrated in Figure 5.e). When planning around close-by obstacles, the robot may not have enough time to collect more measurements and update its estimates of these obstacles. Therefore, the robot should not risk a crash by passing between unsafe obstacle pairs in the short range.
In contrast, path planning between uncertain long-range obstacles should take into account the fact that the robot will have time to safely collect additional sensor information, and update its estimate of these obstacles before closely approaching them. Preventing the robot from planning between uncertain, far-off obstacles will likely lead to excessively conservative planning behavior in obstacle-dense environments. During the graph construction, we therefore place navigation graph vertices on Delaunay cell faces between unsafe long-range obstacles, recognizing that as the robot receives more detections, some of these uncertain vertices will resolve to safe and viable navigation paths. The multiple-hypothesis planner we introduce in the next section of this paper intelligently considers multiple of these candidate paths, making use of uncertain information from the obstacle estimates.
We place multiple vertices on long Delaunay cell faces which are determined to be very safe. These cell faces correspond to very wide gaps between obstacles, where the overall path distance may change significantly depending on whether the robot decides to pass nearby one of the obstacles or the other. Placing multiple vertices on these cell faces therefore allows the graph search to better approximate the geometric length of paths.
For each new vertex added on Delaunay cell face \(e_{DT,ij}\), we assign the vertex to the short range zone if both obstacles \(\hat{O}_{i}\) and \(\hat{O}_{j}\) are within the short range threshold \(r_{\mathrm{short}}\), and assign the vertex to the long range zone if _either_ obstacle is outside the short range.
We record the Delaunay cell face which each graph vertex lies on, and we additionally save the safety probability of the Delaunay cell face as \(p_{\mathrm{safe}}(v_{k})=P_{safe}(\hat{O}_{i},\hat{O}_{j}),\forall k\in V_{DT}\). The saved range zone and safety probability information will later be used by the multiple-hypothesis planner.
#### Iv-B5 Graph edge construction
For each Delaunay cell, we add graph edges to \(E_{\mathrm{N}}\) connecting each vertex on one of the cell faces with all vertices on other cell faces, as shown in Figure 5.d). For each \(e_{ij}\in E_{\mathrm{N}}\), we set the distance cost \(c_{\mathrm{dist}}(e_{ij})\) to the 2D Euclidean distance between the 2D positions of vertices \(v_{i}\) and \(v_{j}\).
Finally, we add the robot start and goal positions as graph vertices \(v_{start}\) and \(v_{goal}\), shown in Figure 5.h). If the start and/or goal lie within a Delaunay triangulation cell, we connect them via graph edges, weighted by 2D Euclidean distance, to all graph vertices that lie on the faces of this cell. If either the start or goal lies outside all Delaunay cells, we connect them to all visible vertices on the cell faces at the boundary of the Delaunay triangulation (i.e. all vertices which can be connected by a straight line segment in 2D to the start/goal, without the line segment passing through any other cell faces). We set \(p_{\mathrm{safe}}(v_{start})=p_{\mathrm{safe}}(v_{goal})=1.0\), i.e. the robot does not incur any safety cost traveling through the start and goal vertices.
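For illustration, a compact sketch of assembling such a graph with networkx is shown below; the vertex placement rules of Algorithm 1 (how many vertices per face, range-zone handling, start/goal connection) are abstracted away, and all names are illustrative.

```
# Sketch of assembling the navigation graph: vertices carry p_safe / c_safe,
# edges carry Euclidean distance costs between vertices on different cell faces.
import itertools
import networkx as nx
import numpy as np

def build_navigation_graph(cells, vertex_xy, vertex_face, p_safe_of_face):
    """cells: list of cells, each given as the ids of its three faces.
    vertex_xy: vertex id -> 2D point; vertex_face: vertex id -> face id."""
    G = nx.Graph()
    for v, xy in vertex_xy.items():
        p = p_safe_of_face[vertex_face[v]]
        G.add_node(v, xy=np.asarray(xy), p_safe=p, c_safe=-np.log(max(p, 1e-12)))
    for face_ids in cells:
        verts = [v for v in vertex_xy if vertex_face[v] in face_ids]
        for u, w in itertools.combinations(verts, 2):
            if vertex_face[u] != vertex_face[w]:   # connect only across different faces
                dist = float(np.linalg.norm(G.nodes[u]["xy"] - G.nodes[w]["xy"]))
                G.add_edge(u, w, c_dist=dist)
    return G
```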
### _Multiple-hypothesis planning under uncertainty_
We now present a high-level planning method which uses the constructed navigation graph to generate and evaluate candidate paths to the goal, based on multiple environment hypotheses that take into account the uncertainty in obstacle estimates. Algorithm 2 outlines the multiple-hypothesis planning algorithm.
#### Iv-C1 Definitions of planner terms
We define the following terms, used in addition to those defined in IV-B2 to describe our multiple hypothesis planning algorithm.
* \(H:v_{i}\rightarrow\{0,1\}\), a **hypothesis** of the environment, defined as a mapping which takes as input a navigation graph vertex and outputs either a \(0\), indicating that _in this hypothesis_, the vertex is not safe to pass through, or a \(1\), indicating that it is safe.
* \(P\subset V_{\mathrm{N}}\), a **path** through the graph, defined as an _ordered_ subset of the graph vertices. For any vertex \(v_{i}\) at index \(P[k]\) in the path, and vertex \(v_{j}\) at index \(P[k+1]\), there must exist a graph edge \(e_{i,j}\in E_{\mathrm{N}}\).
The planner uses the following parameters:
* \(N_{\mathrm{hyp}}\in\mathbb{N}_{>0}\), the maximum number of path hypotheses to generate.
* \(p_{\mathrm{min}}\in[0,1)\), with \(p_{\mathrm{min}}<p_{\mathrm{target}}\) a minimum safety probability for a graph vertex to be considered by the multiple hypothesis planner. This parameter is optional, and by default can be set to zero, causing it to have no effect on the planner.
* \(\alpha_{\mathrm{dist}}\in\mathbb{R}_{\geq 0}\), the weight on path length, and \(\alpha_{\mathrm{safe}}\in\mathbb{R}_{\geq 0}\), the weight on path safety. These weights are used to evaluate the candidate paths in order to decide on the best overall path.
#### Iv-C2 Hypothesis generation
Our planner requires a shortest path search algorithm \(SP(V_{\mathrm{N}},E_{\mathrm{N}},c_{\mathrm{dist}},H)\). The input \(H\) is a _hypothesis_, which excludes some subset of the vertices \(v_{i}\in V_{\mathrm{N}}\) that are considered to be unsafe (i.e. \(H(v_{i})=0\)).
We first construct an initial hypothesis \(H_{0}\) with all graph vertices marked as safe, i.e. \(\forall v_{i},H_{0}(v_{i})=1\). Optionally, if a minimum probability \(p_{\mathrm{min}}\) greater than zero is specified, for any vertices with \(p_{\mathrm{safe}}(v_{i})<p_{\mathrm{min}}\), we set \(H_{0}(v_{i})\) to 0. The threshold \(p_{\mathrm{min}}\) can be used to exclude extremely unsafe vertices from the multiple hypothesis planner, making the path search more efficient.
The planner then computes the shortest path \(P_{0}=SP(V_{\mathrm{N}},E_{\mathrm{N}},c_{\mathrm{dist}},H_{0})\), finding the shortest possible path to the goal according only to the Euclidean distances given by the navigation graph edge distance costs \(c_{\mathrm{dist}}(e_{ij})\). We initialize a set of candidate paths \(\mathcal{P}\), at first containing only \(P_{0}\). Figure 6.a) illustrates this step.
The planner then generates additional candidate paths, up to \(N_{\mathrm{hyp}}\) in total, each planned under a different hypothesis of which vertices in the graph are safe to pass through. We store the graph vertices \(v_{i}\in P_{0}\) in a priority queue \(\mathcal{Q}\). Each vertex is stored along with a copy of \(H_{0}\). We denote the priorities used for the queue as \(\pi_{0,i}\) and calculate the priority for each vertex \(v_{i}\) as \(\pi_{0,i}=-(1-p_{\mathrm{safe}}(v_{i}))\), the negative of the likelihood that vertex \(v_{i}\) is unsafe. Computing the priorities in this way allows us to draw the vertex which is most likely to be unsafe from the queue at each iteration of the multiple hypothesis path search.
At each planner iteration, we use \(\mathcal{Q}\) to find the graph vertex \(v_{i}\) which is most likely to be unsafe. We pop this vertex from the queue, with its associated hypothesis \(H_{j}\), then initialize a new hypothesis \(H_{k}\) as a copy of \(H_{j}\). We then set \(H_{k}(v_{i})=0\).
We then plan the shortest path _under this hypothesis_, \(P_{k}=SP(V_{\mathrm{N}},E_{\mathrm{N}},c_{\mathrm{dist}},H_{k})\), finding the shortest path assuming that the graph vertex \(v_{i}\) is unsafe. This new path will be added to the set of candidate paths if it is not a duplicate of an existing path, and if the safety of the path evaluated only over the short range vertices in the path is above \(p_{\mathrm{target}}\). This
condition guarantees that the robot will not execute a path that is unacceptably unsafe in the short range.
If the new path \(P_{k}\) passes these checks, we add it to the set of candidate paths \(\mathcal{P}\), and add all vertices on the path to the queue \(\mathcal{Q}\), along with hypothesis \(H_{k}\). For each vertex \(v_{l}\in P_{k}\), the priority \(\pi_{k,l}\) is stored as \((1-p_{\mathrm{safe}}(v_{l}))\cdot\pi_{k,i}\), indicating that the likelihood of a hypothesis depends conditionally on multiple graph vertices being unsafe. Figure 6.b-c show the iterative process of planning, excluding graph vertices using the priority queue, then planning again under the new hypothesis.
We terminate the search once the maximum number of hypotheses has been reached, i.e. \(|\mathcal{P}|=N_{\mathrm{hyp}}\), or a path \(P_{i}\) is found such that the safety of \(P_{i}\) is above the desired safety probability, i.e. \(\prod_{v_{j}\in P_{i}}p_{\mathrm{safe}}(v_{j})\geq p_{\mathrm{target}}\). In calculating the path safety in this way, we assume that the values \(p_{\mathrm{safe}}(v_{i})\) of different graph vertices are independent of one another. The output of the search process is a set of candidate paths through the navigation graph, as shown in the example in Figure 6.d.
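The sketch below illustrates the queue mechanics described above using Python's heapq (a min-heap, so storing negated unsafe-likelihoods pops the most-likely-unsafe vertex first). The graph search and the product-of-safeties path safety are passed in as hypothetical helpers, and the short-range safety check of the full algorithm is omitted for brevity; this is a simplified sketch, not Algorithm 2 verbatim.

```
# Compact sketch of the hypothesis queue for multiple-hypothesis path generation.
import heapq
import itertools

def generate_candidate_paths(initial_path, p_safe, n_hyp, p_target,
                             plan_shortest_path_excluding, path_safety):
    """p_safe: vertex -> safety probability; the last two arguments are
    hypothetical helpers for the graph search and the path safety product."""
    paths = [tuple(initial_path)]
    tie = itertools.count()                     # tie-breaker for equal priorities
    queue = []
    for v in initial_path:
        heapq.heappush(queue, (-(1.0 - p_safe[v]), next(tie), v, frozenset()))
    while queue and len(paths) < n_hyp:
        prio, _, v, blocked = heapq.heappop(queue)
        blocked = blocked | {v}                 # hypothesis: vertex v is unsafe
        path = plan_shortest_path_excluding(blocked)
        if path is None or tuple(path) in paths:
            continue
        paths.append(tuple(path))
        if path_safety(path) >= p_target:       # safe enough, stop searching
            break
        for u in path:
            heapq.heappush(queue, ((1.0 - p_safe[u]) * prio, next(tie), u, blocked))
    return [list(p) for p in paths]
```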
#### Iv-C3 Candidate path evaluation
Given the set of paths \(\mathcal{P}\), we calculate the _distance_ and _safety_ costs for each path, as
\[C_{\mathrm{dist}}(P_{i})=\sum_{(v_{j},v_{k})\in P_{i}}c_{\mathrm{dist}}(e_{jk}), \tag{27}\]
and
\[C_{\mathrm{safe}}(P_{i})=\sum_{v_{j}\in P_{i}}c_{\mathrm{safe}}(v_{j}). \tag{28}\]
Since the distance costs and safety costs are on different scales, we normalize both across all paths. For all \(P_{i}\in\mathcal{P}\):
\[C_{\mathrm{dist}}(P_{i})=\frac{C_{\mathrm{dist}}(P_{i})}{\max_{P_{j}\in \mathcal{P}}C_{\mathrm{dist}}(P_{j})}, \tag{29}\]
\[C_{\mathrm{safe}}(P_{i})=\frac{C_{\mathrm{safe}}(P_{i})}{\max_{P_{j}\in \mathcal{P}}C_{\mathrm{safe}}(P_{j})}. \tag{30}\]
Finally, we compute the overall cost of each path
\[C_{\mathrm{total}}(P_{i})=\alpha_{\mathrm{dist}}\cdot C_{\mathrm{dist}}(P_{i}) +\alpha_{\mathrm{safe}}\cdot C_{\mathrm{safe}}(P_{i}), \tag{31}\]
and use the final path with minimum cost
\[P^{*}=\operatorname*{arg\,min}_{P_{i}\in\mathcal{P}}\left(C_{\mathrm{total}}( P_{i})\right). \tag{32}\]
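A minimal sketch of this scoring step is given below, assuming a networkx-style graph with the c_dist edge attributes and c_safe node attributes introduced earlier; the function name and weight defaults are illustrative.

```
# Sketch of candidate-path scoring: normalized distance and safety costs, weighted sum.
def best_path(candidate_paths, G, alpha_dist=1.0, alpha_safe=1.0):
    c_dist = [sum(G.edges[u, v]["c_dist"] for u, v in zip(p, p[1:]))
              for p in candidate_paths]
    c_safe = [sum(G.nodes[v]["c_safe"] for v in p) for p in candidate_paths]
    d_max = max(c_dist) or 1.0                  # guard against division by zero
    s_max = max(c_safe) or 1.0
    totals = [alpha_dist * d / d_max + alpha_safe * s / s_max
              for d, s in zip(c_dist, c_safe)]
    return candidate_paths[totals.index(min(totals))]
```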
#### Iv-C4 Local goal computation
Starting from the robot position, at vertex \(v_{0}\) in \(P^{*}\), we then calculate the 2D goal point for the local planner, \((x_{\mathrm{L}},y_{\mathrm{L}})\), by finding the point along the 2D line segments \(e_{ij}\), such that \((v_{i},v_{j})\in P^{*}\), which is ahead of the robot by a plan-ahead distance \(d_{\mathrm{local}}\in\mathbb{R}_{\geq 0}\). This approach uses the high-level path \(P^{*}\) to guide the local planner, which handles the details of generating a dynamically-feasible trajectory for the robot and accounting for the specific geometry of nearby obstacles.
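The following sketch illustrates this purely geometric step, walking the plan-ahead distance along the selected path's 2D segments; the function name and inputs are illustrative.

```
# Walk a plan-ahead distance d_local along the chosen path to obtain the local goal.
import numpy as np

def local_goal(waypoints_xy, d_local):
    """waypoints_xy: (N, 2) array of the chosen path's vertex positions."""
    pts = np.asarray(waypoints_xy, dtype=float)
    remaining = d_local
    for a, b in zip(pts[:-1], pts[1:]):
        seg = np.linalg.norm(b - a)
        if remaining <= seg:
            return a + (b - a) * (remaining / max(seg, 1e-12))
        remaining -= seg
    return pts[-1]   # path shorter than d_local: aim for its final vertex
```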
### _Planner safety guarantees_
Given the graph structure, paths through the graph, and formal probabilities of a robot passing safely between a pair of obstacles, one can calculate probability guarantees for each path.
Given the vertices in \(P_{i}\), and assuming that each passage between a pair of obstacles is an independent event, the probability of a collision for \(P_{i}\) is
\[p_{\mathrm{col}}(P_{i})=\left(1-\prod_{v_{j}\in P_{i}}p_{\mathrm{safe}}(v_{j})\right) \tag{33}\]
One could also simply consider the worst case probability, or
\[p_{\mathrm{col}}(P_{i})\geq\left(1-\min_{v_{j}\in P_{i}}p_{\mathrm{safe}}(v_{ j})\right) \tag{34}\]
## V Experiments and results
### _Simulated forest environments_
We evaluate our planner in a large-scale experimental trial, using randomly generated simulated forest environments. We vary the complexity of the simulated forests, in terms of the overall density of trees and their distribution, in order to test our hypothesis that our probabilistic planner should enable safer navigation in more complex and obstacle-dense environments.
We generate our forests by specifying a desired tree density per area, then sampling the total number of trees in the environment according to the Poisson distribution as in [25]. Tree sizes are uniformly distributed in a defined range, and we sample trees such that they do not overlap with each other. We first conduct simulations in forest environments of varying
Fig. 6: Example of the multiple-hypothesis planning procedure using the navigation graph. The planner generates hypotheses of the environment, until finding a safe path–in this case, the pink path–with a safety probability over the desired safety \(p_{\mathrm{target}}\), or until generating the maximum allowed number of hypotheses.
```
Require: a shortest-path graph search SP(V_N, E_N, c_dist, H), which returns a path P.
Input:   V_N, E_N, the graph structure; c_dist, the edge distance costs;
         p_safe, the vertex safety probabilities; R, the vertex range zones;
         p_target, the desired safety probability; p_min, the minimum vertex safety to consider.
Output:  P, a set of candidate paths.

procedure Generate initial hypothesis H_0
    for all v_i in V_N do
        if p_safe(v_i) < p_min then
            H_0(v_i) <- 0                       // 0 indicates an unsafe vertex.
        else
            H_0(v_i) <- 1                       // 1 indicates a safe vertex.
        end if
    end for
end procedure

Find the shortest path P_0 = SP(V_N, E_N, c_dist, H_0).
Initialize an empty priority queue Q.
for all v_i in P_0 do
    // Use vertex safeties to compute queue priorities.
    pi_{0,i} <- -(1 - p_safe(v_i))
    queue_insert(Q, pi_{0,i}, (v_i, H_0))
end for
P <- {P_0}                                      // Initialize the set of candidate paths.

procedure Multiple hypothesis search
    while |P| < N_hyp do
        // Get the most unsafe vertex from Q.
        (pi_{j,i}, v_i, H_j) <- queue_pop(Q)
        H_k <- H_j                              // Copy the previous hypothesis.
        H_k(v_i) <- 0                           // Set the new vertex to unsafe.
        P_k <- SP(V_N, E_N, c_dist, H_k)        // Path search.
        P_{k,short} <- {v_l in P_k | R(v_l) = short}
        if safety(P_{k,short}) < p_target then
            continue                            // Path unsafe in the short range.
        end if
        if P_k not in P then                    // Disallow duplicate paths.
            P <- P ∪ {P_k}
            if safety(P_k) >= p_target then
                break                           // Found a safe path; end the search.
            end if
            for all v_l in P_k do
                pi_{k,l} <- (1 - p_safe(v_l)) * pi_{j,i}
                queue_insert(Q, pi_{k,l}, (v_l, H_k))
            end for
        end if
    end while
end procedure

return P
```
**Algorithm 2** Multiple-hypothesis planning in the navigation graph
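For readers who prefer an executable form, the following Python sketch mirrors Algorithm 2 using a binary heap as the priority queue. It assumes a user-supplied `shortest_path(H)` routine (e.g., Dijkstra restricted to vertices marked safe in hypothesis `H`) that returns a tuple of vertex ids or `None`; all names and this particular decomposition are illustrative assumptions rather than the authors' implementation.

```python
import heapq
import itertools

def multi_hypothesis_search(vertices, shortest_path, p_safe, range_zone,
                            p_target, p_min, n_hyp):
    """Sketch of Algorithm 2.  `shortest_path(H)` is assumed to return a tuple
    of vertex ids avoiding vertices marked unsafe (0) in the hypothesis dict H,
    or None when no path exists."""
    tie = itertools.count()                      # tie-breaker for heap entries

    def safety(path, only_short=False):
        prod = 1.0
        for v in path:
            if only_short and range_zone[v] != "short":
                continue
            prod *= p_safe[v]
        return prod

    # Initial hypothesis H0: vertices below p_min are marked unsafe.
    H0 = {v: (0 if p_safe[v] < p_min else 1) for v in vertices}
    P0 = shortest_path(H0)
    candidates = [P0]

    queue = []                                   # min-heap: most negative = most unsafe
    for v in P0:
        heapq.heappush(queue, (-(1.0 - p_safe[v]), next(tie), v, H0))

    while len(candidates) < n_hyp and queue:
        pi, _, v, H_parent = heapq.heappop(queue)   # most-unsafe vertex first
        H_new = dict(H_parent)
        H_new[v] = 0                                # exclude that vertex
        P_new = shortest_path(H_new)
        if P_new is None or safety(P_new, only_short=True) < p_target:
            continue                                # no path, or unsafe in the short range
        if P_new in candidates:
            continue                                # disallow duplicate paths
        candidates.append(P_new)
        if safety(P_new) >= p_target:
            break                                   # sufficiently safe path found
        for v_l in P_new:
            heapq.heappush(queue, ((1.0 - p_safe[v_l]) * pi, next(tie), v_l, H_new))
    return candidates
```

Because all priorities are negative, the heap pops the most-unsafe vertex first, matching the priority convention used in the algorithm.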
in \(\Sigma_{O_{i}}\) are zero. We estimate the landmark sizes using simple 1D Kalman filters, in parallel with the SLAM obstacle position estimation.
In our simulation, the robot receives obstacle detections at a lower rate than odometry measurements. At time steps where no detections are available, we perform SLAM updates using odometry only. In order to isolate our study of the path planner from localization error effects, our experiments use simulated odometry measurements with negligible noise, so the localization problem is effectively solved and the estimation practically reduces to a mapping problem.
The true assignment of detections to landmark identities is unknown to the robot. Following [32], we perform data association using the Mahalanobis distance between new detections and existing landmark estimates, in order to account for the estimate uncertainties during assignment. In order to compute the Mahalanobis distance, we transform the obstacle estimate 2D position uncertainty to the range-bearing space. We discard any data association matches whose Mahalanobis distance falls over a set threshold, and initialize new landmarks for any detections that remain unmatched after association.
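A minimal sketch of this gated nearest-neighbour association in range–bearing space is shown below; the chi-square gate value and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def associate_detection(z, landmarks, gate=9.21):
    """Assign a range-bearing detection z = [r, theta] to the landmark with the
    smallest Mahalanobis distance, or return None if every distance exceeds the
    gate (9.21 is roughly the chi-square 99% value for 2 DOF; the actual
    threshold is a tuning choice).  Each landmark provides a predicted
    measurement z_hat and its innovation covariance S in range-bearing space."""
    best, best_d2 = None, gate
    for idx, (z_hat, S) in enumerate(landmarks):
        nu = z - z_hat
        nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi      # wrap bearing error
        d2 = float(nu @ np.linalg.inv(S) @ nu)             # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = idx, d2
    return best          # None -> initialize a new landmark for this detection
```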
#### IV-B2 Hybrid A* local planner
We simulate a 2D differential drive robot which plans iteratively through the 2D forest environment. The robot uses a high-level global planner to specify a short-term goal for a local planner to follow, as described in section III-C, and re-plans as it moves and obtains new sensor measurements.
For our experiments, we implement a hybrid A* local planner based on [19, 20], and [36]. Hybrid A* produces smooth, drivable paths which obey the robot kinematics, and has previously been proven successful in practical robotic systems. The local goal point for hybrid A* is generated by our multiple-hypothesis planner using the approach described in section IV-C4. We note that the multiple-hypothesis planner is agnostic to the specific choice of local planner.
We run the hybrid A* search using the current obstacle estimate means as obstacles. Since hybrid A* is a local planner, it operates only in a short range where sensor noise is minimal and estimate means are reliable. We check for obstacle collisions by bloating each obstacle's radius by half the robot width, and then modeling the robot as a point robot.
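The bloated-radius point-robot check reduces to a distance test against each estimated obstacle; a minimal sketch (names are illustrative):

```python
import numpy as np

def in_collision(point, obstacles, robot_width):
    """Check a 2D point-robot position against circular obstacles whose radii
    are bloated by half the robot width.  `obstacles` is an (N, 3) array of
    (x, y, radius) obstacle estimate means."""
    d = np.linalg.norm(obstacles[:, :2] - np.asarray(point), axis=1)
    return bool(np.any(d <= obstacles[:, 2] + 0.5 * robot_width))
```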
As mentioned in [19], a common pitfall with the hybrid A* planning algorithm is that the returned path can contain unnecessary side-to-side turns, due to the fixed motion primitives used by the algorithm, and can also pass very close to obstacles due to the search finding the shortest possible path. Therefore, as in Dolgov _et al._[19], we use a gradient descent path smoother, optimizing for the distance of the path from nearby obstacles as well as the smoothness of the path.
#### IV-B3 Baseline global planner
In our experiments, we compare our multiple-hypothesis planner to a baseline global planner which uses a 2D A* search to find the shortest path to the goal, but does not account for the uncertainty in obstacle estimates. The baseline A* planner performs a search in 2D over a fixed-resolution grid. The search does not pass through grid cells which are in collision with an estimated obstacle mean. This search ignores the robot kinematics, and therefore is much quicker to execute than the 3D hybrid A* search. This lower computation time is required since the global planner searches over a much longer distance to the goal, compared to the local planner which plans a path only a few meters in length. After the 2D A* global planner computes a path to the global goal, the point \(d_{\mathrm{local}}\) distance ahead of the robot on this path is selected and used as the goal for the hybrid A* local planner, in order to find a short-range kinematically feasible path for the robot. We use the same local planner for the baseline as we do for our multiple-hypothesis planner, in order to perform a controlled comparison.
### _Quantitative analysis_
We experimentally evaluate our multiple hypothesis planner in randomly generated forests, comparing it to the baseline 2D A* global planner. In order to study our planner's effectiveness in different types of environments, we vary the forest density and distribution of trees (including forests with uniformly distributed trees and forests containing nonuniform clusters of trees, as described in Section V-A). We generate 20 random forests for each combination of tree density and distribution, and run the multiple hypothesis planner varying the parameters \(N_{\mathrm{hyp}}\) and \(p_{\mathrm{target}}\). The simulated robot re-plans iteratively at a rate of 1Hz, and obtains new detections at a rate of 2Hz. The robot is able to detect all non-occluded obstacles within a maximum range of 20 meters that lie in a front-facing sensor field of view of 110 degrees. We set the graph construction to ignore estimated obstacles at a distance of greater than 15 meters, as we find that near the sensor's maximum range, obstacle estimates have received very few associated detections (or only a single detection), and therefore contain too high uncertainty to be useful for planning. The robot's speed varies between \(1~{}\mathrm{m}/\mathrm{s}\) and \(5~{}\mathrm{m}/\mathrm{s}\), driving more slowly when an obstacle is nearby. Finally, the planner internally adds a barrier of artificial obstacles around the limits of the environment in order to keep the robot within the boundaries of the simulated environment.
Figure 8 shows simulation results from the uniformly distributed forest environments, with multiple numbers of maximum hypotheses \(N_{\mathrm{hyp}}\) and two different settings for \(p_{\mathrm{target}}\). We plot the number of cases in which the robot successfully reached the goal, indicated by the colored bars. The _gray portions of the bars_ indicate cases where the robot was unable to find a path to the goal and stopped, rather than crashing into an obstacle. This can occur either when the hybrid A* local planner fails to find a path to the local goal, or the global planner fails to find a path to the global goal. In the case of the multiple hypothesis global planner, this failure indicates that the planner believes no path above safety \(p_{\mathrm{target}}\) exists in the navigation graph.
The overall success rate of the planner is high in the uniform forest environments, even when using the baseline global planner, as the robot can generally avoid uniformly distributed obstacles by driving mostly in a straight line towards the goal, making slight deviations around an obstacle when needed. Still, we observe a higher rate of crashes using the baseline planner in higher density forests. In the highest density forests, with \(\rho=0.3~{}\mathrm{tree}/\mathrm{m}^{2}\), the baseline crashes 3 times out of 20 and stops once, compared to 1 crash for the medium
Fig. 8: Results from simulating a differential drive robot navigating through 20 randomly generated Poisson forests with _uniformly distributed tree positions_. The straight-line distance from the start to the goal, ignoring obstacles, is 40 meters. We run our experiments with three different settings for \(\rho\), the density of trees in the forest (indicated by the three different colors), and with different maximum numbers of hypotheses \(N_{\mathrm{hyp}}\) allowed for the planner. Zero hypotheses indicates the 2D A* baseline global planner. The height of the bars indicate the number of generated forests in which the robot did not crash into an obstacle while navigating to the goal. The colored portions of the bars indicate the number of forests in which the robot successfully reached the goal. The gray portions of the bars indicate runs where the robot stopped early (without crashing into an obstacle), either due to the global planner believing there is no safe path to the goal available, or due to hybrid A* planner failure. **Left:** Results using a target safety probability of \(p_{\mathrm{target}}=0.95\) for the multiple hypothesis planner. **Right:** Results using \(p_{\mathrm{target}}=0.999\).
Fig. 9: Results of planner successes in forests which contain Gaussian-distributed tree clusters between the robot’s starting position and the goal point. Subplot arrangement is the same as in Figure 8. **Left:** Results using a target safety probability of \(p_{\mathrm{target}}=0.95\) for the multiple hypothesis planner. **Right:** Results using \(p_{\mathrm{target}}=0.999\)
density and no crashes in the lowest density forest. These findings show that denser obstacles, even in the relatively simple uniformly distributed forests, are still more challenging for navigation.
Our multiple hypothesis planner with \(p_{\mathrm{target}}=0.95\) closely matches the performance of the baseline, likely due to the already low rate of crashes. In the highest density forests, the 5-hypothesis planner crashes twice and stops twice, out of 20 generated forests. With \(p_{\mathrm{target}}=0.999\), we see that the multiple hypothesis planner achieves a higher rate of safe navigation compared to the baseline. The planner always safely reaches the goal in the low and medium density forests, and in the high density case, the planner never crashes with \(N_{\mathrm{hyp}}=3\), and crashes once out of 20 forests for the 1, 2, and 5-hypotheses cases.
We then analyze the performance of our planner in non-uniformly distributed forests, containing clusters of obstacles that the robot should avoid, with results shown in Figure 9. We observe that in these environments, the 2D A* baseline global planner performs poorly, reaching the goal safely only 4 out of 20 times in the forests, and overall demonstrating a much lower rate of safe navigation as compared to experiments in the uniform forests.
In these more challenging, non-uniform forests, the multiple-hypothesis planner significantly increases the rate at which the robot is able to safely reach the goal. At \(p_{\mathrm{target}}=0.95\), using just a single hypothesis in the high density forests triples the number of navigation successes compared to the baseline, from 4 to 12. Using 5 hypotheses further increases the number of successes to 15. We also see improvements in the multiple hypothesis planner's success rate, compared to the baseline, in the low and medium density forests.
Increasing the target safety probability to \(p_{\mathrm{target}}=0.999\) further decreases the number of crashes in the non-uniformly distributed forests. At this setting, the 5-hypothesis planner never crashes in the low and medium density cluster forests, and crashes twice in the high density forests as compared to four times for the \(p_{\mathrm{target}}=0.95\) 5-hypothesis planner. However, the increased \(p_{\mathrm{target}}\) does cause the robot to stop more often, due to being unable to find a safe enough path to the goal. The planner stops most often in the high density forests, with the 5-hypothesis planner succeeding 11 times, stopping 7 times, and crashing 2 times in the high density forests. This effect is intuitive for the higher safety threshold, as the planner will be more conservative overall. We note that depending on the application, a robot stopping when unable to find a path is likely more desirable than crashing into an obstacle. Overall, at both \(p_{\mathrm{target}}\) settings, the multiple hypothesis planner significantly outperforms the baseline at all forest densities, increasing the number of successful navigation runs and/or reducing the number of crashes by stopping before hitting obstacles.
We note that the actual observed success rate of the planner often does not match the target safety probability \(p_{\mathrm{target}}\) exactly. The planner targets a safety probability \(p_{\mathrm{target}}\) for each planned path, but since multiple re-planning iterations are required to reach the goal, the overall likelihood that _all_ of the planned paths are safe is below \(p_{\mathrm{target}}\). Additionally, the other components of the planning pipeline, including measurement data association and the local hybrid A* planner, can occasionally introduce other failures that cause crashes (for example, hybrid A* planning too close to an obstacle due to approximations when checking collisions during the motion primitives). Still, our findings do show increased safety for the higher setting of \(p_{\mathrm{target}}\), indicating that the path safeties approximated using our safety probability model and high-level graph representation do in fact correspond to higher rates of the robot actually reaching its goal.
### _Qualitative analysis_
We present several visualizations of the planner's behavior in order to demonstrate the effects of using multiple plan hypotheses, as well as of adjusting the planner's path safety threshold.
Figure 10 shows the trajectories taken by the robot through a randomly generated forest (containing nonuniformly distributed obstacle clusters at the highest obstacle density setting), using the 1-hypothesis, 5-hypothesis, and hybrid A* baseline planners. The hybrid A* planner crashes into an obstacle, as it attempts to navigate between a pair of close-together obstacles. The single hypothesis planner successfully reaches the goal, but backtracks and changes course multiple times, as it reacts to newly obtained sensor measurements of nearby obstacles. The 5-hypothesis planner reaches the goal with the smoothest path overall, as it accounts for further-away, uncertain obstacles when planning.
Figure 11 zooms in on an example case to demonstrate how the planner behaves differently when using 1 hypothesis versus 5 hypotheses. The robot trajectories, plotted in Figure 11a, show that the 1-hypothesis planner finds a safe path around the cluster of obstacles ahead, but has to closely approach these obstacles before determining there is no path through them with a 95% likelihood of being safe. In contrast, the 5-hypothesis planner identifies ahead of time that a safer, but slightly longer distance, path exists by traveling around the cluster. Figures 11b and 11c illustrate the navigation graph constructed by the planner, and the path hypotheses considered by the 1- and 5-hypothesis versions of the planner. The 1-hypothesis planner computes only the shortest path to the goal in the navigation graph, which passes through an uncertain, narrow gap between two obstacles, ultimately leading to the robot needing to backtrack away from these obstacles. In contrast, the 5-hypothesis planner identifies a path that takes a slightly longer route around these obstacles, but has a safety probability near 1.0. The multiple-hypothesis planner chooses this as the best path, trading off distance for a much safer path.
Finally, Figure 12 demonstrates the effects of changing the desired path safety probability \(p_{\mathrm{target}}\), showing the behavior of the 5-hypothesis planner with a desired path safety of \(p_{\mathrm{target}}=0.95\) versus with \(p_{\mathrm{target}}=0.999\), in three different forest environments. In Figure 12a, the lower-safety threshold planner crashes as it attempts to navigate a gap between several obstacles, while the higher-safety planner identifies this unsafe region and avoids it. However, in other cases, the more conservative behavior of the higher safety threshold planner
can cause the robot to take longer to reach the goal, as seen in Figure 12b, or to fail to find a path when a safe path does in fact exist, as seen in Figure 12c. These multiple cases show that the safety threshold \(p_{\mathrm{target}}\) significantly affects the planned path, and should be set depending on the desired trade-off of safety with the time to reach the goal.
## VI Conclusion
In this paper, we presented a method for path planning through complex, obstacle-dense environments using noisy object detections, motivated by the use of machine learning-based object detectors to enable robots to plan around distant obstacles. Our approach constructs a probabilistic graph representation of obstacles; we then plan multiple candidate paths through the graph, accounting for path safety and expected distance to optimize the best route to the goal. Simulation experiments in generated forest environments demonstrated that our multiple hypothesis planner enables a differential drive robot to reach its goal much more safely in complex, obstacle-dense forests as compared to a baseline global planner.
Our graph representation provides a probabilistic framework that could be extended with other sources of uncertain information. In particular, the evaluation metric for the multiple path hypotheses is a promising area for future improvement, building upon the current path cost function based on safety and distance-to-goal. Including information gain as a weight in the path evaluation, as in [6], would encourage the robot to follow paths that allow it to view previously unobserved regions of the environment, or to approach uncertain obstacles in order to improve its estimate certainty by collecting additional measurements. Image semantics are another promising source of information [5, 6], and could be used to weight candidate paths by recognizing obstacle-dense regions from long range using semantic masks.
|
2305.17817 | Transfer Learning for Power Outage Detection Task with Limited Training
Data | Early detection of power outages is crucial for maintaining a reliable power
distribution system. This research investigates the use of transfer learning
and language models in detecting outages with limited labeled data. By
leveraging pretraining and transfer learning, models can generalize to unseen
classes.
Using a curated balanced dataset of social media tweets related to power
outages, we conducted experiments using zero-shot and few-shot learning. Our
hypothesis is that Language Models pretrained with limited data could achieve
high performance in outage detection tasks over baseline models. Results show
that while classical models outperform zero-shot Language Models, few-shot
fine-tuning significantly improves their performance. For example, with 10%
fine-tuning, BERT achieves 81.3% accuracy (+15.3%), and GPT achieves 74.5%
accuracy (+8.5%). This has practical implications for analyzing and localizing
outages in scenarios with limited data availability.
Our evaluation provides insights into the potential of few-shot fine-tuning
with Language Models for power outage detection, highlighting their strengths
and limitations. This research contributes to the knowledge base of leveraging
advanced natural language processing techniques for managing critical
infrastructure. | Olukunle Owolabi | 2023-05-28T22:36:35Z | http://arxiv.org/abs/2305.17817v1 | # Transfer Learning for Power Outage Detection Task with Limited Training Data
###### Abstract
Early detection of power outages is crucial for maintaining a reliable power distribution system. This research investigates the use of transfer learning and language models in detecting outages with limited labeled data. By leveraging pretraining and transfer learning, models can generalize to unseen classes.
Using a curated balanced dataset of social media tweets related to power outages, we conducted experiments using zero-shot and few-shot learning. Our hypothesis is that Language Models pretrained with limited data could achieve high performance in outage detection tasks over baseline models. Results show that while classical models outperform zero-shot Language Models, few-shot fine-tuning significantly improves their performance. For example, with 10% fine-tuning, BERT achieves 81.3% accuracy (+15.3%), and GPT achieves 74.5% accuracy (+8.5%). This has practical implications for analyzing and localizing outages in scenarios with limited data availability.
Our evaluation provides insights into the potential of few-shot fine-tuning with Language Models for power outage detection, highlighting their strengths and limitations. This research contributes to the knowledge base of leveraging advanced natural language processing techniques for managing critical infrastructure.
Olukunle Owolabi
[email protected]
## 1 Introduction
Maintaining a reliable and uninterrupted power supply is of critical importance, with electric power outages having far-reaching implications across residential, commercial, and industrial sectors [1, 2, 3]. Timely detection of power outages is crucial for prompt response and efficient restoration efforts [4].
In recent years, the emergence of social media platforms has provided a valuable real-time source of information on power outages [5]. Platforms like Twitter have become channels through which users share their experiences, frustrations, and concerns regarding power disruptions, generating a wealth of data. This user-generated data acts as social sensors and offers valuable insights into the occurrence, impact, and geographical distribution of power outages [6]. However, effectively harnessing this data for outage detection and analysis poses significant challenges due to the noise and unstructured nature of social media content [7].
To address these challenges, researchers have turned to machine learning algorithms. Deep learning techniques, particularly those utilizing large language models (LLMs) [8], have shown promise in enhancing the performance of natural language processing (NLP) models across various domains. By leveraging the transfer learning capability of pretrained LLMs such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pretrained Transformer) [9, 10], models can capture contextual understanding and semantic relationships, enabling generalization to unseen classes and domain-specific tasks.
This research focuses on investigating the effectiveness of transfer learning and language models in detecting electric power outages in situations with limited training data. We specifically analyze a meticulously curated dataset comprising social media tweets associated with power disruptions. Our primary objective is to assess the transfer learning capability of LLMs in zero-shot and few-shot learning scenarios for power outage detection tasks with limited data. Through a thorough examination of their capabilities and limitations, we aim to provide valuable insights into the potential of these techniques for accurate power outage detection.
The contributions of this research lie in demonstrating the capabilities of advanced NLP techniques, particularly transfer learning and language models, in monitoring critical infrastructure. Furthermore, we highlight the significance of these techniques in addressing the challenges faced by small minority communities in power outage analysis, where limited training examples and data availability pose obstacles.
The remainder of this paper is structured as follows: Section 2 provides a comprehensive review of related works in the field of power outage detection, as well as the application of machine learning techniques to social media data. Section 3 presents the methodology, encompassing dataset preparation, model architecture, and evaluation metrics. In Section 4, we discuss the experimental setup and Section 5 entails the results and analysis, comparing the performance of the LLMs under few-shot learning scenarios. Finally, Section 6 concludes the paper by summarizing the findings, discussing future research directions, and addressing the ethical considerations and limitations of this work.
## 2 Related Works
In this section, we review related works on machine learning techniques for infrastructure and power outage monitoring, as well as the application of transfer learning and language models to social media data. We also discuss studies focusing on the domain application of transfer learning and large language models (LLMs), and highlight the gaps in the application of these techniques for power outage detection.
### Machine Learning for Infrastructure and Power Outage Monitoring Using Social Media Data
#### 2.1.1 Application for Power Outage Detection
Conventional methods of outage detection primarily rely on infrastructure monitoring and reporting, which can suffer from delays and limited coverage. To address these challenges, researchers have explored alternative approaches, including the utilization of social media data.
For instance, Ermakov et al. [11] and Resch et al. [12] conducted studies on leveraging tweets from Twitter to detect power and communication outages during natural disasters. They curated datasets of tweets using domain-specific keywords and employed machine learning algorithms to classify outage-related tweets. Haifeng et al. [5] developed a probabilistic framework that incorporated textual, temporal, and spatial characteristics to identify outage events in real-time. Li et al. [13] introduced a novel quantitative method to analyze community resilience during power outages, exemplified through a case study of the Manhattan blackout in July 2019. Baidya et al. [14] explored the reliability of social sensors, with a specific focus on social media, and proposed a framework for enhancing the resilience of power grids.
These studies demonstrate efforts to harness the potential of social media and alternative data sources for improving power outage detection and enhancing the resilience of power systems.
#### 2.1.2 Other Infrastructure Applications
Researchers have also explored the use of social media data for monitoring and assessing various infrastructure-related impacts during and after disasters.
Wang et al. [15] collected geotagged tweets to examine individuals' sentiment and mobility patterns during and after a specific earthquake. They observed spatial autocorrelation of sentiment and investigated the relationship between sentiment and mobility over time. Hao et al. [16] proposed a method to locate and assess disaster damage using multi-modal social media data, including text and images. They applied machine learning classifiers and keyword search-based methods to extract various damage information. Shan et al. [17] developed a framework for real-time urban disaster damage monitoring and assessment using social media texts collected during manmade and natural disasters. They performed sentiment analysis and quantity evaluation of physical damage based on keyword frequency. Tan et al. [18] developed a framework for rapid damage classification and recovery monitoring for urban floods using social media data. They employed machine learning classification algorithms and statistical models to measure emotional and physical damage. Fan et al. [19] proposed a theoretical framework that integrated human sentiment reactions on social media into infrastructure resilience assessment during disasters. They used social media data to capture societal impacts of infrastructure disruptions by analyzing sentiments in human messages related to relevant infrastructure systems. Bhavaraju et al. [20] explored the sensitivity of social media to different types or magnitudes of natural disasters under various circumstances for tornadoes, winter storms, wildfires, and floods.
### Large Language Models: Transfer Learning and Few-Shot Learning
Large language models (LLMs) have demonstrated promising capabilities in various domains by capturing contextual understanding and semantic relationships. Transfer learning, a technique that leverages pretraining on LLMs, enables models to generalize to unseen classes and tasks. Few-shot learning (FSL), on the other hand, addresses the challenge of learning from limited labeled data, which is a common scenario in many real-world applications [21, 22, 23].
### Literature Gaps
Previous research has made significant progress in infrastructure power outage detection using data-driven approaches and machine learning algorithms applied to social media data. However, there exists a gap in the utilization of large language models (LLMs) and few-shot learning techniques in this domain. While classical machine learning techniques have been widely employed, the potential of LLMs and few-shot learning in power outage detection tasks remains largely unexplored. This study aims to fill this literature gap by evaluating different language models in zero-shot and few-shot learning scenarios for power outage detection. It contributes to the development of more accurate and efficient systems that can extract valuable information from social media data, even with limited labeled datasets.
## 3 Data
For this study, we utilized a publicly available dataset [24] comprising social media reactions to power outages in the New England area (Connecticut, New Hampshire, Massachusetts, Maine, New York, Rhode Island, Vermont) from September 2009 to December 2012. From this dataset, we
curated a few-shot training dataset consisting of 1000 samples with a balanced distribution between the "Outage" and "No outage" classes. Table 1 provides a description of the curated training and test data.
To ensure the quality and representativeness of the data, we conducted a manual inspection of the collected tweets to verify their association with power outage events. We cross-referenced the collected data with verified outage information from the US Department of Energy [25]. Additionally, we ensured an equal distribution of samples between the "Outage" and "No outage" classes, facilitating a fair evaluation of the performance of few-shot and zero-shot learning techniques.
This curated dataset serves as the foundation for our experiments on LLMs and transfer learning for power outage detection tasks. Our objective is to explore the effectiveness of different language models in addressing power outage detection tasks with limited labeled data.
Table 1 presents details of the data, including the target classes ("Outage" or "No outage"), the average text length, and the number of samples in the training and test sets. Table 2 outlines the dataset split for each experiment type, specifying the percentage of training data used, the corresponding count of training samples, and the count of test samples.
## 4 Experimental Setup
This section outlines the experimental setup used to investigate the performance and potential applications of Large Language Models (LLMs) and their learning techniques in power outage detection scenarios with limited training data.
Table 2 provides an overview of the dataset split used in our experiments, including the baseline classical machine learning (ML) models and LLMs. To ensure comprehensive evaluation of the models' generalization ability, a separate balanced test set of 4000 samples was employed.
### Baseline Models
To establish a baseline for comparison, we utilized classical ML algorithms as baselines for the power outage detection task. The fully supervised ML models were trained using 100% of the curated training dataset. Specifically, Support Vector Machines (SVM), Logistic Regression, and XGBoost models were employed. These models represent traditional ML approaches and serve as benchmarks for assessing the performance of LLMs in few-shot and zero-shot scenarios.
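As an illustration of such a fully supervised baseline, the sketch below trains a TF-IDF plus linear SVM classifier on the curated tweets; the feature extraction choices (TF-IDF with unigrams and bigrams) and variable names are illustrative assumptions, since the paper does not specify them.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# train_texts / test_texts are lists of tweet strings; train_labels / test_labels
# are 0 ("No outage") / 1 ("Outage") labels from the curated dataset.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), LinearSVC())
baseline.fit(train_texts, train_labels)
print(classification_report(test_labels, baseline.predict(test_texts)))
```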
### Transfer Learning
We leveraged the contextual understanding capabilities of the BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer) models. Specifically, the BERT uncased model [10] and the GPT-2 model [9] were utilized.
#### 4.2.1 Zero-Shot Learning
LLMs pre-trained on extensive text corpora possess strong language comprehension abilities. In the zero-shot setting (0% training examples), the pre-trained models were applied to the task directly, without being fine-tuned on any explicit outage-related training examples. This approach aimed to assess the transfer learning capability of the pre-trained models to generalize their knowledge and accurately classify unseen text.
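The exact zero-shot protocol used for BERT and GPT is not detailed here; one common stand-in, shown purely for illustration, is an NLI-based zero-shot classification pipeline that scores candidate labels against each tweet.

```python
from transformers import pipeline

# Illustrative zero-shot setup; the NLI backbone and label phrasing are assumptions.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
labels = ["power outage", "no power outage"]
result = classifier("No electricity in our whole street since the storm hit", labels)
print(result["labels"][0], result["scores"][0])   # top label and its score
```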
#### 4.2.2 Few-Shot Learning with Variable Fine-Tuning
To investigate the impact of limited training data on LLM performance, the percentage of training samples used for few-shot learning scenarios was varied (10%, 20%, 50%, 75%, and 100%). The BERT and GPT models were fine-tuned incrementally as the percentage of available training samples increased. By gradually exposing the models to more outage-related examples during fine-tuning, the goal was to enhance their performance and characterize their generalization capability as the number of training examples increased.
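A minimal sketch of this variable fine-tuning loop with Hugging Face Transformers is given below; the random subset sampling, hyperparameters (epochs, batch size, maximum sequence length), and names are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class TweetDataset(Dataset):
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=64)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

def few_shot_finetune(train_texts, train_labels, fraction=0.10,
                      model_name="bert-base-uncased"):
    """Fine-tune on a random subset of the curated training set
    (e.g. 10%, 20%, ..., 100%), mirroring the variable fine-tuning experiments."""
    n = max(1, int(fraction * len(train_texts)))
    idx = np.random.permutation(len(train_texts))[:n]
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
    train_ds = TweetDataset([train_texts[i] for i in idx],
                            [train_labels[i] for i in idx], tok)
    args = TrainingArguments(output_dir="outage-fewshot", num_train_epochs=3,
                             per_device_train_batch_size=16, logging_steps=10)
    Trainer(model=model, args=args, train_dataset=train_ds).train()
    return model, tok
```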
### Performance Evaluation
The performance of LLMs and baseline models was evaluated on the testing set, consisting of tweets related to power outages. Each tweet in the testing set was processed by the models, and their predictions were compared to the ground
\begin{table}
\begin{tabular}{c c c c} \hline \hline & **Target** & **Avg. Text Length** & **No. of Samples** \\ \hline Train & Outage & 99.4 & 500 \\ & No outage & 101.5 & 500 \\ & All & 100.5 & 1000 \\ \hline Test & Outage & 99.2 & 2000 \\ & No outage & 100.8 & 2000 \\ & All & 100.0 & 4000 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data Description
\begin{table}
\begin{tabular}{c|c c|c} \hline \hline
**Exp. Type** & \multicolumn{2}{c|}{**Train**} & **Test** \\ & **\%** & **Count** & **Count** \\ \hline LLM + Zero shot & 0\% & 0 & 4000 \\ \hline LLM + Few shot & 10\% & 100 & 4000 \\ & 20\% & 200 & 4000 \\ & 50\% & 500 & 4000 \\ & 75\% & 750 & 4000 \\ & 100\% & 1000 & 4000 \\ \hline Classical ML & 100\% & 1000 & 4000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dataset Split
truth labels. Performance metrics, including accuracy, precision, recall, and F1-score, were calculated to assess the effectiveness of the models in the power outage detection task.
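These metrics can be computed directly with scikit-learn; a minimal sketch (the function name is illustrative):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate(y_true, y_pred):
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", pos_label=1)
    return {"accuracy": accuracy_score(y_true, y_pred),
            "precision": precision, "recall": recall, "f1": f1}
```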
### Comparative Analysis
A comparative analysis was conducted to compare the performance of the BERT and GPT models with the baseline models in the power outage detection task. Performance metrics and differences in predictions were analyzed to gain insights into the strengths and weaknesses of each model.
## 5 Results
This section presents the results of our experiments, evaluating the performance of baseline models, as well as Large Language Models (LLMs) in the power outage detection task. Furthermore, we discuss the outcomes of the power outage detection task with limited data.
### Baseline Model Performance
The performance of the baseline models is summarized in Table 3. Our results indicate that the simple Support Vector Machine (SVM) model achieved the highest accuracy of 66%, outperforming the Logistic Regression (65%) and XGBoost (60%) models.
### Transfer Learning Results
#### 5.2.1 LLMs with Zero-Shot Learning
Our experiments reveal that LLMs with zero-shot tuning did not surpass the performance of classical ML algorithms when applied to unseen power outage detection tasks. Table 3 presents the performance metrics of the zero-shot learning approach using the BERT and GPT models.
#### 5.2.2 LLMs with Few-Shot Learning and Variable Fine-Tuning
Table 5 shows the results of the few-shot learning approach utilizing the BERT and GPT models. We observe that as the percentage of training samples increases, BERT achieves higher overall performance, reaching 91.2% accuracy with 100% training samples. In comparison, GPT achieves an accuracy of 87% with 100% training samples.
### Discussion
#### 5.3.1 Superior Performance of Fully Supervised Classical ML Algorithms over Zero-Shot LLMs
The results of our experimental analysis demonstrate the superior performance of fully supervised classical machine learning (ML) algorithms compared to Large Language Models (LLMs) with zero-shot tuning in detecting power outages in unseen scenarios. While LLMs possess the advantage of leveraging their pre-trained knowledge and contextual understanding, our findings suggest that fine-tuning is crucial to unlock their full potential for domain-specific tasks such as power outage detection.
#### 5.3.2 Significant Performance Boost of LLMs through Few-Shot Fine-Tuning
Our investigation reveals that fine-tuning LLMs with a limited amount of labeled data can significantly enhance their performance in power outage detection. Notably, even with a minimal training sample size of 10%, we observed a substantial increase in performance compared to the Support Vector Machine (SVM) baseline, with a performance gain of 15.3% for BERT and 8.5% for GPT.
One key advantage of LLMs lies in their ability to effectively leverage pre-trained weights, enabling them to better capture outage-related patterns and make accurate predictions. We observe that through few-shot fine-tuning, these LLMs can adapt their knowledge to the specific characteristics of power outages, leading to improved detection capabilities.
Overall, our experimental findings emphasize the effectiveness of Large Language Models and Transfer Learning in power outage detection tasks. These models show promising potential for real-world applications where labeled data may be scarce, providing a valuable tool for timely identification and response to power outages.
## 6 Conclusion
In conclusion, this research study highlights the efficacy of leveraging Large Language Models (LLMs) and transfer learning for power outage detection using social media data. For the LLMs, BERT outperformed GPT, demonstrating superior performance in classifying power outage-related tweets. The findings also indicate that fully supervised classical ML algorithms outperform Zero-Shot LLMs (both BERT and GPT) on unseen tasks. Moreover, the study shows that
\begin{table}
\begin{tabular}{l c c c c c} & **Model** & **Accuracy** & **Precision** & **Recall** & **F1** \\ \hline \multirow{3}{*}{**Classical ML**} & SVM & 0.66 & 0.67 & 0.63 & 0.65 \\ & Logistic & 0.65 & 0.65 & 0.66 & 0.65 \\ & XGBoost & 0.60 & 0.59 & 0.68 & 0.63 \\ \hline \multirow{2}{*}{**LLMs (Zero-Shot)**} & BERT & 0.54 & 0.54 & 0.54 & 0.54 \\ & GPT & 0.52 & 0.52 & 0.52 & 0.52 \\ \end{tabular}
\end{table}
Table 3: Performance of Baseline Models and Zero-Shot LLMs
few-shot fine-tuning, even with a limited amount of training data, significantly enhances the performance of LLMs in power outage detection.
The ability to predict power outages with limited data is particularly advantageous for small, remote, and minority communities with limited internet access. By further exploring these research avenues and considering ethical considerations, the development and implementation of power outage detection systems can be improved. This improvement not only benefits power utilities but also enhances the experience and reliability of power supply for end-users.
Continued research in this area can contribute to advancements in power outage detection methodologies, making them more robust, accurate, and scalable. It is crucial to prioritize ethical considerations throughout the development and deployment of these systems, ensuring responsible data handling, unbiased analysis, and the protection of individual privacy. By doing so, the integration of language models and transfer learning techniques can effectively enhance power outage detection, ultimately benefiting both the power industry and the communities it serves.
## Limitations
This study has some limitations that should be considered. Firstly, the generalizability of the findings may be influenced by the representativeness of labeled data which may impact the performance and applicability of the models in diverse power outage scenarios. Secondly, the results may not fully capture the characteristics and patterns of power outages in all regions and populations.
## Ethics Statement
Ethical considerations were an integral part of this research. The study adhered to data privacy guidelines in utilizing publicly available tweets. Rigorous filtering and preprocessing techniques were implemented to mitigate biases; however, it is important to acknowledge that inherent biases may still exist in the data. The findings of this study should be responsibly disseminated and utilized, taking into consideration the potential impact on power utilities, end-users, and the broader society. The responsible use and application of these findings are crucial to ensure that power outage detection systems are developed and implemented ethically, avoiding any unintended negative consequences.
|
2310.02712 | ED-NeRF: Efficient Text-Guided Editing of 3D Scene with Latent Space
NeRF | Recently, there has been a significant advancement in text-to-image diffusion
models, leading to groundbreaking performance in 2D image generation. These
advancements have been extended to 3D models, enabling the generation of novel
3D objects from textual descriptions. This has evolved into NeRF editing
methods, which allow the manipulation of existing 3D objects through textual
conditioning. However, existing NeRF editing techniques have faced limitations
in their performance due to slow training speeds and the use of loss functions
that do not adequately consider editing. To address this, here we present a
novel 3D NeRF editing approach dubbed ED-NeRF by successfully embedding
real-world scenes into the latent space of the latent diffusion model (LDM)
through a unique refinement layer. This approach enables us to obtain a NeRF
backbone that is not only faster but also more amenable to editing compared to
traditional image space NeRF editing. Furthermore, we propose an improved loss
function tailored for editing by migrating the delta denoising score (DDS)
distillation loss, originally used in 2D image editing to the three-dimensional
domain. This novel loss function surpasses the well-known score distillation
sampling (SDS) loss in terms of suitability for editing purposes. Our
experimental results demonstrate that ED-NeRF achieves faster editing speed
while producing improved output quality compared to state-of-the-art 3D editing
models. | Jangho Park, Gihyun Kwon, Jong Chul Ye | 2023-10-04T10:28:38Z | http://arxiv.org/abs/2310.02712v2 | # ED-NeRF: Efficient Text-Guided Editing of 3D Scene using Latent Space NeRF
###### Abstract
Recently, there has been a significant advancement in text-to-image diffusion models, leading to groundbreaking performance in 2D image generation. These advancements have been extended to 3D models, enabling the generation of novel 3D objects from textual descriptions. This has evolved into NeRF editing methods, which allow the manipulation of existing 3D objects through textual conditioning. However, existing NeRF editing techniques have faced limitations in their performance due to slow training speeds and the use of loss functions that do not adequately consider editing. To address this, here we present a novel 3D NeRF editing approach dubbed ED-NeRF by successfully embedding real-world scenes into the latent space of the latent diffusion model (LDM) through a unique refinement layer. This approach enables us to obtain a NeRF backbone that is not only faster but also more amenable to editing compared to traditional image space NeRF editing. Furthermore, we propose an improved loss function tailored for editing by migrating the delta denoising score (DDS) distillation loss, originally used in 2D image editing to the three-dimensional domain. This novel loss function surpasses the well-known score distillation sampling (SDS) loss in terms of suitability for editing purposes. Our experimental results demonstrate that ED-NeRF achieves faster editing speed while producing improved output quality compared to state-of-the-art 3D editing models.
## 1 Introduction
In recent years, the development of neural implicit representation for embedding three-dimensional images in neural networks has seen remarkable progress. This advancement has made it possible to render images from all angles using only a limited set of training viewpoints. Starting with the seminal work known as the Neural Radiance Field (NeRF) (Mildenhall et al., 2021), which trained radiance fields using a simple MLP network, various improved techniques (Barron et al., 2021; Reiser et al., 2021; Muller et al., 2022) based on advanced network architectures or modified encoding have been proposed. Alternatively, several methods (Sun et al., 2022; Fridovich-Keil et al., 2022; Karnewar et al., 2022; Chen et al., 2022) proposed to directly optimize voxel points serving as sources for rendering, bypassing the traditional approach of encapsulating all information within implicit networks. These methods have gained prominence for their ability to train radiance fields in a remarkably short time. In addition to representing existing 2D image data in the 3D space, recent research has explored expanded approaches for generating entirely novel 3D objects. With the emergence of text-to-image embedding models like CLIP (Radford et al., 2021), various methods have been proposed to train implicit networks that can generate new objects solely from text prompts (Jain et al., 2022). This trend has been accelerated with the advent of text-to-image diffusion models such as Stable Diffusion (Rombach et al., 2022), particularly through score distillation sampling (SDS) (Poole et al., 2022), which conveys the representation of the text-to-image model to the NeRF model.
However, the challenge of editing pre-trained 3D implicit networks according to specific conditions still remains as an open problem due to the constraints of tasks: maintaining the integrity of the orig
inal 3D images while making desired modifications. As an initial work, several approaches (Wang et al., 2022; 2023a) tried to edit the pre-trained NeRF models based on text conditions, utilizing the pre-trained CLIP model to fine-tune the parameters of NeRF models. Nevertheless, these methods exhibit notable weaknesses, including the performance limitations of the CLIP model itself and the need for rendering high-resolution images during training, which results in significant time consumption.
Recently, several editing methods proposed to leverage the enhanced expressiveness of text-to-image diffusion models such as Stable Diffusion. Some methods (Sella et al., 2023) proposed to directly employ the score distillation sampling method, with additional regularizations. However, these methods suffer from significant time consumption and instability in generation performance due to the requirement of full-resolution rendering in the training stage and limitations of the score distillation loss itself. Other alternative approaches (Haque et al., 2023) proposed to directly manipulate the training images of NeRF using text-guided image translation models. This method aims to enable the generation of 3D images corresponding to text conditions. However, it suffers from a significant drawback in terms of training time, as it requires periodic translation of training images during the training process.
To address these challenges, we are interested in developing a novel NeRF editing method to efficiently and effectively edit 3D scenes using only text prompts. To achieve this, we enable NeRF to operate directly in the NeRF latent space, similar to Latent-NeRF (Metzer et al., 2023), which helps reduce time and computational costs. However, naively rendering the latent feature of real-world scenes directly with NeRF may lead to a significant drop in view synthesis performance due to the lack of geometric consistency in the latent space. To tackle this issue, we conduct an analysis of the latent generation process and propose a novel refinement layer to enhance performance based on the analysis. Furthermore, to solve the drawback of the existing SDS-based method in editing, we propose a new sampling strategy by extending Delta Denoising Score (DDS) (Hertz et al., 2023), a 2D image editing technique based on score distillation sampling, into the 3D domain. This extension allows us to achieve high-performance editing capabilities while keeping computational costs affordable, even with large Diffusion Models such as Stable Diffusion. Given the superior editing proficiency of our approach, we've named it ED-NeRF (EDiting NeRF).
Figure 1: **Qualitative results of our method.** ED-NeRF successfully edited 3D scenes with given target text prompts while preserving the original object structure and background regions.
Related Work
Starting from the Neural Radiance Field (NeRF) (Mildenhall et al., 2021), there have been approaches to represent three-dimensional scenes in neural fields. However, due to its slow training speed, several approaches have tried to improve performance by modifying the network architecture or training strategy (Barron et al., 2021; Muller et al., 2022; Reiser et al., 2021). Several methods that do not rely on neural networks have also shown great performance in accelerating training. These include methods for directly optimizing voxel fields (Sun et al., 2022; Fridovich-Keil et al., 2022; Chen et al., 2022; Karnewar et al., 2022) or decomposing the components of the field representation. Based on the success of these techniques, methods for generating 'novel' 3D scenes have been proposed. Especially with the emergence of the text-to-image embedding model of CLIP (Radford et al., 2021), DreamField (Jain et al., 2022) leveraged CLIP to train the NeRF model for novel 3D object synthesis. Recently, the performance of text-to-image diffusion models enabled remarkable improvement in 3D generation. Starting from DreamFusion (Poole et al., 2022), several methods (Metzer et al., 2023; Liu et al., 2023b; Xu et al., 2023) showed impactful results using the diffusion-based prior. However, these methods are limited to generating 'novel' 3D objects and therefore cannot be applied to our case of NeRF editing, which tries to modify existing 3D scenes according to the given conditions.
Compared to novel object generation, NeRF editing remains a relatively unexplored field due to the complexity of the task. As basic works, several methods focused on color or geometric editing (Yuan et al., 2022; Liu et al., 2021; Kuang et al., 2023). Other works attempted style or appearance transfer on 3D neural fields (Zhang et al., 2022; Liu et al., 2023a; Bao et al., 2023) and showed promising results. By incorporating the CLIP model, several approaches (Wang et al., 2022; 2023a; Kim et al., 2023) tried to modify a pre-trained NeRF towards given text conditions. Although these approaches produce pleasing results, they still have limitations in detailed expression due to the limitations of the CLIP model itself.
Similar to the novel scene generation case, the development of text-to-image diffusion models brought significant improvement to the editing field. Starting from the score distillation sampling method proposed in DreamFusion, Vox-E (Sella et al., 2023) edits pre-trained voxel fields with additional regularization. As an alternative, InstructNeRF2NeRF (Haque et al., 2023) proposed to directly leverage 2D image translation models to change the attributes of the 2D images used for NeRF training. However, these methods still have limitations due to excessive training time or unstable editing from the loss functions. To address the above problems, we propose an efficient method of editing with novel latent space NeRF training and improved edit-friendly loss functions.
## 3 Methods
Figure 2 provides an overview of training ED-NeRF. First, we optimize NeRF in the latent space of Stable Diffusion. To do this, we encode all images using a pre-trained Variational Autoencoder (VAE) to obtain the feature vectors and guide NeRF to predict these feature vectors directly. Also, we introduce an additional refinement layer, which enhances the novel view synthesis performance of NeRF (Fig. 2(a)). At the inference stage, we can render a natural image from the latent NeRF by decoding the rendered latent map (Fig. 2(b)). At the editing phase, by utilizing DDS, we adjust the parameters of both NeRF and the refinement layer to align the 3D scene with the provided target text (Figure 3). The detailed pipeline for this approach is outlined in the following sections.
### ED-NeRF for 3D Scene Editing
NeRF (Mildenhall et al., 2021) uses MLPs to predict density \(\sigma\) and color \(\mathbf{c}\) for a given 3D point coordinate \(\mathbf{x}=(x,y,z)\) and view direction \(\mathbf{d}\). Through positional encoding \(\gamma(\cdot)\), \(\mathbf{x}\) and \(\mathbf{d}\) are mapped into high-frequency vectors, and then fed into the neural network of NeRF, resulting in two outputs: density \(\sigma\in\mathbb{R}\) and color \(\mathbf{c}\in\mathbb{R}^{3}\).
\[(\mathbf{c},\sigma)=F_{\theta}^{c}(\gamma(\mathbf{x}),\gamma(\mathbf{d})) \tag{1}\]
Through volume rendering Eq. (2), NeRF predicts the pixel color along the camera ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\), with \(t\) representing the depth within the range \([t_{near},t_{far}]\), \(\mathbf{o}\) stands for the camera position,
and \(\mathbf{d}\) represents the view direction:
\[\hat{C}(r)=\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t))\mathbf{c}(\mathbf{r}(t),d)dt,\text{ where }T(t)=\text{exp}\left(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))ds \right). \tag{2}\]
Optimizing NeRF to render the latent features of a latent diffusion model offers several advantages for text-guided 3D generation: the training burden is reduced because of the lower dimensionality of the latent space, and the NeRF model becomes easier to edit because the rendered outputs can be fed directly to the latent diffusion model. The concept of migrating NeRF to the latent space was first proposed in Latent-NeRF (Metzer et al., 2023), in which NeRF is trained directly on latent features rather than RGB colors, so a 3D scene can be rendered without the encoding step during optimization when the latent diffusion model is used as a semantic prior. However, that work focuses exclusively on generating 'virtual' 3D assets without supervision, making it unsuitable for real-world scenes.
Thus, ED-NeRF is built on a novel latent NeRF training pipeline for synthesizing real-world scenes in the latent space. As depicted in Figure 2, given a real-world dataset of multi-view images \(I=\{I^{i}\}_{i=1}^{N}\), we encode every image into the latent space of Stable Diffusion via the encoder to obtain the features \(z^{i}=\mathcal{E}(I^{i})\in\mathbb{R}^{64\times 64\times 4}\). After embedding all images, we use the latent feature maps \(z:=\{z^{i}\}_{i=1}^{N}\) as the label set for ED-NeRF training with the loss function:
\[\mathcal{L}_{rec}=\sum_{\mathbf{r}\in\mathcal{R}}\left\|Z^{i}(\mathbf{r})- \hat{Z}^{i}(\mathbf{r})\right\|^{2} \tag{3}\]
where \(Z^{i}\) denotes the pixel latent value of the latent \(z^{i}\) and \(\hat{Z}^{i}(\mathbf{r})\) is rendered by the volume rendering equation:
\[\hat{Z}^{i}(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t))\mathbf{f}_{z}(\mathbf{r}(t),d)dt,\text{ where }T(t)=\text{exp}\left(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))ds\right). \tag{4}\]
where \(\mathbf{f}_{z}\in\mathbb{R}^{4}\) denotes the predicted feature value by the neural network, taking \(\gamma(\mathbf{x})\) and \(\gamma(\mathbf{d})\) as input:
\[(\mathbf{f}_{z},\sigma)=F_{\theta}(\gamma(\mathbf{x}),\gamma(\mathbf{d})) \tag{5}\]
By minimizing the loss in Eq. (3) to update the parameters of the neural network \(F_{\theta}\), we obtain an ED-NeRF model optimized in the latent space of Stable Diffusion.
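To make the training objective of Eq. (3) concrete, the sketch below encodes the multi-view images with the Stable Diffusion VAE and regresses rendered latent maps against the encoded targets. It is a minimal sketch only: the diffusers checkpoint name and the `render_latent(pose)` callable (standing in for the latent NeRF renderer) are our assumptions, not the authors' released code.

```python
import torch
from diffusers import AutoencoderKL

# assumption: a Stable Diffusion 1.x checkpoint exposing its VAE in the "vae" subfolder
vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae").eval()

@torch.no_grad()
def encode_to_latent(img):                       # img: (1, 3, 512, 512) in [-1, 1]
    posterior = vae.encode(img).latent_dist
    return posterior.mean * vae.config.scaling_factor   # (1, 4, 64, 64) label latent z^i

def latent_recon_loss(render_latent, images, poses):
    """Eq. (3): MSE between the rendered latent map and the encoded target latent."""
    loss = 0.0
    for img, pose in zip(images, poses):
        z_target = encode_to_latent(img)         # z^i
        z_render = render_latent(pose)           # \hat{z}^i from the latent NeRF
        loss = loss + ((z_target - z_render) ** 2).mean()
    return loss / len(images)
```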
### Refinement Layer based on Latent Feature Analysis
When naively matching the rendered latents with the loss of Eq. (3), we observed that the reconstruction performance deteriorated significantly. To address this issue, we analyzed the encoder \(\mathcal{E}\) and decoder \(\mathcal{D}\) of Stable Diffusion and made the following observations:
Figure 2: **Overall pipeline of training and inference stage. (a) We optimize ED-NeRF in the latent space, supervised by source latent. Naively matching NeRF to a latent feature map during optimization can degrade view synthesis quality. (b) Inspired by the embedding process of Stable Diffusion, we integrated additional ResNet blocks and self-attention layers as a refinement layer. (c) All 3D scenes are decoded from the Decoder when ED-NeRF renders a novel view feature map.**
1) The encoder and decoder consist of ResNet blocks and self-attention layers. Therefore, during the process of mapping an image to the latent space and forming a feature map, pixel values interact with each other through the ResNet and self-attention layers. Thus the latent pixels and image pixels are not directly aligned.
2) When NeRF renders a single pixel value from the latent feature map, each ray independently passes through an MLP to determine the pixel value of the feature map. Therefore, the feature value rendered by NeRF for a single pixel is determined without interactions with other pixels.
Based on this analysis, we find that the degraded reconstruction quality of the latent NeRF stems from ignoring the interactions described above. Therefore, we aim to incorporate the pixel interactions introduced by the ResNet and self-attention layers into the ED-NeRF rendering stage. Fortunately, in the encoder and decoder of Stable Diffusion, the embedded feature maps pass through self-attention layers at the same dimension, allowing us to concatenate two attention layers directly. Taking advantage of this, we design a refinement layer \(F_{\phi}(\cdot)\) as shown in Figure 2, which keeps the dimensions of the input and output vectors unchanged. Let \(\tilde{Z}^{i}\) be the pixel latent vector of the refined feature map \(\tilde{z}^{i}=F_{\phi}(\hat{z}^{i})\). We can then define a refined reconstruction loss as follows:
\[\mathcal{L}_{ref}=\sum_{\mathbf{r}\in\mathcal{R}}\left\|Z^{i}(\mathbf{r})-\tilde{Z}^{i}(\mathbf{r})\right\|^{2},\ \text{where}\ \tilde{z}^{i}=F_{\phi}(\hat{z}^{i}) \tag{6}\]
Ultimately, we can formulate total training loss as the sum of the refinement loss \(\mathcal{L}_{ref}\) and reconstruction loss \(\mathcal{L}_{rec}\), as follows.
\[\mathcal{L}_{rtot}=\lambda_{rec}\mathcal{L}_{rec}+\lambda_{ref}\mathcal{L}_{ ref} \tag{7}\]
We update the NeRF and the refinement layer, denoted \(F_{\theta}\) and \(F_{\phi}\), concurrently by minimizing the total loss \(\mathcal{L}_{rtot}\) to reconstruct the latent vectors from various views. To ensure stable learning, we train with \(\lambda_{rec}\) set to 1.0 and \(\lambda_{ref}\) set to 0.1 during the initial stage of training. Beyond a specific iteration threshold, we set \(\lambda_{rec}\) to 0 to encourage the refinement layer to focus on matching the latent representations.
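The sketch below is our own PyTorch approximation of such a refinement layer (the exact block counts and hyperparameters of \(F_{\phi}\) are not specified here, so this is only a shape-preserving stand-in): a residual convolution block followed by self-attention over the \(64\times 64\) latent grid, so it can be applied directly to the rendered latent map.

```python
import torch
import torch.nn as nn

class RefinementLayer(nn.Module):
    """ResNet block + self-attention on a (B, 4, 64, 64) latent map, shape preserving."""
    def __init__(self, channels: int = 4, heads: int = 1):
        super().__init__()
        self.block = nn.Sequential(
            nn.GroupNorm(1, channels), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GroupNorm(1, channels), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, z):                          # z: (B, 4, 64, 64)
        h = z + self.block(z)                      # residual convolution block
        b, c, hh, ww = h.shape
        tokens = h.flatten(2).transpose(1, 2)      # (B, 64*64, 4)
        q = self.norm(tokens)
        attn_out, _ = self.attn(q, q, q)
        tokens = tokens + attn_out                 # residual self-attention
        return tokens.transpose(1, 2).reshape(b, c, hh, ww)

refine = RefinementLayer()
print(refine(torch.randn(1, 4, 64, 64)).shape)     # torch.Size([1, 4, 64, 64])
```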
### Editing ED-NeRF via Delta Denoising Score
After optimizing ED-NeRF in the latent space, it is possible to directly employ the latent diffusion model to update the ED-NeRF parameters via the rendered latent map \(z\) in the direction of the target text prompt \(y_{trg}\). The most well-known method for text-guided NeRF updates is Score Distillation Sampling (SDS), which directly transfers the score estimation output as the gradient for NeRF training:
\[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\mathbf{z},y_{trg},\epsilon,t)=\omega(t)\left(\epsilon_{\psi}(\mathbf{z}_{t},y_{trg},t)-\epsilon\right)\frac{\partial\mathbf{z}_{t}}{\partial\theta} \tag{8}\]
Figure 3: **Expanding DDS into 3D for ED-NeRF editing.** Pretrained ED-NeRF renders the target latent feature map, and a scheduler of the denoising model perturbs it to the sampled time step. Concurrently, the scheduler adds noise to the source latent using the same time step. Each of them is fed into the denoising model, and the DDS is determined by subtracting two different SDS scores. In combination with a binary mask, masked DDS guides NeRF in the intended direction of the target prompt without causing unintended deformations.
However, in our NeRF editing case, the SDS update rule often causes several problems, including color saturation and mode-seeking (Wang et al., 2023b). We conjecture that the problem originates from the properties of the score estimation itself. Since the target noise \(\epsilon\) is pure Gaussian, the score difference is not aware of any prior knowledge of the source images. Therefore the generated outputs simply replace the content with hallucinated objects without considering the source NeRF.
To solve the problems of SDS, we turn to the recently proposed 2D editing method of Delta Denoising Score (DDS) (Hertz et al., 2023). The major difference between SDS and DDS is that the distilled score is the difference between the denoising scores of the target and the source. As shown in Eq. (9), DDS can be written as the difference between two SDS scores conditioned on two different text prompts:
\[\nabla_{\theta}\mathcal{L}_{\mathrm{DDS}}=\nabla_{\theta}\mathcal{L}_{ \mathrm{SDS}}(\mathbf{z},y_{src})-\nabla_{\theta}\mathcal{L}_{\mathrm{SDS}}( \hat{\mathbf{z}},y_{trg}), \tag{9}\]
where \(\mathbf{z}\) is the source latent, \(\hat{\mathbf{z}}\) is the rendered target latent, \(y_{trg}\) is the target text embedding, and \(y_{src}\) is the reference text embedding. DDS guides the optimized latent from the source prompt toward the target prompt without the influence of the pure noise component, and therefore it can easily edit 2D images.
We aim to extend this manipulation capability of DDS to 3D space as shown in Fig. 3. Since we already have the embedded source latent \(z^{i}\) for the \(i\)-th camera pose, we can directly use it as the source component of DDS. To fine-tune the model, we render the edited output \(\tilde{z}^{i}\) from the same \(i\)-th camera pose. With the paired latents, we add the same sampled noise \(\epsilon_{t}\) with the noise scale of timestep \(t\) to both the source and edited latents, obtaining the noisy latents \(\tilde{z}^{i}_{t},z^{i}_{t}\). Then we apply the diffusion model to obtain score estimates from the noisy latents, using different text conditions for the source and edited images. As in Eq. (9), we use the difference between the two outputs as the gradient for updating the NeRF parameters. In this step, we train the NeRF parameters \(\theta\) simultaneously with the refinement parameters \(\phi\), as this showed better editing quality. Therefore, with a random \(i\)-th camera pose, our 3D DDS is formulated as:
\[\nabla_{\theta,\phi}\mathcal{L}_{\mathrm{DDS}}=\nabla_{\theta,\phi}\mathcal{L }_{\mathrm{SDS}}(\mathbf{z}^{i},y_{src})-\nabla_{\theta,\phi}\mathcal{L}_{ \mathrm{SDS}}(\tilde{\mathbf{z}}^{i},y_{trg}). \tag{10}\]
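In implementation terms, the gradient of Eq. (10) can be realized with the usual distillation trick of detaching the score difference, as in the sketch below. Here `unet`, `scheduler`, and the text embeddings are placeholders for the Stable Diffusion denoiser, its noise scheduler, and encoded prompts; the timestep range and the sign convention of the score difference follow common DDS practice rather than values stated in this paper.

```python
import torch

def dds_step(unet, scheduler, z_src, z_edit, y_src, y_trg, optimizer):
    """One 3D DDS update for a single camera pose (cf. Eq. (10), without the mask).

    z_src : (1, 4, 64, 64) encoded source latent z^i (fixed)
    z_edit: (1, 4, 64, 64) latent rendered by ED-NeRF (depends on theta, phi)
    """
    t = torch.randint(20, 980, (1,), device=z_src.device)
    noise = torch.randn_like(z_src)                    # same noise for both latents
    z_src_t = scheduler.add_noise(z_src, noise, t)
    z_edit_t = scheduler.add_noise(z_edit, noise, t)

    with torch.no_grad():                              # no gradients through the denoiser
        eps_src = unet(z_src_t, t, encoder_hidden_states=y_src).sample
        eps_edit = unet(z_edit_t, t, encoder_hidden_states=y_trg).sample

    grad = eps_edit - eps_src                          # delta denoising score
    loss = (grad.detach() * z_edit).sum()              # d loss / d z_edit == grad
    optimizer.zero_grad()
    loss.backward()                                    # flows into theta and phi via z_edit
    optimizer.step()
```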
Although the DDS formulation improves performance, using vanilla DDS leads to excessive changes in unwanted areas and inconsistency between different scenes. Therefore, we introduce an additional binary mask for applying DDS to 3D scenes. The objective function combining the binary mask \(\mathcal{M}\) and DDS is as follows:
\[\nabla_{\theta,\phi}\mathcal{L}_{\mathrm{MDDS}}=\mathcal{M}\cdot(\nabla_{ \theta,\phi}\mathcal{L}_{\mathrm{DDS}}), \tag{11}\]
where \(\cdot\) denotes the pixel-wise multiplication and \(\mathcal{M}\) is the conditional binary mask of the specific region of the target prompt to change. This mask is generated by utilizing off-the-shelf text prompt segmentation models such as CLIPSeg (Luddecke and Ecker, 2022) and SAM (Kirillov et al., 2023) to segment the target region by a text prompt.
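For completeness, a mask of this kind can be obtained from a text prompt roughly as follows. The sketch assumes the publicly available CLIPSeg checkpoint and the transformers API; the threshold and the resizing to the latent resolution are our own choices, not values from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
seg_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

def text_mask(image: Image.Image, prompt: str, thresh: float = 0.4, size=(64, 64)):
    """Binary mask of the region named by `prompt`, resized to the latent resolution."""
    inputs = processor(text=[prompt], images=[image], return_tensors="pt")
    with torch.no_grad():
        logits = seg_model(**inputs).logits           # heat map over the image
    if logits.dim() == 2:                             # single prompt: (H, W) -> (1, H, W)
        logits = logits[None]
    prob = torch.sigmoid(logits)[:, None]             # (1, 1, H, W)
    prob = torch.nn.functional.interpolate(prob, size=size, mode="bilinear")
    return (prob > thresh).float()                    # M with entries in {0, 1}
```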
Despite the use of a binary mask, the masked DDS loss \(\nabla\mathcal{L}_{MDDS}\) updates all parameters of NeRF and can therefore still affect undesired areas. As a result, relying solely on the masked DDS loss may inadvertently cause alterations beyond the mask boundaries. Hence, we introduce an additional reconstruction loss to mitigate undesired deformations outside the mask:
\[\mathcal{L}_{\mathrm{Mrec}}=\lambda_{im}\cdot\mathcal{M}\cdot\mathcal{L}_{\mathrm{rtot}}+\lambda_{om}\cdot(1-\mathcal{M})\cdot\mathcal{L}_{\mathrm{rtot}}. \tag{12}\]
Finally, the total editing loss is as follows:
\[\mathcal{L}_{\mathrm{tot}}=\mathcal{L}_{\mathrm{MDDS}}+\mathcal{L}_{\mathrm{ Mrec}} \tag{13}\]
By suppressing undesired alterations through the use of the masked reconstruction loss \(\mathcal{L}_{Mrec}\), our total editing objective function updates NeRF and refinement layer \(F_{\theta}\) and \(F_{\phi}\), ensuring NeRF renders novel views in accordance with the desired text conditions.
## 4 Experimental Results
### Baseline methods
To comprehensively evaluate the performance of our method, we conduct comparative experiments against state-of-the-art methods. As CLIP-based text-guided editing baselines, we used
CLIP-NeRF (Wang et al., 2022) and NeRF-ART (Wang et al., 2023a). CLIP-NeRF encodes the images rendered by NeRF into the CLIP embedding space, allowing it to transform the images according to the text condition. As an improved method, NeRF-ART trains NeRF with various regularization functions so that the CLIP-edited NeRF can preserve the structure of the original NeRF. For a fair comparison, we re-implemented both methods on the TensoRF backbone, referencing the official source codes. For diffusion-based editing, we chose Masked SDS (Poole et al., 2022) and InstructNeRF2NeRF (Haque et al., 2023) as methods that target local editing. In the Masked SDS setting, we fine-tuned the pre-trained NeRF by applying the basic SDS loss only to the masked regions so that the NeRF model is edited locally. InstructNeRF2NeRF (Haque et al., 2023) leverages the powerful generation capabilities of diffusion models to sequentially modify the entire dataset to align with the text conditions and uses the modified dataset as a new source for NeRF training. We utilized real-world image datasets, including the LLFF (Mildenhall et al., 2019) and IBRNet (Wang et al., 2021) datasets, as well as the human face dataset employed in InstructNeRF2NeRF (Haque et al., 2023).
### Qualitative Results
**Text-guided editing of 3D scenes.** As shown in Figure 1, our method shows its capability to edit various image types with different textual contexts. Specifically, when altering 3D scenes, it is possible to achieve the effective transformation of specific objects without affecting other parts. Our baseline method InstructNeRF2NeRF (Haque et al., 2023) shows decent results with high consis
Figure 4: **Comparison with baseline models.** ED-NeRF demonstrates outstanding performance in effectively altering specific objects compared to other models. Baseline methods often failed to maintain the region beyond the target objects and failed to guide the model towards the target text.
tency between images and text conditions, as well as view consistency across scenes. However, it has difficulty accurately transforming specific objects to match the text conditions and may introduce undesired alterations beyond the targeted objects. Masked SDS accurately edits the local objects toward the target text conditions, but the edited output fails to reflect the structure of the original NeRF and shows unwanted artifacts. In the case of NeRF-ART, the entire image is embedded into the CLIP space, so the method does not inherently recognize and modify only specific objects and thus shows limitations in object-specific edits. CLIP-NeRF likewise embeds the rendered images into the CLIP space, and its performance also falls short when it comes to altering specific parts or objects. On the other hand, our ED-NeRF exhibits strong 3D scene editing ability when specific parts are designated through text, surpassing the other models. It not only excels at changing objects but also faithfully modifies non-object regions, such as the ground, in accordance with the text condition.
### Quantitative Results
**CLIP Directional Score.** To quantitatively measure editing performance, we report CLIP Directional scores (Gal et al., 2021). The CLIP Directional score quantifies the alignment between the change in the text caption and the corresponding change in the image. We rendered multiple view images from NeRF and measured the average score over the images. Compared with the baseline methods, our model obtains the best similarity score, indicating that our edited NeRF accurately reflects the target text conditions.
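As a reference for how such a score can be computed, the sketch below measures the cosine similarity between the text-embedding shift and the image-embedding shift with an off-the-shelf CLIP model. The ViT-B/32 checkpoint is our assumption; the paper does not state which CLIP backbone was used for evaluation.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_directional_score(img_src, img_edit, text_src, text_trg):
    """Cosine similarity between (target - source) text and image embedding differences."""
    t = proc(text=[text_src, text_trg], return_tensors="pt", padding=True)
    i = proc(images=[img_src, img_edit], return_tensors="pt")
    t_feat = clip.get_text_features(**t)                 # (2, 512)
    i_feat = clip.get_image_features(**i)                # (2, 512)
    d_text = t_feat[1] - t_feat[0]
    d_img = i_feat[1] - i_feat[0]
    return torch.nn.functional.cosine_similarity(d_text[None], d_img[None]).item()
```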
**User Study.** To further measure the perceptual preference of human subjects, we conducted an additional user study. For the study, we rendered images from the edited NeRFs using 5 different scenes from LLFF and IBRNet. We gathered feedback from 20 subjects aged between their 20s and 40s. Each participant was presented with randomly selected multi-view renderings from our model and the baselines and provided feedback through a preference scoring survey. The minimum score is 1 and the maximum score is 5, with five options: 1-very low, 2-low, 3-middle, 4-high, 5-very high. To measure editing performance, we asked three questions for each sample: 1) Does the image reflect the target text condition? (Text score) 2) Does the model accurately edit the target object? (Preservation) 3) Do the 3D scenes preserve view consistency? (View consistency). In Table 1, we show the user study results. Compared with the baseline
Figure 5: **Ablation studies.** (a) If we only use DDS loss, the model fails to maintain the attribute of untargeted regions and often fails to reflect text conditions. (b) If we do not use masked reconstruction regularization, again the regions beyond the target objects are excessively changed. (c) If we remove the mask from DDS, unwanted artifacts occur in untargeted regions. (d) With removing the proposed refinement layer, the results become blurry as the backbone NeRF cannot fully embed real-world scenes. Our proposed setting can modify a specific region in a 3D scene and follow the target word without causing unwanted deformations.
methods, our method showed the best score in text score and preservation, and second best in view consistency. Overall, ours outperformed the baseline models in perceptual quality.
### Ablation Studies
To evaluate our proposed components, we conduct an ablation study in Figure 5. (a) Using only the DDS loss, the method fails to maintain the untargeted regions, producing artifacts and sometimes even failing to train (e.g., fossil). (b) Without the regularization \(\mathcal{L}_{Mrec}\), the edited results show the target text attribute, but the regions beyond the target objects are severely degraded. (c) When we remove the mask guidance on DDS (w/o \(\mathcal{L}_{MDDS}\)), unwanted minor deformations occur because the DDS gradient affects regions outside the mask. (d) When we remove our refinement layer, the results are blurry, indicating that the latent NeRF is not accurately trained. With all of the proposed components, we can reliably transform the 3D scene into the desired target object while preserving the original structure of the source NeRF. To further show the effect of the proposed refinement layer, we compare reconstruction quality in Figure 6: the reconstruction output with the refinement layer shows finer details than the basic setting. As already shown in the ablation study in Figure 5, the refinement layer improves latent-space NeRF training, which is crucial for the final edited output.
## 5 Conclusion
In this paper, we introduced ED-NeRF, a novel NeRF representation optimized in the latent space. By enabling NeRF to directly predict latent features, our method efficiently harnesses the text-guided score function of latent diffusion models without the need for an encoder. In doing so, our approach effectively reduces computation costs and avoids the burden of previous models that required rendering at full resolution to utilize the diffusion model. We extended the strong 2D image editing performance
\begin{table}
\begin{tabular}{c|c c c c c}
**Metrics** & CLIP-NeRF & NeRF-Art & Instruct N2N & Mask SDS & Ours \\ CLIP Direction Score \(\uparrow\) & 0.1648 & 0.1947 & 0.2053 & 0.1409 & **0.2265** \\ Text score \(\uparrow\) & 2.36 & 3.20 & 3.59 & 3.14 & **3.88** \\ Preservation \(\uparrow\) & 2.30 & 2.97 & 3.08 & 2.76 & **4.09** \\ View consistency \(\uparrow\) & 3.21 & 3.79 & 3.28 & 3.56 & 3.64 \\ \end{tabular}
\end{table}
Table 1: **Quantitative Comparison. We compare the text-image similarity between the target text and the rendered output from the edited NeRF (CLIP Directional Score). We also report user study results in three categories: text-guidance score, source preservation score, and view consistency. Ours achieves improved perceptual scores compared with the baseline models.**
Figure 6: **Ablation study on refinement layer. Novel view synthesis results of our ED-NeRF method. An additional refinement layer enhances the performance of rendering quality compared to naive training cases.**
of DDS to the 3D scene and also introduced a new loss function based on the mask. As a result, it showed high performance in object-specific editing, a task that previous models struggled with. We experimented with our proposed approach across various datasets, and as a result, it demonstrated strong adherence to text prompts in diverse scenes without undesired deformation.
## 6 Ethics and Reproducibility Statements
**Ethics statement.** ED-NeRF enables efficient and accurate text-guided NeRF editing, which can be applied to various applications. However, ED-NeRF could also be used to create obscene or offensive content. To prevent such misuse, one can use a filtered diffusion model that rejects malicious text conditions.
**Reproducibility statement.** We detailed our experimental process and parameter settings in our Appendix. We will upload our source code to an anonymous repository for reproduction.
|
2307.08196 | Long-time asymptotics of the Sawada-Kotera equation and Kaup-Kupershmidt
equation on the line | Both Sawada-Kotera (SK) equation and Kaup-Kupershmidt (KK) equation are
integrable systems with third-order Lax operator. Moreover, they are related
with the same modified nonlinear equation (called modified SK-KK equation) by
Miura transformations. This work first constructs the Riemann-Hilbert problem
associated with the SK equation, KK equation and modified SK-KK equation by
direct and inverse scattering transforms. Then the long-time asymptotics of
these equations are studied based on Deift-Zhou steepest-descent method for
Riemann-Hilbert problem. Finally, it is shown that the asymptotic solutions
match very well with the results of direct numerical simulations. | Deng-Shan Wang, Xiaodong Zhu | 2023-07-17T01:55:32Z | http://arxiv.org/abs/2307.08196v1 | # Long-time asymptotics of the Sawada-Kotera equation and Kaup-Kupershmidt equation on the line
###### Abstract.
Both Sawada-Kotera (SK) equation and Kaup-Kupershmidt (KK) equation are integrable systems with third-order Lax operator. Moreover, they are related with the same modified nonlinear equation (called modified SK-KK equation) by Miura transformations. This work first constructs the Riemann-Hilbert problem associated with the SK equation, KK equation and modified SK-KK equation by direct and inverse scattering transforms. Then the long-time asymptotics of these equations are studied based on Deift-Zhou steepest-descent method for Riemann-Hilbert problem. Finally, it is shown that the asymptotic solutions match very well with the results of direct numerical simulations.
Key words and phrases:Inverse scattering transform, Lax pair, Sawada-Kotera equation, Kaup-Kupershmidt equation, Riemann-Hilbert problem 2010 Mathematics Subject Classification: Primary 37K40, 35Q15, 37K10
## 1. **Introduction**
In 1974, Sawada and Kotera [1] proposed the so-called Sawada-Kotera (SK) equation
\[u_{t}+u_{xxxxx}+30\left(uu_{xxx}+u_{x}u_{xx}\right)+180u^{2}u_{x}=0, \tag{1.1}\]
which is also known as the Caudrey-Dodd-Gibbon equation, derived independently by Caudrey, Dodd and Gibbon [2]. Subsequently, Kaup [3] and Kupershmidt [4] introduced the Kaup-Kupershmidt (KK) equation
\[v_{t}+v_{xxxxx}+30(vv_{xxx}+\frac{5}{2}v_{x}v_{xx})+180v^{2}v_{x}=0. \tag{1.2}\]
Both the SK equation (1.1) and the KK equation (1.2) are completely integrable systems with a third-order Lax operator of the form \(\psi_{xxx}+6Q\psi_{x}+6R\psi=k^{3}\psi\) studied by Kaup [3], in which \(Q=u\) and \(R=0\) correspond to the SK equation, while \(Q=u\) and \(R=u_{x}/2\) correspond to the KK equation. The Lax pair of the SK equation (1.1) in matrix form is
\[\left\{\begin{array}{ll}\Phi_{x}=L\Phi,\\ \Phi_{t}=Z\Phi,\end{array}\right. \tag{1.3}\]
where
\[L=\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ k^{3}&-6u&0\end{array}\right), \tag{1.4}\]
\[Z=\left(\begin{array}{ccc}36k^{3}u&6u_{xx}-36u^{2}&9k^{3}-18u_{x}\\ 18k^{3}u_{x}+9k^{6}&6u_{xxx}-18k^{3}u+36uu_{x}&-12u_{xx}-36u^{2}\\ 6k^{3}u_{xx}-36u^{2}k^{3}&Z_{32}&-6u_{xxx}-18k^{3}u-36uu_{x}\end{array}\right), \tag{1.5}\]
with spectral parameter \(k\) and \(Z_{32}=36{u_{x}}^{2}+108uu_{xx}+9k^{6}+216u^{3}+6u_{xxxx}\).
The Lax pair of the KK equation (1.2) in matrix form is
\[\left\{\begin{array}{l}\Phi_{x}=\tilde{L}\Phi,\\ \Phi_{t}=\tilde{Z}\Phi,\end{array}\right. \tag{1.6}\]
where
\[\tilde{L}=\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ k^{3}-3u_{x}&-6u&0\end{array}\right), \tag{1.7}\]
\[\tilde{Z}=\left(\begin{array}{ccc}3u_{xxx}+36k^{3}u+72uu_{x}&-3u_{xx}-36u^{2} &9k^{3}\\ \tilde{Z}_{21}&-18k^{3}u&-3u_{xx}-36u^{2}\\ \tilde{Z}_{31}&\tilde{Z}_{32}&-72uu_{x}-18k^{3}u-3u_{xxx}\end{array}\right), \tag{1.8}\]
with \(\tilde{Z}_{21}=9k^{3}u_{x}+9k^{6}+3u_{xxxx}+72u_{x}^{2}+72uu_{xx}\), \(\tilde{Z}_{31}=3u_{xxxxx}+225u_{x}u_{xx}+72uu_{xxx}+108u^{2}u_{x}-36u^{2}k^{3}+ 6k^{3}u_{xx}\) and \(\tilde{Z}_{32}=9k^{6}-9k^{3}u_{x}+3u_{xxxx}+72u_{x}^{2}+90uu_{xx}+216u^{3}\).
Notice that the spatial parts of the Lax pairs for the SK and KK equations have a similar form, i.e., \(\mathscr{L}\phi=(\partial_{xxx}+6u\partial_{x})\phi=k^{3}\phi\) for the SK equation and \(\tilde{\mathscr{L}}\phi=(\partial_{xxx}+6u\partial_{x}+3u_{x})\phi=k^{3}\phi\) for the KK equation. Thus it is convenient to consider a more general Lax operator of the form
\[\mathscr{L}\phi=(\partial_{xxx}+p\partial_{x}+q)\phi=k^{3}\phi, \tag{1.9}\]
which corresponds to the matrix form Lax pair
\[\Phi_{x}=L\Phi \tag{1.10}\]
with
\[L=\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ k^{3}-q&-p&0\end{array}\right).\]
It is obvious that \(q=0,p=6u\) for the SK equation (1.1) and \(q=3v_{x},p=6v\) for the KK equation (1.2).
In addition, both the SK equation (1.1) and the KK equation (1.2) are related to the modified SK-KK equation (mSK-KK)
\[w_{t}+w_{xxxxx}-(5w_{x}w_{xx}+5ww_{x}^{2}+5w^{2}w_{xx}-w^{5})_{x}=0 \tag{1.11}\]
through the Miura transformations
\[u=\frac{1}{6}(w_{x}-w^{2})\quad\mbox{and}\quad v=\frac{1}{3}(w_{x}-\frac{w^{ 2}}{2}). \tag{1.12}\]
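Because the transformations (1.12) involve only \(w\) and its \(x\)-derivatives, their consistency with (1.1) and (1.11) can be checked symbolically. The SymPy sketch below (our own checking script, not part of the paper) substitutes \(w_{t}\) from the mSK-KK equation into the SK equation for \(u=(w_{x}-w^{2})/6\) and prints the residual; if the stated Miura map and (1.11) are mutually consistent, the printed expression simplifies to zero.

```python
import sympy as sp

x = sp.symbols("x")
w = sp.Function("w")(x)                 # a fixed-time slice of w(x, t)
D = lambda f, n=1: sp.diff(f, x, n)

# w_t expressed from the mSK-KK equation (1.11)
w_t = -D(w, 5) + D(5 * D(w) * D(w, 2) + 5 * w * D(w) ** 2 + 5 * w**2 * D(w, 2) - w**5)

# Miura map (1.12) and the induced u_t = (d/dx - 2w) w_t / 6
u = (D(w) - w**2) / 6
u_t = (D(w_t) - 2 * w * w_t) / 6

# residual of the SK equation (1.1); zero certifies consistency
sk_residual = u_t + D(u, 5) + 30 * (u * D(u, 3) + D(u) * D(u, 2)) + 180 * u**2 * D(u)
print(sp.simplify(sp.expand(sk_residual)))
```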
It is noted that there is no singularity in the spectral problem of the mSK-KK equation (1.11); thus it is practicable to study the long-time asymptotics of the equations (1.1)-(1.2) by examining the asymptotic behavior of the mSK-KK equation (1.11).
In what follows, the direct and inverse scattering transforms [5]-[25] are performed to derive the Riemann-Hilbert problem of the SK and KK equations (1.1)-(1.2) and the mSK-KK equation (1.11).
## 2. **The Riemann-Hilbert problem**
Introduce the gauge transformation
\[\Phi=G\Psi\quad\mbox{with}\quad G=\left(\begin{array}{ccc}\alpha&\alpha^{2 }&1\\ \alpha^{2}k&\alpha k&k\\ k^{2}&k^{2}&k^{2}\end{array}\right),\quad\alpha=e^{\frac{2\pi i}{3}}, \tag{2.1}\]
then the spectral problem (1.10) becomes
\[\Psi_{x}=\mathcal{L}\Psi, \tag{2.2}\]
where \(\mathcal{L}=G^{-1}LG=k\Lambda+Q(x,t;k)\) with
\[Q(x,t;k)=\frac{Q_{(1)}(x,t;k)}{k}+\frac{Q_{(2)}(x,t;k)}{k^{2}},\]
and
\[\Lambda=\left(\begin{array}{ccc}\alpha&0&0\\ 0&\alpha^{2}&0\\ 0&0&1\end{array}\right),Q_{(1)}=-\frac{p}{3}\left(\begin{array}{ccc}\alpha^{2} &\alpha&1\\ \alpha^{2}&\alpha&1\\ \alpha^{2}&\alpha&1\end{array}\right),\quad Q_{(2)}=-\frac{q}{3}\left( \begin{array}{ccc}\alpha&\alpha^{2}&1\\ \alpha&\alpha^{2}&1\\ \alpha&\alpha^{2}&1\end{array}\right).\]
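The identity \(G^{-1}LG=k\Lambda+Q_{(1)}/k+Q_{(2)}/k^{2}\) behind (2.2) is straightforward to verify symbolically; a short SymPy check (our own, included only for the reader's convenience) is given below and should print the \(3\times 3\) zero matrix.

```python
import sympy as sp

k, p, q = sp.symbols("k p q")
a = sp.Rational(-1, 2) + sp.sqrt(3) / 2 * sp.I          # alpha = exp(2*pi*i/3)

G = sp.Matrix([[a, a**2, 1], [a**2 * k, a * k, k], [k**2, k**2, k**2]])
L = sp.Matrix([[0, 1, 0], [0, 0, 1], [k**3 - q, -p, 0]])
Lam = sp.diag(a, a**2, 1)
Q1 = -(p / 3) * sp.Matrix([[a**2, a, 1]] * 3)
Q2 = -(q / 3) * sp.Matrix([[a, a**2, 1]] * 3)

residual = sp.simplify(G.inv() * L * G - (k * Lam + Q1 / k + Q2 / k**2))
print(residual)                                          # expected: zero matrix
```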
Following the same procedure, the gauge transformation (2.1) maps the temporal part of the Lax pairs (1.3) and (1.6) into
\[\Psi_{t}=\mathcal{Z}\Psi\quad\text{and}\quad\Psi_{t}=\tilde{\mathcal{Z}}\Psi, \tag{2.3}\]
respectively, where
\[\mathcal{Z}=G^{-1}ZG=9k^{5}\Lambda^{2}+P(x,t;k)\quad\text{and}\quad\tilde{ \mathcal{Z}}=G^{-1}\tilde{Z}G=9k^{5}\Lambda^{2}+\tilde{P}(x,t;k),\]
where \(P(x,t;k)\) and \(\tilde{P}(x,t;k)\to 0\) as \(|x|\to\infty\).
Thus the gauge transformation (2.1) converts the Lax pairs (1.3) and (1.6) into
\[\left\{\begin{array}{l}\Psi_{x}=(k\Lambda+Q)\Psi,\\ \Psi_{t}=(9k^{5}\Lambda^{2}+P)\Psi,\end{array}\right.\quad\left\{\begin{array}{l}\Psi_{x}=(k\Lambda+Q)\Psi,\\ \Psi_{t}=(9k^{5}\Lambda^{2}+\tilde{P})\Psi.\end{array}\right. \tag{2.4}\]
Furthermore, taking \(\Psi=Je^{(k\Lambda x+9k^{5}\Lambda^{2}t)}\) yields
\[\left\{\begin{array}{l}J_{x}-[k\Lambda,J]=QJ,\\ J_{t}-[9k^{5}\Lambda^{2},J]=PJ,\end{array}\right.\quad\left\{\begin{array}{l}J_{x}-[k\Lambda,J]=QJ,\\ J_{t}-[9k^{5}\Lambda^{2},J]=\tilde{P}J.\end{array}\right. \tag{2.5}\]
In what follows, we only focus on the \(x\)-variable and take the \(t\)-variable as a dummy variable. Moreover, according to the equation \(J_{x}-[k\Lambda,J]=QJ\), one obtains the Volterra integral equations for the Jost functions \(J_{+}(x,k)\) and \(J_{-}(x,k)\) below
\[\begin{split}& J_{+}(x,k)=I-\int_{x}^{\infty}e^{(x-y)\widehat{k\Lambda}}\left(Q(y,k)J_{+}(y,k)\right)dy,\\ & J_{-}(x,k)=I+\int_{-\infty}^{x}e^{(x-y)\widehat{k\Lambda}}\left(Q(y,k)J_{-}(y,k)\right)dy,\end{split} \tag{2.6}\]
which indicates the singular set
\[\Sigma:=\{k\in\mathbb{C}|\text{Re}(\alpha^{n}k)=\text{Re}(\alpha^{m}k),\quad 0 \leq n<m<3\},\]
then \(\Sigma\) divides the complex plane into six regions, specifically
\[\Omega_{n}:=\{k\in\mathbb{C}|\frac{(n-1)\pi}{3}<\arg(k)<\frac{n\pi}{3},n=1, \cdots,6\}.\]
The construction of the Riemann-Hilbert problem [26]-[32] below is standard, so we omit the proofs; see [29] for details.
### Basic properties of the Jost functions
**Proposition 2.1**.: _Suppose the initial potential functions \(p_{0}(x),q_{0}(x)\in\mathcal{S}(x)\), then the matrix-valued Jost functions \(J_{+}(x,k)\) and \(J_{-}(x,k)\) have the following properties:_
_(1). \(J_{+}(x,k)\) is well defined in the closure of \((\omega^{2}S,\omega S,S)\setminus\{0\}\), and \(J_{-}(x,k)\) is well defined in the closure of \((-\omega^{2}S,-\omega S,-S)\setminus\{0\}\), where \(S=\Omega_{3}\cup\Omega_{4}\). Moreover, the determinants of \(J_{\pm}\) are always equal to \(1\)._
_(2). \(J_{+}(\cdot,k)\) and \(J_{-}(\cdot,k)\) are smooth and rapidly decay in the closure of their domains (except for \(\{0\}\))._
_(3). \(J_{+}(x,\cdot)\) and \(J_{-}(x,\cdot)\) are analytic in the interior of their domains, but any order partial derivative of \(k\) can be continuous to the closure of their domains (except for \(\{0\}\))._
_(4). \(J_{+}(x,k)\) and \(J_{-}(x,k)\) satisfy the following symmetries:_
\[\begin{split} J_{+}(x,k)&=\mathcal{A}J_{+}(x,\omega k )\mathcal{A}^{-1}=\mathcal{B}J_{+}^{*}(x,k^{*})\mathcal{B},\\ J_{-}(x,k)&=\mathcal{A}J_{-}(x,\omega k)\mathcal{A}^{-1}= \mathcal{B}J_{-}^{*}(x,k^{*})\mathcal{B},\end{split}\]
_where \(k\) is in their domains and \(\mathcal{A},\mathcal{B}\) are_
\[\mathcal{A}:=\left(\begin{array}{ccc}0&0&1\\ 1&0&0\\ 0&1&0\end{array}\right)\quad\text{ and }\quad\mathcal{B}:=\left(\begin{array}{ccc}0 &1&0\\ 1&0&0\\ 0&0&1\end{array}\right)\]
_(5). When \(p_{0}\) and \(q_{0}\) are compact support, \(J_{+}\) and \(J_{-}\) are defined and analytic for \(k\) on \(\mathbb{C}\setminus\{0\}\)._
### The behavior of Jost functions for \(k\to\infty\)
Suppose the WKB expansion of the Jost functions \(J_{\pm}\) to be
\[J_{\pm}=I+\frac{J_{\pm}^{(1)}}{k}+\frac{J_{\pm}^{(2)}}{k^{2}}+\cdots\]
Taking into account of the equation (2.5), one has
\[\left\{\begin{array}{l}\left[\Lambda,J_{\pm}^{(n+1)}\right]=(\partial_{x}J_ {\pm}^{(n)})^{(o)}-\left(\mathrm{Q}_{1}J_{\pm}^{(n-1)}\right)^{(o)}-\left( \mathrm{Q}_{2}J_{\pm}^{(n-2)}\right)^{(o)},\\ (\partial_{x}J_{\pm}^{(n+1)})^{(d)}=\left(\mathrm{Q}_{1}J_{\pm}^{(n)}\right)^ {(d)}+\left(\mathrm{Q}_{2}J_{\pm}^{(n-1)}\right)^{(d)}.\end{array}\right. \tag{2.7}\]
Furthermore, we have
\[J_{+}^{(1)}=\int_{x}^{\infty}\frac{p}{3}dy\left(\begin{array}{ccc}\alpha^{2 }&0&0\\ 0&\alpha&0\\ 0&0&1\end{array}\right),\]
\[J_{+}^{(2)}=\int_{x}^{\infty}\frac{q+p(J_{1})_{33}}{3}dy\left(\begin{array}[] {ccc}\alpha&0&0\\ 0&\alpha^{2}&0\\ 0&0&1\end{array}\right)+\frac{p}{3(1-\alpha)}\left(\begin{array}{ccc}0&1&-1 \\ -\alpha&0&\alpha\\ \alpha^{2}&-\alpha^{2}&0\end{array}\right). \tag{2.8}\]
**Proposition 2.2**.: _Suppose \(\{q_{0},p_{0}\}\in\mathcal{S}(\mathbb{R})\), there exist bounded smooth functions \(f_{\pm}\), which rapidly decay as \(x\to\infty\) and \(x\to-\infty\), respectively. Let \(m\geq 0\) be an integer and for each integer \(n\geq 0\), then_
\[\left|\frac{\partial^{n}}{\partial_{k^{n}}}\left[J_{\pm}-\left(I+\frac{J_{\pm }^{(1)}}{k}+\cdots+\frac{J_{\pm}^{(m)}}{k^{m}}\right)\right]\right|\leq\frac{ f_{\pm}(x)}{k^{m+1}},\]
_where \(k\) is in the domain of \(J_{\pm}\), respectively and large enough._
### The behavior of Jost functions for \(k\to 0\)
Since the kernel matrix function \(Q(x;k)\) has a double pole at \(k=0\), we need to describe the asymptotics of \(J_{\pm}\) as \(k\to 0\).
**Proposition 2.3**.: _Suppose \(\{q_{0},p_{0}\}\in\mathcal{S}(\mathbb{R})\), there exist bounded smooth functions \(g_{\pm}\), which decay rapidly as \(x\to\infty\) and \(x\to-\infty\), respectively. Let \(m\geq 0\) be an integer; then for each integer \(n\geq 0\), the Jost function \(J_{\pm}\) has the following expansion:_
\[\left|\frac{\partial^{n}}{\partial k^{n}}\left[J_{\pm}(x,k)-\left(\frac{\mathcal{J}_{\pm}^{(-2)}}{k^{2}}+\frac{\mathcal{J}_{\pm}^{(-1)}}{k}+I+\mathcal{J}_{\pm}^{(1)}k+\cdots+\mathcal{J}_{\pm}^{(m)}k^{m}\right)\right]\right|\leq g_{\pm}(x)|k|^{m+1}\]
_where \(k\) is small enough. Furthermore, the leading term \(\mathcal{J}_{\pm}^{(-2)}\) has the form:_
\[\mathcal{J}_{\pm}^{(-2)}(x)=a_{\pm}(x)\left(\begin{array}{ccc}\alpha&\alpha^ {2}&1\\ \alpha&\alpha^{2}&1\\ \alpha&\alpha^{2}&1\end{array}\right)\]
_where \(a_{\pm}(x)\) is a real valued function and is dominated by \(g_{\pm}(x)\) with rapid decay as \(x\to\infty\) and \(x\to-\infty\), respectively._
### The scattering matrix
Define the scattering matrix as
\[\Delta(k)=I-\int_{\mathbb{R}}e^{-x\widehat{k\Lambda}}(QJ)(x,k)dx. \tag{2.9}\]
When the initial potential functions \(p_{0}\) and \(q_{0}\) have compact support, the scattering matrix \(\Delta(k)\) satisfies
\[J_{+}(x,k)=J_{-}(x,k)e^{x\widehat{k\Lambda}}\Delta(k),\quad k\in\mathbb{C}\setminus\{0\}.\]
**Proposition 2.4**.: _Suppose \(\{q_{0},p_{0}\}\in\mathcal{S}(\mathbb{R})\), then the scattering function \(\Delta(k)\) defined in (2.9) has the following properties:_
_(a) The domain of \(\Delta(k)\):_
\[\Delta(k)\in\left(\begin{array}{ccc}\omega^{2}\overline{S}&\mathbb{R}_{+}& \omega\mathbb{R}_{+}\\ \mathbb{R}_{+}&\omega\overline{S}&\omega^{2}\mathbb{R}_{+}\\ \omega\mathbb{R}_{+}&\omega^{2}\mathbb{R}_{+}&\overline{S}\end{array}\right) \setminus\{0\}.\]
_Here \(\overline{S}\) denotes the closure of \(S\); \(\Delta(k)\) is continuous up to the boundary of its domain and analytic in the interior of its domain._
_(b) The matrix-valued function \(\Delta(k)\) has the following Laurent expansions as \(k\to\infty\) and \(k\to 0\), respectively._
\[\Delta(k)=I-\sum_{j=1}^{N}\frac{\Delta_{j}}{k^{j}}+O(\frac{1}{k^{N+1}}),\quad k \to\infty,\]
_and_
\[\Delta(k)=\frac{\Delta^{(-2)}}{k^{2}}+\frac{\Delta^{(-1)}}{k}+\Delta^{(0)}+ \Delta^{(1)}k+\cdots,\quad k\to 0.\]
_(c) The matrix-valued function \(\Delta(k)\) satisfies the symmetries:_
\[\Delta(k)=\mathcal{A}\Delta(\omega k)\mathcal{A}^{-1}=\mathcal{B}\Delta^{*}(k ^{*})\mathcal{B}.\]
### The cofactor Jost functions
Define \(M^{A}=(M^{-1})^{T}\), then the adjoint equation associated with the equation \(J_{x}-[k\Lambda,J]=QJ\) is
\[\left(J^{A}\right)_{x}+\left[k\Lambda,J^{A}\right]=-Q^{T}J^{A}. \tag{2.10}\]
In the same procedure, one can also get the cofactor Jost functions \(J^{A}_{\pm}\) and cofactor scattering matrix \(\Delta^{A}(k)\). Furthermore, the properties of \(J^{A}_{\pm}\) and \(\Delta^{A}(k)\) are similar.
### The eigenfunctions \(M_{n}\)
Define the eigenfunctions for the equation (2.5) for each \(k\in\Omega_{n}\setminus\{0\}\) by the following Fredholm integral equation
\[\left(M_{n}\right)_{ij}(x,k)=\delta_{ij}+\int_{\gamma_{ij}^{n}}\left(e^{(x-y)\widehat{k\Lambda}}\left(QM_{n}\right)(y,k)\right)_{ij}dy,\quad i,j=1,2,3, \tag{2.11}\]
where \(\gamma_{ij}^{n}=(x,\infty)\) or \((-\infty,x)\), the choice being determined by the sign of the exponential factor. Notice that the Fredholm determinants have zeros in the complex plane, denoted by \(\mathcal{Z}\); however, under proper assumptions, the Fredholm solutions in (2.11) can be extended to \(\mathcal{Z}\).
**Proposition 2.5**.: _Suppose \(\{q_{0},p_{0}\}\in\mathcal{S}(\mathbb{R})\), then the equation (2.11) uniquely defines six \(3\times 3\) matrix-valued solutions \(\{M_{n}\}_{1}^{6}\) of (2.5) with the following properties:_
_(a) The eigenfunctions \(M_{n}(x,k)\) are defined for \(x\in\mathbb{R}\) and \(k\in\bar{\Omega}_{n}\backslash(\mathcal{Z}\cup\{0\})\). Moreover, \(M_{n}(x,k)\) is bounded except for \(k\in\mathcal{Z}\cup\{0\}\) and smooth about \(x\) and continuous to \(k\in\bar{\Omega}_{n}\backslash(\mathcal{Z}\cup\{0\})\) but analytic in the interior of its domain._
_(b) The eigenfunctions \(M_{n}(x,k)\) satisfy the symmetries_
\[M_{n}(x,k)=\mathcal{A}M_{n}(x,\alpha k)\mathcal{A}^{-1}=\mathcal{B}M_{n}^{*}(x,k^{*})\mathcal{B},\]
_where \(k\in\bar{\Omega}_{n}\backslash(\mathcal{Z}\cup\{0\})\)._
_(c) The determinant of the eigenfunctions \(M_{n}(x,k)\) is identically equal to one for each \(k\in\bar{\Omega}_{n}\backslash(\mathcal{Z}\cup\{0\})\)._
### The properties of \(M_{n}\) as \(k\to\infty\)
**Proposition 2.6**.: _Suppose \(q_{0},p_{0}\in\mathcal{S}(\mathbb{R})\) and \(q_{0},p_{0}\) are not identically zero. Given an integer \(m\geq 1\) and \(k\) large enough in its domain, \(M_{n}\) can be approximated by the expansion of \(J_{+}\) as_
\[\left|M_{n}-\left(I+\frac{J_{+}^{(1)}}{k}+\cdots+\frac{J_{+}^{(m)}}{k^{m}} \right)\right|\leq\frac{C}{k^{m+1}}. \tag{2.12}\]
Now, assuming \(q_{0},p_{0}\in\mathcal{S}(\mathbb{R})\) have compact support, one can get the relation between \(M_{n}\) and \(J_{\pm}\) by
\[M_{n}(x,k) =J_{-}(x,k)e^{x\widehat{\mathcal{L}(k)}}S_{n}(k)\] \[=J_{+}(x,k)e^{x\widehat{\mathcal{L}(k)}}T_{n}(k),\quad x\in \mathbb{R},k\in\bar{\Omega}_{n}\backslash\mathcal{Z},\quad n=1,2,\ldots,6.\]
Combining the relationship between \(J_{+}\) and \(J_{-}\), the \(S_{n}\) and \(T_{n}\) can be linked by
\[\Delta(k)=S_{n}(k)T_{n}^{-1}(k),\quad k\in\bar{\Omega}_{n}\backslash( \mathcal{Z}\cup\{0\}).\]
Since the Schwartz functions with compact support are dense in \(\mathcal{S}(\mathbb{R})\) with respect to the appropriate norms, the above relations among \(M_{n}\), \(J_{\pm}\) and \(\Delta(k)\) extend to general Schwartz initial potential functions by a limiting argument.
### The jump matrix \(v_{n}(x,k)\)
**Lemma 2.7**.: _Suppose \(q_{0},p_{0}\in\mathcal{S}(\mathbb{R})\), then the matrix-valued functions \(M_{n}(x,k)\) satisfies the boundary condition_
\[M_{+}(x,k)=M_{-}(x,k)v(x,k),\quad k\in\Sigma\backslash(\mathcal{Z}\cup\{0\})\]
_where \(v(x,k)\) is the jump matrix to be determined._
_In particular, when \(q_{0},p_{0}\in\mathcal{S}(\mathbb{R})\) have compact support, there exists a matrix \(\nu_{1}(k)\) such that_
\[M_{1}(x,k)=M_{6}(x,k)e^{x\widehat{\mathcal{L}(k)}}\nu_{1}(k)\]
_and \(M_{n}(x,k)=e^{x\widehat{\mathcal{L}(k)}}S_{n}(k)\) when \(x\) is out of the compact support of \(q_{0},p_{0}\). Hence, we have_
\[\nu_{1}(k)=S_{6}(k)^{-1}S_{1}(k).\]
In conclusion, this determines the jump matrices \(v_{n}\).
**Lemma 2.8**.: _Let \(q_{0},p_{0}\in\mathcal{S}(\mathbb{R})\), the eigenfunctions \(M_{1}\) can be expressed in terms of the entries of \(J_{\pm},J_{\pm}^{A},\Delta\), and \(\Delta^{A}\) as follows:_
\[M_{1}=\left(\begin{array}{ll}J_{11}^{+}&\frac{(J_{31}^{-1})^{A}(J_{32}^{+})^ {A}-(J_{31}^{-1})^{A}(J_{33}^{+})^{A}}{\delta\delta}\\ J_{21}^{+}&\frac{(J_{11}^{-1})^{A}(J_{33}^{+})^{A}-(J_{31}^{-1})^{A}(J_{33}^{+ })^{A}}{\delta\delta}\end{array}\right).\]
_Furthermore, for \(|k|\) small enough, the following estimate holds_
\[\left|M_{n}(x,k)-\sum_{l=-2}^{p}M_{n}^{(l)}(x)k^{l}\right|\leq C|k|^{p+1},\quad k\in\bar{\Omega}_{n}.\]
### Construction of the Riemann-Hilbert problem
Define the reflection coefficients \(r_{1}(k)\) and \(r_{2}(k)\) as
\[\begin{cases}r_{1}(k)=\frac{\Delta_{12}(k)}{\Delta_{11}(k)},&k\in(0,\infty),\\ r_{2}(k)=\frac{\Delta_{12}^{A}(k)}{\Delta_{11}^{A}(k)},&k\in(-\infty,0).\end{cases}\]
**Proposition 2.9**.: _Suppose \(q_{0},p_{0}\in\mathcal{S}(\mathbb{R})\), then \(r_{1}:(0,\infty)\to\mathbb{C}\) and \(r_{2}:(-\infty,0)\to\mathbb{C}\) have the following properties: \(r_{1}(k)\) and \(r_{2}(k)\) are smooth functions which decay rapidly as \(|k|\to\infty\) on their domains and can be extended to \(0\) as follows_
\[r_{1}(k)=r_{1}(0)+r_{1}^{\prime}(0)k+\frac{1}{2}r_{1}^{\prime\prime}(0)k^{2}+ \cdots,\quad k\to 0,\quad k>0,\]
_and_
\[r_{2}(k)=r_{2}(0)+r_{2}^{\prime}(0)k+\frac{1}{2}r_{2}^{\prime\prime}(0)k^{2}+ \cdots,\quad k\to 0,\quad k<0,\]
_where \(r_{1}(0)=\omega,r_{2}(0)=1\)._
**Remark 2.10**.: _The SK equation has the term \(p\) but not \(q\) in the spatial Lax pair, which means that \(Q_{(2)}\) vanishes. In this case the behavior of \(J_{\pm},\Delta,J_{\pm}^{A},\Delta^{A}\) as \(k\to 0\) involves only a simple pole; moreover, the values of the reflection coefficients at the origin are also different, with \(r_{1}(0)=\omega^{2},r_{2}(0)=1\)._
Define the jump matrix \(v_{n}(x,t;k)\) for \(k\in\Sigma\) as
\[v_{1}=\left(\begin{array}{ccc}1&-r_{1}(k)e^{-\theta_{21}}&0\\ r_{1}^{*}(k)e^{\theta_{21}}&1-\left|r_{1}(k)\right|^{2}&0\\ 0&0&1\end{array}\right), \tag{2.13}\] \[v_{2}=\left(\begin{array}{ccc}1&0&0\\ 0&1-r_{2}(\omega k)r_{2}^{*}(\omega k)&-r_{2}^{*}(\omega k)e^{-\theta_{32}}\\ 0&r_{2}(\omega k)e^{\theta_{32}}&1\end{array}\right),\] \[v_{3}=\left(\begin{array}{ccc}1-r_{1}\left(\omega^{2}k\right) r_{1}^{*}\left(\omega^{2}k\right)&0&r_{1}^{*}\left(\omega^{2}k\right)e^{-\theta_{31}} \\ 0&1&0\\ -r_{1}\left(\omega^{2}k\right)e^{\theta_{31}}&0&1\end{array}\right),\] \[v_{4}=\left(\begin{array}{ccc}1-\left|r_{2}(k)\right|^{2}&-r_ {2}^{*}(k)e^{-\theta_{21}}&0\\ r_{2}(k)e^{\theta_{21}}&1&0\\ 0&0&1\end{array}\right),\] \[v_{5}=\left(\begin{array}{ccc}1&0&0\\ 0&1&-r_{1}(\omega k)e^{-\theta_{32}}\\ 0&r_{1}^{*}(\omega k)e^{\theta_{32}}&1-r_{1}(\omega k)r_{1}^{*}(\omega k) \end{array}\right),\] \[v_{6}=\left(\begin{array}{ccc}1&0&r_{2}\left(\omega^{2}k \right)e^{-\theta_{31}}\\ 0&1&\\ -r_{2}^{*}\left(\omega^{2}k\right)e^{\theta_{31}}&0&1-r_{2}\left(\omega^{2}k \right)r_{2}^{*}\left(\omega^{2}k\right),\end{array}\right)\]
where the terms \(\theta_{ij}=\theta_{ij}(x,t,k)(1\leq i\neq j\leq 3)\) are defined by \(\theta_{ij}(x,t,k)=\left(l_{i}-l_{j}\right)x+\left(z_{i}-z_{j}\right)t\) with \(l_{1}(k)=\omega k,l_{2}=\omega^{2}k,l_{3}=k\) and \(z_{1}(k)=9\omega^{2}k^{5},z_{2}(k)=9\omega k^{5},z_{3}(k)=9k^{5}\).
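For numerical experiments it is convenient to assemble these jump matrices directly from a given reflection coefficient; the NumPy sketch below builds \(v_{1}(x,t,k)\) from (2.13), with a decaying toy profile standing in for the true \(r_{1}\) (which is determined by the scattering data of the initial datum).

```python
import numpy as np

alpha = np.exp(2j * np.pi / 3)

def theta21(x, t, k):
    return (alpha**2 - alpha) * k * x + (alpha - alpha**2) * 9 * k**5 * t

def v1(x, t, k, r1):
    """Jump matrix v_1 of (2.13) for a given reflection coefficient r_1 (k real, k > 0)."""
    r, e = r1(k), np.exp(theta21(x, t, k))
    return np.array([
        [1.0, -r / e, 0.0],
        [np.conj(r) * e, 1.0 - abs(r) ** 2, 0.0],
        [0.0, 0.0, 1.0],
    ], dtype=complex)

r1_toy = lambda k: 0.3 * np.exp(-k**2)       # stand-in reflection coefficient
print(v1(x=1.0, t=2.0, k=0.7, r1=r1_toy))
```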
### Riemann-Hilbert problem
Given \(r_{1}(k)\) and \(r_{2}(k)\), find a \(3\times 3\) matrix-valued function \(M_{n}(x,t,k)\) with the following properties:
(a) \(M_{n}(x,t,k):\mathbb{C}\backslash\Sigma\to\mathbb{C}^{3\times 3}\) is analytic for \(k\in\mathbb{C}\backslash\Sigma\).
(b) The limits of \(M_{n}(x,t,k)\) as \(k\) approaches \(\Sigma\) from the left (+) and right (-) exist, are continuous on \(\Sigma\), and are related by
\[M_{+}(x,t,k)=M_{-}(x,t,k)v(x,t,k),\quad k\in\Sigma,\]
where \(v\) is defined in terms of \(r_{1}(k)\) and \(r_{2}(k)\) by (2.13).
(c) \(M_{n}(x,t,k)=I+O\left(k^{-1}\right)\) as \(k\to\infty,\ k\notin\Sigma\).
(d) \(M_{n}(x,t,k)=\sum_{l=-2}^{p}M_{n}^{(l)}(x)k^{l}+O(k^{p+1})\) as \(k\to 0\).
The reconstruction formula for the potential function \(p(x,t)\) is
\[p(x,t)=-3\frac{\partial}{\partial x}\lim_{k\to\infty}k(M(x,t,k)_{33}-1).\]
Alternatively, the reconstruction formula for the solutions of the SK equation and KK equation can be expressed by
\[u(x,t)=-\frac{1}{2}\frac{\partial}{\partial x}\lim_{k\to\infty}k(M(x,t,k)_{33}-1). \tag{2.14}\]
## 3. **Long-time asymptotics**
This section investigates the long-time asymptotics of the SK and KK equations (1.1)-(1.2) by the Deift-Zhou steepest-descent method [33]. First, we compute the critical points in the standard way:
\[\begin{split}\theta_{21}&=(\alpha^{2}-\alpha)kx+( \alpha-\alpha^{2})9k^{5}t\\ &=t[(\alpha^{2}-\alpha)k\xi+(\alpha-\alpha^{2})9k^{5}]\\ &:=t\Phi_{21}(k),\\ \partial_{k}\Phi_{21}&=(\alpha^{2}-\alpha)\xi+(\alpha- \alpha^{2})45k^{4}=0,\\ k^{4}&=\frac{\xi}{45}.\end{split} \tag{3.1}\]
The same procedure shows that
\[\begin{split}\theta_{31}&=(1-\alpha)kx+(1-\alpha^{2})9k^{5}t\\ &=t[(1-\alpha)k\xi+(1-\alpha^{2})9k^{5}]\\ &:=t\Phi_{31}(k),\\ \partial_{k}\Phi_{31}&=(1-\alpha)\xi+(1-\alpha^{2})45k^{4}=0,\\ k^{4}&=\frac{x}{45t}\alpha,\end{split} \tag{3.2}\]
and
\[\begin{split}\theta_{32}&=(1-\alpha^{2})kx+(1-\alpha)9k^{5}t\\ &=t[(1-\alpha^{2})k\xi+(1-\alpha)9k^{5}]\\ &:=t\Phi_{32}(k),\\ \partial_{k}\Phi_{32}&=(1-\alpha^{2})\xi+(1-\alpha)45k^{4}=0,\\ k^{4}&=\frac{x}{45t}\alpha^{2}.\end{split}\]
Denote \(k_{0}=\sqrt[4]{\frac{|x|}{45t}}\) and in particular, for \(x>0\) let
\[k_{0}=\sqrt[4]{\frac{x}{45t}},\]
where \(\sqrt[4]{*}\) denotes the real fourth root of a non-negative real number and \((*)^{\frac{1}{4}}\) denotes the complex fourth root. Thus we have
for the \(\Phi_{21}\), \(k=(\frac{\xi}{45})^{\frac{1}{4}}=\sqrt[4]{\frac{|\xi|}{45}}e^{\frac{n\pi i}{2}}\), where \(n=0,1,2,3\);
for the \(\Phi_{31}\), \(k=(\frac{|\xi|}{45}\omega)^{\frac{1}{4}}=\sqrt[4]{\frac{|\xi|}{45}}e^{\frac{\pi i}{6}+\frac{n\pi i}{2}}\), where \(n=0,1,2,3\);
for the \(\Phi_{32}\), \(k=(\frac{|\xi|}{45}\omega^{2})^{\frac{1}{4}}=\sqrt[4]{\frac{|\xi|}{45}}e^{\frac{\pi i}{3}+\frac{n\pi i}{2}}\), where \(n=0,1,2,3\). These roots can be checked numerically, as sketched below.
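The check below evaluates \(\partial_{k}\Phi_{21}\) at the four roots for an arbitrary sample value of \(\xi\) (a NumPy sketch for illustration only).

```python
import numpy as np

xi = 2.7                                    # sample value of xi = x / t > 0
alpha = np.exp(2j * np.pi / 3)

def dPhi21(k):                               # derivative of Phi_21 with respect to k
    return (alpha**2 - alpha) * xi + 45 * (alpha - alpha**2) * k**4

k0 = (xi / 45) ** 0.25
roots = [k0 * np.exp(1j * np.pi * n / 2) for n in range(4)]   # the four roots of k^4 = xi/45
print([abs(dPhi21(k)) for k in roots])       # every entry should be numerically zero
```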
The critical points on \(\Sigma\) are illustrated in Figure 1. In fact, the jump matrices \(v(x,t,k)\) on each cut have the following symmetries:
\[v(x,t,k)=\mathcal{A}v(x,t,\omega k)\mathcal{A}^{-1}=\mathcal{B}\overline{v(x,t,\overline{k})}^{-1}\mathcal{B},\ \ k\in\Sigma.\]
Thus the following transformations focus only on the points \(\pm k_{0}\); the other critical points can be obtained by the above symmetry.
### First transformation
The jump matrices on \(\mathbb{R}^{+}\) and \(\mathbb{R}^{-}\) with direction away from the origin are
\[v_{1}=\left(\begin{array}{ccc}1&-r_{1}(k)e^{-\theta_{21}}&0\\ r_{1}^{*}(k)e^{\theta_{21}}&1-\left|r_{1}(k)\right|^{2}&0\\ 0&0&1\end{array}\right),\]
and
\[v_{4}=\left(\begin{array}{ccc}1-\left|r_{2}(k)\right|^{2}&-r_{2}^{*}(k)e^{- \theta_{21}}&0\\ r_{2}(k)e^{\theta_{21}}&1&0\\ 0&0&1\end{array}\right).\]
The sign diagram of \(\operatorname{Re}(\theta_{21})\) is shown in Fig. 2, where the shaded region denotes \(\operatorname{Re}(\theta_{21})<0\), while the white region means \(\operatorname{Re}(\theta_{21})>0\). So we decompose the jump matrices \(v_{1}\) and \(v_{4}\) on \(0<\left|k\right|\leq k_{0}\), respectively
\[v_{1}=\left(\begin{array}{ccc}1&-r_{1}(k)e^{-\theta_{21}}&0\\ r_{1}^{*}(k)e^{\theta_{21}}&1-\left|r_{1}(k)\right|^{2}&0\\ 0&0&1\end{array}\right)=\left(\begin{array}{ccc}1&0&0\\ r_{1}^{*}(k)e^{\theta_{21}}&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&-r_{1}(k)e^{-\theta_{21}}& 0\\ 0&1&0\\ 0&0&1\end{array}\right),\]
and
\[v_{4}=\left(\begin{array}{ccc}1-\left|r_{2}(k)\right|^{2}&-r_{2}^{*}(k)e^{- \theta_{21}}&0\\ r_{2}(k)e^{\theta_{21}}&1&0\\ 0&0&1\end{array}\right)=\left(\begin{array}{ccc}1&-r_{2}^{*}(k)e^{-\theta_ {21}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ r_{2}(k)e^{\theta_{21}}&1&0\\ 0&0&1\end{array}\right).\]
However, for \(|k|\geq k_{0}\), a matrix-valued function \(\Theta\) should be introduced to open the lenses naturally. Since the reflection coefficients \(r_{1}(k)\) and \(r_{2}(k)\) decay rapidly, there exists \(\mathcal{C}>0\) such that
\[|r_{j}(k)|<1,\quad j=1,2,\quad k\in[\mathcal{C},\infty).\]
Now, let \(\delta_{1}(k)\) and \(\delta_{4}(k)\) be the solutions of the scalar RH problems
\[\begin{cases}\delta_{1+}(k)=\delta_{1-}(k)\left(1-|r_{1}(k)|^{2}\right),&k\in\Sigma_{1}^{(1)},\\ \delta_{1+}(k)=\delta_{1-}(k),&k\in\mathbb{C}\setminus\Sigma_{1}^{(1)},\\ \delta_{1}(k)\to 1&\text{as }k\to\infty,\end{cases}\]
and
\[\begin{cases}\delta_{4+}(k)=\delta_{4-}(k)\left(1-|r_{2}(k)|^{2}\right),&k\in\Sigma_{4}^{(1)},\\ \delta_{4+}(k)=\delta_{4-}(k),&k\in\mathbb{C}\setminus\Sigma_{4}^{(1)},\\ \delta_{4}(k)\to 1&\text{as }k\to\infty.\end{cases}\]
Hence, using the Plemelj formula, it follows that
\[\delta_{1}(k)=\exp\left\{\frac{1}{2\pi i}\int_{[k_{0},\infty)}\frac{\ln \left(1-|r_{1}(s)|^{2}\right)}{s-k}ds\right\},\quad k\in\mathbb{C}\backslash \Sigma_{1}^{(1)},\]
and
\[\delta_{4}(k)=\exp\left\{\frac{1}{2\pi i}\int_{[-k_{0},-\infty)}\frac{\ln \left(1-|r_{2}(s)|^{2}\right)}{s-k}ds\right\},\quad k\in\mathbb{C}\backslash \Sigma_{4}^{(1)}.\]
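For concreteness, \(\delta_{1}(k)\) can be evaluated numerically by truncating the integral above; the sketch below uses a Gaussian-decaying stand-in for \(|r_{1}(s)|\), since the true reflection coefficient is not available in closed form for general initial data.

```python
import numpy as np

def delta1(k, k0, r1_abs, s_max=60.0, n=20000):
    """Numerical evaluation of delta_1(k) for k off the contour [k0, infinity)."""
    s = np.linspace(k0, s_max, n)
    f = np.log(1.0 - r1_abs(s) ** 2) / (s - k)              # integrand of the Plemelj formula
    integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(s))    # trapezoidal rule
    return np.exp(integral / (2j * np.pi))

k0 = 1.3
r1_abs = lambda s: 0.6 * np.exp(-0.5 * s**2)                # stand-in for |r_1(s)|
print(delta1(0.5 + 0.8j, k0, r1_abs))                        # a point off the contour
```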
Let \(\log_{\theta}(k)\) denote the logarithm of \(k\) with branch cut along \(\arg k=\theta\), i.e.,
\(\log_{0}(k)=\ln|k|+i\arg_{0}(k),\ \arg_{0}(k)\in(0,2\pi)\); \(\log_{\pi}(k)=\ln|k|+i\arg_{\pi}(k),\ \arg_{\pi}(k)\in(-\pi,\pi)\).
**Proposition 3.1**.: _The basic properties of \(\delta\) functions are given below:_
1. _If we choose the branch cut of \(\ln\) with \(\arg k\in(0,2\pi)\), then \(\delta_{1}(k)\) can be rewritten as_ \[\delta_{1}(k)=e^{-i\nu_{1}\log_{0}(k-k_{0})}e^{-\chi_{1}(k)}\]
_where_
\[\nu_{1}=-\frac{1}{2\pi}\ln\left(1-|r_{1}\left(k_{0}\right)|^{2}\right),\]
_and_
\[\chi_{1}(k)=\frac{1}{2\pi i}\int_{k_{0}}^{\infty}\log_{0}(k-s)d\ln\left(1-|r _{1}(s)|^{2}\right).\]
_Moreover, we have_
\[\delta_{4}(k)=e^{-i\nu_{4}\log_{\pi}(k+k_{0})}e^{-\chi_{4}(k)}\]
_with_
\[\nu_{4}=-\frac{1}{2\pi}\ln\left(1-|r_{2}\left(-k_{0}\right)|^{2}\right),\]
_and_
\[\chi_{4}(k)=\frac{1}{2\pi i}\int_{-k_{0}}^{-\infty}\log_{\pi}(k-s)d\ln\left(1 -|r_{2}(s)|^{2}\right).\]
2. _The functions \(\delta_{1}(k)\) and \(\delta_{4}(k)\) satisfy the conjugate symmetries and are bounded for \(|k|>k_{0}\), namely_ \[\delta_{1}(k)=(\overline{\delta_{1}(\overline{k})})^{-1},\quad k\in\mathbb{C}\backslash\Sigma_{1}^{(1)},\qquad\delta_{4}(k)=(\overline{\delta_{4}(\overline{k})})^{-1},\quad k\in\mathbb{C}\backslash\Sigma_{4}^{(1)},\]
_and_ \[|\delta_{1\pm}(k)|<\infty,\quad\text{for }k>k_{0};\quad|\delta_{4\pm}(k)|< \infty,\quad\text{for }k<-k_{0}.\]
3. _As \(k\to k_{0}\) with \(k>k_{0}\) and \(k\to-k_{0}\) with \(k<-k_{0}\), respectively, we have_ \[|\chi_{1}(k)-\chi_{1}\left(k_{0}\right)|\leq C\left|k-k_{0}\right|\left(1+|\ln|k-k_{0}||\right),\] \[|\partial_{x}\left(\chi_{1}(k)-\chi_{1}\left(k_{0}\right)\right)|\leq\frac{C}{t}\left(1+|\ln|k-k_{0}||\right),\]
_and_
\[\left|\chi_{4}(k)-\chi_{4}\left(-k_{0}\right)\right|\leq C\left|k+k_{0}\right|\left(1+\left|\ln\left|k+k_{0}\right|\right|\right),\] \[\left|\partial_{x}\left(\chi_{4}(k)-\chi_{4}\left(-k_{0}\right)\right)\right|\leq\frac{C}{t}\left(1+\left|\ln\left|k+k_{0}\right|\right|\right).\]
_Moreover, we have_
\[\left|\partial_{x}\chi_{1}\left(k_{0}\right)\right|\leq\frac{C}{t},\quad \partial_{x}\left(\delta_{1}(k)^{\pm 1}\right)=\frac{\pm i\nu}{180tk_{0}^{3}\left(k-k_{0} \right)}\delta_{1}(k)^{\pm 1}.\]
Proof.: We focus on the properties of \(\delta_{1}\); the properties of \(\delta_{4}\) are similar.
(1). Recall that
\[\delta_{1}(k)=\exp\left\{\frac{1}{2\pi i}\int_{[k_{0},\infty)}\frac{\ln\left( 1-\left|r_{1}(s)\right|^{2}\right)}{s-k}ds\right\},\quad k\in\mathbb{C} \backslash\Sigma_{1}^{(1)}.\]
Using the technique of integrations by parts, rewrite the formula of \(\delta_{1}\) as follows
\[\delta_{1}(k) =\exp\left\{\frac{1}{2\pi i}\int_{[k_{0},\infty)}\frac{\ln\left( 1-\left|r_{1}(s)\right|^{2}\right)}{s-k}ds\right\}=\exp\left\{-\frac{1}{2\pi i }\int_{[k_{0},\infty)}\frac{\ln\left(1-\left|r_{1}(s)\right|^{2}\right)}{k-s} ds\right\}\] \[=\exp\left\{\frac{1}{2\pi i}\int_{[k_{0},\infty)}\ln\left(1- \left|r_{1}(s)\right|^{2}\right)d\,\left(\ln(k-s)\right)\right\}\] \[=\exp\left\{\frac{1}{2\pi i}\left(\ln\left(1-\left|r_{1}(s) \right|^{2}\right)\ln(k-s)|_{k_{0}}^{\infty}-\int_{[k_{0},\infty)}\ln(k-s)d\, \ln\left(1-\left|r_{1}(s)\right|^{2}\right)\right)\right\}\] \[=\exp\left\{\frac{1}{2\pi i}\left(-\ln\left(1-\left|r_{1}(k_{0}) \right|^{2}\right)\log_{0}(k-k_{0})-\int_{[k_{0},\infty)}\log_{0}(k-s)d\,\ln \left(1-\left|r_{1}(s)\right|^{2}\right)\right)\right\}\] \[=e^{-i\nu_{1}\log_{0}(k-k_{0})}e^{-\chi_{1}(k)},\]
where we have chosen the branch cut \(0<\arg(k-k_{0})<2\pi\), since the jump contour is \(k>k_{0}\).
(2). By the uniqueness of the RH problem for \(\delta_{1}(k)\), the function \(\overline{\delta_{1}(\bar{k})}^{-1}\) satisfies the same RH problem, so that \(\delta_{1}(k)=\overline{\delta_{1}(\bar{k})}^{-1}\). When \(k_{0}\) is large enough, the reflection coefficient is strictly smaller than \(1\), and inserting the symmetry of \(\delta_{1}(k)\) yields
\[\delta_{1+}(k) =\delta_{1-}(k)\left(1-\left|r_{1}(k)\right|^{2}\right),\] \[=\overline{\delta_{1-}(\bar{k})}^{-1}\left(1-\left|r_{1}(k)\right| ^{2}\right),\] \[=\overline{\delta_{1+}(k)}^{-1}\left(1-\left|r_{1}(k)\right|^{2} \right),\] \[\left|\delta_{1+}(k)\right|^{2} =\left(1-\left|r_{1}(k)\right|^{2}\right),\]
and the same procedure leads to
\[\left|\delta_{1-}\right|^{2}\leq\left(1-\sup_{k_{0}\in[c_{0},M]}\left|r_{1}(k_ {0})\right|^{2}\right)^{-1}.\]
In conclusion, \(\left|\delta_{1\pm}\right|\) is bounded and hence, by the maximum principle \(\left|\delta_{1}(k)\right|\) is bounded.
Finally, using the fact of symmetry, it is concluded that \(\left|\delta_{1}^{-1}(k)\right|\) is bounded.
(3). The facts that \(r_{1}(k)\) rapidly decays and the integral of \(\log_{0}(k)\) is well defined near the zero indicate that for any \(\epsilon>0\) one can estimate \(\chi_{1}(k_{0})\) as
\[\left|\chi_{1}(k)-\chi_{1}(k_{0})\right| =\left|\frac{1}{2\pi i}\int_{k_{0}}^{\infty}\log_{0}(k-s)-\log_{0} (k_{0}-s)d\ln\left(1-\left|r_{1}(s)\right|^{2}\right)\right|\] \[\leq C\int_{k_{0}}^{k_{0}+\epsilon}\left|\log_{0}(k-s)-\log_{0}(k _{0}-s)\right|\left|d\ln\left(1-\left|r_{1}(s)\right|^{2}\right)\right|\] \[+\int_{k_{0}+\epsilon}^{\infty}\left|\log_{0}(k-s)-\log_{0}(k_{0 }-s)\right|\left|d\ln\left(1-\left|r_{1}(s)\right|^{2}\right)\right|,\]
where for the first term, we have
\[\int_{k_{0}}^{k_{0}+\epsilon}\left|\log_{0}(k-s)-\log_{0}(k_{0}- s)\right|\left|d\ln\left(1-\left|r_{1}(s)\right|^{2}\right)\right|\] \[=\int_{k_{0}}^{k_{0}+\epsilon}\left|\log_{0}(k-s)-\log_{0}(k_{0} -s)\right|\frac{\partial_{s}\left|r_{1}(s)\right|}{1-\left|r_{1}(s)\right|^{2 }}ds\] \[\leq\|\frac{\partial_{s}\left|r_{1}(s)\right|}{1-\left|r_{1}(s) \right|^{2}}\|_{L^{\infty}(k_{0},k_{0}+\epsilon)}\int_{k_{0}}^{k_{0}+\epsilon} \left|\log_{0}(k-s)-\log_{0}(k_{0}-s)\right|ds\] \[\leq C|\left((s-k_{0})\log_{0}(s-k_{0})+(k_{0}-s)\right|_{k_{0}}^ {k_{0}+\epsilon}\right)-\left((s-k)\log_{0}(s-k)+(k-s)|_{k_{0}}^{k_{0}+ \epsilon}\right)|\] \[=C|\epsilon log_{0}(\epsilon)-\epsilon-(k_{0}-k+\epsilon)\log_{0} (k_{0}-k+\epsilon)+(k_{0}-k+\epsilon)+(k_{0}-k)\log_{0}(k_{0}-k)-(k_{0}-k)\] \[\leq|k_{0}-k|(1+\left|ln(k-k_{0})\right|).\]
and for the second term
\[\int_{k_{0}+\epsilon}^{\infty}\left|\log_{0}(k-s)-\log_{0}(k_{0} -s)\right|\left|d\ln\left(1-\left|r_{1}(s)\right|^{2}\right)\] \[\leq|k-k_{0}|\int_{k_{0}+\epsilon}^{\infty}|\frac{1}{\eta-s}|d\ln \left(1-\left|r_{1}(s)\right|^{2}\right)\leq|k-k_{0}|(1+\ln(k_{0}-k)).\]
Furthermore, for the critical point \(k_{0}=\sqrt[4]{\frac{x}{45t}}\), by the estimates above and taking the derivative with respect to \(x\), one can get the estimate for \(\partial_{x}\chi_{1}(k)\).
Using the symmetry properties, and considering \(\delta_{1}\) and \(\delta_{4}\), define \(\delta_{j}\)\((j=2,3,5,6)\) as follows
\[\delta_{3}(k) =\delta_{1}\left(\omega^{2}k\right), k\in\mathbb{C}\backslash\Sigma_{3}^{(1)},\] \[\delta_{5}(k) =\delta_{1}(\omega k), k\in\mathbb{C}\backslash\Sigma_{5}^{(1)},\] \[\delta_{2}(k) =\delta_{4}\left(\omega^{2}k\right), k\in\mathbb{C}\backslash\Sigma_{2}^{(1)},\] \[\delta_{6}(k) =\delta_{4}(\omega k), k\in\mathbb{C}\backslash\Sigma_{6}^{(1)},\]
which satisfy the jump conditions
\[\delta_{3+}(k) =\delta_{3-}(k)\left(1-\left|r_{1}\left(\omega^{2}k\right)\right| ^{2}\right), k\in\Sigma_{3}^{(1)},\] \[\delta_{5+}(k) =\delta_{5-}(k)\left(1-\left|r_{1}(\omega k)\right|^{2}\right), k\in\Sigma_{5}^{(1)},\] \[\delta_{2+}(k) =\delta_{2-}(k)\left(1-\left|r_{2}\left(\omega^{2}k\right)\right| ^{2}\right), k\in\Sigma_{2}^{(1)},\] \[\delta_{6+}(k) =\delta_{6-}(k)\left(1-\left|r_{2}(\omega k)\right|^{2}\right), k\in\Sigma_{6}^{(1)}.\]
**Remark 3.2**.: _The function \(\ln(k)\) in \(\delta_{n}(k)\) selects the branch cut \(\frac{(n-1)\pi}{3}<\arg(k)<2\pi+\frac{(n-1)\pi}{3}\) for \(n=1,2,3\) and the branch cut \(-\frac{(7-n)\pi}{3}<\arg(k)<2\pi-\frac{(7-n)\pi}{3}\) for \(n=4,5,6\)._
Thus we define the matrix-valued function \(\Theta(k)\) to factorize the RH problem. In particular,
\[M^{(1)}(x,t,k)=M(x,t,k)\Theta(k),\]
where \(\Theta(k)\) is
\[\Theta(k)=\left(\begin{array}{ccc}\frac{\delta_{1}(k)\delta_{6}(k)}{\delta_{3}(k)\delta_{4}(k)}&0&0\\ 0&\frac{\delta_{5}(k)\delta_{4}(k)}{\delta_{1}(k)\delta_{2}(k)}&0\\ 0&0&\frac{\delta_{3}(k)\delta_{2}(k)}{\delta_{5}(k)\delta_{6}(k)}\end{array}\right)=\left(\begin{array}{ccc}\Theta_{1}(k)&0&0\\ 0&\Theta_{2}(k)&0\\ 0&0&\Theta_{3}(k)\end{array}\right)\]
with \(\Theta(k)=I+O\left(k^{-1}\right)\) as\(k\rightarrow\infty\).
The jump matrix is \(v^{(1)}(x,t;k)=\Theta_{-}^{-1}v(x,t;k)\Theta_{+}\), and one can get for \(|k|>k_{0}\) as follows
\[v_{1}^{(1)} =\left(\begin{array}{ccc}\frac{\Theta_{1+}}{\Theta_{1-}}&- \frac{\Theta_{2+}}{\Theta_{1-}}r_{1}(k)e^{-t\Phi_{21}}&0\\ \frac{\Theta_{1+}}{\Theta_{2-}}r_{1}^{*}(k)e^{t\Phi_{21}}&\frac{\Theta_{2-}} {\Theta_{2+}}\left(1-r_{1}(k)r_{1}^{*}(k)\right)&0\\ 0&0&1\end{array}\right)=\left(\begin{array}{ccc}\frac{\delta_{1+}}{\delta_{1 -}}&-\frac{\delta_{v_{1}}}{\delta_{1-}}r_{1}(k)e^{-t\Phi_{21}}&0\\ \frac{\delta_{1+}\delta_{1-}}{\delta_{v_{1}}}r_{1}^{*}(k)e^{t\Phi_{21}}&\frac{ \delta_{1-}}{\delta_{1+}}\left(1-r_{1}(k)r_{1}^{*}(k)\right)&0\\ 0&0&1\end{array}\right)\] \[=\left(\begin{array}{ccc}1-|r_{1}(k)|^{2}&-\frac{\delta_{v_{1}}} {\delta_{1-}^{2}}r_{1}(k)|^{2}e^{-t\Phi_{21}}&0\\ \frac{\delta_{1+}^{2}}{\delta_{v_{1}}}\frac{r_{1}^{*}(k)}{1-|r_{1}(k)|^{2}}e^ {t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\]
and
\[v_{4}^{(1)}=\left(\begin{array}{ccc}1&-\frac{\delta_{4+}^{2}}{\delta_{v_{4 }}}\frac{r_{2}^{*}(k)}{1-|r_{2}(k)|^{2}}e^{-t\Phi_{21}}&0\\ \frac{\delta_{v_{4}}}{\delta_{4-}^{2}}\frac{r_{2}(k)}{1-|r_{2}(k)|^{2}}e^{t \Phi_{21}}&1-|r_{2}(k)|^{2}&0\\ 0&0&1\end{array}\right),\]
where \(\tilde{\delta}_{v_{1}}=\frac{\delta_{3}\delta_{4}^{2}\delta_{5}}{\delta_{6}\delta_{2}}\) and \(\tilde{\delta}_{v_{4}}=\frac{\delta_{1}^{2}\delta_{2}\delta_{6}}{\delta_{5}\delta_{3}}\).
Furthermore, the other jump matrices can be obtained by the symmetry or direct calculations:
\[v_{3}^{(1)} =\left(\begin{array}{ccc}1&0&\frac{\delta_{3+}^{2}}{\delta_{v_{ 3}}}\frac{r_{1}^{*}\left(\omega^{2}k\right)}{1-r_{1}(\omega^{2}k)r_{1}^{*}( \omega^{2}k)}e^{-\theta_{31}}\\ 0&1&0\\ -\frac{\delta_{v_{3}}}{\delta_{3-}^{2}}\frac{r_{1}\left(\omega^{2}k\right)}{1- r_{1}(\omega^{2}k)r_{1}^{*}(\omega^{2}k)}e^{\theta_{31}}&0&1-r_{1}\left(\omega^{2}k \right)r_{1}^{*}\left(\omega^{2}k\right)\\ \end{array}\right),\] \[v_{5}^{(1)} =\left(\begin{array}{ccc}1&0&0\\ 0&1-r_{1}(\omega k)r_{1}^{*}(\omega k)&-\frac{\tilde{\delta}_{v_{5}}}{\delta_ {2-}^{2}}\frac{r_{1}(\omega k)}{1-r_{1}(\omega k)r_{1}^{*}(\omega k)}e^{-\theta _{32}}\\ 0&\frac{\delta_{5+}^{2}}{\delta_{v_{5}}}\frac{r_{1}^{*}(\omega k)}{1-r_{1}( \omega k)r_{1}^{*}(\omega k)}e^{\theta_{32}}&1\end{array}\right),\]
and
\[v_{2}^{(1)} =\left(\begin{array}{ccc}1&0&0\\ 0&1&-\frac{\delta_{2+}^{2}}{\delta_{v_{2}}}\frac{r_{2}^{*}(\omega k)}{1-r_{2}( \omega k)r_{2}^{*}(\omega k)}e^{-\theta_{32}}\\ 0&\frac{\delta_{v_{2}}}{\delta_{2-}^{2}}\frac{r_{2}(\omega k)}{1-r_{2}(\omega k )r_{2}^{*}(\omega k)}e^{\theta_{32}}&1-r_{2}(\omega k)r_{2}^{*}(\omega k)\\ \end{array}\right),\] \[v_{6}^{(1)} =\left(\begin{array}{ccc}1-r_{2}\left(\omega^{2}k\right)r_{2}^{* }\left(\omega^{2}k\right)&0&\frac{\tilde{\delta}_{v_{6}}}{\delta_{6-}^{2}} \frac{r_{2}\left(\omega^{2}k\right)}{1-r_{2}(\omega^{2}k)r_{2}^{*}(\omega^{2}k)}e ^{-\theta_{31}}\\ 0&1&0\\ -\frac{\delta_{\tilde{\delta}+}^{2}}{\delta_{v_{6}}}\frac{r_{2}^{*}\left(\omega^{2 }k\right)}{1-r_{2}(\omega^{2}k)r_{2}^{*}(\omega^{2}k)}e^{\theta_{31}}&0&1 \end{array}\right).\]
On the other hand, since the \(\delta_{j}\) have no jump on \(\Sigma_{j}\) for \(j=7,8,\cdots,12\), the jump matrices \(v_{j}^{(1)}\) \((j=7,8,\cdots,12)\) are simply as follows
\[v_{7}^{(1)}=\left(\begin{array}{ccc}\frac{\Theta_{1+}}{\Theta_{1-}}&-\frac{ \Theta_{2+}}{\Theta_{1-}}r_{1}(k)e^{-t\Phi_{21}}&0\\ \frac{\Theta_{1+}}{\Theta_{2-}}r_{1}^{*}(k)e^{t\Phi_{21}}&\frac{\Theta_{2-}}{ \Theta_{2+}}\left(1-r_{1}(k)r_{1}^{*}(k)\right)&0\\ 0&0&1\end{array}\right)=\left(\begin{array}{ccc}1&-\frac{\tilde{\delta}_{v_{ 1}}}{\delta_{1}^{2}}r_{1}(k)e^{-t\Phi_{21}}&0\\ \frac{\delta_{1+}^{2}}{\delta_{v_{1}}}r_{1}^{*}(k)e^{t\Phi_{21}}&1-r_{1}(k)r_{1}^{* }(k)&0\\ 0&0&1\end{array}\right),\]
\[v_{9}^{(1)}=\left(\begin{array}{ccc}1-r_{1}\left(\omega^{2}k\right)r_{1}^{*} \left(\omega^{2}k\right)&0&\frac{\delta_{2}^{2}}{\delta_{v_{3}}}r_{1}^{*}\left( \omega^{2}k\right)e^{-\theta_{31}}\\ 0&1&0\\ -\frac{\delta_{v_{3}}}{\delta_{3}^{2}}r_{1}\left(\omega^{2}k\right)e^{\theta_{ 31}}&0&1\end{array}\right)v_{11}^{(1)}=\left(\begin{array}{ccc}1&0&0\\ 0&1&-\frac{\delta_{v_{3}}}{\delta_{3}^{2}}r_{1}(\omega k)e^{-\theta_{32}}\\ 0&\frac{\delta_{2}^{2}}{\delta_{v_{3}}}r_{1}^{*}(\omega k)e^{\theta_{32}}&1-r_ {1}(\omega k)r_{1}^{*}(\omega k)\end{array}\right).\]
Moreover, one has
\[v_{8}^{(1)}=\left(\begin{array}{ccc}1&0&0\\ 0&1-r_{2}(\omega k)r_{2}^{*}(\omega k)&-\frac{\delta_{2}^{2}}{\delta_{v_{2}}} r_{2}^{*}(\omega k)e^{-\theta_{32}}\\ 0&\frac{\delta_{v_{2}}}{\delta_{2}^{2}}r_{2}(\omega k)e^{\theta_{32}}&1\end{array} \right),v_{10}^{(1)}=\left(\begin{array}{ccc}1-\left|r_{2}(k)\right|^{2}&- \frac{\delta_{4}^{2}}{\delta_{v_{4}}}r_{2}^{*}(k)e^{-t\Phi_{21}}&0\\ \frac{\delta_{v_{4}}}{\delta_{4}^{2}}r_{2}(k)e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\]
Next, suppose
\[\rho_{1}(k)=\frac{r_{1}(k)}{1-r_{1}(k)r_{1}^{*}(k)},\quad\rho_{2}(k)=\frac{r_{ 2}(k)}{1-r_{2}(k)r_{2}^{*}(k)}.\]
It is necessary to decompose the functions \(r_{j},r_{j}^{*}\) and \(\rho_{j}(k)\) for analytic extension.
**Lemma 3.3**.: _The functions \(r_{j}\) and \(\rho_{j}\)\((j=1,2)\) have the following decompositions_
\[\begin{array}{ll}r_{1}(k)=r_{1,a}(x,t,k)+r_{1,r}(x,t,k),&k\in[0,k_{0}]\,,\\ r_{2}^{*}(k)=r_{2,a}^{*}(x,t,k)+r_{2,r}^{*}(x,t,k),&k\in[-k_{0},0],\\ \rho_{1}(k)=\rho_{1,a}(x,t,k)+\rho_{1,r}(x,t,k),&k\in[k_{0},\infty)\,,\\ \rho_{2}^{*}(k)=\rho_{2,a}^{*}(x,t,k)+\rho_{2,r}^{*}(x,t,k),&k\in(-\infty,k_{0} ]\,.\end{array}\]
_Fig. 3 shows the sign diagram of \(\mathrm{Re}(\Phi_{21})\), where \(U_{1}\cup U_{3}\cup U_{5}=\{k:\mathrm{Re}(\Phi_{21})<0\}\) and \(U_{2}\cup U_{4}\cup U_{6}=\{k:\mathrm{Re}(\Phi_{21})>0\}\)._
Furthermore, the decomposition functions have the following properties:
(1). For \(0<k_{0}<M\), \(r_{1,a}\) and \(r_{2,a}^{*}\) are defined and continuous for \(k\in\bar{U}_{2}\) and analytic for \(k\in U_{2}\), but the domains of \(r_{1,a}\) and \(r_{2,a}^{*}\) are restricted by \(0\leq\mathrm{Re}(k)\leq k_{0}\) and \(-k_{0}\leq\mathrm{Re}(k)\leq 0\), respectively. The function \(\rho_{1,a}^{*}\) is defined and continuous for \(k\in\bar{U}_{1}\) and analytic for \(k\in U_{1}\), and \(\rho_{2,a}\) is defined and continuous for \(k\in\bar{U}_{3}\) and analytic for \(k\in U_{3}\).
(2). For \(0<k_{0}<M\), the functions \(r_{1,a},r_{2,a}^{*}\) and \(\rho_{1,a}^{*},\rho_{2,a}\) satisfy the following estimates:
\[\left|\partial_{x}^{l}(r_{1,a}(x,t,k)-r_{1}(0))\right|\leq C|k|e^{-t\,\mathrm{Re}\Phi_{21}(k)/2},\quad k\in\bar{U}_{2}\text{ and }\mathrm{Re}(k)\in[0,k_{0}];\] \[\left|\partial_{x}^{l}(r_{1,a}(x,t,k)-r_{1}(k_{0}))\right|\leq C|k-k_{0}|e^{-t\,\mathrm{Re}\Phi_{21}(k)/2},\quad k\in\bar{U}_{2}\text{ and }\mathrm{Re}(k)\in[0,k_{0}];\]
Figure 3. The shaded region is \(\mathrm{Re}(\Phi_{21})<0\) and white is \(\mathrm{Re}(\Phi_{21})>0\).
and
\[\left|\partial_{x}^{l}(r_{2,a}^{*}(x,t,k)-r_{2}^{*}(0))\right|\leq C|k|e^{-t\,{ \rm Re}\,\Phi_{21}(k)/2},\quad k\in\bar{U}_{2}\ {\rm and}\ {\rm Re}(k)\in[-k_{0},0];\]
\[\left|\partial_{x}^{l}(r_{2,a}^{*}(x,t,k)-r_{2}^{*}(-k_{0}))\right|\leq C|k+k_ {0}|e^{-t\,{\rm Re}\,\Phi_{21}(k)/2},\quad k\in\bar{U}_{2}\ {\rm and}\ {\rm Re}(k)\in[-k_{0},0].\]
Furthermore,
\[\left|\partial_{x}\rho_{1,a}(x,t,k)\right|\leq\frac{Ce^{-t\,{\rm Re}\,\Phi_{21 }(k)/2}}{1+|k|}\quad k\in\bar{U}_{6},\,|\partial_{x}(\rho_{1,a}(x,t,k)-\rho_{1 }(k_{0}))|\leq C|k-k_{0}|e^{-t\,{\rm Re}\,\Phi_{21}(k)/2}\quad k\in\bar{U}_{6}.\]
and
\[\left|\partial_{x}\rho_{2,a}(x,t,k)\right|\leq\frac{Ce^{t\,{\rm Re}\,\Phi_{2 1}(k)/2}}{1+|k|}\quad k\in\bar{U}_{3},\,|\partial_{x}(\rho_{2,a}(x,t,k)-\rho_{ 2}(-k_{0}))|\leq C|k+k_{0}|e^{t\,{\rm Re}\,\Phi_{21}(k)/2}\quad k\in\bar{U}_{3}.\]
(3). For \(0<k_{0}<M\) and \(1\leq p\leq\infty\), the functions \(r_{j,r}\) and \(\rho_{j,r}\) satisfy
\[\left\|\partial_{x}^{l}r_{1,r}(x,t,k)e^{-t\Phi_{21}}\right\|_{L^{p}(0,k_{0})} \leq\frac{c}{t^{3/2}}\quad 0<k<k_{0},\,\left\|\partial_{x}^{l}r_{2,r}^{*}(x,t,k)e^{ -t\Phi_{21}}\right\|_{L^{p}(-k_{0},0)}\leq\frac{c}{t^{3/2}}\quad-k_{0}<k<0,\]
and
\[\left\|(1+|\cdot|)\partial_{x}^{l}\rho_{1,r}(x,t,\cdot)e^{-t\Phi_{21}}\right\| _{L^{p}[k_{0},\infty)}\leq\frac{c}{t^{3/2}},\,\left\|(1+|\cdot|)\partial_{x}^{ l}\rho_{2,r}(x,t,\cdot)e^{t\Phi_{21}}\right\|_{L^{p}(-\infty,-k_{0}]}\leq\frac{c}{t^{3/ 2}}.\]
Proof.
Recall that the reflection coefficient \(r_{1}(k)\) is smooth for \(k\in[0,\infty)\), so we can expand \(r_{1}(k)\) in a Taylor series about the point \(k_{0}\):
\[r_{1}(k)=r_{1}(k_{0})+\partial_{k}r_{1}|_{k=k_{0}}(k-k_{0})+\cdots+\frac{1}{n! }\int_{k_{0}}^{k}\partial_{k}^{(n+1)}r_{1}(\tau)(k-\tau)^{n}d\tau.\]
Let \(R(k)=\sum_{i=0}^{n}\frac{1}{i!}\partial_{k}^{i}r_{1}|_{k=k_{0}}(k-k_{0})^{i}\) and \(h(k):=r_{1}(k)-R(k)\); we now split \(h(k)\) into \(h(k)=h_{1}(k)+h_{2}(k)\).
Set
\[\beta(k)=(k-k_{0})^{q}.\]
Rewrite \(\theta_{21}\) as
\[\theta_{21}=2it\left(\frac{9\sqrt{3}k^{5}-\sqrt{3}k\xi}{2}\right):=2it\theta( \xi,k)\]
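For orientation, the stationary points of \(\theta(\xi,k)\) follow from a one-line computation, consistent with the expression for \(\frac{dk}{d\theta}\) used below:
\[\partial_{k}\theta(\xi,k)=\frac{45\sqrt{3}k^{4}-\sqrt{3}\xi}{2}=0\quad\Longleftrightarrow\quad k^{4}=\frac{\xi}{45},\]
so the real stationary points are \(k=\pm k_{0}\) with \(45k_{0}^{4}=\xi\).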
For \(k\in[0,k_{0}]\), since the map \(k\mapsto\theta(\xi,k)\) is one-to-one, we define
\[\left\{\begin{aligned} \frac{h}{\beta}(\theta)&:=\frac{h (\theta(k))}{\beta(\theta(k))},\quad-18\sqrt{3}k_{0}^{5}=\theta(k_{0})<\theta< 0,\\ &:=0,\qquad\qquad\quad\quad\text{Otherwise.}\end{aligned}\right.\]
And rewrite \(h(k)\) as
\[h(k) =\frac{1}{n!}\int_{k_{0}}^{k}\partial_{k}^{(n+1)}r_{1}(\tau)(k- \tau)^{n}d\tau\] \[=\frac{(k-k_{0})^{n+1}}{n!}\int_{0}^{1}\partial_{k}^{(n+1)}r_{1} (k_{0}+\mu(k-k_{0}))(1-\mu)^{n}d\mu\]
then
\[\frac{h}{\beta}(\theta)=(k-k_{0})^{n-q+1}g(k_{0},k),\quad\theta(k_{0})<\theta<0,\]
where
\[g(k_{0},k)=\frac{1}{n!}\int_{0}^{1}\partial_{k}^{(n+1)}r_{1}(k_{0}+\mu(k-k_{0 }))(1-\mu)^{n}d\mu.\]
Since the reflection coefficient \(r_{1}\) belongs to the Schwartz space, we have \(|\partial_{k}g(k_{0},k)|<\infty\); moreover, since \(\frac{dk}{d\theta}=\frac{2}{45\sqrt{3}(k^{4}-k_{0}^{4})}\) and \(0<k_{0}<M\), we can estimate
\[\int_{0}^{k_{0}}\left|\left(\frac{d}{d\theta}\right)^{j}\frac{h}{ \beta}(k)\right|^{2}|\bar{d}\theta(k)| \leq C\int_{0}^{k_{0}}\left|\left(\frac{d}{dk}\frac{1}{|k^{4}-k_{0 }^{4}|}\right)^{j}\frac{h}{\beta}(k)\right|^{2}|k^{4}-k_{0}^{4}|\bar{d}k\] \[=C\int_{0}^{k_{0}}\left|\left(\frac{1}{|k^{4}-k_{0}^{4}|}\frac{d }{dk}\right)^{j}(k-k_{0})^{n-q+1}g(k_{0},k)\right|^{2}|k^{4}-k_{0}^{4}|\bar{d}k\] \[\leq C\int_{0}^{k_{0}}\left|(k-k_{0})^{2n-2q-4j+3}\right|\bar{d} k<\infty,\]
for \(0\leq j\leq\frac{2n-2q+4}{4}\).
Thus one can conclude that \(\frac{h}{\beta}(\theta)\in H^{j}(\mathbb{R})\) for \(0\leq j\leq\frac{2n-2q+4}{4}\), and one can write the inverse Fourier transform
\[\frac{h}{\beta}(k)=\int_{-\infty}^{\infty}e^{is\theta(k)}\widehat{\left( \frac{h}{\beta}\right)}(s)\bar{d}s,\quad k\in[0,k_{0}],\]
where
\[\widehat{\left(\frac{h}{\beta}\right)}(s)=-\int_{0}^{k_{0}}e^{-is\theta(k)} \left(\frac{h}{\beta}\right)(k)\bar{d}\theta(k),\quad s\in\mathbb{R}.\]
Since \(\frac{h}{\beta}(\theta)\in H^{j}(\mathbb{R})\), the Plancherel formula gives
\[\int_{-\infty}^{\infty}(1+s^{2})^{j}\left|\widehat{\left(\frac{h}{\beta} \right)}(s)\right|^{2}ds\leq C.\]
Now split the \(h(k)\) as
\[h(k)=\beta(k)\int_{t}^{\infty}e^{is\theta(k)}\widehat{\left(\frac{h}{\beta} \right)}(s)\bar{d}s+\beta(k)\int_{-\infty}^{t}e^{is\theta(k)}\widehat{\left( \frac{h}{\beta}\right)}(s)\bar{d}s:=h_{1}(k)+h_{2}(k),\quad k\in[0,k_{0}].\]
Then for \(k\in[0,k_{0}]\), one has
\[|e^{-2it\theta(k)}h_{1}(k)| =\left|e^{-2it\theta(k)}\beta(k)\int_{t}^{\infty}e^{is\theta(k)} \widehat{\left(\frac{h}{\beta}\right)}(s)\bar{d}s\right|\] \[\leq\left|e^{-it\theta(k)}\beta(k)\right|\int_{t}^{\infty}\left|e ^{i(s-t)\theta(k)}\widehat{\left(\frac{h}{\beta}\right)}(s)\right|\bar{d}s\] \[=\left|\beta(k)\right|\int_{t}^{\infty}\left|\widehat{\left( \frac{h}{\beta}\right)}(s)\right|\bar{d}s \tag{3.3}\] \[\leq C\left(\int_{t}^{\infty}\left|(1+s^{2})^{j}\widehat{\left( \frac{h}{\beta}\right)}(s)\right|^{2}ds\right)^{\frac{1}{2}}\left(\int_{t}^{ \infty}\left|(1+s^{2})^{-j}\right|^{2}ds\right)^{\frac{1}{2}}\] \[\leq\frac{C}{t^{j-\frac{1}{2}}}.\]
On the other hand, for \(k\in U_{2}\) and \(\operatorname{Re}(k)\in[0,k_{0}]\), we have
\[|e^{-2it\theta(k)}h_{2}(k)| =\left|e^{-2it\theta(k)}\beta(k)\int_{-\infty}^{t}e^{is\theta(k)} \widehat{\left(\frac{h}{\beta}\right)}(s)\bar{d}s\right|\] \[=\left|e^{-it\theta(k)}\beta(k)\int_{-\infty}^{t}e^{i(s-t)\theta( k)}\widehat{\left(\frac{h}{\beta}\right)}(s)\bar{d}s\right|\] \[\leq C\left|e^{-it\theta(k)}\beta(k)\right|\int_{-\infty}^{t}e^{ (s-t)\operatorname{Re}(i\theta(k))}\left|\widehat{\left(\frac{h}{\beta} \right)}(s)\right|ds\] \[\leq Ce^{-t\operatorname{Re}(i\theta(k))}.\]
Moreover, by direct computation one can get \(|e^{-2it\theta(k)}R(k)|\leq Ce^{-2t\operatorname{Re}(i\theta(k))}\), for \(k\in U_{2}\) and \(\operatorname{Re}(k)\in[0,k_{0}]\).
Finally, we define \(r_{1,a}(k):=h_{2}(k)+R(k)\), which extends analytically to \(U_{2}\) with \(\operatorname{Re}(k)\in[0,k_{0}]\), and \(r_{1,r}(k):=h_{1}(k)\), defined on \(0\leq k\leq k_{0}\).
Taking \(n=2,q=1\) in the estimates above, we get
\[R(k)=r_{1}(k_{0})+\partial_{k}r_{1}(k_{0})(k-k_{0})+\frac{1}{2}\partial_{k}^{2}r_{1}(k_{0})(k-k_{0})^{2},\quad h_{2}(k)=(k-k_{0})\int_{-\infty}^{t}e^{is\theta(k)}\widehat{\left(\frac{h}{\beta}\right)}(s)\bar{d}s,\]
moreover,
\[|r_{1,a}(k)-r_{1}(k_{0})|\leq C|k-k_{0}|e^{-\operatorname{Re}\theta_{21}(k)}.\]
By direct computation, one can get
\[|\partial_{x}(r_{1,a}(k)-r_{1}(k_{0}))|\leq C|k-k_{0}|e^{-\operatorname{Re}\theta_{21}(k)}.\]
The estimate between \(r_{1,a}(k)\) and \(r_{1}(0)\) requires a Taylor expansion near \(k=0\); the procedure is similar, so we omit it.
Again, by Taylor expansion, we have
\[(k+i)^{n+5}\rho_{1}^{*}(k)=\gamma_{0}+\gamma_{1}(k-k_{0})+\cdots+\gamma_{n}(k -k_{0})^{n}+\frac{1}{n!}\int_{k_{0}}^{k}((\cdot+i)\rho_{1}(\cdot))^{(n+1)}( \mu)(k-\mu)^{n}d\mu,\]
and define
\[R(k)=\frac{\sum_{i=0}^{n}\gamma_{i}(k-k_{0})^{i}}{(k+i)^{n+5}},h(k)=\rho_{1}^ {*}(k)-R(k).\]
Set
\[\beta(k)=\frac{(k-k_{0})^{q}}{(k+i)^{q+2}}.\]
By the Fourier transform
\[\frac{h}{\beta}=\int_{-\infty}^{\infty}e^{is\theta(k)}\widehat{\left(\frac{h} {\beta}\right)}(s)\bar{d}s,\]
where
\[\widehat{\left(\frac{h}{\beta}\right)}(s)=\int_{k_{0}}^{\infty}e^{-is\theta(k)}\left(\frac{h}{\beta}\right)(k)\bar{d}\theta(k).\]
Following the same procedure, we estimate
\[\frac{h}{\beta}=\frac{(k-k_{0})^{n-q+1}}{(k+i)^{n-q+3}}g(k_{0};k).\]
Here
\[g(k_{0},k)=\frac{1}{n!}\int_{0}^{1}((\cdot+i)^{n+5}\rho_{1}(\cdot))^{(n+1)}(k_ {0}+\mu(k-k_{0}))(1-\mu)^{n}d\mu\]
and \(|\partial_{k}g(k_{0},k)|<\infty\).
Now, we obtain that
\[\int_{k_{0}}^{\infty}\left|\left(\frac{d}{d\theta}\right)^{j}\frac{h }{\beta}(k)\right|^{2}|\bar{d}\theta(k)| \leq C\int_{k_{0}}^{\infty}\left|\left(\frac{d}{dk}\frac{1}{|k^{4} -k_{0}^{4}|}\right)^{j}\frac{h}{\beta}(k)\right|^{2}|k^{4}-k_{0}^{4}|\bar{d}k\] \[=C\int_{k_{0}}^{\infty}\left|\left(\frac{1}{|k^{4}-k_{0}^{4}|} \frac{d}{dk}\right)^{j}\frac{(k-k_{0})^{n-q+1}}{(k+i)^{n-q+3}}g(k_{0},k)\right| ^{2}|k^{4}-k_{0}^{4}|\bar{d}k\] \[\leq C\int_{k_{0}}^{\infty}\left|\frac{(k-k_{0})^{2n-2q-4j+3}}{(k +i)^{2n-2q+6}}\right|\bar{d}k<\infty,\]
for \(j\leq\frac{n-q+2}{2}\), and we can conclude that \(\frac{h}{\beta}\in H^{j}(k_{0},\infty)\).
By the Plancherel formula, we can get that
\[\int_{-\infty}^{\infty}(1+s^{2})^{j}\left|\widehat{\left(\frac{h}{\beta}\right)}(s)\right|^{2}ds\leq C.\]
As in the previous process, split \(h\) into two parts
\[h(k)=\beta(k)\int_{t}^{\infty}e^{is\theta(k)}\widehat{\left(\frac{h}{\beta}\right)}(s)\bar{d}s+\beta(k)\int_{-\infty}^{t}e^{is\theta(k)}\widehat{\left(\frac{h}{\beta}\right)}(s)\bar{d}s:=h_{1}(k)+h_{2}(k),\quad k\in[k_{0},\infty).\]
For \(k\geq k_{0}\), we have that
\[|e^{-it\theta(k)}h_{1}(k)|\leq\frac{C}{|k+i|^{2}t^{j-\frac{1}{2}}},\]
on the other hand, for \(k\in U_{6}\),
\[|e^{-2it\theta(k)}h_{2}(k)|\leq\frac{|k-k_{0}|^{q}e^{\operatorname{Re}(-it \theta(k))}}{|k+i|^{q+2}}.\]
Let \(\rho_{1,a}(x,t;k)=R(k)+h_{2}(k)\) and \(\rho_{1,r}=h_{1}(k)\). Proceeding as in the previous case, we obtain the estimates stated in the lemma.
Figure 4. The first transformation of RH problem.
### The second transformation
Now, we focus on splitting the jump at the critical points \(\pm k_{0}\) by recalling that
\[v_{1}^{(1)}(x,t;k)=\left(\begin{array}{ccc}1-|r_{1}(k)|^{2}&-\frac{\tilde{ \delta}_{\varphi_{1}}}{\delta_{1}^{2}}\frac{r_{1}(k)}{1-|r_{1}(k)|^{2}}e^{-t \Phi_{21}}&0\\ \frac{\delta_{1+}^{2}}{\delta_{\varphi_{1}}}\frac{r_{1}^{*}(k)}{1-|r_{1}(k)|^{ 2}}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right)=v_{1,lower}^{(1)}\ v_{1,r}^{(1)}\ v_{1,upper}^{(1)},\]
where
\[v_{1,lower}^{(1)}=\left(\begin{array}{ccc}1&-\frac{\tilde{\delta}_{\varphi_ {1}}}{\delta_{1}^{2}}\rho_{1,a}e^{-t\Phi_{21}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),\quad v_{1,upper}^{(1)}=\left(\begin{array}{ccc}1&0 &0\\ \frac{\delta_{1+}^{2}}{\delta_{\varphi_{1}}}\rho_{1,a}^{*}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\]
and
\[v_{1,r}^{(1)}(x,t;k)=\left(\begin{array}{ccc}1-\frac{\tilde{\delta}_{\varphi _{1}}^{2}}{\delta_{1}^{2}}\rho_{1,r}(k)\rho_{1,r}^{*}(k)&-\frac{\tilde{\delta}_ {\varphi_{1}}}{\delta_{1}^{2}}\rho_{1,r}e^{-t\Phi_{21}}&0\\ \frac{\delta_{1+}^{2}}{\delta_{\varphi_{1}}}\rho_{1,r}^{*}e^{t\Phi_{21}}&1&0 \\ 0&0&1\end{array}\right).\]
Moreover, we have
\[v_{7}^{(1)}=\left(\begin{array}{ccc}1&-\frac{\tilde{\delta}_{\varphi_{1}}}{ \delta_{1}^{2}}r_{1}(k)e^{-t\Phi_{21}}&0\\ \frac{\delta_{1}^{2}}{\delta_{\varphi_{1}}}r_{1}^{*}(k)e^{t\Phi_{21}}&1-r_{1}( k)r_{1}^{*}(k)&0\\ 0&0&1\end{array}\right)=v_{7,lower}^{(1)}\ v_{7,r}^{(1)}\ v_{7,upper}^{(1)},\]
where
\[v_{7,lower}^{(1)}=\left(\begin{array}{ccc}1&0&0\\ \frac{\delta_{1}^{2}}{\delta_{\varphi_{1}}}r_{1,a}^{*}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\quad v_{7,upper}^{(1)}=\left(\begin{array}{ccc}1&- \frac{\tilde{\delta}_{\varphi_{1}}}{\delta_{1}^{2}}r_{1,a}e^{-t\Phi_{21}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),\]
and
\[v_{7,r}^{(1)}=\left(\begin{array}{ccc}1&-\frac{\tilde{\delta}_{\varphi_{1}}} {\delta_{1}^{2}}r_{1,r}(k)e^{-t\Phi_{21}}&0\\ \frac{\delta_{1}^{2}}{\delta_{\varphi_{1}}}r_{1,r}^{*}(k)e^{t\Phi_{21}}&1-r_{1,r}(k)r_{1,r}^{*}(k)&0\\ 0&0&1\end{array}\right).\]
The same procedure yields
\[v_{4}^{(1)}=\left(\begin{array}{ccc}1&-\frac{\delta_{4+}^{2}}{\delta_{ \varphi_{4}}}\frac{r_{2}^{*}(k)}{1-|r_{2}(k)|^{2}}e^{-t\Phi_{21}}&0\\ \frac{\delta_{\varphi_{4}}}{\delta_{4-}^{2}}\frac{r_{2}(k)}{1-|r_{2}(k)|^{2}} e^{t\Phi_{21}}&1-|r_{2}(k)|^{2}&0\\ 0&0&1\end{array}\right)=v_{4,upper}^{(1)}\ v_{4,r}^{(1)}\ v_{4,lower}^{(1)},\]
where
\[v_{4,upper}^{(1)}=\left(\begin{array}{ccc}1&0&0\\ \frac{\delta_{\varphi_{4}}}{\delta_{4-}^{2}}\rho_{2,a}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\quad v_{4,lower}^{(1)}=\left(\begin{array}{ccc}1& -\frac{\delta_{4+}^{2}}{\delta_{\varphi_{4}}}\rho_{2,a}^{*}e^{-t\Phi_{21}}&0 \\ 0&1&0\\ 0&0&1\end{array}\right),\]
and
\[v_{4,r}^{(1)}=\left(\begin{array}{ccc}1&-\frac{\delta_{4+}^{2}}{\delta_{ \varphi_{4}}}\rho_{2,r}^{*}e^{-t\Phi_{21}}&0\\ \frac{\delta_{\varphi_{4}}}{\delta_{4-}^{2}}\rho_{2,r}e^{t\Phi_{21}}&1-\frac{ \delta_{4+}^{2}}{\delta_{4-}^{2}}\rho_{2,r}\rho_{2,r}^{*}&0\\ 0&0&1\end{array}\right).\]
On the other hand, one has
\[v_{10}^{(1)}=\left(\begin{array}{ccc}1-|r_{2}(k)|^{2}&-\frac{\delta_{2}^{2}} {\delta_{\varphi_{4}}}r_{2}^{*}(k)e^{-t\Phi_{21}}&0\\ \frac{\delta_{\varphi_{4}}}{\delta_{4}^{2}}r_{2}(k)e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right)=v_{10,upper}^{(1)}\ v_{10,r}^{(1)}\ v_{10,lower}^{(1)},\]
with
\[v^{(1)}_{10,upper}=\left(\begin{array}{ccc}1&-\frac{\delta_{4}^{2}}{\tilde{\delta}_{v_{4}}}r_{2,a}^{*}e^{-t\Phi_{21}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),\quad v^{(1)}_{10,lower}=\left(\begin{array}{ccc}1&0&0\\ \frac{\tilde{\delta}_{v_{4}}}{\delta_{4}^{2}}r_{2,a}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\]
and
\[v^{(1)}_{10,r}=\left(\begin{array}{ccc}1-r_{2,r}(k)r_{2,r}^{*}(k)&-\frac{ \delta_{4}^{2}}{\delta_{\kappa_{4}}}r_{2,r}^{*}(k)e^{-t\Phi_{21}}&0\\ \frac{\delta_{\kappa_{4}}}{\delta_{4}^{2}}r_{2,r}(k)e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right).\]
The resulting contour \(\Sigma^{(2)}\) is shown in Fig. 5.
Now, we introduce the intermediate functions \(G_{j}\) to transform the RH problem for \(M^{(1)}\) into the RH problem for \(M^{(2)}\). In particular, define
\[G_{1}(x,t,k)=\begin{cases}\left(v_{1,upper}^{(1)}\right)^{-1},&k\in V_{1},\\ \left(v_{7,upper}^{(1)}\right)^{-1},&k\in V_{2},\\ v_{7,lower}^{(1)},&k\in V_{3},\\ v_{1,lower}^{(1)},&k\in V_{4},\\ I,&k\in V_{5}\cup V_{6},\end{cases}\]
and
\[G_{2}(x,t,k)=\begin{cases}v_{10,upper}^{(1)},&k\in V_{7},\\ v_{4,upper}^{(1)},&k\in V_{8},\\ \left(v_{4,lower}^{(1)}\right)^{-1},&k\in V_{9},\\ \left(v_{10,lower}^{(1)}\right)^{-1},&k\in V_{10},\\ I,&k\in V_{11}\cup V_{12}.\end{cases}\]
Furthermore, the RH problems \(M^{(1)}\) and \(M^{(2)}\) can be related by
\[M^{(2)}=M^{(1)}G(x,t;k),\]
where the other \(G_{j}\) can be defined by the symmetries.
Figure 6. The sets \(V_{j}\) in the complex plane.
The jump matrix \(v^{(2)}\) associated with the RH problems for \(M^{(2)}\) is given by
\[v^{(2)}_{1}=v^{(1)}_{1,upper}=\left(\begin{array}{ccc}1&0&0\\ \frac{\delta^{2}_{1+}}{\delta_{x_{1}}}\rho^{*}_{1,a}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{2}=\left(v^{(1)}_{7,upper}\right)^{-1}=\left(\begin{array} []{ccc}1&\frac{\delta^{2}_{4+}}{\delta^{2}_{4+}}r^{*}_{1,a}e^{-t\Phi_{21}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{3}=\left(v^{(1)}_{7,lower}\right)^{-1}=\left(\begin{array} []{ccc}1&0&0\\ -\frac{\delta^{2}_{4+}}{\delta^{2}_{4+}}r^{*}_{1,a}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{4}=v^{(1)}_{1,lower}=\left(\begin{array}{ccc}1&- \frac{\delta^{2}_{\nu_{1}}}{\delta^{2}_{4-}}\rho_{1,a}e^{-t\Phi_{21}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{5}=v^{(1)}_{7,r}=\left(\begin{array}{ccc}1&-\frac{ \delta^{2}_{\nu_{1}}}{\delta^{2}_{4-}}r^{*}_{1,r}(k)e^{-t\Phi_{21}}&0\\ \frac{\delta^{2}_{1+}}{\delta^{2}_{4+}}r^{*}_{1,r}(k)e^{t\Phi_{21}}&1-r_{1,r}( k)r^{*}_{1,r}(k)&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{6}=v^{(1)}_{1,r}=\left(\begin{array}{ccc}1-\frac{ \delta^{2}_{4+}}{\delta^{2}_{4-}}\rho_{1,r}(k)\rho^{*}_{1,r}(k)&-\frac{\delta _{\nu_{1}}}{\delta^{2}_{4-}}\rho_{1,r}e^{-t\Phi_{21}}&0\\ \frac{\delta^{2}_{1+}}{\delta^{2}_{4+}}\rho^{*}_{1,r}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right).\]
Moreover,
\[v^{(2)}_{7}=v^{(1)}_{4,upper}=\left(\begin{array}{ccc}1&0&0\\ \frac{\delta_{\nu_{4}}}{\delta^{2}_{4-}}\rho_{2,a}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{8}=v^{(1)}_{4,lower}=\left(\begin{array}{ccc}1&-\frac{ \delta^{2}_{4+}}{\delta_{4+}}\rho^{*}_{2,a}e^{-t\Phi_{21}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{9}=\left(v^{(1)}_{10,lower}\right)^{-1}=\left(\begin{array} []{ccc}1&0&0\\ -\frac{\delta_{\nu_{4}}}{\delta^{2}_{4}}r^{*}_{2,a}e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{10}=\left(v^{(1)}_{10,upper}\right)^{-1}=\left(\begin{array} []{ccc}1&\frac{\delta^{2}_{4}}{\delta^{2}_{4-}}r^{*}_{2,a}e^{-t\Phi_{21}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{11}=v^{(1)}_{10,r}=\left(\begin{array}{ccc}1-r_{2,r}(k )r^{*}_{2,r}(k)&-\frac{\delta^{2}_{4}}{\delta^{2}_{4-}}r^{*}_{2,r}(k)e^{-t\Phi _{21}}&0\\ \frac{\delta_{\nu_{4}}}{\delta^{2}_{4}}r_{2,r}(k)e^{t\Phi_{21}}&1&0\\ 0&0&1\end{array}\right),\] \[v^{(2)}_{12}=v^{(1)}_{4,r}=\left(\begin{array}{ccc}1&-\frac{ \delta^{2}_{4+}}{\delta^{2}_{4-}}\rho^{*}_{2,r}e^{-t\Phi_{21}}&0\\ \frac{\delta_{\nu_{4}}}{\delta^{2}_{4-}}\rho_{2,r}e^{t\Phi_{21}}&1-\frac{\delta^ {2}_{4+}}{\delta^{2}_{4-}}\rho_{2,r}\rho^{*}_{2,r}&0\\ 0&0&1\end{array}\right).\]
In addition, the other jump matrices can be computed by the symmetry.
**Lemma 3.4**.: _The function \(G_{j}\) is uniformly bounded for \(k\in\mathbb{C}\setminus\Sigma_{j}\), and \(G_{j}=I+O(\frac{1}{k})\) as \(k\to\infty\)._
Proof.: We focus on the regions \(V_{1}\) and \(V_{2}\). In the region \(V_{1}\), \(G_{1}=\left(v_{1,upper}^{(1)}\right)^{-1}\), so it suffices to show that \(\frac{\delta_{1+}^{2}}{\delta_{v_{1}}}\rho_{1,a}^{*}e^{t\Phi_{21}}\) is bounded in \(V_{1}\). Recall that \(\delta_{j}\) is bounded in \(\mathbb{C}\setminus\Sigma_{j}\) and \(|\rho_{1,a}e^{t\Phi_{21}}|\) satisfies the lemma above, so \(\frac{\delta_{1+}^{2}}{\delta_{v_{1}}}\rho_{1,a}^{*}e^{t\Phi_{21}}\) is uniformly bounded. Meanwhile, the region \(V_{2}\) is compact and \(G_{1}\) is continuous there, so \(G_{1}\) is uniformly bounded on \(V_{2}\).
Again, recall the reconstruction formula that
\[u(x,t) =-\frac{1}{2}\frac{\partial}{\partial x}\lim_{k\to\infty}k[M^{(1) }(x,t,k)-I]_{33}\] \[=-\frac{1}{2}\frac{\partial}{\partial x}\lim_{k\to\infty}k[M^{(1) }(x,t,k)G(x,t;k)-I]_{33}\] \[=-\frac{1}{2}\frac{\partial}{\partial x}\lim_{k\to\infty}k[M^{(2) }(x,t,k)-I]_{33}.\]
**Lemma 3.5**.: _For \(0<k_{0}<M\) and any \(\epsilon>0\), the jump matrices \(v^{(2)}\) converge uniformly to \(I\) as \(t\to\infty\), and \(\partial_{x}v^{(2)}\) converges uniformly to the zero matrix, except for the points near the saddle points, i.e., \(\left\{\pm k_{0},\pm\omega k_{0},\pm\omega^{2}k_{0}\right\}\). In particular, the jump matrices \(v^{(2)}\) on \(\Sigma_{5,6}\) satisfy the following estimates:_
\[\|(1+|\cdot|)\partial_{x}^{d}(v^{(2)}-I)\|_{(L^{1}\cap L^{\infty})(\Sigma_{5, 6}^{(2)})}\leq Ct^{-\frac{3}{2}}.\]
_Moreover, using the symmetry of the jump matrices, we can obtain similar estimates on the other \(\Sigma_{j}^{(2)}\)._
Proof.: We focus on the jump matrices on \(\Sigma_{1,\cdots,6}^{(2)}\). For \(k\in\Sigma_{1,2,3,4}^{(2)}\), the exponent \(\operatorname{Re}(t\Phi_{21})\) is strictly negative on \(\Sigma_{1,3}^{(2)}\) and strictly positive on \(\Sigma_{2,4}^{(2)}\), except for the points near the saddle point \(k_{0}\) (since \(\operatorname{Re}t\Phi_{21}(k_{0})=0\)). Using the lemma on the properties of \(r_{1,a},\rho_{1,a}\) and the bounds on the \(\delta\) functions, we conclude that \(v_{1,2,3,4}^{(2)}\) (resp. \(\partial_{x}v_{1,2,3,4}^{(2)}\)) converges to \(I\) (resp. to the zero matrix) as \(t\to\infty\).
Finally, we have
\[(v_{5}^{(2)}-I)_{12}=-\frac{\tilde{\delta}_{v1}}{\delta_{1}^{2}}r_{1,r}(k)e^{- t\Phi_{21}},\quad(v_{6}^{(2)}-I)_{12}=-\frac{\tilde{\delta}_{v_{1}}}{\delta_{1 -}^{2}}\rho_{1,r}e^{-t\Phi_{21}},\]
the lemma of \(\delta\) and \(r_{j,r},\rho_{j,r}\) implies that
\[|(v_{5}^{(2)}-I)_{12}|\leq Ct^{-\frac{3}{2}},\quad|(v_{6}^{(2)}-I)_{12}|\leq Ct ^{-\frac{3}{2}}.\]
Moreover, \((v_{5}^{(2)}-I)_{22}\) and \((v_{6}^{(2)}-I)_{22}\) can each be written as a product of the two parts \((v_{j}^{(2)}-I)_{12}\) and \((v_{j}^{(2)}-I)_{21}\), so their estimates are smaller than those of the off-diagonal terms.
By direct computation, we can conclude that
\[\|(1+|\cdot|)\partial_{x}^{d}(v^{(2)}-I)\|_{(L^{1}\cap L^{\infty})(\Sigma_{5,6 }^{(2)})}\leq Ct^{-\frac{3}{2}}.\]
### The third transformation
After the transformations above, we have a new RH problem whose jump matrix satisfies \(v\to I\) as \(t\to\infty\) for \(k\in\{\mathbb{R},\omega\mathbb{R},\omega^{2}\mathbb{R}\}\) with \(0<k_{0}<M\); the other jump matrices on \(\Sigma_{j}\) also tend to \(I\) as \(t\to\infty\), except for \(k\in B(\pm k_{0},\epsilon)\cup B(\pm\omega k_{0},\epsilon)\cup B(\pm\omega^{2}k_{0},\epsilon)\).
In order to reduce the RH problem for \(M^{(2)}\) to a model problem, we focus on \(\Sigma_{A}\) and \(\Sigma_{B}\), where
\[\Sigma_{A}=\Sigma_{\{1,2,3,4\}}^{(2)}\cap B_{\epsilon}(k_{0}),\ \Sigma_{B}=\Sigma_{\{7,8,9,10\}}^{(2)}\cap B_{\epsilon}(-k_{0})\]
Observe that the exponential part in the jump matrices on \(\Sigma_{A}\) and \(\Sigma_{B}\) is \(\pm t\Phi_{21}\). On the contour \(\Sigma_{A}\), we expand \(t\Phi_{21}\) at \(k_{0}\) as follows
\[t\Phi_{21}(k) =t[(\alpha^{2}-\alpha)k\zeta+(\alpha-\alpha^{2})9k^{5}]=9t(\alpha -\alpha^{2})(k^{5}-5kk_{0}^{4})\] \[=9\sqrt{3}it[(k-k_{0})^{5}+5k_{0}(k-k_{0})^{4}+10k_{0}^{2}(k-k_{0 })^{3}+10k_{0}^{3}(k-k_{0})^{2}-4k_{0}^{5}].\]
Suppose \(z_{1}=3^{\frac{5}{4}}2\sqrt{5}\sqrt{t}\,k_{0}^{\frac{3}{2}}(k-k_{0})\), then rewrite \(t\Phi_{21}\) as
\[t\Phi_{21}(k) =9\sqrt{3}ita^{3}[a^{2}z^{5}+5ak_{0}z^{4}+10k_{0}^{2}z^{3}]+\frac{ iz^{2}}{2}+t\Phi_{21}(k_{0})\] \[=t\Phi_{21}^{0}(k_{0},z)+\frac{iz_{1}^{2}}{2}+t\Phi_{21}(k_{0}),\]
where \(a=\frac{1}{3^{\frac{5}{4}}2\sqrt{5}\sqrt{t}\,k_{0}^{\frac{3}{2}}}\).
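As a brief check of this normalization (with the \(\sqrt{t}\) factor as written above), the coefficient of the quadratic term is exactly \(\frac{1}{2}\), so the \((k-k_{0})^{2}\) contribution indeed becomes \(\frac{iz_{1}^{2}}{2}\):
\[9\sqrt{3}\,t\cdot 10k_{0}^{3}\cdot a^{2}=\frac{90\sqrt{3}\,tk_{0}^{3}}{3^{\frac{5}{2}}\cdot 20\,tk_{0}^{3}}=\frac{90\sqrt{3}}{180\sqrt{3}}=\frac{1}{2}.\]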
The other part of the jump matrices on \(\Sigma_{A}\) involves the \(\delta\) function
\[\delta_{1}(k)=e^{-i\nu_{1}\log_{0}(k-k_{0})}e^{-\chi_{1}(k)},\quad k\in\mathbb{ C}\setminus[k_{0},\infty),\]
where
\[\nu_{1}=-\frac{1}{2\pi}\ln\left(1-\left|r_{1}\left(k_{0}\right)\right|^{2} \right),\]
and
\[\chi_{1}(k)=\frac{1}{2\pi i}\int_{k_{0}}^{\infty}\log_{0}(k-s)d\ln\left(1- \left|r_{1}(s)\right|^{2}\right).\]
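For later use (for instance, when evaluating the jump of \(z^{2i\nu_{1}}\) across \(\mathbb{R}_{+}\)), note the elementary consequence of the definition of \(\nu_{1}\):
\[e^{-2\pi\nu_{1}}=1-\left|r_{1}\left(k_{0}\right)\right|^{2}.\]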
Again, rewrite it as
\[\frac{\delta_{1+}^{2}(k)}{\tilde{\delta}_{v_{1}}(k)} =e^{-2i\nu_{1}\log_{0}(z)}\frac{a^{-2i\nu_{1}}e^{-2\chi_{1}(k_{0})}}{\tilde{\delta}_{v_{1}}(k_{0})}\frac{e^{2\chi_{1}(k_{0})-2\chi_{1}(k)}\tilde{\delta}_{v_{1}}(k_{0})}{\tilde{\delta}_{v_{1}}(k)}\] \[:=e^{-2i\nu_{1}\log_{0}(z)}\delta_{A}^{0}\delta_{A}^{1},\]
where \(\delta_{A}^{0}=\frac{a^{-2i\nu_{1}}e^{-2\chi_{1}(k_{0})}}{\tilde{\delta}_{v_{1}}(k_{0})}\) and \(\delta_{A}^{1}=\frac{e^{2\chi_{1}(k_{0})-2\chi_{1}(k)}\tilde{\delta}_{v_{1}}(k_{0})}{\tilde{\delta}_{v_{1}}(k)}\).
On the other hand, on the jump contour \(\Sigma_{B}\), we expand \(t\Phi_{21}\) at \(-k_{0}\) as follows
\[t\Phi_{21}(k) =t[(\alpha^{2}-\alpha)k\zeta+(\alpha-\alpha^{2})9k^{5}]=9t( \alpha-\alpha^{2})(k^{5}-5kk_{0}^{4})\] \[=9\sqrt{3}it[(k+k_{0})^{5}-5k_{0}(k+k_{0})^{4}+10k_{0}^{2}(k+k_{ 0})^{3}-10k_{0}^{3}(k+k_{0})^{2}+4k_{0}^{5}].\]
Suppose \(z_{2}=3^{\frac{5}{4}}2\sqrt{5}\sqrt{t}\,k_{0}^{\frac{3}{2}}(k+k_{0})\), then we can rewrite \(t\Phi_{21}\) as
\[t\Phi_{21}(k) =9\sqrt{3}ita^{3}[a^{2}z_{2}^{5}-5ak_{0}z_{2}^{4}+10k_{0}^{2}z_{2} ^{3}]-\frac{iz_{2}^{2}}{2}+t\Phi_{21}(-k_{0})\] \[=t\Phi_{21}^{0}(-k_{0},z)-\frac{iz_{2}^{2}}{2}+t\Phi_{21}(-k_{0}).\]
Moreover, the \(\delta\) functions on \(\Sigma_{B}\) involve \(\delta_{4}\), and
\[\delta_{4}(k)=e^{-i\nu_{4}\log_{\pi}(k+k_{0})}e^{-\chi_{4}(k)},\quad k\in \mathbb{C}\setminus(-\infty,-k_{0}],\]
with
\[\nu_{4}=-\frac{1}{2\pi}\ln\left(1-\left|r_{2}\left(-k_{0}\right)\right|^{2} \right),\]
and
\[\chi_{4}(k)=\frac{1}{2\pi i}\int_{-k_{0}}^{-\infty}\log_{\pi}(k-s)d\ln\left(1- \left|r_{2}(s)\right|^{2}\right).\]
In addition, we have
\[\frac{\tilde{\delta}_{v_{4}}}{\delta_{4}^{2}} =e^{2i\nu_{4}\log_{\pi}(z_{2})}\frac{\tilde{\delta}_{v_{4}}(-k_{0 })}{a^{-2i\nu_{4}}e^{-2\chi_{4}(-k_{0})}}\frac{\tilde{\delta}_{v_{4}}(k)}{e^{2 \chi_{4}(-k_{0})-2\chi_{4}(k)}\tilde{\delta}_{v_{4}}(-k_{0})}\] \[:=e^{2i\nu_{4}\log_{\pi}(z_{2})}\left(\delta_{B}^{0}\right)^{-1} \left(\delta_{B}^{1}\right)^{-1},\]
here \(\delta_{B}^{0}=\frac{a^{-2i\nu_{4}}e^{-2\chi_{4}(-k_{0})}}{\tilde{\delta}_{v_{4}}(-k_{0})}\) and \(\delta_{B}^{1}=\frac{e^{2\chi_{4}(-k_{0})-2\chi_{4}(k)}\tilde{\delta}_{v_{4}}(-k_{0})}{\tilde{\delta}_{v_{4}}(k)}\).
Now, define \(H(\pm k_{0},t)\) and transform the RH problem for \(M^{(2)}\) via
\[M^{(3,\epsilon)}=M^{(2)}(x,t;k)H(\pm k_{0},t),\quad k\in B_{\epsilon}(\pm k_{0}),\]
where
\[H(k_{0},t)=\left(\begin{array}{cccc}\left(\delta_{A}^{0}\right)^{-\frac{1}{2}}e^ {-\frac{t}{2}\Phi_{21}(k_{0})}&0&0\\ 0&\left(\delta_{A}^{0}\right)^{\frac{1}{2}}e^{\frac{t}{2}\Phi_{21}(k_{0})}&0 \\ 0&0&1\end{array}\right),\]
and
\[H(-k_{0},t)=\left(\begin{array}{cccc}\left(\delta_{B}^{0}\right)^{\frac{1}{2} }e^{-\frac{t}{2}\Phi_{21}(-k_{0})}&0&0\\ 0&\left(\delta_{B}^{0}\right)^{-\frac{1}{2}}e^{\frac{t}{2}\Phi_{21}(-k_{0})}& 0\\ 0&0&1\end{array}\right),\]
so that we can derive the jump matrix on \(\Sigma_{A}\) as follows
\[v_{1}^{(3,\epsilon)} =\left(\begin{array}{cccc}1&0&0\\ e^{-2i\nu_{1}\log_{0}(z)}\delta_{A}^{1}\rho_{1,a}^{\ast}e^{t\Phi_{21}^{0}(k_ {0},z)+\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right),\] \[v_{2}^{(3,\epsilon)} =\left(\begin{array}{cccc}1&e^{2i\nu_{1}\log_{0}(z)}(\delta_{A }^{1})^{-1}r_{1,a}e^{-t\Phi_{21}^{0}(k_{0},z)-\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),\] \[v_{3}^{(3,\epsilon)} =\left(\begin{array}{cccc}1&0&0\\ -e^{-2i\nu_{1}\log_{0}(z)}\delta_{A}^{1}r_{1,a}^{\ast}e^{t\Phi_{21}^{0}(k_{0},z)+\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right),\] \[v_{4}^{(3,\epsilon)} =\left(\begin{array}{cccc}1&-e^{2i\nu_{1}\log_{0}(z)}(\delta_{A }^{1})^{-1}\rho_{1,a}e^{-t\Phi_{21}^{0}(k_{0},z)-\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right).\]
Moreover,
\[v_{7}^{(3,\epsilon)} =\left(\begin{array}{cccc}1&0&0\\ e^{2i\nu_{4}\log_{\pi}(z)}\left(\delta_{B}^{1}\right)^{-1}\rho_{2,a}e^{t\Phi_ {21}^{0}(-k_{0},z)-\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right),\] \[v_{8}^{(3,\epsilon)} =\left(\begin{array}{cccc}1&-e^{-2i\nu_{4}\log_{\pi}(z)}\delta_ {B}^{1}\rho_{2,a}^{\ast}e^{-t\Phi_{21}^{0}(-k_{0},z)+\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),\] \[v_{9}^{(3,\epsilon)} =\left(\begin{array}{cccc}1&0&0\\ -e^{2i\nu_{4}\log_{\pi}(z)}\left(\delta_{B}^{1}\right)^{-1}r_{2,a}e^{t\Phi_{21 }^{0}(-k_{0},z)-\frac{iz^{2}}{2}}&1&0\\ 0&0&0&1\end{array}\right),\] \[v_{10}^{(3,\epsilon)} =\left(\begin{array}{cccc}1&e^{-2i\nu_{4}\log_{\pi}(z)}\delta_ {B}^{1}r_{2,a}^{\ast}e^{-t\Phi_{21}^{0}(-k_{0},z)+\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right).\]
When \(z\) is fixed, we observe that \(r_{j,a}\to r_{j}(k_{0})\), \(\rho_{j,a}\rightarrow\frac{r_{j}(k_{0})}{1-|r_{j}(k_{0})|^{2}}\), \(\delta_{A}^{1},\delta_{B}^{1}\to 1\) and \(e^{\pm t\Phi_{21}^{0}(\pm k_{0},z)}\to 1\) as \(t\rightarrow\infty\), so that \(v^{(3,\epsilon)}\to v_{A,B}^{X}\) as \(t\rightarrow\infty\), where \(v_{A,B}^{X}\) are the jump matrices of the model problems \(M_{A,B}^{X}\).
### The model problem \(M_{A,B}^{X}\)
Take \(X_{1}=\{z\in\mathbb{C}:z=re^{\frac{\pi i}{4}},0\leq r\leq\infty\}\), \(X_{2}=\{z\in\mathbb{C}:z=re^{\frac{3\pi i}{4}},0\leq r\leq\infty\}\), \(X_{3}=\{z\in\mathbb{C}:z=re^{\frac{5\pi i}{4}},0\leq r\leq\infty\}\), and \(X_{4}=\{z\in\mathbb{C}:z=re^{\frac{7\pi i}{4}},0\leq r\leq\infty\}\). Denote \(X=\cup_{j=1}^{4}X_{j}\) and define the function \(\nu_{A}(y)=-\frac{1}{2\pi}\ln\left(1-\left|y\right|^{2}\right)\) from \(B(0,1)\) to \((0,\infty)\). In what follows, we define the model problems \(M_{A}^{X}\) and \(M_{B}^{X}\).
**Proposition 3.6**.: _The \(3\times 3\) matrix-valued function \(M_{A}^{X}\) satisfies the following properties:_
_(1). \(M_{A}^{X}(\,\cdot\,,\,y):\mathbb{C}\setminus X\rightarrow\mathbb{C}^{3\times 3}\) is analytic for \(z\in\mathbb{C}\setminus X\)._
_(2). \(M_{A}^{X}(z,y)\) extends continuously to \(X\setminus\{0\}\) and satisfies the jump condition below:_
\[(M_{A}^{X}(z,y))_{+}=(M_{A}^{X}(z,y))_{-}v_{j}^{X_{A}}(z,y),\quad z\in X\setminus\{0\}.\]
_where the jump matrix \(v_{A}^{X}(z,y)\) is defined as following:_
\[\left(\begin{array}{ccc}1&0&0\\ \frac{\bar{y}}{1-|y|^{2}}z^{-2i\nu_{1}(y)}e^{\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)\quad\text{ if }z\in X_{1},\quad\left(\begin{array}{ccc}1&yz^{2i\nu_{1}(y)}e^{-\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\text{ if }z\in X_{2},\] \[\left(\begin{array}{ccc}1&0&0\\ -\bar{y}z^{-2i\nu_{1}}e^{\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)\text{ if }z\in X_{3},\quad\left(\begin{array}{ccc}1&-\frac{y}{1-|y|^{2}}z^{2i\nu_{1}}e^{-\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\text{ if }z\in X_{4},\]
_and \(z^{2i\nu_{1}(y)}=e^{2i\nu_{1}(y)\log_{0}(z)}\), where the branch cut runs along \(\mathbb{R}_{+}\)._
_(3). \(M_{A}^{X}(z,y)\to I\) as \(z\to\infty\)._
_(4). \(M_{A}^{X}(z,y)\to O(1)\) as \(z\to 0\)._
_For \(|y|<1\), the RH problem \(M_{A}^{X}\) satisfies the following expansion:_
\[M_{A}^{X}(y,z)=I+\frac{\left(M_{A}^{X}(y)\right)_{1}}{z}+O\left(\frac{1}{z^{2} }\right),\]
_where_
\[\left(M_{A}^{X}(y)\right)_{1}=\left(\begin{array}{ccc}0&\beta_{12}^{A}&0\\ \beta_{21}^{A}&0&0\\ 0&0&0\end{array}\right),\quad y\in B(0,1),\]
_and_
\[\beta_{12}^{A}=\frac{\sqrt{2\pi}e^{-\frac{\pi i}{4}}e^{-\frac{5\pi\nu_{1}}{2}}}{\bar{y}\Gamma(-i\nu_{1})},\quad\beta_{21}^{A}=\frac{\sqrt{2\pi}e^{\frac{\pi i}{4}}e^{\frac{3\pi\nu_{1}}{2}}}{y\Gamma(i\nu_{1})}.\]
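As a consistency check, using the standard identity \(\left|\Gamma(i\nu_{1})\right|^{2}=\Gamma(i\nu_{1})\Gamma(-i\nu_{1})=\frac{\pi}{\nu_{1}\sinh(\pi\nu_{1})}\) together with \(1-|y|^{2}=e^{-2\pi\nu_{1}}\), these coefficients satisfy the familiar relation
\[\beta_{12}^{A}\beta_{21}^{A}=\frac{2\pi e^{-\pi\nu_{1}}}{|y|^{2}\left|\Gamma(i\nu_{1})\right|^{2}}=\frac{2\nu_{1}e^{-\pi\nu_{1}}\sinh(\pi\nu_{1})}{1-e^{-2\pi\nu_{1}}}=\nu_{1}.\]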
Figure 7. The contour.
**Proposition 3.7**.: _On the other hand, the \(3\times 3\) matrix valued function \(M_{B}^{X}\) satisfies the following properties:_
_(1). \(M_{B}^{X}(\cdot,y):\mathbb{C}\setminus X\to\mathbb{C}^{3\times 3}\) is analytic for \(z\in\mathbb{C}\setminus X\)._
_(2). \(M_{B}^{X}(z,y)\) extends continuously to \(X\setminus\{0\}\) and satisfies the jump condition below:_
\[(M_{B}^{X}(z,y))_{+}=(M_{B}^{X}(z,y))_{-}v_{j}^{X_{B}}(z,y),\quad z\in X\setminus\{0\}.\]
_where the jump matrix \(v_{B}^{X}(z,y)\) is defined as following:_
\[\left(\begin{array}{ccc}1&\bar{y}z^{-2i\nu_{4}(y)}e^{\frac{iz^{ 2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\quad\text{if }z\in X_{1},\quad\left(\begin{array}{ccc}1& 0&0\\ \frac{y}{1-|y|^{2}}z^{2i\nu_{4}(y)}e^{-\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)\text{ if }z\in X_{2},\] \[\left(\begin{array}{ccc}1&-\frac{\bar{y}}{1-|y|^{2}}z^{-2i\nu_ {4}}e^{\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\text{ if }z\in X_{3},\quad\left(\begin{array}{ccc}1& 0&0\\ -yz^{2i\nu_{4}}e^{-\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)\text{ if }z\in X_{4},\]
_and \(z^{2i\nu_{4}(y)}=e^{2i\nu_{4}(y)\log_{\pi}(z)}\), where the branch cut runs along \(\mathbb{R}_{-}\)._
_(3). \(M_{B}^{X}(z,y)\to I\) as \(z\to\infty\)._
_(4). \(M_{B}^{X}(z,y)\to O(1)\) as \(z\to 0\)._
_For \(|y|<1\), the RH problem \(M_{B}^{X}\) satisfies the following expansion:_
\[M_{B}^{X}(y,z)=I+\frac{\big{(}M_{B}^{X}(y)\big{)}_{1}}{z}+O\left(\frac{1}{z^{2 }}\right),\]
_where_
\[\big{(}M_{B}^{X}(y)\big{)}_{1}=\left(\begin{array}{ccc}0&\beta_{12}^{B}&0\\ \beta_{21}^{B}&0&0\\ 0&0&0\end{array}\right),\quad y\in B(0,1),\]
_and_
\[\beta_{12}^{B}=\frac{\sqrt{2\pi}e^{\frac{\pi i}{4}}e^{-\frac{\pi\nu_{4}}{2}}}{y\Gamma(i\nu_{4})},\quad\beta_{21}^{B}=\frac{\sqrt{2\pi}e^{-\frac{\pi i}{4}}e^{-\frac{\pi\nu_{4}}{2}}}{\bar{y}\Gamma(-i\nu_{4})}.\]
**Remark 3.8**.: _The model problems \(M_{A}^{X}\) and \(M_{B}^{X}\) are related by a mirror reflection, since the orientations of the corresponding parts of the original RH problem 1 are opposite, as are the associated \(\delta\) functions._
Furthermore, we reduce the model problem to a parabolic cylinder problem so that we can obtain the leading-order term of the long-time asymptotics.
First, add the jump contour \(\mathbb{R}\) with jump matrix \(I\) and reverse the orientation of \(X_{2}\) and \(X_{3}\), as shown in Fig. 8.
Denote
\[z^{i\nu_{1}\tilde{\sigma}_{3}}=\left(\begin{array}{ccc}z^{i\nu_{1}\sigma_{3 }}&0\\ 0&1\end{array}\right)=\left(\begin{array}{ccc}z^{i\nu_{1}}&0&0\\ 0&z^{-i\nu_{1}}&0\\ 0&0&1\end{array}\right),\quad e^{\frac{iz^{2}}{4}\tilde{\sigma}_{3}}=\left( \begin{array}{ccc}e^{\frac{iz^{2}}{4}\sigma_{3}}&0\\ 0&1\end{array}\right)=\left(\begin{array}{ccc}e^{\frac{iz^{2}}{4}}&0&0\\ 0&e^{-\frac{iz^{2}}{4}}&0\\ 0&0&1\end{array}\right),\]
and introduce the transform matrix \(\mathcal{H}^{A}\) as follows
\[\mathcal{H}^{A}(y,z)=\begin{cases}\left(\begin{array}{ccc}1&0&0\\ \frac{\bar{y}}{1-|y|^{2}}z^{-2i\nu_{1}(y)}e^{\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)z^{i\nu_{1}\tilde{\sigma}_{3}},&z\in\Omega_{1},\\ z^{i\nu_{1}\tilde{\sigma}_{3}},&z\in\Omega_{2},\\ \left(\begin{array}{ccc}1&-yz^{2i\nu_{1}(y)}e^{-\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)z^{i\nu_{1}\tilde{\sigma}_{3}},&z\in\Omega_{3},\\ \left(\begin{array}{ccc}1&0&0\\ -\bar{y}z^{-2i\nu_{1}}e^{\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)z^{i\nu_{1}\tilde{\sigma}_{3}},&z\in\Omega_{4},\\ z^{i\nu_{1}\tilde{\sigma}_{3}},&z\in\Omega_{5},\\ \left(\begin{array}{ccc}1&\frac{y}{1-|y|^{2}}z^{2i\nu_{1}}e^{-\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)z^{i\nu_{1}\tilde{\sigma}_{3}},&z\in\Omega_{6}.\end{cases}\]
Further, take
\[P^{A}(y,z)=M^{X}_{A}(y,z)\mathcal{H}^{A}(y,z).\]
By direct computation, one has
\[P^{A}_{+}(y,z)=P^{A}_{-}(y,z)v^{P^{A}}_{j}(y,z),\quad v^{P^{A}}_{ j}(y,z)=(\mathcal{H}^{A}_{-}(y,z))^{-1}v^{X_{A}}_{j}(y,z)\mathcal{H}^{A}_{+}(y,z).\]
Figure 8. The new contour when adding jump contour \(\mathbb{R}\) with jump function \(I\).
More precisely, we have
\[v_{j}^{P^{A}}(y,z)=\begin{cases}I,&z\in X_{1},\\ I,&z\in X_{2},\\ I,&z\in X_{3},\\ I,&z\in X_{4},\\ e^{-\frac{iz^{2}}{4}\operatorname{ad}\tilde{\sigma}_{3}}\left(\begin{array}{ccc}1&-y&0\\ \bar{y}&1-|y|^{2}&0\\ 0&0&1\end{array}\right),&z\in X_{5},\\ e^{-\frac{iz^{2}}{4}\operatorname{ad}\tilde{\sigma}_{3}}\left(\begin{array}{ccc}1&-y&0\\ \bar{y}&1-|y|^{2}&0\\ 0&0&1\end{array}\right),&z\in X_{6}.\end{cases}\]
Indeed, the function \(z^{2i\nu_{1}(y)}=e^{2i\nu_{1}(y)\log_{0}(z)}\) has a jump across \(z\in\mathbb{R}_{+}\); for instance, when \(z\in X_{6}\),
\[v_{6}^{P^{A}} =(\mathcal{H}_{-}^{A}(y,z))^{-1}v_{6}^{X_{A}}(y,z)\mathcal{H}_{+} ^{A}(y,z)=(\mathcal{H}_{6,-}^{A}(y,z))^{-1}\mathcal{H}_{1,+}^{A}(y,z)\] \[=z_{-}^{-i\nu_{1}\bar{\sigma}_{3}}\left(\begin{array}{ccc}1&- \frac{y}{1-|y|^{2}}z_{-}^{2i\nu_{1}}e^{-\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}\frac{y}{1-|y|^{2}}z_{+}^{-2i \nu_{1}(y)}e^{\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)z_{+}^{i\nu_{1}\bar{\sigma}_{3}}\] \[=z_{-}^{-i\nu_{1}\bar{\sigma}_{3}}\left(\begin{array}{ccc}1- \frac{|y|^{2}}{(1-|y|^{2})^{2}}z_{-}^{2i\nu_{1}}z_{+}^{-2i\nu_{1}}&-\frac{y}{ 1-|y|^{2}}z_{-}^{2i\nu_{1}}e^{-\frac{iz^{2}}{2}}&0\\ \frac{y}{1-|y|^{2}}z_{+}^{-2i\nu_{1}}e^{\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)z_{+}^{i\nu_{1}\bar{\sigma}_{3}}\] \[=e^{-\frac{iz^{2}}{4}\operatorname{ad}\bar{\sigma}_{3}}\left( \begin{array}{ccc}(1-|y|^{2})z_{-}^{-i\nu_{1}}z_{+}^{i\nu_{1}}&-\frac{y}{1-| y|^{2}}z_{-}^{i\nu_{1}}z_{+}^{-i\nu_{1}}&0\\ \frac{y}{1-|y|^{2}}z_{-}^{i\nu_{1}}z_{+}^{-i\nu_{1}}&z_{-}^{2i\nu_{1}}&0\\ 0&0&1\end{array}\right)\] \[=e^{-\frac{iz^{2}}{4}\operatorname{ad}\bar{\sigma}_{3}}\left( \begin{array}{ccc}1&-y&0\\ \bar{y}&1-|y|^{2}&0\\ 0&0&1\end{array}\right).\]
Since
\[z_{-}^{2i\nu_{1}}z_{+}^{-2i\nu_{1}}=e^{2i\nu_{1}(\log_{0}(z_{-})-\log_{0}(z_{ +}))}=e^{2i\nu_{1}(\ln|z|+2\pi i-\ln|z|)}=e^{-4\pi\nu_{1}}=(1-|y|^{2})^{2},\]
we get an RH problem with jump contour on \(\mathbb{R}\); in particular,
\[\begin{cases}P_{+}^{A}(y,z)=P_{-}^{A}(y,z)e^{-\frac{iz^{2}}{4} \operatorname{ad}\bar{\sigma}_{3}}V^{A}(y,z),&z\in\mathbb{R},\\ P^{A}\to z^{i\nu_{1}\bar{\sigma}_{3}}&\text{as}\quad z\to\infty,\end{cases}\]
where
\[V^{A}=\left(\begin{array}{ccc}1&-y&0\\ \bar{y}&1-|y|^{2}&0\\ 0&0&1\end{array}\right).\]
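Note that \(V^{A}\) is unimodular,
\[\det V^{A}=1\cdot(1-|y|^{2})-(-y)\cdot\bar{y}=1,\]
so \(\det P^{A}\) has no jump across \(\mathbb{R}\).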
Since \(\mathcal{H}^{A}=\left(I+O\left(\frac{1}{z}\right)\right)z^{i\nu_{1}\bar{\sigma}_{3}}\), following the procedure in [33], one has
\[\Psi=P^{A}e^{-\frac{iz^{2}\bar{\sigma}_{3}}{4}}=\hat{\Psi}z^{i\nu\bar{\sigma}_ {3}}e^{-\frac{iz^{2}\bar{\sigma}_{3}}{4}},\]
and
\[P^{A}=\left(I+\frac{\left(M_{A}^{X}(y)\right)_{1}}{z}+O\left(\frac{1}{z^{2}} \right)\right)z^{i\nu_{1}\bar{\sigma}_{3}},\]
then we have
\[\Psi_{+}(z)=\Psi_{-}(z)V^{A}(y),\]
and further by direct computation
\[\left(\partial_{z}\Psi+\frac{iz\tilde{\sigma}_{3}}{2}\Psi\right)_{+}=\left( \partial_{z}\Psi+\frac{iz\tilde{\sigma}_{3}}{2}\Psi\right)_{-}V^{A}(y).\]
It is claimed that \(\left(\partial_{z}\Psi+\frac{iz\tilde{\sigma}_{3}}{2}\Psi\right)\Psi^{-1}\) has no jump along \(\mathbb{R}\), so it is an entire function on \(\mathbb{C}\); then we have
\[\left(\partial_{z}\Psi+\frac{iz\tilde{\sigma}_{3}}{2}\Psi\right)\Psi^{-1}= \frac{i}{2}\left[\tilde{\sigma}_{3},\left(M_{A}^{X}\right)_{1}\right]+O\left( \frac{1}{z}\right).\]
Let
\[\frac{i}{2}\left[\tilde{\sigma}_{3},\left(M_{A}^{X}\right)_{1}\right]:=\left( \begin{array}{ccc}0&\tilde{\beta}_{12}&0\\ \tilde{\beta}_{21}&0&0\\ 0&0&0\end{array}\right),\]
where \(\beta_{12}=-i\tilde{\beta}_{12}\) and \(\beta_{21}=i\tilde{\beta}_{21}\).
Rewrite the above ordinary differential equation as
\[\frac{\partial\Psi_{11}}{\partial z}+\frac{1}{2}iz\Psi_{11}=\tilde{\beta}_{1 2}\Psi_{21},\]
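The second-order equations below use the companion \((2,1)\) relation of the same first-order system (a standard step, written out here for clarity):
\[\frac{\partial\Psi_{21}}{\partial z}-\frac{1}{2}iz\Psi_{21}=\tilde{\beta}_{21}\Psi_{11}.\]
Differentiating the first relation and substituting this one eliminates \(\Psi_{21}\);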
then we have
\[\frac{\partial^{2}\Psi_{11}}{\partial z^{2}}+\left(\frac{1}{2}i+\frac{1}{4}z^ {2}-\tilde{\beta}_{12}\tilde{\beta}_{21}\right)\Psi_{11}=0,\]
and
\[\frac{\partial^{2}\Psi_{22}}{\partial z^{2}}+\left(-\frac{1}{2}i+\frac{1}{4}z ^{2}-\tilde{\beta}_{12}\tilde{\beta}_{21}\right)\Psi_{22}=0.\]
Now, focusing on the upper half-plane and writing \(z=e^{\frac{3}{4}\pi i}\xi\), \(\Psi_{11}^{+}(z)=\Psi_{11}^{+}\left(e^{\frac{3}{4}\pi i}\xi\right)=g(\xi)\), we obtain
\[\frac{d^{2}g(\xi)}{d\xi^{2}}+\left(\frac{1}{2}-\frac{1}{4}\xi^{2}+a\right)g( \xi)=0,\]
which is the Weber equation; by standard analysis,
\[g(\xi)=c_{1}D_{a}(\xi)+c_{2}D_{a}(-\xi),\]
where
\[D_{a}(\xi)=\left\{\begin{array}{l}\xi^{a}e^{-\frac{1}{4}\xi^{2}}\left(1+O\left(\frac{1}{\xi^{2}}\right)\right),\quad|\arg\xi|<\frac{3}{4}\pi,\\ \xi^{a}e^{-\frac{1}{4}\xi^{2}}\left(1+O\left(\frac{1}{\xi^{2}}\right)\right)-\frac{\sqrt{2\pi}}{\Gamma(-a)}e^{a\pi i}\xi^{-a-1}e^{\frac{1}{4}\xi^{2}}\left(1+O\left(\frac{1}{\xi^{2}}\right)\right),\quad\frac{\pi}{4}<\arg\xi<\frac{5\pi}{4},\\ \xi^{a}e^{-\frac{1}{4}\xi^{2}}\left(1+O\left(\frac{1}{\xi^{2}}\right)\right)-\frac{\sqrt{2\pi}}{\Gamma(-a)}e^{-a\pi i}\xi^{-a-1}e^{\frac{1}{4}\xi^{2}}\left(1+O\left(\frac{1}{\xi^{2}}\right)\right),\quad-\frac{5\pi}{4}<\arg\xi<-\frac{\pi}{4},\end{array}\right.\]
where \(D_{a}(z)\) denotes the parabolic cylinder function.
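Recall also two standard facts about parabolic cylinder functions that are used repeatedly below: \(D_{a}\) solves the Weber equation above, and the Wronskian of the pair \(D_{a}(\xi)\), \(D_{a}(-\xi)\) is explicit,
\[\frac{d^{2}D_{a}(\xi)}{d\xi^{2}}+\left(a+\frac{1}{2}-\frac{\xi^{2}}{4}\right)D_{a}(\xi)=0,\qquad W\left(D_{a}(\xi),D_{a}(-\xi)\right)=\frac{\sqrt{2\pi}}{\Gamma(-a)}.\]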
Combining the behavior of \(P^{A}\) as \(z\to\infty\) above leads to
\[a=i\nu_{1},\quad c_{1}=(e^{\frac{3\pi i}{4}})^{i\nu_{1}}=e^{-\frac{3\pi\nu_{1} }{4}},\quad c_{2}=0,\]
in other words,
\[\Psi_{11}^{+}(z)=e^{-\frac{3}{4}\pi\nu_{1}}D_{a}\left(e^{-\frac{3}{4}\pi i}z\right),\quad a=i\nu_{1}.\]
For \(\operatorname{Im}z<0\), by the same procedure, let \(z=e^{-\frac{1}{4}\pi i}\xi,\quad\Psi_{11}^{-}(z)=\Psi_{11}^{-}\left(e^{-\frac{1}{4}\pi i}\xi\right)=g(\xi)\), and one obtains
\[a=i\nu_{1},\quad c_{1}=(e^{\frac{7\pi i}{4}})^{i\nu_{1}}=e^{-\frac{7\pi\nu_{1} }{4}},\quad c_{2}=0.\]
On the other hand, when \(\operatorname{Im}z>0\), let \(z=e^{\frac{1}{4}\pi i}\xi,\quad\Psi_{22}^{+}(z)=\Psi_{22}^{+}\left(e^{\frac{1}{4}\pi i}\xi\right)=g(\xi)\), and we have
\[a=-i\nu_{1},\quad c_{1}=(e^{\frac{\pi i}{4}})^{-i\nu_{1}}=e^{\frac{\pi\nu_{1}} {4}},\quad c_{2}=0.\]
When \(\operatorname{Im}z<0\), let \(z=e^{\frac{3}{4}\pi i}\xi,\quad\Psi_{22}^{-}(z)=\Psi_{22}^{-}\left(e^{\frac{3} {4}\pi i}\xi\right)=g(\xi)\) and we have
\[a=-i\nu_{1},\quad c_{1}=(e^{\frac{5\pi i}{4}})^{-i\nu_{1}}=e^{\frac{5\pi\nu_{1}}{4}},\quad c_{2}=0.\]
Notice that for \(\operatorname{Im}z<0\) we choose \(e^{\frac{7}{4}\pi i}\) and \(e^{\frac{5\pi i}{4}}\), since the branch cut runs from \(0\) to \(2\pi\).
In conclusion, we have
\[\Psi_{11}(q,z)=\begin{cases}e^{\frac{-3\pi\nu_{1}}{4}}D_{i\nu_{1}}\left(e^{- \frac{3\pi i}{4}}z\right),&\operatorname{Im}z>0,\\ e^{\frac{-7\pi\nu_{1}}{4}}D_{i\nu_{1}}\left(e^{\frac{\pi i}{4}}z\right),& \operatorname{Im}z<0,\end{cases}\]
and
\[\Psi_{22}(q,z)=\begin{cases}e^{\frac{\pi\nu_{1}}{4}}D_{-i\nu_{1}}\left(e^{-\frac{\pi i}{4}}z\right),&\operatorname{Im}z>0,\\ e^{\frac{5\pi\nu_{1}}{4}}D_{-i\nu_{1}}\left(e^{\frac{3\pi i}{4}}z\right),&\operatorname{Im}z<0.\end{cases}\]
Moreover, we have
\[\Psi_{21}^{+}(z)=e^{-\frac{3}{4}\pi\nu_{1}}\left(\tilde{\beta}_{12}\right)^{- 1}\left[\partial_{z}D_{a}\left(e^{-\frac{3\pi i}{4}}z\right)+\frac{iz}{2}D_{a} \left(e^{-\frac{3\pi i}{4}}z\right)\right],\]
\[\Psi_{12}^{+}(z)=e^{\frac{1}{4}\pi\nu_{1}}\left(\tilde{\beta}_{21}\right)^{-1}\left[\partial_{z}D_{-a}\left(e^{-\frac{\pi i}{4}}z\right)-\frac{iz}{2}D_{-a}\left(e^{-\frac{\pi i}{4}}z\right)\right],\]
and
\[\Psi_{21}^{-}(z)=e^{-\frac{7}{4}\pi\nu_{1}}\left(\tilde{\beta}_{12}\right)^{-1}\left[\partial_{z}D_{a}\left(e^{\frac{\pi i}{4}}z\right)+\frac{iz}{2}D_{a}\left(e^{\frac{\pi i}{4}}z\right)\right],\]
\[\Psi_{12}^{-}(z)=e^{\frac{5}{4}\pi\nu_{1}}\left(\tilde{\beta}_{21}\right)^{-1}\left[\partial_{z}D_{-a}\left(e^{\frac{3\pi i}{4}}z\right)-\frac{iz}{2}D_{-a}\left(e^{\frac{3\pi i}{4}}z\right)\right].\]
Since \(\left(\Psi_{-}\right)^{-1}\Psi_{+}=V^{A}(y)\), we can obtain
\[\bar{y}=\Psi_{11}^{-}\Psi_{21}^{+}-\Psi_{21}^{-}\Psi_{11}^{+}=\frac{e^{- \frac{5\pi\nu_{1}}{2}}}{\tilde{\beta}_{12}}W\left(D_{i\nu_{1}}(e^{\frac{\pi i} {4}}z),D_{i\nu_{1}}(e^{-\frac{3\pi i}{4}}z)\right)=\frac{\sqrt{2\pi}e^{\frac{ \pi i}{4}}e^{-\frac{5\pi\nu_{1}}{2}}}{\tilde{\beta}_{12}\Gamma(-i\nu_{1})},\]
and
\[-y=\Psi_{22}^{-}\Psi_{12}^{+}-\Psi_{12}^{-}\Psi_{22}^{+}=\frac{e^{\frac{3\pi\nu_{1}}{2}}}{\tilde{\beta}_{21}}W\left(D_{-i\nu_{1}}(e^{\frac{3\pi i}{4}}z),D_{-i\nu_{1}}(e^{-\frac{\pi i}{4}}z)\right)=\frac{\sqrt{2\pi}e^{\frac{3\pi i}{4}}e^{\frac{3\pi\nu_{1}}{2}}}{\tilde{\beta}_{21}\Gamma(i\nu_{1})}.\]
Now, we have
\[\beta_{12}^{A}=-i\tilde{\beta}_{12}=\frac{\sqrt{2\pi}e^{-\frac{\pi i}{4}}e^{- \frac{5\pi\nu_{1}}{2}}}{\bar{y}\Gamma(-i\nu_{1})},\quad\beta_{21}^{A}=i\tilde{ \beta}_{21}=\frac{\sqrt{2\pi}e^{\frac{\pi i}{4}}e^{\frac{3\pi\nu_{1}}{2}}}{y \Gamma(i\nu_{1})}.\]
Again, introduce the matrix
\[\mathcal{H}^{B}(y,z)=\begin{cases}\left(\begin{array}{ccc}1&\bar{y}z^{-2i\nu_{4}(y)}e^{\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)z^{-i\nu_{4}\tilde{\sigma}_{3}},&z\in\Omega_{1},\\ z^{-i\nu_{4}\tilde{\sigma}_{3}},&z\in\Omega_{2},\\ \left(\begin{array}{ccc}1&0&0\\ -\frac{y}{1-|y|^{2}}z^{2i\nu_{4}(y)}e^{-\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)z^{-i\nu_{4}\tilde{\sigma}_{3}},&z\in\Omega_{3},\\ \left(\begin{array}{ccc}1&-\frac{\bar{y}}{1-|y|^{2}}z^{-2i\nu_{4}}e^{\frac{iz^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)z^{-i\nu_{4}\tilde{\sigma}_{3}},&z\in\Omega_{4},\\ z^{-i\nu_{4}\tilde{\sigma}_{3}},&z\in\Omega_{5},\\ \left(\begin{array}{ccc}1&0&0\\ yz^{2i\nu_{4}}e^{-\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right)z^{-i\nu_{4}\tilde{\sigma}_{3}},&z\in\Omega_{6}.\end{cases}\]
and take the transformation
\[P^{B}(y,z)=M_{B}^{X}(y,z)\mathcal{H}^{B}(y,z).\]
The direct calculation yields
\[P_{+}^{B}(y,z)=P_{-}^{B}(y,z)v_{j}^{P^{B}}(y,z),\quad v_{j}^{P^{B}}(y,z)=( \mathcal{H}_{-}^{B}(y,z))^{-1}v_{j}^{X_{B}}(y,z)\mathcal{H}_{+}^{B}(y,z).\]
In particular, we have
\[v_{j}^{P^{B}}(y,z)=\begin{cases}I,&z\in X_{1},\\ I,&z\in X_{2},\\ I,&z\in X_{3},\\ I,&z\in X_{4},\\ e^{\frac{iz^{2}}{4}\operatorname{ad}\hat{\sigma}_{3}}\left(\begin{array}{ccc}1&\bar{y}&0\\ -y&1-|y|^{2}&0\\ 0&0&1\end{array}\right),&z\in X_{5},\\ e^{\frac{iz^{2}}{4}\operatorname{ad}\hat{\sigma}_{3}}\left(\begin{array}{ccc}1&\bar{y}&0\\ -y&1-|y|^{2}&0\\ 0&0&1\end{array}\right),&z\in X_{6}.\end{cases}\]
Like the situation for \(P^{A}\), there exists a branch cut on \(\mathbb{R}_{-}\) for \(z^{2i\nu_{4}}=e^{2i\nu_{4}\log_{\pi}z}\), so that the jump matrix on \(X_{5}\) is as follows
\[V_{5}^{B}(y,z) =z_{-}^{i\nu_{4}\tilde{\sigma}_{3}}\left(\begin{array}{ccc}1& \frac{\bar{y}}{1-|y|^{2}}z_{-}^{-2i\nu_{4}}e^{\frac{i\mu^{2}}{2}}&0\\ 0&1&0\\ 0&0&1\\ \end{array}\right)\left(\begin{array}{ccc}1&0&0\\ -\frac{y}{1-|y|^{2}}z_{+}^{2i\nu_{4}}e^{-\frac{i\mu^{2}}{2}}&1&0\\ 0&0&1\\ \end{array}\right)z_{+}^{-i\nu_{4}\tilde{\sigma}_{3}},\] \[=z_{-}^{i\nu_{4}\tilde{\sigma}_{3}}\left(\begin{array}{ccc}1- \frac{|y|^{2}}{(1-|y|^{2})^{2}}z_{-}^{-2i\nu_{4}}z_{+}^{2i\nu_{4}}&\frac{\bar {y}}{1-|y|^{2}}z_{-}^{-2i\nu_{4}}e^{\frac{i\mu^{2}}{2}}&0\\ -\frac{y}{1-|y|^{2}}z_{+}^{2i\nu_{4}}e^{-\frac{i\mu^{2}}{2}}&1&0\\ 0&0&1\\ \end{array}\right)z_{+}^{-i\nu_{4}\tilde{\sigma}_{3}},\] \[=e^{\frac{i\mu^{2}}{4}\operatorname{ad}\tilde{\sigma}_{3}}\left( \begin{array}{ccc}(1-|y|^{2})z_{-}^{i\nu_{4}}z_{+}^{-i\nu_{4}}&\frac{\bar{y} }{1-|y|^{2}}z_{-}^{-i\nu_{4}}z_{+}^{i\nu_{4}}&0\\ -\frac{y}{1-|y|^{2}}z_{-}^{-i\nu_{4}}z_{+}^{i\nu_{4}}&z_{-}^{-i\nu_{4}}z_{+}^{i \nu_{4}}&0\\ 0&0&1\\ \end{array}\right),\] \[=e^{\frac{i\mu^{2}}{4}\operatorname{ad}\tilde{\sigma}_{3}}\left( \begin{array}{ccc}1&\bar{y}&0\\ -y&1-|y|^{2}&0\\ 0&0&1\\ \end{array}\right),\]
where
\[z_{-}^{-2i\nu_{4}}z_{+}^{2i\nu_{4}}=e^{2i\nu_{4}(-\log_{\pi}(z_{-})+\log_{\pi}(z_{+}))}=e^{-4\pi\nu_{4}}=(1-|r_{2}(-k_{0})|^{2})^{2}.\]
Thus the RH problem is
\[\begin{cases}P_{+}^{B}(y,z)=P_{-}^{B}(y,z)e^{\frac{i\mu^{2}}{4}\operatorname{ ad}\tilde{\sigma}_{3}}V^{B}(y,z),&z\in\mathbb{R},\\ P^{B}\to z^{-i\nu_{4}\tilde{\sigma}_{3}}&\text{as}&z\to\infty,\end{cases}\]
where
\[V^{B}=\left(\begin{array}{ccc}1&\bar{y}&0\\ -y&1-|y|^{2}&0\\ 0&0&1\\ \end{array}\right).\]
Since \(\mathcal{H}^{B}=\left(I+O\left(\frac{1}{z}\right)\right)z^{-i\nu_{4}\tilde{\sigma}_{3}}\), following the procedure in [33], we have
\[\Psi=P^{B}e^{\frac{i\epsilon^{2}\tilde{\sigma}_{3}}{4}}=\hat{\Psi}z^{-i\nu_{4} \tilde{\sigma}_{3}}e^{\frac{i\epsilon^{2}\tilde{\sigma}_{3}}{4}},\]
and
\[P^{B}=\left(I+\frac{\left(M_{B}^{X}(y)\right)_{1}}{z}+O\left(\frac{1}{z^{2}}\right)\right)z^{-i\nu_{4}\tilde{\sigma}_{3}},\]
then we have
\[\Psi_{+}(z)=\Psi_{-}(z)V^{B}(y),\]
and by direct computation
\[\left(\partial_{z}\Psi-\frac{iz\tilde{\sigma}_{3}}{2}\Psi\right)_{+}=\left( \partial_{z}\Psi-\frac{iz\tilde{\sigma}_{3}}{2}\Psi\right)_{-}V^{B}(y).\]
We claim that \(\left(\partial_{z}\Psi-\frac{iz\tilde{\sigma}_{3}}{2}\Psi\right)\Psi^{-1}\) has no jump along \(\mathbb{R}\), so it is an entire function on \(\mathbb{C}\); then we have
\[\left(\partial_{z}\Psi-\frac{iz\tilde{\sigma}_{3}}{2}\Psi\right) \Psi^{-1} =(\partial_{z}\hat{\Psi})\hat{\Psi}^{-1}+\hat{\Psi}(-i\nu_{4} \tilde{\sigma}_{3}z^{-1})\hat{\Psi}^{-1}+\hat{\Psi}\left(\frac{iz}{2}\tilde{ \sigma}_{3}\right)\hat{\Psi}^{-1}-\left(\frac{iz}{2}\tilde{\sigma}_{3}\hat{ \Psi}\right)\hat{\Psi}^{-1}\] \[=-\frac{iz}{2}\left[\tilde{\sigma}_{3},\hat{\Psi}\right]\hat{\Psi }^{-1}+O\left(\frac{1}{z}\right)\] \[=-\frac{i}{2}\left[\tilde{\sigma}_{3},\left(M_{B}^{X}\right)_{1} \right]+O\left(\frac{1}{z}\right).\]
Let
\[-\frac{i}{2}\left[\tilde{\sigma}_{3},\left(M_{B}^{X}\right)_{1}\right]:= \left(\begin{array}{ccc}0&\tilde{\beta}_{12}&0\\ \tilde{\beta}_{21}&0&0\\ 0&0&0\end{array}\right),\]
where \(\beta_{12}^{B}=i\tilde{\beta}_{12}\) and \(\beta_{21}^{B}=-i\tilde{\beta}_{21}\).
Rewrite the above ODE as
\[\frac{\partial\Psi_{11}}{\partial z}-\frac{1}{2}iz\Psi_{11}=\tilde{\beta}_{12 }\Psi_{21}.\]
Then we have
\[\frac{\partial^{2}\Psi_{11}}{\partial z^{2}}+\left(-\frac{1}{2}i+\frac{1}{4}z ^{2}-\tilde{\beta}_{12}\tilde{\beta}_{21}\right)\Psi_{11}=0,\]
and
\[\frac{\partial^{2}\Psi_{22}}{\partial z^{2}}+\left(\frac{1}{2}i+\frac{1}{4}z ^{2}-\tilde{\beta}_{12}\tilde{\beta}_{21}\right)\Psi_{22}=0.\]
Now, focusing on the upper half-plane and writing \(z=e^{\frac{1}{4}\pi i}\xi\), \(\Psi_{11}^{+}(z)=\Psi_{11}^{+}\left(e^{\frac{1}{4}\pi i}\xi\right)=g(\xi)\), we obtain
\[\frac{d^{2}g(\xi)}{d\xi^{2}}+\left(\frac{1}{2}-\frac{1}{4}\xi^{2}+a\right)g( \xi)=0,\]
which is the Weber equation; by standard analysis,
\[g(\xi)=c_{1}D_{a}(\xi)+c_{2}D_{a}(-\xi).\]
Combining the behavior of \(P^{B}\) as \(z\rightarrow\infty\) above, we have
\[a=-i\nu_{4},\quad c_{1}=(e^{\frac{\pi i}{4}})^{-i\nu_{4}}=e^{\frac{\pi\nu_{4}}{4}},\quad c_{2}=0,\]
in other words,
\[\Psi_{11}^{+}(z)=e^{\frac{1}{4}\pi\nu_{4}}D_{a}\left(e^{-\frac{1}{4}\pi i}z\right),\quad a=-i\nu_{4}.\]
For \(\operatorname{Im}z<0\), by the same procedure, let \(z=e^{-\frac{3}{4}\pi i}\xi,\quad\Psi_{11}^{-}(z)=\Psi_{11}^{-}\left(e^{-\frac{3}{4}\pi i}\xi\right)=g(\xi)\), and we get
\[a=-i\nu_{4},\quad c_{1}=(e^{-\frac{3\pi i}{4}})^{-i\nu_{4}}=e^{-\frac{3\pi\nu_{4}}{4}},\quad c_{2}=0.\]
On the other hand, when \(\operatorname{Im}z>0\), let \(z=e^{\frac{3}{4}\pi i}\xi,\Psi_{22}^{+}(z)=\Psi_{22}^{+}\left(e^{\frac{3}{4}\pi i}\xi\right)=g(\xi)\), and we have
\[a=i\nu_{4},\quad c_{1}=(e^{\frac{3\pi i}{4}})^{i\nu_{4}}=e^{-\frac{3\pi\nu_{4}}{4}},\quad c_{2}=0.\]
When \(\operatorname{Im}z<0\), let \(z=e^{-\frac{1}{4}\pi i}\xi,\quad\Psi_{22}^{-}(z)=\Psi_{22}^{-}\left(e^{-\frac{1}{4}\pi i}\xi\right)=g(\xi)\), and we have
\[a=i\nu_{4},\quad c_{1}=(e^{-\frac{\pi i}{4}})^{i\nu_{4}}=e^{\frac{\pi\nu_{4}}{4}},\quad c_{2}=0.\]
Notice that for \(\operatorname{Im}z<0\) we choose \(e^{-\frac{3}{4}\pi i}\) and \(e^{-\frac{\pi i}{4}}\), since the branch cut runs from \(-\pi\) to \(\pi\).
In conclusion, we have
\[\Psi_{11}(q,z)=\begin{cases}e^{\frac{\pi\nu_{4}}{4}}D_{-i\nu_{4}}\left(e^{-\frac{\pi i}{4}}z\right),&\text{Im}\,z>0,\\ e^{-\frac{3\pi\nu_{4}}{4}}D_{-i\nu_{4}}\left(e^{\frac{3\pi i}{4}}z\right),&\text{Im}\,z<0,\end{cases}\]
and
\[\Psi_{22}(q,z)=\begin{cases}e^{\frac{-3\pi\nu_{4}}{4}}D_{i\nu_{4}}\left(e^{- \frac{3\pi i}{4}}z\right),&\text{Im}\,z>0,\\ e^{\frac{\pi\nu_{4}}{4}}D_{i\nu_{4}}\left(e^{\frac{\pi i}{4}}z\right),&\text{Im }\,z<0.\end{cases}\]
Moreover, we further have
\[\Psi_{12}^{+}(z) =e^{-\frac{3}{4}\pi\nu_{4}}\left(\tilde{\beta}_{21}\right)^{-1}\left[\partial_{z}D_{i\nu_{4}}\left(e^{-\frac{3\pi i}{4}}z\right)+\frac{iz}{2}D_{i\nu_{4}}\left(e^{-\frac{3\pi i}{4}}z\right)\right],\] \[\Psi_{21}^{+}(z) =e^{\frac{1}{4}\pi\nu_{4}}\left(\tilde{\beta}_{12}\right)^{-1}\left[\partial_{z}D_{-i\nu_{4}}\left(e^{-\frac{\pi i}{4}}z\right)-\frac{iz}{2}D_{-i\nu_{4}}\left(e^{-\frac{\pi i}{4}}z\right)\right],\]
and
\[\Psi_{12}^{-}(z) =e^{\frac{1}{4}\pi\nu_{4}}\left(\tilde{\beta}_{21}\right)^{-1}\left[\partial_{z}D_{i\nu_{4}}\left(e^{\frac{\pi i}{4}}z\right)+\frac{iz}{2}D_{i\nu_{4}}\left(e^{\frac{\pi i}{4}}z\right)\right],\] \[\Psi_{21}^{-}(z) =e^{-\frac{3}{4}\pi\nu_{4}}\left(\tilde{\beta}_{12}\right)^{-1}\left[\partial_{z}D_{-i\nu_{4}}\left(e^{\frac{3\pi i}{4}}z\right)-\frac{iz}{2}D_{-i\nu_{4}}\left(e^{\frac{3\pi i}{4}}z\right)\right].\]
Since \(\left(\Psi_{-}\right)^{-1}\Psi_{+}=V^{B}(y)\), we can obtain
\[-y=\Psi_{11}^{-}\Psi_{21}^{+}-\Psi_{21}^{-}\Psi_{11}^{+}=\frac{e^{-\frac{\pi \nu_{4}}{2}}}{\tilde{\beta}_{12}}W\left(D_{-i\nu_{4}}(e^{\frac{3\pi i}{4}}z),D _{-i\nu_{4}}(e^{-\frac{\pi i}{4}}z)\right)=\frac{\sqrt{2\pi}e^{\frac{3\pi i}{4 }}e^{-\frac{\pi\nu_{4}}{2}}}{\tilde{\beta}_{12}\Gamma(i\nu_{4})},\]
and
\[\bar{y}=\Psi_{22}^{-}\Psi_{12}^{+}-\Psi_{12}^{-}\Psi_{22}^{+}=\frac{e^{-\frac{ \pi\nu_{4}}{2}}}{\tilde{\beta}_{21}}W\left(D_{i\nu_{4}}(e^{\frac{\pi i}{4}}z),D _{i\nu_{4}}(e^{-\frac{3\pi i}{4}}z)\right)=\frac{\sqrt{2\pi}e^{\frac{\pi i}{4 }}e^{-\frac{\pi\nu_{4}}{2}}}{\tilde{\beta}_{21}\Gamma(-i\nu_{4})}.\]
Now, we have
\[\beta_{12}^{B}=i\tilde{\beta}_{12}=\frac{\sqrt{2\pi}e^{\frac{\pi i}{4}}e^{- \frac{\pi\nu_{4}}{2}}}{y\Gamma(i\nu_{4})},\quad\beta_{21}^{B}=-i\tilde{\beta}_ {21}=\frac{\sqrt{2\pi}e^{-\frac{\pi i}{4}}e^{-\frac{\pi\nu_{4}}{2}}}{\bar{y} \Gamma(-i\nu_{4})}.\]
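Let us note for orientation (this computation is standard) that the moduli of these coefficients follow from the reflection formula \(|\Gamma(i\nu_{4})|^{2}=\frac{\pi}{\nu_{4}\sinh(\pi\nu_{4})}\): assuming, as in the notation of Theorem 3.16 below, that \(y\) is the value \(r_{2}(-k_{0})\), so that \(|y|^{2}=1-e^{-2\pi\nu_{4}}\), one finds
\[|\beta_{12}^{B}|=\frac{\sqrt{2\pi}\,e^{-\frac{\pi\nu_{4}}{2}}}{|y|\,|\Gamma(i\nu_{4})|}=\frac{\sqrt{\nu_{4}}\sqrt{1-e^{-2\pi\nu_{4}}}}{|y|}=\sqrt{\nu_{4}},\]
and likewise \(|\beta_{21}^{B}|=\sqrt{\nu_{4}}\); this is the polar form used in the proof of Theorem 3.16.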
Next, an explicit proof of our observation is given, illustrating the relation between the \(M^{(3,\epsilon)}\) RH problem and the model problem \(M_{A,B}^{X}\).
**Lemma 3.9**.: _The matrix function \(H(\pm k_{0},t)\) is uniformly bounded:_
\[\sup_{t\geq t_{0}}|\partial_{x}^{l}H(\pm k_{0},t)|\leq C,\quad 0<k_{0}<M\]
_for \(l=0,1\)._
_The functions \(\delta_{A,B}^{0},\delta_{A,B}^{1}\) and \(e^{t\Phi_{21}^{0}(\pm k_{0})}\) satisfy the following properties:_
\[|\delta_{A}^{0}|=e^{-2\pi\nu_{1}},\quad|\delta_{B}^{0}|=1,\qquad 0<k_{0}<M\text{ and }t\geq t_{0},\]
\[|\partial_{x}\delta_{A}^{0}|\leq\frac{C\ln t}{t},\quad|\partial_{x}\delta_{B}^{0}|\leq\frac{C\ln t}{t},\qquad 0<k_{0}<M\text{ and }t\geq t_{0}.\]
_Moreover, we have_
\[|\delta_{A}^{1}(k)-1|\leq C|k-k_{0}|(1+|\ln(|k-k_{0}|)|),|\partial_{x}\delta_{A }^{1}(k)|\leq\frac{C}{t}|\ln(|k-k_{0}|)|,\]
_and_
\[|\delta_{B}^{1}(k)-1|\leq C|k+k_{0}|(1+|\ln(|k+k_{0}|)|),|\partial_{x}\delta_{B }^{1}(k)|\leq\frac{C}{t}|\ln(|k+k_{0}|)|,\]
\[|\partial_{x}(e^{t\Phi_{21}^{0}(\pm k_{0},z)}-1)|\leq C\frac{k_{0}^{2}\,z^{3}}{t^{\frac{3}{2}}}e^{t\operatorname{Re}\Phi_{21}}.\]
Proof.: Recall that \(\delta_{A}^{0}=\frac{a^{-2i\nu_{1}}e^{-2\chi_{1}(k_{0})}}{\tilde{\delta}_{v_{1}}(k_{0})}\) and by direct computation
\[|a^{-2i\nu_{1}}|=|(3^{\frac{3}{4}}2\sqrt{5}tk_{0}^{\frac{3}{2}})^{-2i\nu_{1}}|=|e^{-2i\nu_{1}\ln(a)}|=1,\]
since the coefficients \(\nu_{1}\) and \(a\) are real-valued, and
\[|\tilde{\delta}_{v_{1}}(k_{0})|=\left|\frac{\delta_{3}(k_{0})\delta_{2}^{2}(k_ {0})\delta_{5}(k_{0})}{\delta_{6}(k_{0})\delta_{2}(k_{0})}\right|=\left|\frac{ \delta_{1}(\alpha^{2}k_{0})\delta_{4}^{2}(k_{0})\delta_{1}(\alpha k_{0})}{ \delta_{4}(\alpha k_{0})\delta_{2}(\alpha^{2}k_{0})}\right|=1,\]
where we use the fact that \(\delta_{1,4}(k)=(\overline{\delta_{1,4}(\overline{k})})^{-1}\) and the symmetry between \(\delta_{1}\) (resp. \(\delta_{4}\)) and \(\delta_{3,5}\) (resp. \(\delta_{2,6}\)).
Furthermore, the real part of \(\chi_{1}\) is
\[\operatorname{Re}\chi_{1}(k_{0})=\frac{1}{2\pi}\int_{k_{0}}^{\infty}\pi d \ln\left(1-\left|r_{1}(s)\right|^{2}\right)=-\frac{1}{2}\ln\left(1-\left|r_{1 }\left(k_{0}\right)\right|^{2}\right)=\pi\nu_{1},\]
since we choose the branch cut from \(0\) to \(2\pi\).
Therefore,
\[|\delta_{A}^{0}|=\left|\frac{a^{-2i\nu_{1}}e^{-2\chi_{1}(k_{0})}}{\tilde{\delta}_{v_{1}}(k_{0})}\right|=e^{-2\pi\nu_{1}}.\]
By the same procedure, we get that
\[\operatorname{Re}\chi_{4}(-k_{0})=\frac{1}{2\pi}\int_{-k_{0}}^{-\infty}0\,d\ln\left(1-\left|r_{2}(s)\right|^{2}\right)=0,\]
and
\[|\delta_{B}^{0}|=\left|\frac{a^{-2i\nu}e^{-2\chi_{4}(k_{0})}}{\tilde{\delta}_ {v_{4}}(k_{0})}\right|=1.\]
Moreover, using the formulas above, we get that
\[\left|\partial_{x}\delta_{A}^{0}(\zeta,t)\right| =\left|\delta_{A}^{0}(\zeta,t)\partial_{x}\ln\delta_{A}^{0}( \zeta,t)\right|=e^{-2\pi\nu_{1}}\left|\partial_{x}\ln\delta_{A}^{0}(\zeta,t)\right|\] \[\leq C\left(\left|\ln t\partial_{x}\nu_{1}\right|+\left|\partial _{x}\chi_{1}\left(k_{0}\right)\right|+\left|\partial_{x}\ln\tilde{\delta}_{v_ {1}}\left(k_{0}\right)\right|\right),\]
since \(k_{0}=\sqrt[4]{\frac{x}{45t}}\), we get that \(\partial_{x}=\frac{1}{180tk_{0}^{3}}\partial_{k_{0}}\). The individual terms are estimated as follows:
\[|\partial_{x}\nu_{1}|\leq C\frac{1}{t}\frac{\partial_{k}|r_{1}(k)|^{2}|_{k=k_{0}}}{1-|r_{1}(k_{0})|^{2}}\leq C\frac{1}{t},\quad\left|\partial_{x}\chi_{1}\left(k_{0}\right)\right|\leq\frac{C}{t},\quad\left|\partial_{x}\ln\tilde{\delta}_{v_{1}}\left(k_{0}\right)\right|\leq\frac{C}{t}\left|\partial_{k_{0}}\ln\tilde{\delta}_{v_{1}}\left(k_{0}\right)\right|\leq\frac{C}{t},\]
since the estimate for \(\chi_{1}\) was obtained above and \(\tilde{\delta}_{v_{1}}\) is analytic near \(k_{0}\).
Recall that \(\delta_{A}^{1}=\frac{e^{2\chi_{1}(k_{0})-2\chi_{1}(k)}\tilde{\delta}_{v_{1}}(k_{0})}{\tilde{\delta}_{v_{1}}(k)}\); by direct computation
\[|e^{2\chi_{1}(k_{0})-2\chi_{1}(k)}-1|\leq C|\chi_{1}(k_{0})-\chi_{1}(k)|\leq C |k-k_{0}|(1+|\ln(|k-k_{0}|)|),\]
and
\[\partial_{x}\delta_{A}^{1}(k)=\delta_{A}^{1}(k)\partial_{x}\log\delta_{A}^{1}( k).\]
Since \(\tilde{\delta}_{v_{1}}\) is analytic near \(k_{0}\), combining the estimates before, we get that
\[\left|\partial_{x}\delta_{A}^{1}(\zeta,k)\right|\leq C\left(\left|\partial_{x }\left(\chi_{1}(k)-\chi_{1}\left(k_{0}\right)\right)\right|+\frac{1}{t}\left| \partial_{k_{0}}\log\tilde{\delta}_{v_{1}}\right|\right)\leq\frac{C|\ln(|k-k_{0 }|)|}{t}.\]
Finally, we have \(t\Phi_{21}^{0}(k_{0})=9\sqrt{3}ita^{3}[a^{2}z^{5}-5ak_{0}z^{4}+10k_{0}^{2}z^{3}]\) and taking the Taylor expansion yields
\[|e^{t\Phi_{21}^{0}(\pm k_{0},z)}-1|\leq C\frac{k_{0}^{2}z^{3}}{t^{\frac{3}{2}}} e^{t\operatorname{Re}\Phi_{21}}.\]
In conclusion, \(M^{(2)}(x,t;k)=M^{(3,\epsilon)}(x,t;k)H(\pm k_{0},t)^{-1}\to M^{X}_{A,B}(y,z)H(\pm k_{0},t)^{-1}\) as \(t\to\infty\) for \(k\in\Sigma_{A,B}\). However, on the boundary \(\partial B(\pm k_{0},\epsilon)\) the RH problem \(M^{X}_{A,B}H(\pm k_{0},t)^{-1}\) does not converge to \(I\) (as \(t\to\infty\) one has \(z\to\infty\)), which suggests that we introduce the new RH problem defined by
\[M^{(\pm k_{0})}(x,t;k)=H(\pm k_{0},t)M^{X}_{A,B}(y,z)H(\pm k_{0},t)^{-1},\quad k \in B(\pm k_{0},\epsilon),\]
and the factor \(H(\pm k_{0},t)^{-1}\) on the right-hand side does not change the jump matrix.
**Lemma 3.10**.: \(M^{(\pm k_{0})}\) _is analytic for \(k\in B(\pm k_{0},\epsilon)\setminus\Sigma_{A,B}\) and satisfies the jump condition \(M^{(\pm k_{0})}_{+}=M^{(\pm k_{0})}_{-}V^{(\pm k_{0})}\) on \(\Sigma_{A,B}\), respectively. Moreover, for \(0<k_{0}<M\) and \(t\) large enough we have the following estimate_
\[\|\partial_{x}^{l}(V^{(2)}-V^{(\pm k_{0})})\|_{L^{1}(\Sigma_{A,B})}\leq C \frac{\ln t}{t},\]
_and_
\[\|\partial_{x}^{l}(V^{(2)}-V^{(\pm k_{0})})\|_{L^{\infty}(\Sigma_{A,B})}\leq C \frac{\ln t}{t^{\frac{1}{2}}}.\]
_Furthermore,_
\[\left\|\partial_{x}^{l}\left(M^{(\pm k_{0})}(x,t,\cdot)^{-1}-I\right)\right\| _{L^{\infty}(\partial B(\pm k_{0},\epsilon))}=O\left(t^{-1/2}\right),\]
\[\frac{1}{2\pi i}\int_{\partial B_{(\pm k_{0},\epsilon)}}\left(M^{(\pm k_{0})}( x,t,k)^{-1}-I\right)dk=-\frac{H(\pm k_{0},t)\left(M^{X}_{A,B}(y)\right)^{(1)}H(\pm k _{0},t)^{-1}}{a(t)}+O\left(t^{-1}\right).\]
Proof.: Recall that
\[M^{(k_{0})}(x,t;k)=H(k_{0},t)M^{X}_{A}(y,z)H(k_{0},t)^{-1},\quad k\in B(k_{0}, \epsilon),\]
where
\[V^{(k_{0})}(x,t;k)=H(k_{0},t)V^{X_{B}}(y,z)H(k_{0},t)^{-1},\]
and
\[V^{(2)}(x,t;k)=H(k_{0},t)V^{(3,\epsilon)}(x,t;k)H(k_{0},t)^{-1}.\]
It follows that
\[V^{(2)}-V^{(k_{0})}=H(k_{0},t)\left(V^{(3,\epsilon)}-V^{X_{B}}\right)H(k_{0},t)^{-1}.\]
Since \(H(k_{0},t)^{\pm 1}\) is bounded, it suffices to show that
\[\left\|\partial_{x}^{l}\left[V^{(3,\epsilon)}(x,t;\cdot)-V^{X_{A}}(x,t,z(k_{0 },\cdot))\right]\right\|_{L^{1}\left(\mathcal{X}_{j}^{*}\right)}\leq Ct^{-1}\ln t,\]
\[\left\|\partial_{x}^{l}\left[V^{(3,\epsilon)}(x,t;\cdot)-V^{X_{A}}(x,t,z(k_{0 },\cdot))\right]\right\|_{L^{\infty}\left(\mathcal{X}_{j}^{*}\right)}\leq Ct^ {-1/2}\ln t.\]
Again, we have
\[v_{1}^{(3,\epsilon)}=\left(\begin{array}{ccc}1&0&0\\ e^{-2i\nu_{1}\log_{0}(z)}\delta_{A}^{1}\rho_{1,a}^{*}e^{t\Phi_{21}^{0}(k_{0},z)+ \frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right),\quad v_{1}^{X_{A}}=\left(\begin{array}{ccc}1&0&0\\ \frac{\bar{y}}{1-|y|^{2}}z^{-2i\nu_{1}(y)}e^{\frac{iz^{2}}{2}}&1&0\\ 0&0&1\end{array}\right).\]
It suffices to show that
\[\left|e^{-2i\nu_{1}\log_{0}(z)}\delta_{A}^{1}\rho_{1,a}^{*}e^{t \Phi_{21}^{0}(k_{0},z)+\frac{iz^{2}}{2}}-\frac{\bar{y}}{1-|y|^{2}}z^{-2i\nu_{1 }(y)}e^{\frac{iz^{2}}{2}}\right|\] \[=|e^{-2i\nu_{1}\log_{0}(z)}|\left|\delta_{A}^{1}\rho_{1,a}^{*}e^ {t\Phi_{21}^{0}(k_{0},z)}-\frac{\bar{y}}{1-|y|^{2}}\right|\left|e^{\frac{iz^{2} }{2}}\right|\] \[\leq C\left|(\delta_{A}^{1}-1)\rho_{1,a}^{*}e^{t\Phi_{21}^{0}(k_{0 },z)}+(e^{t\Phi_{21}^{0}(k_{0},z)}-1)\rho_{1,a}^{*}+(\rho_{1,a}^{*}(k)-\rho_{1,a}^{*}(k_{0}))\right|\left|e^{\frac{iz^{2}}{2}}\right|\] \[\leq C\left(|k-k_{0}|ln(|k-k_{0}|)+|k-k_{0}|\right)e^{t\operatorname {Re}\Phi_{21}}|e^{\frac{iz^{2}}{2}}|\] \[\leq C|k-k_{0}|(1+\ln(|k-k_{0}|))e^{-ct|k-k_{0}|^{2}},\]
which implies that
\[\left\|\left(v^{(3,\epsilon)}-v^{X_{A}}\right)_{21}\right\|_{L^{1}(\Sigma_{A})}\leq C\int_{0}^{\infty}s(1+|\ln s|)e^{-cts^{2}}ds\leq Ct^{-1}\ln t,\]
and
\[\left\|\left(v^{(3,\epsilon)}-v^{X_{A}}\right)_{21}\right\|_{L^{\infty}( \Sigma_{A})}\leq C\sup_{s\geq 0}s(1+|\ln s|)e^{-cts^{2}}\leq Ct^{-1/2}\ln t.\]
For the \(\partial_{x}\left(v^{(3,\epsilon)}-v^{X_{A}}\right)_{21}\), it follows that
\[\partial_{x}\left(v^{(3,\epsilon)}-v^{X_{A}}\right)_{21}= \partial_{x}(e^{-2i\nu_{1}\log_{0}(z)})\left((\delta_{A}^{1}-1)\rho_{1,a}^{*}e^{t\Phi_{21}^{0}(k_{0},z)}+(e^{t\Phi_{21}^{0}(k_{0},z)}-1)\rho_{1,a}^{*}+(\rho_{1,a}^{*}(k)-\rho_{1,a}^{*}(k_{0}))\right)e^{\frac{iz^{2}}{2}}\] \[+e^{-2i\nu_{1}\log_{0}(z)}\partial_{x}\left((\delta_{A}^{1}-1)\rho_{1,a}^{*}e^{t\Phi_{21}^{0}(k_{0},z)}+(e^{t\Phi_{21}^{0}(k_{0},z)}-1)\rho_{1,a}^{*}+(\rho_{1,a}^{*}(k)-\rho_{1,a}^{*}(k_{0}))\right)e^{\frac{iz^{2}}{2}}\] \[+e^{-2i\nu_{1}\log_{0}(z)}\left((\delta_{A}^{1}-1)\rho_{1,a}^{*}e^{t\Phi_{21}^{0}(k_{0},z)}+(e^{t\Phi_{21}^{0}(k_{0},z)}-1)\rho_{1,a}^{*}+(\rho_{1,a}^{*}(k)-\rho_{1,a}^{*}(k_{0}))\right)\partial_{x}e^{\frac{iz^{2}}{2}}\] \[:=I+II+III.\]
For the first part, using the fact that \(|\partial_{x}e^{-2i\nu_{1}\log_{0}(z)}|\leq\frac{C}{t(k-k_{0})}\), we have that
\[\|I\|_{L^{1}(\Sigma_{A})}\leq Ct^{-1}\int_{0}^{\infty}(1+\ln s)e^{-cts^{2}}ds\leq Ct^{-3/2}\ln t,\quad\|I\|_{L^{\infty}(\Sigma_{A})}\leq Ct^{-1}\sup_{s\geq 0}(1+\ln s)e^{-cts^{2}}\leq Ct^{-1}\ln t.\]
For the second term and last term, using the lemma before, we have the same estimate.
Since
\[z_{1}=3^{\frac{5}{4}}2\sqrt{5}tk_{0}^{\frac{3}{2}}(k-k_{0}),\]
for \(k\in\partial B(k_{0},\epsilon)\) we have \(z_{1}\to\infty\) as \(t\to\infty\), and combining the WKB expansion of \(M^{X_{A}}\), we have
\[M^{X_{A}}(y,z)=I+\frac{M_{1}^{X_{A}}(y)}{3^{\frac{5}{4}}2\sqrt{5}tk_{0}^{\frac{3}{2}}(k-k_{0})}+O\left(\frac{1}{t}\right),\]
as \(t\to\infty\), and
\[\left(M^{(k_{0})}\right)^{-1}-I=-\frac{H(k_{0},t)M_{1}^{X_{A}}(y)H(k_{0},t)^{-1}}{3^{\frac{5}{4}}2\sqrt{5}tk_{0}^{\frac{3}{2}}(k-k_{0})}+O\left(t^{-1}\right),\quad t\to\infty.\]
### Small norm RH problem
Now, using the symmetry property of the RH problem, we extend \(M^{(\pm k_{0})}\) as follows
\[\tilde{M}^{(\pm k_{0})}=\mathcal{A}M^{(\pm k_{0})}(x,t;\alpha k)\mathcal{A}^{ -1}.\]
Denote \(\tilde{B}_{\epsilon}^{(\pm k_{0})}=B(\pm k_{0},\epsilon)\cup B(\pm\alpha k_{0},\epsilon)\cup B(\pm\alpha^{2}k_{0},\epsilon)\) and introduce the solution \(\tilde{M}(x,t;k)\) as following:
\[\tilde{M}(x,t;k):=\begin{cases}M^{(2)}\left(\tilde{M}^{(k_{0})}\right)^{-1},&k \in\tilde{B}_{\epsilon}^{(k_{0})}\\ M^{(2)}\left(\tilde{M}^{(-k_{0})}\right)^{-1},&k\in\tilde{B}_{\epsilon}^{(-k_{0 })},\\ M^{(2)},&\text{otherwise.}\end{cases}\]
Moreover, the jump contour denoted as \(\tilde{\Sigma}:=\Sigma^{(2)}\cup\partial\tilde{B}_{\epsilon}^{(k_{0})}\cup \partial\tilde{B}_{\epsilon}^{(-k_{0})}\) (see Fig. 9) and the jump matrix as follows
\[\tilde{V}:=\begin{cases}V^{(2)},&k\in\tilde{\Sigma}\setminus\overline{\left( \tilde{B}_{\epsilon}^{(\pm k_{0})}\right)},\\ (\tilde{M}^{(k_{0})})^{-1},&k\in\partial\tilde{B}_{\epsilon}^{(k_{0})},\\ (\tilde{M}^{(-k_{0})})^{-1},&k\in\partial\tilde{B}_{\epsilon}^{(-k_{0})},\\ \tilde{M}_{-}^{(k_{0})}V^{(2)}(\tilde{M}_{+}^{(k_{0})})^{-1},&k\in\tilde{B}_{ \epsilon}^{(k_{0})}\cap\tilde{\Sigma},\\ \tilde{M}_{-}^{(-k_{0})}V^{(2)}(\tilde{M}_{+}^{(-k_{0})})^{-1},&k\in\tilde{B}_{ \epsilon}^{(-k_{0})}\cap\tilde{\Sigma}.\end{cases}\]
In conclusion, we have constructed a RH problem that satisfies \(\tilde{M}_{+}=\tilde{M}_{-}\tilde{V}\) for \(k\in\tilde{\Sigma}\) and is analytic in \(\mathbb{C}\setminus\tilde{\Sigma}\).
Suppose \(\tilde{\Sigma}_{A,B}:=\Sigma_{A,B}\cup\alpha\Sigma_{A,B}\cup\alpha^{2}\Sigma_{A,B}\) and denote
\[\Sigma^{\prime}:=\tilde{\Sigma}\setminus\left(\Sigma\cup\tilde{\Sigma}_{A,B} \cup\partial\tilde{B}_{\epsilon}^{(\pm k_{0})}\right).\]
**Lemma 3.11**.: _Let \(W=\tilde{V}-I\). Then the following estimates for the jump matrix hold uniformly for \(t\) large enough and \(0<k_{0}<M\):_
\[\left\|(1+|\cdot|)\partial_{x}^{l}W\right\|_{(L^{1}\cap L^{\infty })(\Sigma)}\leq\frac{C}{k_{0}^{3}t^{\frac{3}{2}}},\] \[\left\|(1+|\cdot|)\partial_{x}^{l}W\right\|_{(L^{1}\cap L^{\infty })(\Sigma^{\prime})}\leq Ce^{-ct},\] \[\left\|\partial_{x}^{l}W\right\|_{(L^{1}\cap L^{\infty})(\partial \tilde{B}^{(\pm k_{0})})}\leq Ct^{-1/2},\] \[\left\|\partial_{x}^{l}W\right\|_{L^{1}\left(\tilde{\Sigma}_{A,B }\right)}\leq Ct^{-1}\ln t,\] \[\left\|\partial_{x}^{l}W\right\|_{L^{\infty}\left(\tilde{\Sigma}_ {A,B}\right)}\leq Ct^{-1/2}\ln t.\]
Proof.: (a) For the first inequality, recall that the jump matrix on \(\Sigma\) involves \(r_{j,r},\rho_{j,r}\) and that \((\tilde{M}^{(\pm k_{0})})^{\pm 1}\) is bounded, i.e.,
\[\|(1+|\cdot|)\partial_{x}^{l}(v^{(2)}-I)\|_{(L^{1}\cap L^{\infty})(\Sigma_{5,6}^{(2)})}\leq Ct^{-\frac{3}{2}}.\]
On \(\Sigma_{5,6}^{(2)}\cap B(k_{0},\epsilon)\),
\[W=\tilde{V}-I=m_{-}^{(k_{0})}v^{(2)}\left(m_{+}^{(k_{0})}\right)^{-1}-I=m_{-}^{(k_{0})}\left(v^{(2)}-I\right)\left(m_{+}^{(k_{0})}\right)^{-1};\]
since the jump contour of \(\tilde{M}^{(k_{0})}\) lies on \(\tilde{\Sigma}_{A,B}\), the function \(m^{(k_{0})}\) is analytic on \(\Sigma_{5,6}^{(2)}\cap B(k_{0},\epsilon)\) and bounded, so we have
\[\left\|(1+|\cdot|)\partial_{x}^{l}W\right\|_{(L^{1}\cap L^{\infty})(\Sigma)} \leq\frac{C}{k_{0}^{3}t^{\frac{3}{2}}}.\]
(b) For the second inequality, \(\Sigma^{\prime}\) means \(\Sigma^{(2)}\setminus\overline{B_{\epsilon}^{\pm k_{0}}}\); we focus on \(\Sigma^{(2)}\setminus B(k_{0},\epsilon)\), where only the entry \((v_{1}^{(2)})_{21}\) of the jump matrix is nonzero, namely
\[\frac{\delta_{1+}^{2}}{\tilde{\delta}_{v_{1}}}\rho_{1,a}^{*}e^{t\Phi_{21}}.\]
Because \(\partial_{x}^{l}\) of the \(\delta\)-functions is bounded, the estimate for \(\rho_{1,a}^{*}\) is
\[\left|\partial_{x}\rho_{1,a}^{*}(x,t,k)\right|\leq\frac{Ce^{t\operatorname{Re }\Phi_{21}(k)}}{1+|k|}.\]
Moreover, \(\operatorname{Re}\Phi_{21}<-c\) for \(|k-k_{0}|>\epsilon\), so that we have
\[\left\|(1+|\cdot|)\partial_{x}^{l}W\right\|_{(L^{1}\cap L^{\infty})(\Sigma^{ \prime})}\leq Ce^{-ct}.\]
(c) The third estimate is a direct outcome of the lemma above.
(d) For \(k\in\tilde{\Sigma}_{A,B}\) we have
\[W=\tilde{M}_{-}^{(k_{0})}(V^{(2)}-V^{(k_{0})})(\tilde{M}_{+}^{(k_{0})})^{-1},\]
and combining the estimates above, it follows that \(M^{(k_{0})}\) is bounded uniformly for \(0<k_{0}<M\).
Now, introduce the Cauchy operator
\[(\operatorname{C}\!f)\left(z\right)=\int_{\tilde{\Sigma}}\frac{f(\zeta)}{ \zeta-z}\frac{\mathrm{d}\zeta}{2\pi i},\quad z\in\mathbb{C}\setminus\tilde{ \Sigma}.\]
If \((1+|z|)^{\frac{1}{3}}f(z)\in L^{3}(\tilde{\Sigma})\), then \((Cf)(z)\) is analytic from \(\mathbb{C}\setminus\tilde{\Sigma}\) to \(\mathbb{C}\), with the property that for any component \(D\) of \(\mathbb{C}\setminus\tilde{\Sigma}\) there are curves \(\{C_{n}\}_{n=1}^{\infty}\) surrounding each compact subset of \(D\) which satisfy
\[\sup_{n\geq 1}\int_{C_{n}}(1+|z|)|f(z)|^{3}|dz|<\infty,\]
moreover, \(C_{\pm}f\) exist for a.e. \(z\in\tilde{\Sigma}\) and \((1+|z|)^{\frac{1}{3}}C_{\pm}f(z)\in L^{3}(\tilde{\Sigma})\).
On one hand, the \(C_{\pm}\) are bounded operators from the weighted \(L^{3}(\tilde{\Sigma})\) (denoted by \(\dot{L}^{3}(\tilde{\Sigma})\)) to itself and satisfy \(C_{+}-C_{-}=I\).
On the other hand, recall the estimate before
\[\begin{cases}\left\|(1+|\cdot|)\partial_{x}^{l}W\right\|_{L^{1}(\tilde{\Sigma })}\leq Ct^{-\frac{1}{2}},\\ \left\|(1+|\cdot|)\partial_{x}^{l}W\right\|_{L^{\infty}(\tilde{\Sigma})}\leq Ct ^{-\frac{1}{2}}\ln t.\end{cases}\]
By the Riesz-Thorin interpolation inequality, we have
\[\left\|(1+|\cdot|)\partial_{x}^{l}W\right\|_{L^{p}(\tilde{\Sigma})}\leq Ct^{-\frac{1}{2}}(\ln t)^{\frac{1}{p}},\]
so that \(W\) belongs to the weighted \(L^{3}(\tilde{\Sigma})\) and to \(L^{\infty}(\tilde{\Sigma})\).
Define
\[C_{W}f=C_{+}\left(fW_{-}\right)+C_{-}\left(fW_{+}\right)\]
and \(C_{W}:\dot{L}^{3}(\tilde{\Sigma})+L^{\infty}(\tilde{\Sigma})\to\dot{L}^{3}( \tilde{\Sigma})\).
**Lemma 3.12**.: _For \(t\) large enough and \(0<k_{0}<M\), the operator \(I-C_{W}\) is invertible and \((I-C_{W})^{-1}\) is a bounded linear operator from \(L^{3}(\tilde{\Sigma})\) to itself._
Proof.: Since \(C_{\pm}\) are bounded operators from the weighted \(L^{3}(\tilde{\Sigma})\) to itself, for any \(f\in\dot{L}^{3}(\tilde{\Sigma})\) we have
\[C_{W}f =\operatorname{C_{+}}\left(fW_{-}\right)+\operatorname{C_{-}} \left(fW_{+}\right)\] \[\leq\left\|C_{+}\right\|_{L^{3}(\tilde{\Sigma})\to\dot{L}^{3}( \tilde{\Sigma})}\|W\|_{L^{\infty}(\tilde{\Sigma})}\|f\|_{\dot{L}^{3}(\tilde{ \Sigma})}+\|C_{-}\|_{L^{3}(\tilde{\Sigma})\to\dot{L}^{3}(\tilde{\Sigma})}\|W \|_{L^{\infty}(\tilde{\Sigma})}\|f\|_{L^{3}(\tilde{\Sigma})}\] \[\leq\left(\|C_{+}\|_{\dot{L}^{3}(\tilde{\Sigma})\to\dot{L}^{3}( \tilde{\Sigma})}+\|C_{-}\|_{\dot{L}^{3}(\tilde{\Sigma})\to\dot{L}^{3}(\tilde{ \Sigma})}\right)\|W\|_{L^{\infty}(\tilde{\Sigma})}\|f\|_{\dot{L}^{3}(\tilde{ \Sigma})},\]
so that \(\|C_{W}\|_{\dot{L}^{3}(\tilde{\Sigma})\to\dot{L}^{3}(\tilde{\Sigma})}\leq\left(\|C_{+}\|_{\dot{L}^{3}(\tilde{\Sigma})\to\dot{L}^{3}(\tilde{\Sigma})}+\|C_{-}\|_{\dot{L}^{3}(\tilde{\Sigma})\to\dot{L}^{3}(\tilde{\Sigma})}\right)\|W\|_{L^{\infty}(\tilde{\Sigma})}\), and by the estimate before
\[\|(1+|\cdot|)\partial_{x}^{l}W\|_{L^{\infty}(\tilde{\Sigma})}\leq Ct^{-\frac{1} {2}}\ln t,\]
for \(t\) large enough that \(\|W\|_{L^{\infty}(\tilde{\Sigma})}<\frac{1}{\|C_{+}\|_{L^{3}(\tilde{\Sigma})\to L^{3}(\tilde{\Sigma})}+\|C_{-}\|_{L^{3}(\tilde{\Sigma})\to L^{3}(\tilde{\Sigma})}}\), the operator \(I-C_{W}\) is invertible.
Let \(\mu\in I+\dot{L}^{3}(\tilde{\Sigma})\) satisfy the following equation
\[\mu=I+C_{W}\mu,\]
furthermore, one has \(\mu=I+(I-C_{W})^{-1}C_{W}I\).
**Lemma 3.13**.: _For \(t\) large enough and \(0<k_{0}<M\), the RH problem \(\tilde{M}\) has a unique solution as following:_
\[\tilde{M}(x,t;k)=I+C(\mu W)=I+\int_{\tilde{\Sigma}}\frac{\mu(x,t;\zeta)W(x,t; \zeta)}{\zeta-z}\frac{\mathrm{d}\zeta}{2\pi i},\quad z\in\mathbb{C}\setminus \tilde{\Sigma}.\]
**Lemma 3.14**.: _For \(t\) large enough, we have_
\[\|\partial_{x}^{l}(\mu-I)\|_{L^{p}(\tilde{\Sigma})}\leq\frac{C(\ln t)^{\frac{ 1}{p}}}{t^{\frac{1}{2}}}.\]
Proof.: Denote \(\|C_{\pm}\|_{p}:=\left(\|C_{+}\|_{L^{p}(\tilde{\Sigma})\to L^{p}(\tilde{ \Sigma})}+\|C_{-}\|_{L^{p}(\tilde{\Sigma})\to L^{p}(\tilde{\Sigma})}\right)\) and assume \(t\) large enough to satisfy \(\|W\|_{L^{\infty}(\tilde{\Sigma})}<\|C_{\pm}\|_{p}^{-1}\).
When \(l=0\), we have
\[\|\mu-I\|_{L^{p}(\tilde{\Sigma})} =\|(I-C_{W})^{-1}C_{w}I\|_{L^{p}(\tilde{\Sigma})}\leq\sum_{j=1}^{ \infty}\|C_{W}\|_{L^{p}(\tilde{\Sigma})\to L^{p}(\tilde{\Sigma})}^{j-1}\|C_{ w}I\|_{L^{p}(\tilde{\Sigma})}\] \[\leq\sum_{j=1}^{\infty}\|C_{\pm}\|_{p}^{j}\|W\|_{L^{\infty}( \tilde{\Sigma})}^{j-1}\|w\|_{L^{p}(\tilde{\Sigma})}=\frac{\|C_{\pm}\|_{p}\|w\| _{L^{p}(\tilde{\Sigma})}}{1-\|C_{\pm}\|_{p}\|w\|_{L^{\infty}(\tilde{\Sigma})}}.\]
Combining this with the estimate of \(\|W\|_{L^{p}(\tilde{\Sigma})}\) obtained before, we get the desired estimate.
When \(l=1\), we get
\[\partial_{x}(\mu-I)=\partial_{x}\sum_{j=1}^{\infty}(C_{W})^{j}I.\]
Since the series is uniformly bounded, we can interchange the sum and the derivative, and we have
\[\|\partial_{x}(\mu-I)\|_{L^{p}(\tilde{\Sigma})} \leq\sum_{j=2}^{\infty}(j-1)\left\|C_{W}\right\|_{L^{p}(\tilde{ \Sigma})\to L^{p}(\tilde{\Sigma})}^{j-2}\|\partial_{x}C_{W}\|_{L^{p}(\tilde{ \Sigma})\to L^{p}(\tilde{\Sigma})}\left\|C_{W}I\right\|_{L^{p}(\tilde{ \Sigma})}\] \[+\sum_{j=1}^{\infty}\|C_{W}\|_{L^{p}(\tilde{\Sigma})\to L^{p}( \tilde{\Sigma})}^{j-1}\|\partial_{x}C_{W}I\|_{L^{p}(\tilde{\Sigma})}\] \[\leq C\sum_{j=2}^{\infty}j\|C_{\pm}\|_{p}^{j}\|W\|_{L^{\infty}( \tilde{\Sigma})}^{j-2}\|\partial_{x}W\|_{L^{\infty}(\tilde{\Sigma})}\|W\|_{L^{ p}(\tilde{\Sigma})}+\sum_{j=1}^{\infty}\|C_{\pm}\|_{p}^{j}\|W\|_{L^{ \infty}(\tilde{\Sigma})}^{j-1}\|\partial_{x}W\|_{L^{p}(\tilde{\Sigma})}\] \[\leq\frac{C\left(\|\partial_{x}W\|_{L^{\infty}(\tilde{\Sigma})}\| W\|_{L^{p}(\tilde{\Sigma})}+\|\partial_{x}W\|_{L^{p}(\tilde{\Sigma})}\right)}{1-\|C_{ \pm}\|_{p}\|W\|_{L^{\infty}(\tilde{\Sigma})}}.\]
Now, we can get the following non-tangential limit as \(k\to\infty\)
\[Q(x,t):=\lim_{k\to\infty}k(\tilde{M}(x,t;k)-I)=-\frac{1}{2\pi i}\int_{\tilde{ \Sigma}}\mu(x,t;k)W(x,t;k)dk.\]
**Lemma 3.15**.: _When \(t\to\infty\), we have_
\[Q(x,t)=-\frac{1}{2\pi i}\int_{\partial\tilde{B}_{*}^{(k_{0})}\cup\partial\tilde{B }_{*}^{(-k_{0})}}W(x,t;k)dk+O\left(\frac{\ln t}{t}\right).\]
Proof.: Decompose \(Q(x,t)\) as follows
\[Q(x,t)=-\frac{1}{2\pi i}\int_{\partial\tilde{B}_{*}^{(k_{0})}\cup\partial\tilde {B}_{*}^{(-k_{0})}}\mu(x,t;k)W(x,t;k)dk+Q_{1}(x,t)+Q_{2}(x,t),\]
where
\[Q_{1}(x,t):=-\frac{1}{2\pi i}\int_{\tilde{\Sigma}}(\mu(x,t;k)-I)W(x,t;k)dk,\]
and
\[Q_{2}(x,t):=-\frac{1}{2\pi i}\int_{\tilde{\Sigma}\setminus(\partial\tilde{B} _{*}^{(k_{0})}\cup\partial\tilde{B}_{*}^{(k_{0})})}W(x,t;k)dk.\]
For \(Q_{1}(x,t)\), by the Hölder inequality, we have that
\[\left|Q_{1}(x,t)\right|\leq C\|(\mu(x,t;\cdot)-I)W(x,t;\cdot)\|_{L^{1}(\tilde {\Sigma})}\leq C\|\mu(x,t;\cdot)-I\|_{L^{p}(\tilde{\Sigma})}\|W(x,t;\cdot)\|_{ L^{q}(\tilde{\Sigma})}\leq\frac{C\ln t}{t},\]
where \(\frac{1}{p}+\frac{1}{q}=1\).
For \(Q_{2}(x,t)\), we have that
\[\left|Q_{2}(x,t)\right|\leq C\|W(x,t;\cdot)\|_{L^{1}(\tilde{\Sigma}\setminus (\partial\tilde{B}_{*}^{(k_{0})}\cup\partial\tilde{B}_{*}^{(-k_{0})}))}\leq \frac{C\ln t}{t}.\]
Now, suppose
\[R(x,t;k_{0}):=-\frac{1}{2\pi i}\int_{\partial B(k_{0},\epsilon)}W(x,t;k)dk=- \frac{1}{2\pi i}\int_{\partial B(k_{0},\epsilon)}((M^{(k_{0})})^{-1}-I)dk,\]
and
\[R(x,t;-k_{0}):=-\frac{1}{2\pi i}\int_{\partial B(-k_{0},\epsilon)}W(x,t;k)dk=-\frac{1}{2\pi i}\int_{\partial B(-k_{0},\epsilon)}((M^{(-k_{0})})^{-1}-I)dk,\]
moreover, \(R(x,t;\pm k_{0})\) satisfy
\[R(x,t;k_{0})=\frac{H(k_{0},t)M_{1}^{X_{A}}(y(k_{0}))H(k_{0},t)^{-1}}{3^{ \frac{5}{4}}2\sqrt{5}tk_{0}^{\frac{3}{2}}}+O\left(t^{-1}\right),\]
and
\[R(x,t;-k_{0})=\frac{H(-k_{0},t)M_{1}^{X_{B}}(y(-k_{0}))H(-k_{0},t)^{-1}}{3^{ \frac{5}{4}}2\sqrt{5}tk_{0}^{\frac{3}{2}}}+O\left(t^{-1}\right).\]
By the symmetry of \(\tilde{M}(x,t;k)\), we have
\[\tilde{M}(x,t;k)=\mathcal{A}\tilde{M}(x,t;\alpha k)\mathcal{A}^{-1},\quad k \in\mathbb{C}\setminus\tilde{\Sigma}.\]
Then \(\mu\) and \(W\) also satisfy the above symmetric and we can find that
\[-\frac{1}{2\pi i}\int_{\partial\tilde{B}_{*}^{(k_{0})}\cup\partial \tilde{B}_{*}^{(-k_{0})}}W(x,t;k)dk =-\frac{1}{2\pi i}\left(\int_{\partial B(k_{0},\epsilon)}+\int_{ \partial B(\alpha k_{0},\epsilon)}+\int_{\partial B(\alpha^{2}k_{0}, \epsilon)}\right)W(x,t;k)dk\] \[-\frac{1}{2\pi i}\left(\int_{\partial B(-k_{0},\epsilon)}+\int_{ \partial B(-\alpha k_{0},\epsilon)}+\int_{\partial B(-\alpha^{2}k_{0}, \epsilon)}\right)W(x,t;k)dk\] \[=R(x,t;k_{0})+\alpha\mathcal{A}^{-1}R(x,t;k_{0})\mathcal{A}+ \alpha^{2}\mathcal{A}^{-2}R(x,t;k_{0})\mathcal{A}^{2}\] \[+R(x,t;-k_{0})+\alpha\mathcal{A}^{-1}R(x,t;-k_{0})\mathcal{A}+ \alpha^{2}\mathcal{A}^{-2}R(x,t;-k_{0})\mathcal{A}^{2}.\]
Therefore, we get that
\[\partial_{x}\lim_{k\to\infty}k(\tilde{M}(x,t;k)-I)=\partial_{x} \left(\frac{\sum_{j=0}^{2}\alpha^{j}\mathcal{A}^{-j}H(k_{0},t)M_{1}^{X_{A}}(y(k_{ 0}))H(k_{0},t)^{-1}\mathcal{A}^{j}}{3^{\frac{5}{4}}2\sqrt{5}tk_{0}^{\frac{3}{2} }}\right)\] \[+\partial_{x}\left(\frac{\sum_{j=0}^{2}\alpha^{j}\mathcal{A}^{-j} H(-k_{0},t)M_{1}^{X_{B}}(y(-k_{0}))H(-k_{0},t)^{-1}\mathcal{A}^{j}}{3^{\frac{5}{ 4}}2\sqrt{5}tk_{0}^{\frac{3}{2}}}\right).\]
### The long-time asymptotics of solutions for the KK equation and SK equation
Since the SK equation (1.1) and KK equation (1.2) have the same reconstruction formula, the asymptotic solution \(u(x,t)\) is
\[u(x,t)=-\frac{1}{2}\partial_{x}\left(\lim_{k\to\infty}k(N_{3}(x,t;k)-1)\right),\]
where \(N(x,t;k)=(N_{1},N_{2},N_{3})=(\alpha,\alpha^{2},1)M(x,t;k)\).
Recall that for \(k\in\mathbb{C}\setminus(\tilde{B}_{\epsilon}^{(k_{0})}\cup\tilde{B}_{\epsilon}^{(-k_{0})})\), \(M(x,t;k)\) is related to \(\tilde{M}(x,t;k)\) as follows
\[M=\tilde{M}G^{-1}\Delta^{-1}.\]
Then it follows
\[u(x,t) =-\frac{1}{2}\partial_{x}\left(\lim_{k\to\infty}k[((\alpha, \alpha^{2},1)\tilde{M}G^{-1}\Delta^{-1})_{3}-1]\right)\] \[=-\frac{1}{2}\partial_{x}\left(\lim_{k\to\infty}k[((\alpha, \alpha^{2},1)\tilde{M})_{3}-1]\right)+O\left(\frac{\ln t}{t}\right).\]
For the second equality, since the \(G^{-1}\Delta^{-1}\) tends to \(I\) as \(k\to\infty\) and their derivatives are dominated by \(\frac{\ln t}{t}\), it is concluded that
\[u(x,t)= -\frac{1}{2}\partial_{x}\left((\begin{array}{cc}\alpha&\alpha^{ 2}&1\end{array})\;\frac{\sum_{j=0}^{2}\alpha^{j}\mathcal{A}^{-j}H(k_{0},t)M_{1 }^{X_{A}}(y(k_{0}))H(k_{0},t)^{-1}\mathcal{A}^{j}}{3^{\frac{5}{4}}2\sqrt{5}tk_{ 0}^{\frac{3}{2}}}\right)\] \[-\frac{1}{2}\partial_{x}\left((\begin{array}{cc}\alpha&\alpha^{ 2}&1\end{array})\;\frac{\sum_{j=0}^{2}\alpha^{j}\mathcal{A}^{-j}H(-k_{0},t)M_{1 }^{X_{B}}(y(-k_{0}))H(-k_{0},t)^{-1}\mathcal{A}^{j}}{3^{\frac{5}{4}}2\sqrt{5} tk_{0}^{\frac{3}{2}}}\right)+O\left(\frac{\ln t}{t}\right)\] \[= -\frac{1}{2}\partial_{x}\left(\frac{\alpha^{2}\beta_{21}^{A}\delta _{A}^{0}e^{t\Phi_{21}(k_{0})}+\alpha\beta_{12}^{A}(\delta_{A}^{0})^{-1}e^{-t \Phi_{21}(k_{0})}+\alpha^{2}\beta_{21}^{B}(\delta_{B}^{0})^{-1}e^{t\Phi_{21}(- k_{0})}+\alpha\beta_{12}^{B}\delta_{B}^{0}e^{-t\Phi_{21}(-k_{0})}}{3^{\frac{5}{4}}2 \sqrt{5}tk_{0}^{\frac{3}{2}}}\right)+O\left(\frac{\ln t}{t}\right)\]
\[= -\frac{1}{3^{\frac{5}{4}}2\sqrt{5}tk_{0}^{\frac{3}{2}}}\left( \sqrt{\nu_{1}}\partial_{x}\cos\left(\frac{19\pi}{12}-(\arg y_{1}+\arg\Gamma(i \nu_{1}))-(36\sqrt{3}tk_{0}^{5})+\nu_{1}\ln(3^{\frac{7}{2}}20tk_{0}^{5})\right.\right.\] \[\left.\left.+\nu_{4}\ln(4)+\frac{1}{\pi}\int_{-k_{0}}^{-\infty} \log_{\pi}\frac{|s-\omega k_{0}|}{|s-k_{0}|}d\ln(1-|r_{2}(s)|^{2})+\frac{1}{\pi }\int_{k_{0}}^{\infty}\log_{0}\frac{|s-k_{0}|}{|s-\omega k_{0}|}d\ln(1-|r_{1}( s)|^{2})\right)\] \[+\sqrt{\nu_{4}}\partial_{x}\cos\left(\frac{11\pi}{12}-(\arg y_{4} +\arg\Gamma(i\nu_{4}))-(36\sqrt{3}tk_{0}^{5})+\nu_{4}\ln(3^{\frac{7}{2}}20tk_{0} ^{5})\right.\right.\] \[\left.\left.+\nu_{1}\ln(4)+\frac{1}{\pi}\int_{k_{0}}^{\infty}\log _{0}\frac{|s+\omega k_{0}|}{|s+k_{0}|}d\ln(1-|r_{1}(s)|^{2})+\frac{1}{\pi}\int_ {-k_{0}}^{-\infty}\log_{\pi}\frac{|s+k_{0}|}{|s+\omega k_{0}|}d\ln(1-|r_{2}(s)|^{ 2})\right))+O\left(\frac{\ln t}{t}\right)\]
\[= -\frac{1}{3^{\frac{3}{4}}2\sqrt{5}tk_{0}^{\frac{1}{2}}}(\sqrt{\nu_{1}} \sin\left(\frac{19\pi}{12}-(\arg y_{1}+\arg\Gamma(i\nu_{1}))-(36\sqrt{3}tk_{0}^{5 })+\nu_{1}\ln(3^{\frac{7}{2}}20tk_{0}^{5})\right.\] \[\left.+\nu_{4}\ln(4)+\frac{1}{\pi}\int_{-k_{0}}^{-\infty}\log_{ \pi}\frac{|s-\omega k_{0}|}{|s-k_{0}|}d\ln(1-|r_{2}(s)|^{2})+\frac{1}{\pi}\int_ {k_{0}}^{\infty}\log_{0}\frac{|s-k_{0}|}{|s-\omega k_{0}|}d\ln(1-|r_{1}(s)|^{2 })\right)\] \[+\sqrt{\nu_{4}}\sin\left(\frac{11\pi}{12}-(\arg y_{4}+\arg\Gamma( i\nu_{4}))-(36\sqrt{3}tk_{0}^{5})+\nu_{4}\ln(3^{\frac{7}{2}}20tk_{0}^{5})\right.\] \[\left.+\nu_{1}\ln(4)+\frac{1}{\pi}\int_{k_{0}}^{\infty}\log_{0} \frac{|s+\omega k_{0}|}{|s+k_{0}|}d\ln(1-|r_{1}(s)|^{2})+\frac{1}{\pi}\int_{- k_{0}}^{-\infty}\log_{\pi}\frac{|s+k_{0}|}{|s+\omega k_{0}|}d\ln(1-|r_{2}(s)|^{2 })\right)+O\left(\frac{\ln t}{t}\right),\]
where we have used the fact that \(\overline{\beta_{12}^{A,B}}=\beta_{21}^{A,B}\), \(\overline{\delta_{A,B}^{0}}=(\delta_{A,B}^{0})^{-1}\) and \(\frac{d}{dx}=\frac{1}{180tk_{0}^{3}}\frac{d}{dk_{0}}\).
**Theorem 3.16**.: _Suppose \(u(x,t)\) is a Schwartz class solution of the SK equation (1.1) (or the KK equation (1.2)) with initial data \(u_{0}(x)\) in Schwartz space. Then, in the generic case and for \(\frac{x}{t}\) in a compact subset of \((0,\infty)\), the solution has the following asymptotics as \(t\rightarrow\infty\)_
\[u(x,t)= -\frac{1}{3^{\frac{3}{4}}2\sqrt{5}tk_{0}^{\frac{1}{2}}}(\sqrt{ \nu_{1}}\sin\left(\frac{19\pi}{12}-(\arg y_{1}+\arg\Gamma(i\nu_{1}))-(36\sqrt{ 3}tk_{0}^{5})+\nu_{1}\ln(3^{\frac{7}{2}}20tk_{0}^{5})\right.\] \[\left.+\nu_{4}\ln(4)+\frac{1}{\pi}\int_{-k_{0}}^{-\infty}\log_{ \pi}\frac{|s-\omega k_{0}|}{|s-k_{0}|}d\ln(1-|r_{2}(s)|^{2})+\frac{1}{\pi} \int_{k_{0}}^{\infty}\log_{0}\frac{|s-k_{0}|}{|s-\omega k_{0}|}d\ln(1-|r_{1}(s )|^{2})\right)\] \[+\sqrt{\nu_{4}}\sin\left(\frac{11\pi}{12}-(\arg y_{4}+\arg\Gamma( i\nu_{4}))-(36\sqrt{3}tk_{0}^{5})+\nu_{4}\ln(3^{\frac{7}{2}}20tk_{0}^{5})\right.\] \[\left.+\nu_{1}\ln(4)+\frac{1}{\pi}\int_{k_{0}}^{\infty}\log_{0} \frac{|s+\omega k_{0}|}{|s+k_{0}|}d\ln(1-|r_{1}(s)|^{2})+\frac{1}{\pi}\int_{- k_{0}}^{-\infty}\log_{\pi}\frac{|s+k_{0}|}{|s+\omega k_{0}|}d\ln(1-|r_{2}(s)|^{2 })\right)\] \[+O\left(\frac{\ln t}{t}\right). \tag{3.4}\]
Proof.: To be specific, denote
\[y_{1}=r_{1}(k_{0}),\ y_{4}=r_{2}(-k_{0}),\ \nu_{1}=-\frac{1}{2\pi}\ln\left(1- \left|r_{1}\left(k_{0}\right)\right|^{2}\right),\ \nu_{4}=-\frac{1}{2\pi}\ln\left(1-\left|r_{2}\left(-k_{0}\right)\right|^{2} \right),\]
\[\beta_{21}^{A}=\sqrt{\nu_{1}}\exp i\left(\frac{\pi}{4}-\arg y-\arg\Gamma(i\nu_{1 })\right),\quad e^{t\Phi_{21}(k_{0})}=\exp-i(36\sqrt{3}tk_{0}^{5}).\]
Recall
\[\delta_{A}^{0}=\frac{a^{-2i\nu_{1}}e^{-2\chi_{1}(k_{0})}}{\tilde{\delta}_{v_{1 }}(k_{0})},\]
and
\[a^{-2i\nu_{1}}=\exp\left(i\nu_{1}\ln(3^{\frac{5}{2}}20tk_{0}^{3})\right),\quad e ^{-2\chi_{1}(k_{0})}=\exp\left(-\frac{1}{\pi i}\int_{k_{0}}^{\infty}\log_{0}|s- k_{0}|d\ln(1-|r_{1}(s)|^{2})\right),\]
moreover,
\[(\tilde{\delta}_{v_{1}})^{-1}=\frac{\delta_{2}(k_{0})\delta_{6}(k_ {0})}{\delta_{3}(k_{0})\delta_{5}(k_{0})\delta_{4}^{2}(k_{0})} =\exp\left(i\nu_{4}\ln(4)-\frac{1}{\pi i}\int_{-k_{0}}^{-\infty}\log_ {\pi}\frac{|s-\omega k_{0}|}{|s-k_{0}|}d\ln(1-|r_{2}(s)|^{2})\right)\] \[\ast\exp\left(i\nu_{1}\ln(3k_{0}^{2})+\frac{1}{\pi i}\int_{k_{0}}^{ \infty}\log_{0}|s-\omega k_{0}|d\ln(1-|r_{1}(s)|^{2})\right),\]
with
\[\delta_{3}(k_{0})\delta_{5}(k_{0})=\delta_{1}(\omega^{2}k_{0})\delta_ {1}(\omega k_{0})=\exp\left(-i\nu_{1}\ln(3k_{0}^{2})-\frac{1}{\pi i}\int_{k_{0}} ^{\infty}\log_{0}|s-\omega k_{0}|d\ln(1-|r_{1}(s)|^{2})\right),\] \[\delta_{2}(k_{0})\delta_{6}(k_{0})=\delta_{4}(\omega^{2}k_{0}) \delta_{4}(\omega k_{0})=\exp\left(-i\nu_{4}\ln(k_{0}^{2})-\frac{1}{\pi i}\int_ {-k_{0}}^{-\infty}\log_{\pi}|s-\omega k_{0}|d\ln(1-|r_{2}(s)|^{2})\right),\] \[\delta_{4}^{2}(k_{0})=\exp\left(-i\nu_{4}\ln(4k_{0}^{2})-\frac{1} {\pi i}\int_{-k_{0}}^{-\infty}\log_{\pi}|s-k_{0}|d\ln(1-|r_{2}(s)|^{2})\right).\]
In conclusion, we have
\[\alpha^{2}\beta_{21}^{A}\delta_{A}^{0}e^{t\Phi_{21}(k_{0})}=\sqrt {\nu_{1}}\exp\left(\frac{4\pi i}{3}+i\left(\frac{\pi}{4}-\arg y-\arg\Gamma(i \nu_{1})\right)-i(36\sqrt{3}tk_{0}^{5})+i\nu_{1}\ln(3^{\frac{7}{2}}20tk_{0}^{ 5})\right.\] \[\left.+i\nu_{4}\ln(4)-\frac{1}{\pi i}\int_{-k_{0}}^{-\infty}\log _{\pi}\frac{|s-\omega k_{0}|}{|s-k_{0}|}d\ln(1-|r_{2}(s)|^{2})-\frac{1}{\pi i} \int_{k_{0}}^{\infty}\log_{0}\frac{|s-k_{0}|}{|s-\omega k_{0}|}d\ln(1-|r_{1}(s )|^{2})\right)\] \[=\sqrt{\nu_{1}}\exp\left(\frac{19\pi i}{12}-i\left(\arg y+\arg \Gamma(i\nu_{1})\right)-i(36\sqrt{3}tk_{0}^{5})+i\nu_{1}\ln(3^{\frac{7}{2}}20tk _{0}^{5})\right.\] \[\left.+i\nu_{4}\ln(4)-\frac{1}{\pi i}\int_{-k_{0}}^{-\infty}\log _{\pi}\frac{|s-\omega k_{0}|}{|s-k_{0}|}d\ln(1-|r_{2}(s)|^{2})-\frac{1}{\pi i} \int_{k_{0}}^{\infty}\log_{0}\frac{|s-k_{0}|}{|s-\omega k_{0}|}d\ln(1-|r_{1}( s)|^{2})\right).\]
On the other hand, we have
\[\alpha\beta_{12}^{B}\delta_{B}^{0}e^{-t\Phi_{21}(-k_{0})}=\sqrt {\nu_{4}}\exp\left(\frac{2\pi i}{3}+i\left(\frac{\pi}{4}-\arg y-\arg\Gamma(i \nu_{4})\right)-i(36\sqrt{3}tk_{0}^{5})+i\nu_{4}\ln(3^{\frac{7}{2}}20tk_{0}^{ 5})\right.\] \[\left.+i\nu_{1}\ln(4)-\frac{1}{\pi i}\int_{k_{0}}^{\infty}\log_{ 0}\frac{|s+\omega k_{0}|}{|s+k_{0}|}d\ln(1-|r_{1}(s)|^{2})-\frac{1}{\pi i} \int_{-k_{0}}^{-\infty}\log_{\pi}\frac{|s+k_{0}|}{|s+\omega k_{0}|}d\ln(1-|r_{2 }(s)|^{2})\right)\] \[=\sqrt{\nu_{4}}\exp\left(\frac{11\pi i}{12}-i\left(\arg y+\arg \Gamma(i\nu_{4})\right)-i(36\sqrt{3}tk_{0}^{5})+i\nu_{4}\ln(3^{\frac{7}{2}}20tk _{0}^{5})\right.\] \[\left.+i\nu_{1}\ln(4)-\frac{1}{\pi i}\int_{k_{0}}^{\infty}\log_{ 0}\frac{|s+\omega k_{0}|}{|s+k_{0}|}d\ln(1-|r_{1}(s)|^{2})-\frac{1}{\pi i}\int_ {-k_{0}}^{-\infty}\log_{\pi}\frac{|s+k_{0}|}{|s+\omega k_{0}|}d\ln(1-|r_{2}(s) |^{2})\right),\]
where
\[\beta_{12}^{B}=\sqrt{\nu_{4}}\exp i\left(\frac{\pi}{4}-\arg y-\arg\Gamma(i\nu _{4})\right),\quad e^{-t\Phi_{21}(-k_{0})}=\exp-i(36\sqrt{3}tk_{0}^{5}),\]
again,
\[\delta_{B}^{0}=\frac{a^{-2i\nu_{4}}e^{-2\chi_{4}(-k_{0})}}{\tilde{\delta}_{v_{4 }}(k_{0})},\]
and
\[a^{-2i\nu_{4}}=\exp\left(i\nu_{4}\ln(3^{\frac{5}{2}}20tk_{0}^{3}) \right),\quad e^{-2\chi_{4}(-k_{0})}=\exp\left(-\frac{1}{\pi i}\int_{-k_{0}}^{- \infty}\log_{\pi}|s+k_{0}|d\ln(1-|r_{2}(s)|^{2})\right),\]
and
\[(\tilde{\delta}_{v_{4}})^{-1}=\frac{\delta_{3}(-k_{0})\delta_{5}(-k_{0})}{ \tilde{\delta}_{2}(-k_{0})\delta_{6}(-k_{0})\delta_{6}^{2}(-k_{0})}=\exp\left(i \nu_{1}\ln(4)-\frac{1}{\pi i}\int_{k_{0}}^{\infty}\log_{\pi}\frac{|s+\omega k_ {0}|}{|s+k_{0}|}d\ln(1-|r_{1}(s)|^{2})\right)\] \[*\exp\left(i\nu_{4}\ln(3k_{0}^{2})+\frac{1}{\pi i}\int_{-k_{0}}^{- \infty}\log_{0}|s+\omega k_{0}|d\ln(1-|r_{2}(s)|^{2})\right),\]
with
\[\delta_{3}(-k_{0})\delta_{5}(-k_{0}) =\delta_{1}(-\omega^{2}k_{0})\delta_{1}(-\omega k_{0})=\exp\left(-i \nu_{1}\ln(k_{0}^{2})-\frac{1}{\pi i}\int_{k_{0}}^{\infty}\log_{0}|s+\omega k_{0 }|d\ln(1-|r_{1}(s)|^{2})\right),\] \[\delta_{2}(-k_{0})\delta_{6}(-k_{0}) =\delta_{4}(-\omega^{2}k_{0})\delta_{4}(-\omega k_{0})=\exp\left(-i \nu_{4}\ln(3k_{0}^{2})-\frac{1}{\pi i}\int_{-k_{0}}^{-\infty}\log_{\pi}|s+ \omega k_{0}|d\ln(1-|r_{2}(s)|^{2})\right),\] \[\delta_{1}^{2}(-k_{0})=\exp\left(-i\nu_{1}\ln(4k_{0}^{2})-\frac{1 }{\pi i}\int_{k_{0}}^{\infty}\log_{\pi}|s+k_{0}|d\ln(1-|r_{1}(s)|^{2})\right).\]
### The long-time asymptotics of solution for the modified SK-KK equation
The modified SK-KK equation (1.11) has Lax pair
\[\left\{\begin{array}{l}\Phi_{x}=\mathcal{M}\Phi,\\ \Phi_{t}=\mathcal{N}\Phi,\end{array}\right. \tag{3.5}\]
where
\[\mathcal{M} =\left(\begin{array}{ccc}0&1&0\\ 0&-w&1\\ \lambda&0&w\end{array}\right),\] \[\mathcal{N} =\left(\begin{array}{ccc}-6\lambda w^{2}+6\lambda w_{x}&N_{12}& -3w_{xx}+9\lambda+6w_{x}w\\ -6\lambda ww_{x}+9\lambda^{2}+3\lambda w_{xx}&N_{22}&N_{23}\\ N_{31}&9\lambda^{2}&N_{33}\end{array}\right),\]
with \(N_{12}=-3w_{x}^{2}-w^{4}+w_{xxx}-9\lambda w-4w_{x}w^{2}+w_{xx}w,N_{22}=-3 \lambda w_{x}+3\lambda w^{2}+w_{xxxx}w+w^{5}-5w_{xx}w^{2}-5w_{x}w_{xx}-5w_{x}^ {2}w,N_{23}=-w^{4}+3w_{x}^{2}-2w_{xxx}+2w_{x}w^{2}+4w_{xx}w,N_{31}=-3\lambda w _{x}^{2}-\lambda w^{4}+\lambda w_{xxx}+9w^{2}\lambda-4\lambda w_{x}w^{2}+ \lambda w_{xx}w\), \(N_{33}=-w_{xxxx}+3\lambda w^{2}-3\lambda w_{x}-w^{5}+5w_{xx}w^{2}+5w_{x}w_{xx}+5 w_{x}^{2}w\).
By means of the same procedure as before, it is found that the Riemann-Hilbert problem associated with the modified SK-KK equation (1.11) is just the one in Subsection 2.10 and the reconstruction formula for potential function \(w(x,t)\) is
\[w(x,t)=3\lim_{k\to\infty}(kM(x,t,k))_{13}. \tag{3.6}\]
Following the Deift-Zhou steepest-descent method [33] in a similar way, the long-time asymptotics of the modified SK-KK equation (1.11) is formulated below
\[w(x,t)= -\frac{1}{3^{\frac{1}{2}}2\sqrt{5}tk_{0}^{\frac{2}{3}}}[\sqrt{ \nu_{1}}\cos\left(\frac{19\pi}{12}-(\arg y_{1}+\arg\Gamma(i\nu_{1}))-(36\sqrt{ 3}tk_{0}^{5})+\nu_{1}\ln(3^{\frac{7}{2}}20tk_{0}^{5})\right.\] \[\left.+\nu_{4}\ln(4)+\frac{1}{\pi}\int_{-k_{0}}^{-\infty}\log_{ \pi}\frac{|s-\omega k_{0}|}{|s-k_{0}|}d\ln(1-|r_{2}(s)|^{2})+\frac{1}{\pi}\int _{k_{0}}^{\infty}\log_{0}\frac{|s-k_{0}|}{|s-\omega k_{0}|}d\ln(1-|r_{1}(s)|^ {2})\right)\] \[+\sqrt{\nu_{4}}\cos\left(\frac{11\pi}{12}-(\arg y_{4}+\arg\Gamma( i\nu_{4}))-(36\sqrt{3}tk_{0}^{5})+\nu_{4}\ln(3^{\frac{7}{2}}20tk_{0}^{5})\right.\] \[\left.+\nu_{1}\ln(4)+\frac{1}{\pi}\int_{k_{0}}^{\infty}\log_{0} \frac{|s+\omega k_{0}|}{|s+k_{0}|}d\ln(1-|r_{1}(s)|^{2})+\frac{1}{\pi}\int_{-k_ {0}}^{-\infty}\log_{\pi}\frac{|s+k_{0}|}{|s+\omega k_{0}|}d\ln(1-|r_{2}(s)|^{2 })\right)]\] \[+O\left(\frac{\ln t}{t}\right). \tag{3.7}\]
Moreover, the Miura transformations in (1.12) can recover the long-time asymptotics of the SK equation (1.1) and KK equation (1.2) once again.
## 4. The verification of the theoretical results: Numerical simulations
Now it is time to verify the theoretical results of the long-time asymptotics for the SK equation (1.1) by direct numerical simulations. To do so, for the SK equation (1.1), take the initial-value condition of the form
\[u(x,0)=u_{0}(x)=\frac{1}{600}(xe^{-\frac{x^{2}}{20}}-e^{-\frac{x^{2}}{10}}). \tag{4.1}\]
Fig. 10 demonstrates the evolutions of the solution \(u(x,t)\) to the SK equation (1.1) with initial data (4.1) at time \(t=50\) and \(t=100\) in two different ways, where the dashed red line shows the leading-order asymptotics from the Riemann-Hilbert formulation and the solid blue line shows the wave profile obtained by numerical simulation. It is seen that the theoretical results agree very well with the direct numerical simulation, which confirms the reliability of the Deift-Zhou steepest-descent method [33]. From Fig. 10(a) and Fig. 10(b), it is expected that the asymptotic formula provides a better and better approximation as \(t\) increases. The convergence is slow for small values of \(x\), which is consistent with the fact that the asymptotic estimate (3.4) is not uniform near \(x=0\).
Figure 10. The comparisons of the leading-order asymptotic approximation from Riemann-Hilbert problem and direct numerical simulations of the SK equation (1.1) with initial data (4.1) at time \(t=50\) and \(t=100\), respectively.
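For readers who wish to reproduce such comparisons, the following minimal Python sketch (ours, not part of the original computation) integrates an equation of the schematic form \(u_{t}=-c\,u_{xxxxx}+N(u)\) pseudospectrally with an integrating-factor Runge-Kutta step, starting from the initial data (4.1). The dispersion coefficient \(c\) and the nonlinearity \(N\) are placeholders that must be filled in according to the normalization of the SK equation (1.1), which is not restated here; the scheme itself is only one possible choice.

```python
import numpy as np

# Illustrative pseudospectral integrator for u_t = -c*u_xxxxx + N(u) on a
# periodic box.  Both c and N(u) are placeholders to be replaced by the
# normalization of the SK equation (1.1).
L, n = 400.0, 2048
x = L * (np.arange(n) / n - 0.5)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)       # wavenumbers
c = 1.0                                          # assumed dispersion coefficient
Lin = -c * (1j * k) ** 5                         # Fourier symbol of -c * d^5/dx^5

def N_hat(u_hat):
    """Fourier transform of the nonlinear term.  Placeholder of KdV type,
    N(u) = -(u^2)_x; replace by the nonlinearity of (1.1)."""
    u = np.real(np.fft.ifft(u_hat))
    return -1j * k * np.fft.fft(u ** 2)

def step(u_hat, dt):
    """One integrating-factor RK4 step: the linear part is solved exactly."""
    E, E2 = np.exp(Lin * dt), np.exp(Lin * dt / 2)
    a = N_hat(u_hat)
    b = N_hat(E2 * (u_hat + dt / 2 * a))
    g = N_hat(E2 * u_hat + dt / 2 * b)
    d = N_hat(E * u_hat + dt * E2 * g)
    return E * u_hat + dt / 6 * (E * a + 2 * E2 * (b + g) + d)

u = (x * np.exp(-x ** 2 / 20) - np.exp(-x ** 2 / 10)) / 600   # initial data (4.1)
u_hat = np.fft.fft(u)
t, dt, T = 0.0, 1e-3, 50.0
while t < T:
    u_hat = step(u_hat, dt)
    t += dt
u = np.real(np.fft.ifft(u_hat))   # numerical profile at t = T, to compare with (3.4)
```

The profile obtained at the final time can then be compared graphically with the leading-order formula (3.4), as in Fig. 10.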
For the KK equation (1.2), consider the initial Gaussian wavepacket of the form
\[v(x,0)=v_{0}(x)=-\frac{1}{10}e^{-\frac{x^{2}}{20}}. \tag{4.2}\]
Fig. 11 displays the evolutions of the solution \(v(x,t)\) to the KK equation (1.2) with initial Gaussian wavepacket (4.2) at time \(t=25\) and \(t=45\) in two different ways, where the dashed red line shows the leading-order asymptotics from the Riemann-Hilbert formulation and the solid blue line shows the wave profile obtained by numerical simulation. It is also observed that the theoretical results agree very well with the direct numerical simulation. From Fig. 11(a) and Fig. 11(b), it is also expected that the asymptotic formula provides a better and better approximation as \(t\) increases.
Figure 11. The comparisons of the leading-order asymptotic approximation from Riemann-Hilbert problem and direct numerical simulations of the KK equation (1.2) with initial Gaussian wavepacket (4.2) at time \(t=25\) and \(t=45\), respectively.
For the modified SK-KK equation (1.11), take the initial Gaussian wavepacket of the form
\[w(x,0)=w_{0}(x)=-\frac{1}{10}e^{-\frac{x^{2}}{20}}. \tag{4.3}\]
Fig. 12 displays the evolutions of the solution \(w(x,t)\) to the modified SK-KK equation (1.11) with initial Gaussian wavepacket (4.3) at time \(t=50\) and \(t=100\) in two different ways, where the dashed red line shows the leading-order asymptotics from the Riemann-Hilbert formulation and the solid blue line shows the wave profile obtained by numerical simulation.
Figure 12. The comparisons of the leading-order asymptotic approximation from Riemann-Hilbert problem and direct numerical simulations of the modified SK-KK equation (1.11) with initial Gaussian wavepacket (4.3) at time \(t=50\) and \(t=100\), respectively.
It is obvious that the Miura transformation \(u=(w_{x}-w^{2})/6\) in (1.12) can also recover the solution of the SK equation (1.1). Thus we give the comparisons of direct numerical simulation and the leading-order asymptotic approximation (3.4) along with the long-time asymptotics for the modified SK-KK equation (1.11) and the Miura transformation in (1.12). Fig. 13 shows these comparisons by considering the initial data of \(u(x,0)\) in (4.1), where the solid blue line shows the wave profile obtained by numerical simulation, the dashed red line shows the leading-order asymptotics (3.4) from the Riemann-Hilbert formulation of the SK equation (1.1), while the dotted purple line displays the solution of the SK equation (1.1) obtained from the Miura transformation \(u=(w_{x}-w^{2})/6\) in (1.12) and the leading-order asymptotic approximation (3.7) of the modified SK-KK equation (1.11). It is seen that the theoretical results agree very well with the direct numerical simulation, which confirms the reliability of the Deift-Zhou steepest-descent method [33].
Figure 13. The comparisons of the leading-order asymptotic approximation from Riemann-Hilbert problem, asymptotic solution from the Miura transformation \(u=(w_{x}-w^{2})/6\) and direct numerical simulations of the SK equation (1.1) with initial data (4.1) at time \(t=100\).
## 5. The Painlevé Region
It is seen from Figs. 10-13 that the long-time asymptotic solutions are invalid near \(x=0\). It is conjectured that this region can be expressed by the solution of the fourth-order Painlevé I equation. The self-similar transformation motivates this conjecture.
For the region \(|\frac{x}{t^{1/5}}|\leq C\), letting \(k\rightarrow\frac{k}{t^{1/5}}\) and denoting \(\tau=tk_{0}^{5}\), \(\theta_{21}\) becomes
\[\theta_{21}(k)=9\sqrt{3}i(k^{5}-5k_{0}^{4}t^{4/5}k)=9\sqrt{3}i(k^{5}-5\tau^{4 /5}k)\]
Take the self-similar transformation \(w(x,t)=(5t)^{-\frac{1}{5}}y(s)\) with \(s=\frac{x}{\sqrt[5]{5}\sqrt[5]{t}}\); then one can get the equation
\[y^{(5)}-5y^{(3)}y^{2}-5y^{\prime\prime 2}+5y^{4}y^{\prime}-5y^{\prime 3}+ \left(-5y^{(3)}-s\right)y^{\prime}-y\left(20y^{\prime}y^{\prime\prime}+1\right)\]
\[=y^{(5)}-5y^{\prime 3}-10yy^{\prime}y^{\prime\prime}-5y^{\prime\prime 2}-5y^{ \prime}y^{(3)}-sy^{\prime}-y-5y^{(3)}y^{2}-10yy^{\prime}y^{\prime\prime}+5y^{ 4}y^{\prime}.\]
Integrating this equation yields
\[y^{(4)}=5y(y^{\prime})^{2}+5y^{\prime}y^{\prime\prime}+sy+5y^{2}y^{\prime\prime }-y^{5}, \tag{5.1}\]
which is just the first Painlevé transcendent according to the fourth-order Painlevé I equation in [34].
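The integration step leading to (5.1) can be verified symbolically. The following short sympy check (ours, for illustration only) confirms that differentiating (5.1), written as "left-hand side minus right-hand side", with respect to \(s\) reproduces the fifth-order expression displayed above.

```python
import sympy as sp

s = sp.symbols('s')
y = sp.Function('y')(s)
y1, y2, y3, y4, y5 = [sp.diff(y, s, n) for n in range(1, 6)]

# fifth-order expression from the self-similar reduction (second displayed form above)
fifth_order = (y5 - 5*y1**3 - 10*y*y1*y2 - 5*y2**2 - 5*y1*y3 - s*y1 - y
               - 5*y3*y**2 - 10*y*y1*y2 + 5*y**4*y1)

# equation (5.1), written as "left-hand side minus right-hand side"
painleve = y4 - (5*y*y1**2 + 5*y1*y2 + s*y + 5*y**2*y2 - y**5)

# differentiating (5.1) once reproduces the fifth-order expression,
# so integrating the latter indeed yields (5.1) up to a constant
assert sp.simplify(fifth_order - sp.diff(painleve, s)) == 0
print("integration step verified")
```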
|
2308.01051 | $θ$-splitting Densities and Reflection Positivity | A simple condition is given that is sufficient to determine whether a measure
that is absolutely continuous with respect to a Gaussian measure on the
space of distributions is reflection positive. It readily generalises
conventional lattice results to an abstract setting, enabling the construction
of many reflection positive measures that are not supported on lattices. | Jobst Ziebell | 2023-08-02T09:48:32Z | http://arxiv.org/abs/2308.01051v2 | # \(\theta\)-splitting Densities and Reflection Positivity
###### Abstract
A simple condition is given that is sufficient to determine whether a measure that is absolutely continuous with respect to a Gaussian measure on the space of distributions is reflection positive. It readily generalises conventional lattice results to an abstract setting, enabling the construction of many reflection positive measures that are not supported on lattices.
## 1 Introduction
Reflection positivity is one of the pillars of Euclidean quantum field theories. It is readily established for wide sets of Gaussian measures but for non-Gaussian measures, the author feels that - with the exception of measures supported on lattices - there is no general framework that can be easily applied. That is fixed in this article by introducing the set of \(\theta\)-splitting functions, which can work as densities to directly generalise the lattice methods used e.g. in [1]. The result is very simple: Given a \(\theta\)-invariant reflection positive Gaussian measure and applying a measurable density to it that is \(\theta\)-splitting, the outcome is a reflection positive measure.
## 2 Preliminaries
A **locally convex space** is a real topological vector space whose topology is induced by some family of seminorms. The **dual** of a locally convex space \(X\) equipped with the strong dual topology will be denoted by \(X^{*}_{\beta}\). **Inner products** denoted with round brackets \((\cdot,\cdot)\) are taken to be \(\mathbb{R}\)-bilinear. Throughout this work, \(d\in\mathbb{N}\) is fixed. We shall work on the following spaces of real test functions with their canonical LF topologies [12, p. 131-133]:
\[\mathcal{D}:=\mathcal{D}(\mathbb{R}^{d+1})\qquad\text{and}\qquad\mathcal{D}_{+ }:=\mathcal{D}(\mathbb{R}_{>0}\times\mathbb{R}^{d})\,. \tag{1}\]
Let us denote the corresponding continuous **restriction map** by \(\pi_{+}:\mathcal{D}_{\beta}^{*}\rightarrow(\mathcal{D}_{+})_{\beta}^{*}\) (see e.g. [17, p. 245-246]). \(\mathcal{D}\) and \(\mathcal{D}_{+}\) as well as their strong duals \(\mathcal{D}_{\beta}^{*}\) and \((\mathcal{D}_{+})_{\beta}^{*}\) are complete [17, Theorem 13.1], barrelled [18, p. 347], nuclear spaces [17, p. 530] (hence, reflexive by [19, p. 147]) that are also Lusin spaces [14, p. 128] and thus in particular Souslin spaces.
**Theorem 2.1** ([16, Lemma 6.4.2.(ii), Lemma 6.6.4]).: _Let \(X\) and \(Y\) be Souslin spaces. Then, the Borel \(\sigma\)-algebra of \(X\times Y\) coincides with the \(\sigma\)-algebra generated by all products of Borel sets in \(X\) and \(Y\) respectively._
In this work, a **measure** is taken to be a countably additive nonnegative function on a \(\sigma\)-algebra. A **Borel measure** is thus a measure on a Borel \(\sigma\)-algebra and a **Radon measure** is a Borel measure that is inner regular over compact sets. A **centred Gaussian measure** on a locally convex space \(X\) is a Borel probability measure with the property that the pushforward measures by elements of \(X^{*}\) are centred Gaussians or the Dirac delta measure \(\delta_{0}\) at the origin. One can in general consider non-Radon Gaussian measures on locally convex spaces. However, every Borel measure on the spaces \(\mathcal{D}_{\beta}^{*},(\mathcal{D}_{+})_{\beta}^{*}\) and countable products thereof is automatically Radon [16, Theorem 7.4.3].
A subset \(A\subseteq X\) is \(\boldsymbol{\mu}\)**-measurable** with respect to a measure \(\mu\) on some \(\sigma\)-algebra \(\mathcal{A}\) on \(X\) if it is in the Lebesgue completion \(\mathcal{A}_{\mu}\) of \(\mathcal{A}\) with respect to \(\mu\). Similarly, a function \(f:X\rightarrow[-\infty,\infty]\) is \(\boldsymbol{\mu}\)**-measurable** if the preimage of every Borel subset of \([-\infty,\infty]\) is in \(\mathcal{A}_{\mu}\). Likewise, \(f:X\rightarrow[-\infty,\infty]\) is \(\boldsymbol{\mu}\)**-integrable** if \(f\) is \(\mu\)-measurable and \(\int|f|\mathrm{d}\mu<\infty\). A subset \(A\subseteq X\) is \(\boldsymbol{\mu}\)**-negligible** if it is a subset of some \(B\in\mathcal{A}\) with \(\mu(B)=0\).
The **pushforward** of a Borel measure \(\mu\) on a Hausdorff space \(X\) by a continuous function \(f:X\to Y\) to a Hausdorff space \(Y\) will be denoted by \(f_{*}\mu\). It is automatically a Borel measure on \(Y\) and if \(\mu\) is Radon, so is \(f_{*}\mu\)[16, Theorem 9.1.1.(i)]. The **convolution** of two Borel measures \(\mu\) and \(\nu\) on a Souslin locally convex space \(X\) is given by \(\mu*\nu=s_{*}(\mu\times\nu)\) where \(s:X\times X\to X,(x,y)\mapsto x+y\). This is well-defined by theorem 2.1.
To every finite Borel measure \(\mu\) on a locally convex space \(X\) we associate its **characteristic function**\(\hat{\mu}:X^{*}\rightarrow\mathbb{C}\) with
\[\phi\mapsto\int_{X}\exp\left[i\phi\left(x\right)\right]\mathrm{d}\mu\left(x \right)\,. \tag{2}\]
It is well-known that two Radon measures on a locally convex space are equal if and only if their characteristic functions are equal [16, Lemma 7.13.5]. Moreover, if \(\mu\) is a centred Gaussian measure on \(X\), its characteristic function is given by
\[\hat{\mu}\left(\phi\right)=\exp\left[-\frac{1}{2}\left(\phi,\phi\right)_{L^{2 }\left(\mu\right)}\right] \tag{3}\]
for all \(\phi\in X^{*}\)[16, Theorem 2.2.4, Corollary 2.2.5].
**Theorem 2.2**.: _Let \(f:X\to Y\) be a continuous map from a Souslin space \(X\) to a Hausdorff space \(Y\). Then, for every Borel set \(B\subseteq X\), \(f(B)\) is measurable by any Radon measure on \(Y\)._
Proof.: Since every Borel subset of a Souslin space is Souslin [14, p. 96 Theorem 3], this follows directly from [1, Theorem A.3.15].
**Corollary 2.3**.: _Let \(p:X\to Y\) be a continuous map from a Souslin space \(X\) to a Hausdorff space \(Y\) and \(\mu\) a Radon measure on \(X\). Then every function \(f:Y\to[-\infty,\infty]\) with the property that \(f\circ p\) is \(\mu\)-measurable is \((p_{*}\mu)\)-measurable._
Proof.: First, note that \(p(X)\) is \((p_{*}\mu)\)-measurable by the preceeding theorem. Now, letting \(B\subset[-\infty,\infty]\) be a Borel set, we have
\[p^{-1}\left(f^{-1}\left(B\right)\right)=A\cup N_{1} \tag{4}\]
for some Borel subset \(A\subseteq X\) and some \(\mu\)-negligible set \(N_{1}\subseteq X\). For brevity, let \(N_{2}=Y\setminus p(X)\), which is clearly \((p_{*}\mu)\)-negligible. Then,
\[\begin{split} f^{-1}\left(B\right)&=\left[f^{-1} \left(B\right)\cap p(X)\right]\cup\left[f^{-1}\left(B\right)\cap N_{2}\right] \\ &=p\left(p^{-1}\left(f^{-1}\left(B\right)\right)\right)\cup\left[ f^{-1}\left(B\right)\cap N_{2}\right]\\ &=p\left(A\right)\cup p\left(N_{1}\right)\cup\left[f^{-1}\left(B \right)\cap N_{2}\right]\,.\end{split} \tag{5}\]
\(p(A)\) is \((p_{*}\mu)\)-measurable by the preceeding theorem and \(p(N_{1})\) as well as \(f^{-1}(B)\cap N_{2}\) are clearly \((p_{*}\mu)\)-negligible.
We close this section by a simple lemma on positive semidefinite matrices.
**Lemma 2.4** ([14, Satz VII]).: _Let \(N\in\mathbb{N}\) and \(A,B\) be positive semidefinite \(N\times N\) matrices with respect to the standard inner product on \(\mathbb{C}^{N}\). Then the matrix \((A_{m,n}B_{m,n})_{m,n=1}^{N}\) given by component-wise multiplication is positive semidefinite._
Proof.: Diagonalising \(B\) by a unitary matrix \(U\), we obtain
\[B_{m,n}=\sum_{a=1}^{N}U_{m,a}^{*}\lambda_{a}U_{n,a} \tag{6}\]
for some nonnegative numbers \(\lambda_{1},\ldots,\lambda_{N}\). Hence, for any \(c\in\mathbb{C}^{N}\),
\[\sum_{m,n,a,b=1}^{N}c_{m}^{*}A_{m,n}B_{m,n}c_{n}=\sum_{a=1}^{N}\lambda_{a} \sum_{m,n=1}^{N}\left(U_{m,a}c_{m}\right)^{*}A_{m,n}\left(U_{n,a}c_{n}\right) \geq 0\,. \tag{7}\]
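As a quick numerical illustration of the lemma (not part of the proof), one may check the component-wise product of randomly generated positive semidefinite matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# random positive semidefinite matrices A = X X^*, B = Y Y^*
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Y = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A, B = X @ X.conj().T, Y @ Y.conj().T

H = A * B                        # component-wise (Hadamard) product
eigs = np.linalg.eigvalsh(H)     # H is Hermitian, so eigvalsh applies
assert np.all(eigs > -1e-9)      # all eigenvalues nonnegative up to rounding
print(eigs)
```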
## 3 Reflection Positivity
On \(\mathbb{R}^{d+1}\) we define the operation of **time reflection** which we shall denote by \(\theta:\mathbb{R}^{d+1}\to\mathbb{R}^{d+1},(x_{1},\ldots,x_{d+1})\mapsto(-x_{ 1},\ldots,x_{d+1})\). By a slight abuse of notation, \(\theta\) extends continuously and linearly to \(\mathcal{D}\) and \(\mathcal{D}_{\beta}^{*}\) in the obvious way.
**Definition 3.1** ([12, p. 90]).: Let \(\mu\) be a finite Borel measure on \(\mathcal{D}_{\beta}^{*}\). Then \(\mu\) is **reflection positive** if for every sequence \((\phi_{n})_{n\in\mathbb{N}}\) in \(\mathcal{D}_{+}\), every sequence \((c_{n})_{n\in\mathbb{N}}\) of complex numbers and every \(N\in\mathbb{N}\),
\[\sum_{m,n=1}^{N}c_{m}^{*}\hat{\mu}\left(\phi_{m}-\theta\phi_{n}\right)c_{n}\geq 0\,. \tag{8}\]
Furthermore, \(\mu\) is \(\boldsymbol{\theta}\)**-invariant** if \(\theta_{*}\mu=\mu\).
To begin with, let us recapitulate two of the most important (in the author's opinion) theorems on reflection positive measures along with their proofs.
**Theorem 3.2** ([12, Theorem 6.2.3]).: _Let \(\mu\) be a finite, reflection positive Borel measure on \(\mathcal{D}_{\beta}^{*}\) with the property that for every \(\phi\in\mathcal{D}_{+}\) the function \(\mathbb{R}\to\mathbb{C},t\mapsto\hat{\mu}(t\phi)\) has an analytic continuation to some neighbourhood of zero in the complex plane. Then, \((\phi,\theta\phi)_{L^{2}(\mu)}\geq 0\) for all \(\phi\in\mathcal{D}_{+}\)._
Proof.: For \(\lambda>0\) let \(\psi_{1}=\lambda\phi\), \(\psi_{2}=0\), \(c_{1}=\lambda^{-1}\) and \(c_{2}=-\lambda^{-1}\). Since \(\mu\) is reflection positive, we obtain
\[\begin{split} 0&\leq\sum_{m,n=1}^{2}c_{m}^{*}\hat{\mu}\left(\psi_{m}-\theta\psi_{n}\right)c_{n}\\ &=\frac{1}{\lambda^{2}}\int_{\mathcal{D}_{\beta}^{*}}\left(\exp\left[i\lambda T\left(\phi-\theta\phi\right)\right]-\exp\left[i\lambda T\left(\phi\right)\right]-\exp\left[-i\lambda T\left(\theta\phi\right)\right]+1\right)\mathrm{d}\mu\left(T\right)\,.\end{split}\]
By a classical theorem of Lukacs [13, p. 192], the moment-generating functions of the pushforward measures \(\phi_{*}\mu\), \((\theta\phi)_{*}\mu\) and \((\phi-\theta\phi)_{*}\mu\) exist as integrals in some neighbourhood of zero. Consequently, we can take \(\lambda\to 0\) under the integral and obtain
\[\lim_{\lambda\to 0}\sum_{m,n=1}^{2}c_{m}^{*}\hat{\mu}\left(\psi_{m}-\theta \psi_{n}\right)c_{n}=\int_{\mathcal{D}_{\beta}^{*}}T\left(\phi\right)T\left( \theta\phi\right)\mathrm{d}\mu\left(T\right)=\left\langle\phi,\theta\phi \right\rangle_{L^{2}(\mu)}\geq 0\,. \tag{10}\]
**Theorem 3.3** ([12, Theorem 6.2.2]).: _Let \(\mu\) be a \(\theta\)-invariant Gaussian measure on \(\mathcal{D}_{\beta}^{*}\). Then \(\mu\) is reflection positive if and only if \((\phi,\theta\phi)_{L^{2}(\mu)}\geq 0\) for all \(\phi\in\mathcal{D}_{+}\)._
Proof.: \(\Rightarrow\): This is clear by the preceeding theorem.
\(\Leftarrow\): Let \((\cdot,\cdot)\) denote the inner product in \(L^{2}(\mu)\) and let \((\phi_{n})_{n\in\mathbb{N}}\) be a sequence in \(\mathcal{D}_{+}\), \((c_{n})_{n\in\mathbb{N}}\) a sequence of complex numbers and \(N\in\mathbb{N}\). Then, \(\theta\)-invariance implies
\[\sum_{m,n=1}^{N}c_{m}^{*}\hat{\mu}\left(\phi_{m}-\theta\phi_{n}\right)c_{n}=\sum_{m,n=1}^{N}c_{m}^{*}\hat{\mu}\left(\phi_{m}\right)\exp\left[\left(\phi_{m},\theta\phi_{n}\right)\right]\hat{\mu}\left(\phi_{n}\right)c_{n}\,.\]
Since \(\hat{\mu}\) is real, the statement follows if \((\exp\left[\left(\phi_{m},\theta\phi_{n}\right)\right])_{m,n=1}^{N}\) is a positive semidefinite matrix. Since \((\phi_{m},\theta\phi_{n})=(\theta\phi_{m},\phi_{n})\) by the \(\theta\)-invariance of \(\mu\), \(\theta\) extends to a positive semidefinite linear operator on the complexification of \(\mathrm{span}\{\phi_{n}:n\in\mathbb{N}\}\). Consequently, \(((\phi_{m},\theta\phi_{n}))_{m,n=1}^{N}\) is positive semidefinite. By decomposing the exponential as a power series, the claim now follows from lemma 2.4.
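A finite-dimensional toy version of this argument (our own illustration, with all names hypothetical) can be checked numerically: take a centred Gaussian on \(\mathbb{R}^{2}\) whose two coordinates play the roles of the field at times \(+1\) and \(-1\), with \(\theta\) swapping them, and verify that the matrices appearing in Definition 3.1 are positive semidefinite.

```python
import numpy as np

# Toy analogue of Theorem 3.3: a centred Gaussian on R^2 whose coordinates are
# attached to the "times" +1 and -1, with covariance C = [[a, b], [b, a]] and
# time reflection theta swapping the coordinates.  A test functional supported
# at positive time is phi = (p, 0), so theta*phi = (0, p).
a, b = 1.0, 0.4                     # b >= 0 plays the role of (phi, theta*phi) >= 0
rng = np.random.default_rng(1)
p = rng.standard_normal(6)          # arbitrary "test functions" p_1, ..., p_6

# hat(mu)(phi_m - theta*phi_n) = exp(-a/2 * (p_m^2 + p_n^2) + b * p_m * p_n)
M = np.exp(-0.5 * a * (p[:, None] ** 2 + p[None, :] ** 2) + b * np.outer(p, p))
eigs = np.linalg.eigvalsh(M)
assert np.min(eigs) > -1e-9         # the reflection-positivity matrix is PSD
print(eigs)
```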
The main theorem of this article depends on the following simple property of a function with respect to \(\theta\).
**Definition 3.4**.: A function \(F:\mathcal{D}_{\beta}^{*}\rightarrow[-\infty,\infty]\) is called \(\boldsymbol{\theta}\)**-splitting** if there exists a function \(G:(\mathcal{D}_{+})_{\beta}^{*}\rightarrow[-\infty,\infty]\) such that
\[F=G\circ\pi_{+}+G\circ\pi_{+}\circ\theta\,. \tag{12}\]
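As a simple illustration (this example is ours, not part of the original argument), consider a measurable \(V:\mathbb{R}\to\mathbb{R}\) and set
\[G(S)=\begin{cases}-\int_{\mathbb{R}_{>0}\times\mathbb{R}^{d}}V\left(S(x)\right)\mathrm{d}x,&S\text{ is given by a locally integrable function and the integral exists},\\ -\infty,&\text{otherwise}.\end{cases}\]
Then \(F=G\circ\pi_{+}+G\circ\pi_{+}\circ\theta\) is \(\theta\)-splitting by construction, and on distributions \(T\) that are locally integrable functions it reduces to the familiar lattice-type expression \(F(T)=-\int_{\mathbb{R}^{d+1}}V\left(T(x)\right)\mathrm{d}x\), since the time-zero hyperplane is a Lebesgue null set.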
**Theorem 3.5**.: _Let \(\mu\) be a \(\theta\)-invariant reflection positive centred Gaussian measure on \(\mathcal{D}_{\beta}^{*}\). Then, for any \(\mu\)-measurable \(\theta\)-splitting function \(F:\mathcal{D}_{\beta}^{*}\rightarrow[-\infty,\infty]\) with \(\exp\circ F\in L^{1}(\mu)\), the finite Borel measure_
\[\omega=\exp\left[F\right]\cdot\mu \tag{13}\]
_is reflection positive._
Proof.: Define
\[j:\mathcal{D}_{\beta}^{*}\rightarrow(\mathcal{D}_{+})_{\beta}^{*}\times( \mathcal{D}_{+})_{\beta}^{*}\qquad T\mapsto(\pi_{+}T,\pi_{+}\theta T). \tag{14}\]
\(j\) is clearly continuous such that the pushforward measure \(j_{*}\mu\) is a Radon measure \(\nu\) on \((\mathcal{D}_{+})_{\beta}^{*}\times(\mathcal{D}_{+})_{\beta}^{*}\). Now, let
\[F_{2}:(\mathcal{D}_{+})_{\beta}^{*}\times(\mathcal{D}_{+})_{\beta}^{*} \rightarrow\mathbb{R}\qquad(T,K)\mapsto G\left(T\right)+G\left(K\right)\,. \tag{15}\]
Then, for every \(T\in\mathcal{D}_{\beta}^{*}\),
\[\left(F_{2}\circ j\right)(T)=G\left(\pi_{+}T\right)+G\left(\pi_{+}\theta T \right)=F\left(T\right)\,, \tag{16}\]
such that \(F_{2}\) is \(\nu\)-measurable by corollary 2.3. Turning to reflection positivity, let \((\phi_{n})_{n\in\mathbb{N}}\) be a sequence in \(\mathcal{D}_{+}\) and note that
\[\begin{split}\hat{\omega}\left(\phi_{m}-\theta\phi_{n}\right)& =\int_{\mathcal{D}_{\beta}^{*}}\exp\left[iT\left(\phi_{m}\right) -iT\left(\theta\phi_{n}\right)+F\left(T\right)\right]\mathrm{d}\mu\left(T \right)\\ &=\int_{\mathcal{D}_{\beta}^{*}}\exp\left[i\ j\left(T\right) \left(\phi_{m},-\phi_{n}\right)+\left(F_{2}\circ j\right)\left(T\right)\right] \mathrm{d}\mu\left(T\right)\\ &=\int_{((\mathcal{D}_{+})_{\beta}^{*})^{2}}\exp\left[iT\left( \phi_{m}\right)-iK\left(\phi_{n}\right)+F_{2}\left(T,K\right)\right]\mathrm{d} \nu\left(T,K\right)\\ &=\int_{((\mathcal{D}_{+})_{\beta}^{*})^{2}}\exp\left[iT\left( \phi_{m}\right)-iK\left(\phi_{n}\right)+G\left(T\right)+G\left(K\right)\right] \mathrm{d}\nu\left(T,K\right)\,.\end{split} \tag{17}\]
The above expression suggests to find a disintegration of \(\nu\) that separates the \(T\) and \(K\) variables. To that end, recall that \(\mu\) is Gaussian such that for any \(\phi,\psi\in\mathcal{D}_{+}\), we have
\[\hat{\nu}\left(\phi,\psi\right)=\int_{\mathcal{D}_{\beta}^{*}}\exp\left[iT \left(\phi\right)+iT\left(\theta\psi\right)\right]\mathrm{d}\mu\left(T\right)= \exp\left[-\frac{1}{2}\left\|\phi+\theta\psi\right\|_{L^{2}(\mu)}^{2}\right]\,. \tag{18}\]
Furthermore, by theorem 3.2, the Cauchy–Schwarz inequality and the \(\theta\)-invariance of \(\mu\),
\[0\leq\left\langle\phi,\theta\phi\right\rangle_{L^{2}(\mu)}\leq\left\langle\phi,\phi\right\rangle_{L^{2}(\mu)}\,. \tag{19}\]
Moreover, since \((\mathcal{D}_{+})_{\beta}^{*}\) is a reflexive, nuclear, barrelled space, there exist uniquely determined Radon Gaussian measures \(P\) and \(Q\) on \((\mathcal{D}_{+})_{\beta}^{*}\) with
\[\hat{P}\left(\phi\right) =\exp\left[-\frac{1}{2}\left\langle\phi,\phi\right\rangle_{L^{2} \left(\mu\right)}+\frac{1}{2}\left\langle\phi,\theta\phi\right\rangle_{L^{2} \left(\mu\right)}\right]\,, \tag{20}\] \[\hat{Q}\left(\phi\right) =\exp\left[-\frac{1}{2}\left\langle\phi,\theta\phi\right\rangle_{ L^{2}\left(\mu\right)}\right] \tag{21}\]
by the Minlos theorem [2, Theorem 7.13.9]. Defining the diagonal map
\[\Delta:(\mathcal{D}_{+})_{\beta}^{*}\rightarrow(\mathcal{D}_{+})_{\beta}^{*} \times(\mathcal{D}_{+})_{\beta}^{*}\qquad T\mapsto(T,T) \tag{22}\]
it is clear that
\[\hat{\nu}\left(\phi,\psi\right)=\hat{P}\left(\phi\right)\hat{P}\left(\psi \right)\hat{Q}\left(\phi+\psi\right)=\hat{P}\left(\phi\right)\hat{P}\left( \psi\right)\widehat{\Delta_{*}Q}\left(\phi,\psi\right) \tag{23}\]
for all \(\phi,\psi\in\mathcal{D}_{+}\). Equivalently, \(\nu=(P\times P)*(\Delta_{*}Q)\) by theorem 2.1. Hence, it is straightforward to verify that
\[\hat{\omega}\left(\phi_{m}-\theta\phi_{n}\right)=\int_{(( \mathcal{D}_{+})_{\beta}^{*})^{3}}\exp \bigl{[}i\left(T+L\right)\left(\phi_{m}\right)-i\left(K+L\right) \left(\phi_{n}\right) \tag{24}\] \[\qquad\qquad+G\left(T+L\right)+G\left(K+L\right)\bigr{]}\mathrm{d }\left(P\times P\times Q\right)\left(T,K,L\right)\,.\]
Now, the functions
\[H_{m}\left(L\right)=\int_{(\mathcal{D}_{+})_{\beta}^{*}}\exp\left[-i\left(T+L \right)\left(\phi_{m}\right)+G\left(T+L\right)\right]\mathrm{d}P\left(T\right) \tag{25}\]
for \(m\in\mathbb{N}\) are well-defined \(Q\)-almost everywhere. Thus, using Fubini, we arrive at
\[\sum_{m,n=1}^{N}c_{m}^{*}\hat{\omega}\left(\phi_{m}-\theta\phi_{n}\right)c_{n }=\int_{(\mathcal{D}_{+})_{\beta}^{*}}\left|\sum_{n=1}^{N}c_{n}H_{n}\left(L \right)\right|^{2}\mathrm{d}Q\left(L\right)\geq 0 \tag{26}\]
for any \(N\in\mathbb{N}\) and any sequence \((c_{n})_{n\in\mathbb{N}}\) of complex numbers.
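For the reader's convenience, the factorization (23) used in the proof can be checked directly from (18), (20) and (21): abbreviating \(\langle\cdot,\cdot\rangle=\langle\cdot,\cdot\rangle_{L^{2}(\mu)}\) and using \(\langle\phi+\psi,\theta(\phi+\psi)\rangle=\langle\phi,\theta\phi\rangle+\langle\psi,\theta\psi\rangle+2\langle\phi,\theta\psi\rangle\) as well as \(\left\|\theta\psi\right\|=\left\|\psi\right\|\),
\[\hat{P}\left(\phi\right)\hat{P}\left(\psi\right)\hat{Q}\left(\phi+\psi\right)=\exp\left[-\tfrac{1}{2}\langle\phi,\phi\rangle-\tfrac{1}{2}\langle\psi,\psi\rangle-\langle\phi,\theta\psi\rangle\right]=\exp\left[-\tfrac{1}{2}\left\|\phi+\theta\psi\right\|_{L^{2}(\mu)}^{2}\right]=\hat{\nu}\left(\phi,\psi\right)\,.\]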
This theorem is strikingly simple and can be applied very easily. Let us call a locally convex space \(X\) together with a continuous, linear map \(j:X\to\mathcal{D}_{\beta}^{*}\) a \(\boldsymbol{\theta}\)**-model space**, if there is a continuous, linear operator (slight abuse of terminology) \(\theta:X\to X\) such that \(\theta\circ j=j\circ\theta\).
_Example 3.6_.: Examples of such \(\theta\)-model spaces include function spaces on \(\theta\)-symmetric lattice subsets of \(\mathbb{R}^{d+1}\), \(\mathcal{D}\) itself, and the space of Schwartz functions on \(\mathbb{R}^{d+1}\), each together with its usual injection into \(\mathcal{D}_{\beta}^{*}\).
_Remark 3.7_.: The above examples cover most of what is used in the literature on Euclidean interacting quantum field theories and are also Souslin spaces.
We may now extend the definition of a \(\boldsymbol{\theta}\)**-splitting** function to \(\theta\)-model spaces.
**Definition 3.8**.: A function \(F:X\to[-\infty,\infty]\) on a \(\theta\)-model space \((X,j)\) is called \(\boldsymbol{\theta}\)**-splitting** if there exists a function \(G:X/j^{-1}(\ker\pi_{+})\to[-\infty,\infty]\) such that
\[F=G\circ\pi_{+}^{X}+G\circ\pi_{+}^{X}\circ\theta \tag{27}\]
Here, \(\pi_{+}^{X}:X\to X/j^{-1}(\ker\pi_{+})\) is the canonical quotient map.
**Corollary 3.9**.: _Let \((X,j)\) be a Souslin \(\theta\)-model space. Furthermore, let \(\mu\) be a Gaussian measure on \(X\) with the property that \(j_{*}\mu\) is \(\theta\)-invariant and reflection positive. Then, for any \(\mu\)-measurable \(\theta\)-splitting function \(F:X\to[-\infty,\infty]\) with \(\exp\circ F\in L^{1}(\mu)\), the finite Borel measure_
\[\omega=j_{*}\left(\exp\left[F\right]\cdot\mu\right) \tag{28}\]
_is reflection positive._
Proof.: Let \(G\) and \(\pi_{+}^{X}\) be given as in definition 3.8 and define the function \(G_{2}:(\mathcal{D}_{+})_{\beta}^{*}\to[-\infty,\infty]\) given by
\[T\mapsto\begin{cases}G(\pi_{+}^{X}x)&\text{if }\exists\,x\in X:T=\pi_{+}jx \\ 0&\text{else.}\end{cases} \tag{29}\]
To see that \(G_{2}\) is well-defined, note that if \(\pi_{+}jx=\pi_{+}jy\) for some \(x,y\in X\), then \(j(x-y)\in\ker\pi_{+}\), i.e. \(x-y\in j^{-1}(\ker\pi_{+})=\ker\pi_{+}^{X}\). Now, define the function \(F_{2}:\mathcal{D}_{\beta}^{*}\to[-\infty,\infty]\) given by
\[T\mapsto G_{2}\left(\pi_{+}T\right)+G_{2}\left(\pi_{+}\theta T\right)\,. \tag{30}\]
Clearly, \(F_{2}\circ j=F\) such that \(F_{2}\) is \((j_{*}\mu)\)-measurable by corollary 2.3. Consequently, \(\omega=\exp[F_{2}]\cdot(j_{*}\mu)\) and theorem 3.5 applies.
We finish this article with a simple example.
_Example 3.10_.: Let \(\mathcal{S}\) denote the space of Schwartz functions on \(\mathbb{R}^{d+1}\). Define \(j:\mathcal{S}\to\mathcal{D}_{\beta}^{*}\) by \(j(\phi)(\psi)=\int_{\mathbb{R}^{d+1}}\psi\phi\) for all \(\phi\in\mathcal{S}\) and \(\psi\in\mathcal{D}\). Moreover, let \(\mu\) be a Gaussian measure on \(\mathcal{S}\) with the property that \(j_{*}\mu\) is \(\theta\)-invariant and reflection positive. Furthermore, let \(F:\mathcal{S}\to\mathbb{R},\phi\mapsto-\lambda\int_{\mathbb{R}^{d+1}}\phi^{4}\) for some \(\lambda>0\). Then,
\[F(\phi)=-\lambda\int_{\mathbb{R}_{>0}\times\mathbb{R}^{d}}\phi^{4}-\lambda \int_{\mathbb{R}_{>0}\times\mathbb{R}^{d}}\left(\theta\phi\right)^{4} \tag{31}\]
provides a \(\theta\)-splitting of \(F\).
## Acknowledgments
This work has been supported by the Deutsche Forschungsgemeinschaft (DFG) under Grant No. 406116891 within the Research Training Group RTG 2522/1. |
2310.14170 | Learning Invariant Molecular Representation in Latent Discrete Space | Molecular representation learning lays the foundation for drug discovery.
However, existing methods suffer from poor out-of-distribution (OOD)
generalization, particularly when data for training and testing originate from
different environments. To address this issue, we propose a new framework for
learning molecular representations that exhibit invariance and robustness
against distribution shifts. Specifically, we propose a strategy called
``first-encoding-then-separation'' to identify invariant molecule features in
the latent space, which deviates from conventional practices. Prior to the
separation step, we introduce a residual vector quantization module that
mitigates the over-fitting to training data distributions while preserving the
expressivity of encoders. Furthermore, we design a task-agnostic
self-supervised learning objective to encourage precise invariance
identification, which enables our method widely applicable to a variety of
tasks, such as regression and multi-label classification. Extensive experiments
on 18 real-world molecular datasets demonstrate that our model achieves
stronger generalization against state-of-the-art baselines in the presence of
various distribution shifts. Our code is available at
https://github.com/HICAI-ZJU/iMoLD. | Xiang Zhuang, Qiang Zhang, Keyan Ding, Yatao Bian, Xiao Wang, Jingsong Lv, Hongyang Chen, Huajun Chen | 2023-10-22T04:06:44Z | http://arxiv.org/abs/2310.14170v1 | # Learning Invariant Molecular Representation in Latent Discrete Space
###### Abstract
Molecular representation learning lays the foundation for drug discovery. However, existing methods suffer from poor out-of-distribution (OOD) generalization, particularly when data for training and testing originate from different environments. To address this issue, we propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts. Specifically, we propose a strategy called "first-encoding-then-separation" to identify invariant molecule features in the latent space, which deviates from conventional practices. Prior to the separation step, we introduce a residual vector quantization module that mitigates the over-fitting to training data distributions while preserving the expressivity of encoders. Furthermore, we design a task-agnostic self-supervised learning objective to encourage precise invariance identification, which enables our method widely applicable to a variety of tasks, such as regression and multi-label classification. Extensive experiments on 18 real-world molecular datasets demonstrate that our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts. Our code is available at [https://github.com/HICAI-ZJU/iMoLD](https://github.com/HICAI-ZJU/iMoLD).
## 1 Introduction
Computer-aided drug discovery has played an important role in facilitating molecular design, aiming to reduce costs and alleviate the high risk of experimental failure [1; 2]. In recent years, the emergence of deep learning has led to a growing interest in molecular representation learning, which aims to encode molecules as low-dimensional and dense vectors [3; 4; 5; 6; 7; 8]. These learned representations have demonstrated their availability in various tasks, including target structure prediction [9], binding affinity analysis [10], drug re-purposing [11] and retrosynthesis [12].
Despite the significant progress in molecular representation methods, a prevailing assumption in the traditional approaches is that data sources are independent and sampled from the same distribution. However, in practical drug development, molecules exhibit diverse characteristics and may originate from different distributions [13; 14]. For example, in virtual screening scenarios [15], the distribution shift occurs not only in the molecule itself, e.g., size [13] or scaffold [16] changes, but also in the
target [14], e.g., the emergent COVID-19 leads to a new target from unknown distributions. This out-of-distribution (OOD) problem poses a challenge to the generalization capability of molecular representation methods and results in the degradation of performance in downstream tasks [17; 18].
Current studies mainly focus on regular Euclidean data for OOD generalization. Most studies [18; 19; 20; 21; 22] adopt the invariance principle [23; 24], which highlights the importance of focusing on the critical causal factors that remain invariant to distribution shifts while overlooking the spurious parts [23; 19; 20]. Although the invariance principle has shown effectiveness on Euclidean data, its application to non-Euclidean data necessitates further investigation and exploration. Molecules are often represented as graphs, a typical form of non-Euclidean data, with atoms as nodes and bonds as edges, thereby preserving rich structural information [3]. The complicated molecular graph structure makes it challenging to accurately distinguish the invariant causal parts from diverse spurious correlations [25].
Preliminary studies have made some attempts on molecular graphs [26; 25; 27; 28; 29]. They explicitly divide graphs to extract invariant substructures, at the granularity of nodes [28], edges [26; 27; 28; 29], or motifs [25]. These attempts can be summarized as the "first-separation-then-encoding" paradigm (Figure 1 (a)), which first divides the graph into invariant and spurious parts and then encodes each part separately. We argue that this practice is suboptimal for extremely complex and entangled graphs, such as real-world molecules [30; 31], since some intricate properties cannot be readily determined by analyzing a subset of the molecular structure [30; 31]. Besides, some methods [25; 27] require assumptions and inferences about the environmental distribution, which are often untenable for molecules due to the intricate environment. Additionally, the downstream tasks related to molecules are diverse, including regression and classification. However, some methods such as CIGA [26] and DisC [32] can only be applied to single-label classification tasks, due to the constraints imposed by the invariant learning objective function.
To fill these gaps, we present a novel molecule invariant learning framework to effectively capture the invariance of molecular graphs and achieve generalized representation against distribution shifts. In contrast to the conventional approaches, we propose a "first-encoding-then-separation" strategy (Figure 1 (b)). Specifically, we first employ a Graph Neural Network (GNN) [33; 34; 35] to encode the molecule (i.e., encoding GNN), followed by a residual vector quantization module to alleviate the over-fitting to training data distributions while preserving the expressivity of the encoder. We then utilize another GNN to score the molecule representation (i.e., scoring GNN), which measures the contributions of each dimension to the target in latent space, resulting in a clear separation between invariant and spurious representations. Finally, we design a self-supervised learning objective [36; 37; 38] that aims to encourage the identified invariant features to effectively preserve label-related information while discarding environment-related information. It is worth noting that the objective is task-agnostic, which means that our method can be applied to various tasks, including regression and single- or multi-label classification.
Our main contributions can be summarized as follows:
* We propose a paradigm of "first-encoding-then-separation" using an encoding GNN and a scoring GNN, which enables us to effectively identify invariant features from highly complex graphs.
* We introduce a residual vector quantization module that strikes a balance between model expressivity and generalization. The quantization acts as the bottleneck to enhance generalization, while the residual connection complements the model's expressivity.
* We design a self-supervised invariant learning objective that facilitates the precise capture of invariant features. This objective is versatile, task-agnostic, and applicable to a variety of tasks.
* We conduct comprehensive experiments on a diverse set of real-world datasets with various distribution shifts. The experimental results demonstrate the superiority of our method compared to state-of-the-art approaches.
Figure 1: (a) First-Separation-Then-Encoding: the input is separated using a subgraph generator, then each subgraph is encoded separately. (b) First-Encoding-Then-Separation: the input is encoded, then the representation is separated by a scorer.
## 2 Related Work
**OOD Generalization and Invariance Principle.** The susceptibility of deep neural networks to substantial performance degradation under distribution shifts has led to a proliferation of research focused on out-of-distribution (OOD) generalization [39]. Three lines of methods have been studied for OOD generalization on Euclidean data, including group distributionally robust optimization [40; 41; 42], domain adaptation [43; 44; 45] and invariant learning [19; 20; 21]. Group distributionally robust optimization considers groups of distributions and optimizes across all groups simultaneously. Domain adaptation aims to align the data distributions but may fail to find an optimal predictor without additional assumptions [19; 20; 46]. Invariant learning aims to learn invariant representations satisfying the invariance principle [20; 24], which includes two assumptions: (1) sufficiency, meaning the representation has sufficient predictive abilities, and (2) invariance, meaning the representation is invariant to environmental changes. However, most methods require environmental labels, which are expensive to obtain for molecules [26], and direct application of these methods to complicated non-Euclidean molecular structures does not yield promising results [13; 22; 26].
**OOD Generalization on Graphs.** Recently, there has been growing attention on graph-level representations under distribution shifts from the perspective of invariant learning. Some methods [26; 32; 27; 28; 25] follow the "first-separation-then-encoding" paradigm and attempt to capture the invariant substructures by dividing nodes and edges in the explicit structural space. However, these methods suffer from the difficulty of dividing molecule graphs in raw space due to the complex and entangled molecular structure [47]. Moreover, MoleOOD [25] and GIL [27] require inference of unavailable environmental labels, which entails prior assumptions on the environmental distribution. In addition, the objective of invariant learning may hinder applicability, e.g., CIGA [26] and DisC [32] can only be applied to single-label classification rather than to regression and multi-label tasks. Additionally, OOD-GNN [48] does not use the invariance principle. It proposes to learn disentangled graph representations, but requires computing global weights for all data, leading to a high computational cost. OOD generalization on graphs can also be improved by another line of relevant works [29; 49; 47] on GNN explainability [50; 51], which aims to provide a rationale for prediction. However, they may fail in some distribution shift cases [26], and DIR [29] and GSAT [49] also divide graphs in the raw structural space. Although GREA [47] learns in the latent feature space, it only conducts separation on each node while neglecting the different significance of each representation dimension. In this work, we focus on the OOD generalization of molecular graphs against multiple types of distribution shifts, e.g., scaffold, size, and assay, as shown in Figure 2.
**Vector Quantization.** Vector Quantization (VQ) [52; 53] acts as a bottleneck for representation learning. It discretizes continuous input data in the hidden space by assigning them to the nearest vectors in a predefined codebook. Some studies [53; 54; 55; 56] have demonstrated its effectiveness in enhancing model robustness against data corruptions. Other studies [57; 58; 59] find that using VQ for inter-component communication within neural networks can improve model generalization. However, we posit that while VQ can improve generalization against distribution shifts, it may also limit the model's expressivity and potentially lead to under-fitting. To address this concern, we propose to equip the conventional VQ with a residual connection to strike a balance between model generalization and expressivity.
Figure 2: An overview of distribution shifts in molecules. Distribution shifts occur when molecules originate from different scaffold, size or assay environments.
Preliminaries
### Problem Definition
We focus on the OOD generalization of molecules. Let \(\mathcal{G}\) be the molecule graph space and \(\mathcal{Y}\) be the label space; the goal is to find a predictor \(f\,:\mathcal{G}\rightarrow\mathcal{Y}\) that maps an input \(G\in\mathcal{G}\) to its label \(Y\in\mathcal{Y}\). Generally, we are given a set of datasets collected from multiple environments \(E_{all}:D=\{D^{e}\}_{e\in E_{all}}\). Each \(D^{e}\) contains pairs of an input molecule and its label: \(D^{e}=\{(G_{i},Y_{i})\}_{i=1}^{n_{e}}\) that are drawn from the joint distribution \(P(G,\,Y|E=e)\) of environment \(e\). However, the environment information \(e\) is typically not available for molecules; thus we redefine the training joint distribution as \(P_{train}(G,\,Y)=P(G,\,Y|E=e),\forall e\in E_{train}\), and the testing joint distribution as \(P_{test}(G,\,Y)=P(G,\,Y|E=e),\forall e\in E_{all}\setminus E_{train}\), where \(P_{train}(G,\,Y)\neq P_{test}(G,\,Y)\). We denote the joint distribution across all environments as \(P_{all}(G,\,Y)=P(G,\,Y|E=e),\forall e\in E_{all}\). Formally, the goal is to learn an optimal predictor \(f^{*}\) based on the training data that generalizes well across all distributions:
\[f^{*}=\arg\min_{f}\mathbb{E}_{(G,\,Y)\sim P_{all}}\left[\ell\left(f\left(G \right),\,Y\right)\right], \tag{1}\]
where \(\ell(\cdot,\cdot)\) is the empirical risk function. Moreover, since the joint distribution \(P(G,\,Y)\) can be written as \(P(Y\,|G)P(G)\), the OOD problem can be refined into two cases, namely covariate and concept shift [60; 61; 62; 63]. In covariate shift, the distribution of input differs. Formally, \(P_{train}(G)\neq P_{test}(G)\) and \(P_{train}(\,Y|\,G)=P_{test}(\,Y|\,G)\). While concept shift occurs when the conditional distribution changes as \(P_{train}(\,Y|\,G)\neq P_{test}(\,Y|\,G)\) and \(P_{train}(G)=P_{test}(G)\). We will consider and distinguish between both cases in our experiments.
### Molecular Representation Learning
We denote a molecule graph by \(G=\{\mathcal{V},\mathcal{E}\}\), where \(\mathcal{V}\) is the set of nodes (e.g., atoms) and \(\mathcal{E}\in\mathcal{V}\times\mathcal{V}\) is the set of edges (e.g., chemical bonds). Generally, the predictor \(f\) can be denoted as \(\rho\circ g\), containing an encoder \(g:\mathcal{G}\rightarrow\mathbb{R}^{d}\) that extracts representation for each molecule and a downstream classifier \(\rho:\mathbb{R}^{d}\rightarrow\mathcal{Y}\) that predicts the label with the molecular representation. In particular, the encoder \(g\) operates in two stages: firstly, by employing a graph neural network [33; 34; 35] to generate node representations \(\mathbf{H}\) according to the following equation:
\[\mathbf{H}=\left[\mathbf{h}_{1},\mathbf{h}_{2},\ldots,\mathbf{h}_{|\mathcal{V }|}\right]^{\top}=\mathrm{GNN}(G)\in\mathbb{R}^{|\mathcal{V}|\times d}, \tag{2}\]
where \(\mathbf{h}_{v}\in\mathbb{R}^{d}\) is the representation of node \(v\). Secondly, the encoder utilizes a readout operator to obtain the overall graph representation \(\mathbf{z}\):
\[\mathbf{z}=\mathrm{READOUT}(\mathbf{H})\in\mathbb{R}^{d}. \tag{3}\]
The readout operator can be implemented using a simple, permutation invariant function such as average pooling.
## 4 Method
This section presents the details of our proposed method that learns **i**nvariant **Mo**lecular representation in **L**atent **D**iscrete space, called iMoLD. Figure 3 shows the overview of iMoLD, which mainly consists of three steps: 1) Using a GNN encoder and a residual vector quantization module to obtain the molecule representation (Section 4.1); 2) Separating the representation into invariant and spurious parts through a GNN scorer (Section 4.2); 3) Optimizing the above process with a task-agnostic self-supervised learning objective (Section 4.3).
### Encoding with Residual Vector Quantization
The current mainstream methods [26; 25; 28; 27; 32] adopt the "first-separation-then-encoding" paradigm, which explicitly divides graphs into invariant and spurious substructures on the granularity of edge, node, or motif, and then encodes each substructure individually. In contrast, we use the opposite paradigm that first encodes the whole molecule followed by separation.
Specifically, given an input molecule \(G\), we first use a GNN to encode it, resulting in node representations \(\mathbf{H}\):
\[\mathbf{H}=\mathrm{GNN}_{E}(G)\in\mathbb{R}^{|\mathcal{V}|\times d}, \tag{4}\]
where \(\mathrm{GNN}_{E}\) represents the encoding GNN, \(|\mathcal{V}|\) is the number of nodes in \(G\), and \(d\) is the dimensionality of features. Inspired by studies [57, 59] showing that Vector Quantization (VQ) [52, 53] helps improve model generalization on computer vision tasks, we propose a Residual Vector Quantization (RVQ) module to refine the obtained representations.
In the RVQ module, VQ is used to discretize continuous representations into discrete ones. Formally, it introduces a shared learnable codebook as a discrete latent space: \(\mathcal{C}=\left\{\mathbf{e}_{1},\mathbf{e}_{2},\ldots\mathbf{e}_{|\mathcal{C }|}\right\}\), where each \(\mathbf{e}_{k}\in\mathbb{R}^{d}\). For each node representation \(\mathbf{h}_{v}\) in \(\mathbf{H}\), VQ looks up and fetches the nearest neighbor in the codebook and outputs it as the result. Mathematically,
\[\mathrm{Q}(\mathbf{h}_{v})=\mathbf{e}_{k},\quad\text{where}\quad k=\operatorname* {argmin}_{k\in\{1,\ldots,|\mathcal{C}|\}}\left\|\mathbf{h}_{v}-\mathbf{e}_{k} \right\|_{2}, \tag{5}\]
and \(\mathrm{Q}(\cdot)\) denotes the discretization operation which quantizes \(\mathbf{h}_{v}\) to \(\mathbf{e}_{k}\) in the codebook.
The VQ operation acts as a bottleneck to enhance generalization and alleviate the over-fitting issue caused by distribution shifts. However, it also impairs the expressivity of the model, since a limited discrete codebook replaces the original continuous input, which can cause under-fitting. Accordingly, we propose to equip the conventional VQ with a residual connection to strike a balance between model generalization and expressivity. Specifically, we incorporate both the continuous and discrete representations to update the node representations \(\mathbf{H}\) to \(\mathbf{H}^{\prime}\):
\[\mathbf{H}^{\prime}=\left[\mathrm{Q}(\mathbf{h}_{1})+\mathbf{h}_{1},\mathrm{ Q}(\mathbf{h}_{2})+\mathbf{h}_{2},\ldots,\mathrm{Q}(\mathbf{h}_{|\mathcal{V}|})+ \mathbf{h}_{|\mathcal{V}|}\right]^{\top}. \tag{6}\]
Similar to VQ-VAE [52, 53], we employ the exponential moving average updates for the codebook:
\[N_{k}^{(t)}=N_{k}^{(t-1)}*\eta+n_{k}^{(t)}(1-\eta),\quad\mathbf{m}_{k}^{(t)}= \mathbf{m}_{k}^{(t-1)}*\eta+\sum_{v}^{n_{k}^{(t)}}\mathbf{h}_{v}^{(t)}(1-\eta ),\quad\mathbf{e}_{k}^{(t)}=\frac{\mathbf{m}_{k}^{(t)}}{N_{k}^{(t)}}, \tag{7}\]
where \(n_{k}^{(t)}\) is the number of node representations in the \(t\)-th mini-batch that are quantized to \(\mathbf{e}_{k}\), and \(\eta\) is a decay parameter between 0 and 1.
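To make the RVQ step concrete, the following PyTorch-style sketch implements the nearest-neighbour lookup of Equation (5), the residual connection of Equation (6), the EMA codebook update of Equation (7), and the commitment loss of Equation (15). It is only an illustration: the class name, the straight-through gradient trick, the mean-squared formulation of the commitment loss, and all hyper-parameter values are our own assumptions rather than details taken from the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualVQ(nn.Module):
    """Minimal sketch of the RVQ step: snap each node embedding to its nearest
    codebook entry (Eq. 5), add it back to the continuous embedding (Eq. 6),
    and maintain the codebook with exponential moving averages (Eq. 7)."""

    def __init__(self, codebook_size: int, dim: int, decay: float = 0.99):
        super().__init__()
        self.decay = decay
        self.register_buffer("codebook", torch.randn(codebook_size, dim))
        self.register_buffer("cluster_size", torch.zeros(codebook_size))
        self.register_buffer("ema_sum", self.codebook.clone())

    def forward(self, h):                          # h: [num_nodes, dim]
        # Eq. (5): nearest codebook entry for every node embedding.
        dists = torch.cdist(h, self.codebook)      # [num_nodes, codebook_size]
        idx = dists.argmin(dim=1)
        quantized = self.codebook[idx]

        if self.training:                          # Eq. (7): EMA codebook update (no gradients).
            with torch.no_grad():
                onehot = F.one_hot(idx, self.codebook.size(0)).type_as(h)
                self.cluster_size.mul_(self.decay).add_(onehot.sum(0), alpha=1 - self.decay)
                self.ema_sum.mul_(self.decay).add_(onehot.t() @ h, alpha=1 - self.decay)
                self.codebook.copy_(self.ema_sum / self.cluster_size.clamp(min=1e-5).unsqueeze(1))

        # Eq. (15): commitment loss keeps h close to its selected codebook entry.
        commit_loss = F.mse_loss(h, quantized.detach())

        # Straight-through trick (an assumption of this sketch) so gradients
        # reach the encoder despite the non-differentiable argmin.
        quantized = h + (quantized - h).detach()

        # Eq. (6): residual connection between continuous and discrete parts.
        return quantized + h, commit_loss
```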
### Separation at Nodes and Features
Figure 3: An overview of iMoLD. Firstly, given a batch of inputs, we learn invariant and spurious representations (\(\mathbf{z}^{\text{Inv}}\) and \(\mathbf{z}^{\text{Spu}}\)) for each input in latent discrete space by a first-encoding-then-separation paradigm. An encoding GNN and an RVQ module are involved to obtain molecule representation, then the representation is separated through a scoring GNN. The invariant \(\mathbf{z}^{\text{Inv}}\) is used to predict the label \(\widehat{y}\). Then a task-agnostic self-supervised learning objective across the batch is designed to facilitate the acquisition of reliable invariant \(\mathbf{z}^{\text{Inv}}\).
After encoding, we separate the representation into invariant parts and spurious parts. It is worth noting that our separation is not only performed at the node dimension but also takes into account the feature dimension in the latent space. The reasons are two-fold: 1) Distribution shifts on molecules can occur at both the structure level and the attribute level [26], corresponding to the node dimension \(|\mathcal{V}|\) and the feature dimension \(d\) in \(\mathbf{H^{\prime}}\), respectively. 2) The resulting representation may be highly entangled, thus it is advisable to perform a separation on each dimension in the latent space.
Specifically, we use another GNN as a scorer to obtain the separating score \(\mathbf{S}\):
\[\mathbf{S}=\sigma\left(\mathrm{GNN}_{S}\left(G\right)\right)\in\mathbb{R}^{| \mathcal{V}|\times d}, \tag{8}\]
where \(\mathrm{GNN}_{S}\) represents the scoring GNN, \(|\mathcal{V}|\) is the number of nodes in \(G\), and \(d\) is the dimensionality of features. \(\sigma(\cdot)\) denotes the Sigmoid function to constrain each entry in \(\mathbf{S}\) falls into the range of \((0,1)\). Then we can capture the invariant and complementary spurious features at both structure and attribute granularity in the latent representation space by applying the separating scores to node representations:
\[\mathbf{H}^{\text{Inv}}=\mathbf{H^{\prime}}\odot\mathbf{S},\quad\mathbf{H}^{ \text{Spu}}=\mathbf{H^{\prime}}\odot\left(1-\mathbf{S}\right), \tag{9}\]
where \(\mathbf{H}^{\text{Inv}}\) and \(\mathbf{H}^{\text{Spu}}\) denote the invariant and spurious node representations respectively, and \(\odot\) is the element-wise product. Finally, the invariant and spurious representation (denoted as \(\mathbf{z}^{\text{Inv}}\) and \(\mathbf{z}^{\text{Spu}}\) respectively) of \(G\) can be generated by a readout operator:
\[\mathbf{z}^{\text{Inv}}=\mathrm{READOUT}(\mathbf{H}^{\text{Inv}})\in\mathbb{R }^{d},\quad\mathbf{z}^{\text{Spu}}=\mathrm{READOUT}(\mathbf{H}^{\text{Spu}}) \in\mathbb{R}^{d}. \tag{10}\]
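The separation step of Equations (8)-(10) can be sketched in the same style; here `scoring_gnn` stands for any GNN that maps a molecular graph to a \(|\mathcal{V}|\times d\) score matrix, and the PyTorch Geometric `global_mean_pool` is used as one possible READOUT. The class name, interfaces, and the pooling choice are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import global_mean_pool  # mean pooling as one possible READOUT

class Separator(nn.Module):
    """Sketch of Eqs. (8)-(10): score every node/feature entry, split the
    encoded representation H' into invariant and spurious parts, and pool."""

    def __init__(self, scoring_gnn):
        super().__init__()
        self.scoring_gnn = scoring_gnn        # assumed to return [num_nodes, dim] scores

    def forward(self, graph, h_prime, batch_index):
        s = torch.sigmoid(self.scoring_gnn(graph))       # Eq. (8): S with entries in (0, 1)
        h_inv = h_prime * s                               # Eq. (9): invariant part
        h_spu = h_prime * (1.0 - s)                       # Eq. (9): spurious part
        z_inv = global_mean_pool(h_inv, batch_index)      # Eq. (10): graph-level readout
        z_spu = global_mean_pool(h_spu, batch_index)
        return z_inv, z_spu, s
```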
### Learning Objective
Our OOD optimization objectives are composed of an invariant learning loss, a task prediction loss, and two additional regularization losses.
**Task-agnostic Self-supervised Invariant Learning.** Invariant learning aims to optimize the encoding \(\mathrm{GNN}_{E}\) and the scoring \(\mathrm{GNN}_{S}\) to produce precise invariant and spurious representations. In particular, we need \(\mathbf{z}^{\text{Inv}}\) to be invariant under environmental changes. Additionally, we expect the objective to be independent of the downstream task, so that the method is not restricted to a specific type of task. To achieve these goals, we design a task-agnostic and self-supervised invariant learning objective. Specifically, we disturb \(\mathbf{z}^{\text{Inv}}_{i}\) by concatenating the corresponding \(\mathbf{z}^{\text{Spu}}_{j}\) from a shuffled batch, resulting in an augmented representation \(\widetilde{\mathbf{z}}^{\text{Inv}}_{i}\):
\[\widetilde{\mathbf{z}}^{\text{Inv}}_{i}=\mathbf{z}^{\text{Inv}}_{i}\oplus \mathbf{z}^{\text{Spu}}_{j\in[1,B]}, \tag{11}\]
where \(\oplus\) denotes concatenation operator and \(B\) is batch size.
Inspired by a simple self-supervised learning framework that takes different augmentation views as similar positive pairs and does not require negative samples [37], we treat \(\mathbf{z}^{\text{Inv}}_{i}\) and \(\widetilde{\mathbf{z}}^{\text{Inv}}_{i}\) as positive pairs and push them to be similar, using an MLP-based predictor (denoted as \(\omega\)) that transforms the output of one view and aligns it to the other view. We minimize their negative cosine similarity:
\[\mathcal{L}_{\text{inv}}=-\sum_{i=1}^{B}\mathrm{sim}(\mathrm{sg}[\mathbf{z}^{ \text{Inv}}_{i}],\omega(\widetilde{\mathbf{z}}^{\text{Inv}}_{i})), \tag{12}\]
where \(\mathrm{sim}(\cdot,\cdot)\) represents the formula of cosine similarity and \(\mathrm{sg}[\cdot]\) denotes stop-gradient operation to prevent collapsing [37]. We employ \(\mathcal{L}_{\text{inv}}\) as our objective for invariant learning to ensure the invariance of \(\mathbf{z}^{\text{Inv}}\) against distribution shifts.
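The objective in Equations (11)-(12) amounts to a few lines of code. In this sketch, `predictor` is the MLP \(\omega\) mapping the concatenated (hence \(2d\)-dimensional) augmented view back to \(d\) dimensions; the function name and the shuffling details are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def invariant_loss(z_inv, z_spu, predictor):
    """Sketch of the task-agnostic self-supervised objective (Eqs. 11-12):
    perturb each invariant vector with the spurious vector of a shuffled
    sample, then pull the predicted view towards the stop-gradient clean view."""
    perm = torch.randperm(z_inv.size(0), device=z_inv.device)
    z_aug = torch.cat([z_inv, z_spu[perm]], dim=1)        # Eq. (11): augmented view
    # Eq. (12): negative cosine similarity with a stop-gradient on z_inv.
    return -F.cosine_similarity(z_inv.detach(), predictor(z_aug), dim=1).sum()
```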
**Task Prediction.** The objective of task prediction is to equip the invariant representation \(\mathbf{z}^{\text{Inv}}\) with sufficient predictive ability. During training, the choice of prediction loss function depends on the type of task. For classification tasks, we employ the cross-entropy loss, while for regression tasks, we use the mean squared error loss. Taking binary classification as an example, the cross-entropy loss is computed between the prediction \(\widehat{y}_{i}=\rho(\mathbf{z}^{\text{Inv}}_{i})\) and the ground-truth label \(y_{i}\):
\[\mathcal{L}_{\text{pred}}=-\sum_{i=1}^{B}\left(y_{i}\log\widehat{y}_{i}+(1-y_{i})\log\left(1-\widehat{y}_{i}\right)\right). \tag{13}\]
**Scoring GNN Regularization.** For the scoring \(\mathrm{GNN}_{S}\), we apply a penalty to the separation of invariant features, preventing abundance or scarcity and ensuring a reasonable selection. To achieve this, we introduce a regularization term \(\mathcal{L}_{\text{reg}}\) to constrain the size of the selected invariant features:
\[\mathcal{L}_{\text{reg}}=\left|\frac{\langle\mathbf{J},\mathbf{S}\rangle_{F}}{| \mathcal{V}|\times d}-\gamma\right|, \tag{14}\]
where \(\mathbf{J}\in\mathbb{R}^{|\mathcal{V}|\times d}\) denotes a matrix with all entries as 1, \(\langle\cdot,\cdot\rangle_{F}\) is the Frobenius dot product and \(\gamma\) is a pre-defined threshold between 0 and 1.
**Codebook Regularization.** In the RVQ module, following VQ-VAE [52; 53], to encourage \(\mathbf{h}_{v}\) to remain close to the selected codebook embedding \(\mathbf{e}_{k}\) in Equation (5) and to prevent it from frequently fluctuating between different embeddings, we add a commitment loss \(\mathcal{L}_{\text{cmt}}\) that ensures \(\mathbf{h}_{v}\) commits to \(\mathbf{e}_{k}\) and does not grow uncontrollably:
\[\mathcal{L}_{\text{cmt}}=\sum_{v\in\mathcal{V}}\left\|\mathrm{sg}[\mathbf{e}_ {k}]-\mathbf{h}_{v}\right\|_{2}^{2}. \tag{15}\]
Finally, the learning objective can be defined as the weighted sum of the above losses:
\[\mathcal{L}=\mathcal{L}_{\text{pred}}+\lambda_{1}\mathcal{L}_{\text{inv}}+ \lambda_{2}\mathcal{L}_{\text{reg}}+\lambda_{3}\mathcal{L}_{\text{cmt}}, \tag{16}\]
where \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are hyper-parameters to control the weights of \(\mathcal{L}_{\text{inv}}\) (in Equation (12)), \(\mathcal{L}_{\text{reg}}\) (in Equation (14)) and \(\mathcal{L}_{\text{cmt}}\) (in Equation (15)), respectively.
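Putting the pieces together, the overall objective of Equation (16) can be assembled as below; `scores` is the matrix \(\mathbf{S}\) from the scoring GNN, so its mean equals \(\langle\mathbf{J},\mathbf{S}\rangle_{F}/(|\mathcal{V}|\times d)\) in Equation (14). The weights and the threshold value shown here are placeholders, not values taken from the paper.

```python
def total_loss(pred_loss, inv_loss, commit_loss, scores,
               gamma=0.5, lambda1=1.0, lambda2=1.0, lambda3=1.0):
    """Sketch of Eq. (16): weighted sum of prediction, invariance,
    score-regularization (Eq. 14) and commitment (Eq. 15) terms."""
    reg_loss = (scores.mean() - gamma).abs()   # Eq. (14): keep the selected fraction near gamma
    return pred_loss + lambda1 * inv_loss + lambda2 * reg_loss + lambda3 * commit_loss
```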
## 5 Experiments
In this section, we conduct extensive experiments to answer the research questions. (**RQ1**) Can our method iMoLD achieve better OOD generalization performance against SOTA baselines? (**RQ2**) How does each of the component we propose contribute to the final performance? (**RQ3**) How can we understand the latent discrete space induced by the proposed RVQ?
### Experimental Setup
**Datasets.** We employ two real-world benchmarks for OOD molecular representation learning. Details of the datasets are in Appendix A.
* **GOOD**[63], which is a systematic benchmark tailored specifically for graph OOD problems. We utilize three molecular datasets for the graph prediction task: (1) **GOOD-HIV**[16], where the objective is binary classification to predict whether a molecule can inhibit HIV; (2) **GOOD-ZINC**[64], which is a regression dataset aimed at predicting molecular solubility; and (3) **GOOD-PCBA**[16], which includes 128 bioassays and forms 128 binary classification tasks. Each dataset comprises two environment-splitting strategies (scaffold and size), and two shift types (covariate and concept) are applied per splitting outcome, resulting in a total of 12 distinct datasets.
* **DrugOOD**[13], which is an OOD benchmark for AI-aided drug discovery. This benchmark provides three environment-splitting strategies, including assay, scaffold and size, and applies these three splittings to two measurements (IC50 and EC50). As a result, we obtain 6 datasets, and each dataset contains a binary classification task for drug target binding affinity prediction. A detailed description of each environment-splitting strategy is also in Appendix A.
Table 1: Evaluation performance on the GOOD benchmark (GOOD-HIV, GOOD-ZINC, and GOOD-PCBA; scaffold and size splits under covariate and concept shifts). [Table values omitted.]
**Baselines.** We thoroughly compare our method against ERM [65] and two groups of OOD baselines: (1) general OOD algorithms for Euclidean data, including domain adaptation methods such as Coral [44] and DANN [43], a group distributionally robust optimization method (GroupDRO [41]), invariant learning methods such as IRM [19] and VREx [40], and a data augmentation method (Mixup [66]); and (2) graph-specific algorithms, including graph OOD algorithms such as CAL [28], DisC [32], MoleOOD [25] and CIGA [26], as well as interpretable graph learning methods such as DIR [29], GSAT [49] and GREA [47]. Details of baselines and implementations are in Appendix B.
**Evaluation.** We report the ROC-AUC score for the GOOD-HIV and DrugOOD datasets, as the task is binary classification. For GOOD-ZINC, we use the Mean Absolute Error (MAE) since the task is regression. For GOOD-PCBA, we use Average Precision (AP) averaged over all tasks as the evaluation metric due to the extremely imbalanced classes. We run experiments 10 times with different random seeds, select models based on validation performance, and report the mean and standard deviation on the test set.
### Main Results (RQ1)
Table 1 and Table 2 present the empirical results on GOOD and DrugOOD benchmarks, respectively. Our method iMoLD achieves the best performance on 16 of the 18 datasets and ranks second on the other two datasets. Among the compared baselines, the graph-specific OOD methods perform best on only 11 datasets, and some general OOD methods outperform them on another 7 datasets. This suggests that although some advanced graph-specific OOD methods can achieve superior performance on some synthetic datasets (e.g., to predict whether a specific motif is present in a synthetic graph [29]), they may not perform well on molecules due to the realistic and complex data structures and distribution shifts. In contrast, our method is able to achieve the best performance on most of the datasets, which indicates that the proposed identification of invariant features in the latent space is effective for applying the invariance principle to the molecular structure. We also observe that MoleOOD, a method designed specifically for molecules, does not perform well on GOOD-ZINC and GOOD-PCBA, possibly due to its dependence on inferred environments. Inferring environments may become more challenging for larger-scale datasets, such as GOOD-ZINC and GOOD-PCBA, which contain hundreds of thousands of data and more complex tasks (e.g., PCBA has a total of 128 classification tasks). Our method does not require the inference of environment, and is shown to be effective on datasets of diverse scales and tasks.
\begin{table}
\begin{tabular}{l|c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{IC50 \(\uparrow\)} & \multicolumn{3}{c}{EC50 \(\uparrow\)} \\ \cline{2-7} & Assay & Scaffold & Size & Assay & Scaffold & Size \\ \hline ERM & 71.63(0.76) & 68.79(0.47) & 67.50(0.38) & 67.39(2.90) & 64.98(1.29) & 65.10(0.38) \\ IRM & 71.15(0.57) & 67.22(0.62) & 61.58(0.58) & 67.77(2.71) & 63.86(1.36) & 59.19(0.83) \\ Coral & 71.28(0.91) & 68.36(0.61) & 64.53(0.32) & 72.08(2.80) & 64.83(1.64) & 58.47(0.43) \\ MixUp & 71.49(1.08) & 68.59(0.27) & 67.79(0.39) & 67.81(4.06) & 65.77(1.83) & 65.77(0.60) \\ \hline DIR & 69.84(1.41) & 66.33(0.65) & 62.92(1.89) & 65.81(2.93) & 63.76(3.22) & 61.56(4.23) \\ GSAT & 70.59(0.43) & 66.45(0.50) & 66.70(0.37) & 73.82(2.62) & 64.25(0.63) & 62.65(1.79) \\ GREA & 70.23(1.17) & 67.02(0.28) & 66.59(0.56) & 74.17(1.47) & 64.50(0.78) & 62.81(1.54) \\ CAL & 70.09(1.03) & 65.90(1.04) & 66.42(0.50) & 74.54(4.18) & 65.19(0.87) & 61.21(1.76) \\ DisC & 61.40(2.56) & 62.70(2.11) & 61.43(1.06) & 63.71(5.56) & 60.57(2.27) & 57.38(2.48) \\ MoleOOD & 71.62(0.52) & 68.58(1.14) & 65.62(0.77) & 72.69(1.46) & 65.74(1.47) & 65.51(1.24) \\ CIGA & 71.86(1.37) & **69.14(0.70)** & 66.92(0.54) & 69.15(5.79) & 67.32(1.35) & 65.65(0.82) \\ iMoLD & **72.11(0.51)** & 68.84(0.58) & **67.92(0.43)** & **77.48(1.70)** & **67.79(0.88)** & **67.09(0.91)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation performance on DrugOOD benchmark. The best is marked with **boldface** and the second best is with underline.
### Ablation Studies of Components (RQ2)
We conduct ablation studies to analyze how each proposed component contributes to the final performance. Three groups of variants are included: (1) The first group is to investigate the RVQ module. It consists of ERM and ERM (+RVQ), to explore the performance of the simplest ERM and the role of applying the RVQ to it. It also consists of variants of our method, including w/o VQ, which uses \(\mathbf{H}\) instead of \(\mathbf{H}^{\prime}\) in Equation (9) to obtain \(\mathbf{H}^{\text{Inv}}\) and \(\mathbf{H}^{\text{Spu}}\); and w/o R, which obtains \(\mathbf{H}^{\text{Inv}}\) and \(\mathbf{H}^{\text{Spu}}\) without a residual connection by using the output of discretization in Equation (5). (2) The second group is to investigate the impact of each individual training objective, including w/o \(\mathcal{L}_{\text{inv}}\), w/o \(\mathcal{L}_{\text{reg}}\) and w/o \(\mathcal{L}_{\text{cmt}}\), which remove \(\mathcal{L}_{\text{inv}}\), \(\mathcal{L}_{\text{reg}}\) and \(\mathcal{L}_{\text{cmt}}\) from the final objective function in Equation (16), respectively. (3) The third group consists of variants that replace our proposed self-supervised invariant learning objective with other methods, including w/ \(\mathcal{L}_{CIGA}\) and w/ \(\mathcal{L}_{GREA}\), which represent the objective functions using CIGA [26] and GREA [47], respectively.
Table 3 reports the results on the scaffold split of GOOD-HIV and the size split of GOOD-PCBA. We observe that adding the RVQ module improves performance even without using any invariant learning algorithm, which indicates that the designed RVQ is effective. Notably, on the GOOD-HIV dataset, removing the VQ module results in significant performance degradation, while on the GOOD-PCBA dataset, removing the residual connection leads to a significant performance drop. This observation may be attributed to the larger scale and more complex task of the GOOD-PCBA dataset: although VQ improves generalization, it reduces the expressive capacity of the model, so the proposed RVQ module is needed to balance the benefits and weaknesses brought by VQ. From the second group, we can conclude that each component is instrumental to the final performance. Removing the invariant learning objective \(\mathcal{L}_{\text{inv}}\) or the commitment loss \(\mathcal{L}_{\text{cmt}}\) results in more pronounced performance degradation, suggesting that the invariant learning objective is effective and indispensable, and that a constraint term keeping the encoder output close to the VQ codebook is needed. By comparing the third group of results, we find that our proposed learning objective outperforms the existing alternatives, which shows that our approach not only applies to a wider range of tasks but also achieves better performance. Moreover, to investigate the robustness of our model, we perform a hyper-parameter sensitivity analysis in Appendix C.1.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{GOOD-HIV-Scaffold} & \multicolumn{2}{c}{GOOD-PCBA-Size} \\ \cline{2-5} & covariate & concept & covariate & concept \\ \hline iMoLD & **72.93(2.29)** & **74.32(1.63)** & **18.02(0.73)** & **18.21(1.10)** \\ ERM & 69.55(2.39) & 72.48(1.26) & 17.55(0.46) & 15.60(0.55) \\ ERM (+RVQ) & 70.18(1.07) & 73.14(1.99) & 17.84(0.47) & 15.60(0.40) \\ w/o VQ & 70.18(1.60) & 72.88(0.66) & 17.88(1.57) & 17.37(0.82) \\ w/o R & 72.40(2.21) & 73.08(0.15) & 17.37(0.62) & 12.95(0.29) \\ \hline w/o \(\mathcal{L}_{\text{inv}}\) & 70.73(1.57) & 71.88(1.80) & 17.55(0.34) & 17.05(0.87) \\ w/o \(\mathcal{L}_{\text{reg}}\) & 71.47(0.72) & 72.39(0.87) & 17.33(0.68) & 179.10(0.34) \\ w/o \(\mathcal{L}_{\text{cmt}}\) & 70.20(0.40) & 71.98(1.36) & 17.24(1.02) & 17.68(2.20) \\ \hline w/ \(\mathcal{L}_{GREA}\) & 71.23(3.1) & 73.31(1.73) & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation studies on the scaffold split of GOOD-HIV and the size split of GOOD-PCBA (covariate and concept shifts).
### Analysis of Latent Discrete Space (RQ3)
Figure 4: Visualization of the extracted features on training and validation set when the model achieves the highest score on the validation set. _Score(train)_ and _Score(val)_ are ROC-AUC scores on the training and validation set, respectively. _D(Y=0)_ and _D(Y=1)_ are distances between the features on the training and validation sets of each class.
To further explore why our method performs better and to understand the superiority of the introduced discrete latent space, we visualize the projection of extracted \(\mathbf{z}^{\text{Inv}}\) on both training and validation set
when the model achieves the best score on the validation set, using t-SNE [67] on the covariate-shift dataset of GOOD-HIV-Scaffold in Figure 4 (d). We also visualize the results of some baselines, including the vanilla ERM (Figure 4 (a)), the ERM equipped with VQ after encoder (ERM(+VQ), Figure 4 (b)), and with the RVQ module (ERM(+RVQ), Figure 4 (c)). We additionally compute the 1-order Wasserstein distance [68] between the features on the training and validation sets of each class, to quantify the dissimilarity in the feature distribution across varying environments. We find that adding the VQ or RVQ after the encoder results in a more uniform distribution of features and lower feature distances, due to the fact that VQ makes it possible to reuse previously encountered embeddings in new environments by discretizing them. Moreover, the feature distribution of our method is more uniform and the distance is smaller. This suggests that our method is effective in identifying features that are invariant across different environments. Moreover, we also observe that all ERM methods achieve lower validation scores with higher training scores, implying that these methods are prone to overfitting. Our method, on the other hand, achieves not only a higher validation score but also a higher corresponding training score, thereby demonstrating its ability to overcome the problem of easy overfitting and improve the generalization ability effectively.
## 6 Conclusion
In this work, we propose a new framework that learns invariant molecular representation against distribution shifts. We adopt a "first-encoding-then-separation" strategy, wherein a combination of encoding GNN and residual vector quantization is utilized to derive molecular representation in latent discrete space. Then we learn a scoring GNN to identify invariant features from this representation. Moreover, we design a task-agnostic self-supervised learning objective to enable precise invariance identification and versatile applicability to various tasks. Extensive experiments on real-world datasets demonstrate the superiority of our method on the molecular OOD problem. Overall, our proposed framework presents a promising approach for learning invariant molecular representations and offers valuable insights for addressing distribution shifts in molecular data analysis.
## Acknowledgments
This work was supported by the National Key Research and Development Program of China (2022YFB4500300), National Natural Science Foundation of China (NSFCU19B2027, NSFC91846204, NSFC62302433), joint project DH-2022ZY0012 from Donghai Lab, and sponsored by CCF-Tencent Open Fund (CCF-Tencent RAGR20230122). We want to express gratitude to the anonymous reviewers for their hard work and kind comments.
|
2303.14048 | Continuity of Thresholded Mode-Switched ODEs and Digital Circuit Delay
Models | Thresholded mode-switched ODEs are restricted dynamical systems that switch
ODEs depending on digital input signals only, and produce a digital output
signal by thresholding some internal signal. Such systems arise in recent
digital circuit delay models, where the analog signals within a gate are
governed by ODEs that change depending on the digital inputs.
We prove the continuity of the mapping from digital input signals to digital
output signals for a large class of thresholded mode-switched ODEs. This
continuity property is known to be instrumental for ensuring the faithfulness
of the model w.r.t. propagating short pulses. We apply our result to several
instances of such digital delay models, thereby proving them to be faithful. | Arman Ferdowsi, Matthias Függer, Thomas Nowak, Ulrich Schmid | 2023-03-24T15:00:42Z | http://arxiv.org/abs/2303.14048v1 | # Continuity of Thresholded Mode-Switched ODEs and Digital Circuit Delay Models
###### Abstract.
Thresholded mode-switched ODEs are restricted dynamical systems that switch ODEs depending on digital input signals only, and produce a digital output signal by thresholding some internal signal. Such systems arise in recent digital circuit delay models, where the analog signals within a gate are governed by ODEs that change depending on the digital inputs.
We prove the continuity of the mapping from digital input signals to digital output signals for a large class of thresholded mode-switched ODEs. This continuity property is known to be instrumental for ensuring the faithfulness of the model w.r.t. propagating short pulses. We apply our result to several instances of such digital delay models, thereby proving them to be faithful.
mode-switched ordinary differential equations; thresholding operator; continuity; circuit delay models; faithfulness
## 1. Introduction
A natural class of hybrid systems can be described by the dynamics of a continuous process, which is controlled by externally supplied digital mode switch signals, and provides a digital output based on whether some internal signal crosses a threshold, see Fig. 1 for an illustration. Examples are digitally controlled thermodynamic processes, hydrodynamic systems, and, in particular, digital integrated circuits. The continuous dynamics of these systems are described by _Ordinary Differential Equations_ (ODEs) for the temperature, the pipe's pressures and fill-levels, or the gate's currents and voltages over time. Digital mode switches are used to switch between ODE systems, e.g., by turning on a heater, closing a valve, or applying an input transition to a gate's input. The environment of the hybrid system is only notified if the temperature or fill-level crosses a threshold, or, in the case of a digital gate, is said to produce an output transition when some internal voltage crosses a threshold.
In this work, we consider the composition of such hybrid systems in a circuit, where digital threshold signals of one component drive mode switch signals of a downstream component. We give conditions that ensure the continuity of the outputs of such circuits with respect to their inputs and provide two application examples in the context of circuit delay models. The proven continuity property shows that small variations of the inputs lead to small variations of the output signal, a property that is necessary for digital circuit models to be consistent with physical analog ODE models.
**Digital circuits, continuity, and faithful delay models.** Analog simulations of digital circuits are time-consuming and are thus replaced by digital simulations whenever possible. Typical application domains that require simulation of precise circuit transition times are particularly timing-critical, asynchronous parts of a circuit, e.g., inter-neuron links using time-based encoding in hardware-implemented spiking neural networks (Beng et al., 2015), where the worst-case delay estimates provided by static timing analysis techniques are not sufficient for ensuring correct operation.
A mandatory prerequisite for dynamic timing analysis are digital delay models, which allow to accurately determine the input-to-output delay of every constituent gate in a circuit. Suitable models must also account for the fact that the delay of an individual signal transition usually depends on the previous transition(s), in particular, when they were close. The simplest class of such models are _single-history delay models_(Beng et al., 2015; Gif-sur-Yvette and Fugger, 2015; Gif-sur-Yvette and Fugger, 2015), where the input-to-output delay \(\delta(T)\) of a gate depends on the previous-output-to-input delay \(T\).
It has been proved by Fugger et al. (Fugger et al., 2015) that a certain continuity property of single-history models is mandatory for the digital abstraction to faithfully model the analog reality. In particular, the predicted output transitions must not be substantially affected by arbitrarily short input glitches. For example, the constant-low input
signal and an arbitrarily short low-high-low pulse must produce arbitrarily close gate output signals. So far, the only delay model known to ensure this continuity property is the _involution delay model_ (IDM) [8], which consists of zero-time Boolean gates interconnected by single-input single-output involution delay channels. An IDM channel is characterized by a delay function \(\delta\), which is a negative involution, i.e., \(-\delta(-\delta(T))=T\). In its generalized version, different delay functions \(\delta_{\uparrow}\) resp. \(\delta_{\downarrow}\) are assumed for rising resp. falling transitions, requiring \(-\delta_{\uparrow}(-\delta_{\downarrow}(T))=T\). Unlike all other existing delay models, the IDM has been proved to faithfully model glitch propagation for the so-called short-pulse filtration problem [8], and is hence the only candidate for a faithful delay model known so far [7].
Figure 1. Thresholded mode-switched ODE with a single mode input \(i\), the delayed input \(i_{d}\), two continuous states \(x,y\), and two thresholded outputs \(\Theta_{\alpha}(x)\) and \(\Theta_{\beta}(y)\).
It has also been shown [8] that involution delay functions arise naturally in a 2-state thresholded hybrid channel model, which consists of a pure delay component, a slew-rate limiter with a rising and falling switching waveform, and an ideal comparator (Fig. 2): The binary-valued input \(i_{a}\) is delayed by \(\delta_{\min}>0\), which assures causality of channels, i.e., \(\delta_{\uparrow/\downarrow}(0)>0\). For every transition on \(i_{d}\), the generalized slew-rate limiter switches to the corresponding waveform (\(f_{\downarrow}/f_{\uparrow}\) for a falling/rising transition). The essential property here is that the analog output voltage \(o_{a}\) is a _continuous_ (but not necessarily smooth) function of time. Finally, the comparator generates the output \(o_{d}\) by digitizing \(o_{a}\) w.r.t. the discretization threshold voltage \(V_{th}\).
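To make this channel structure concrete, the following Python sketch simulates such a 2-state thresholded channel on a uniform time grid: a pure delay, a first-order waveform that switches between a rising and a falling target on every delayed input transition, and an ideal comparator. The waveform shape and all numeric parameters are illustrative assumptions, not the waveforms used in [8].

```python
import numpy as np

# Minimal sketch of the 2-state thresholded hybrid channel of Fig. 2:
# pure delay -> waveform switching -> comparator.  All constants are illustrative.
def simulate_channel(i_a, dt, delta_min=0.2, tau=0.5, v_th=0.5):
    n = len(i_a)
    shift = int(round(delta_min / dt))
    # pure delay: i_d(t) = i_a(t - delta_min), with initial value i_a[0]
    i_d = np.concatenate([np.full(shift, i_a[0]), i_a[:n - shift]])
    o_a = np.empty(n)
    o_a[0] = float(i_d[0])          # start in the steady state of the initial mode
    for k in range(1, n):
        target = float(i_d[k])      # mode selects the rising (1) or falling (0) waveform
        # assumed first-order waveform: d o_a / dt = (target - o_a) / tau
        o_a[k] = o_a[k - 1] + dt * (target - o_a[k - 1]) / tau
    o_d = (o_a > v_th).astype(int)  # ideal comparator at threshold V_th
    return i_d, o_a, o_d

if __name__ == "__main__":
    dt = 0.001
    t = np.arange(0.0, 5.0, dt)
    i_a = ((t > 1.0) & (t < 3.0)).astype(int)   # a single input pulse
    _, o_a, o_d = simulate_channel(i_a, dt)
    print("first output transition at t =", t[np.argmax(o_d)])
```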
Whereas the accuracy of IDM predictions for single-input, single-output circuits like inverter chains or clock trees turned out to be very good, this is less so for circuits involving multi-input gates [14]. It has been revealed by Ferdowsi et al. [4] that this is primarily due to the IDM's inherent lack of properly covering output delay variations caused by _multiple input switching_ (MIS) in close temporal proximity [3], also known as the _Charlie effect_: compared to the _single input switching_ case, output transitions are sped up/slowed down with decreasing transition separation time on different inputs. Single-input, single-output delay channels like IDM cannot exhibit such a behavior.
To capture MIS effects in a 2-input NOR gate, Ferdowsi et al. [4] hence proposed an alternative digital delay model based on a 4-state hybrid gate model. It has been obtained by replacing the 4 transistors in the RC-model of a CMOS NOR gate by ideal zero-time switches, which results in one mode per possible digital state of the inputs \((A,B)\in\{(0,0),(0,1),(1,0),(1,1)\}\). In each mode, the voltages of the output signal and of an internal node are governed by constant-coefficient first-order ODEs. When an input signal changes its state, the system switches to the new mode and its corresponding ODEs.
Although digitizing this hybrid gate model using a comparator with a suitable threshold voltage \(V_{th}\) as in Fig. 2 leads to a quite accurate digital delay model, it turned out to still fail to capture the MIS delay for a rising output transition. In a follow-up paper [5], Ferdowsi et al. hence introduced a refined gate delay model, where the switching-on of the pMOS transistors is not instantaneous, but rather governed by a simple time evolution function \(\sim 1/t\), inspired by the Shichman-Hodges transistor model [12]. The resulting 4-state hybrid model consists of a single non-constant-coefficient first-order ODE per mode, and has been shown to accurately model MIS effects.
Whereas the experimental evaluation of the modeling accuracy of the hybrid models discussed above shows that they outperform the simple IDM model [14], it is not clear whether they are also _faithful_ digital delay models. What would be needed here is a proof that the digital delay models obtained by digitizing hybrid models satisfy the continuity property required for faithfulness.
**Contributions.** Our paper answers this question in the affirmative. More generally, we prove that any thresholded hybrid model like the one shown in Fig. 1 that satisfies some mild conditions on their ODEs results in a continuous digitized hybrid model. We then show that the above hybrid gate models fall into this category, and that the proven continuity implies faithful short-pulse propagation of any such model. Since the square of a signal is (proportional to) its power, this also implies a continuity property from the input signal power to the output signal power. Consequently, these delay models are indeed promising candidates for the correct and timing+power-accurate simulation of digital circuits. In more detail:
1. We show that any hybrid model, where mode \(m\) is governed by a system of first-order ODEs \(\frac{dx}{dt}=f_{m}(t,x)\), leads to a continuous digital delay model, provided all the \(f_{m}\) are continuous in \(t\) and Lipschitz continuous in \(x\), with a common Lipschitz constant for every \(t>0\) and \(m\).
2. We prove that the parallel composition of finitely many digitized hybrid gates in a circuit results in a unique and Zeno-free execution, under some mild conditions regarding causality. In conjunction with our continuity result, we prove that the resulting model is faithful w.r.t. solving the canonical short-pulse filtration problem.
3. We demonstrate that the hybrid gate models proposed in [4; 5] satisfy these properties, and are hence continuous and thus faithful.
**Paper organization.** In Section 2, we establish our general continuity result (Theorem 5). Section 3 presents our main continuity result for hybrid gate models (Theorem 6 and Theorem 7), and Section 4 deals with circuit composition. In Section 5, we provide examples for the hybrid models considered in this work: a simple heater from literature [9], the simple hybrid gate model [4], and the advanced gate model [5]. Some conclusions and directions of future research are provided in Section 6.
Figure 2. Hybrid involution delay channel model (upper part) with a sample execution (bottom part). Adapted from [8].
## 2. Thresholded Mode-Switched ODEs
In this section, we provide a generic proof that every hybrid model that adheres to some mild conditions on its ODEs leads to a continuous digital delay model. We start with proving continuity in the analog domain and then establish continuity of the digitized signal obtained by feeding a continuous real-valued signal into a threshold voltage comparator. Combining those results will allow us to assert the continuity of digital delay channels like the one shown in Fig. 2.
### Continuity of ODE mode switching
For a vector \(x\in\mathds{R}^{n}\), denote by \(\|x\|\) its Euclidean norm. For a piecewise continuous function \(f:[a,b]\to\mathds{R}^{n}\), we write \(\|f\|_{1}=\int_{a}^{b}\|f(t)\|\,dt\) for its \(1\)-norm and \(\|f\|_{\infty}=\sup_{t\in[a,b]}\|f(t)\|\) for its supremum norm. The projection function of a vector in \(\mathds{R}^{n}\) onto its \(k^{\text{th}}\) component, for \(1\leq k\leq n\), is denoted by \(\pi_{k}:\mathds{R}^{n}\to\mathds{R}\).
In this section, we will consider non-autonomous first-order ODEs of the form \(\frac{d}{dt}\,x(t)=f(t,x(t))\), where the non-negative \(t\in\mathds{R}_{+}\) represents the time parameter, \(x(t)\in U\) for some arbitrary open set \(U\subseteq\mathds{R}^{n}\), \(x_{0}\in U\) is some initial value, and \(f:\mathds{R}_{+}\times U\to\mathds{R}^{n}\) is chosen from a set \(F\) of bounded functions that are continuous for \((t,x)\in[0,T]\times U\), where \(0<T<\infty\), and Lipschitz continuous in \(U\) with a common Lipschitz constant for all \(t\in[0,T]\) and all choices of \(f\in F\). It is well-known that every such ODE has a unique solution \(x(t)\) with \(x(0)=x_{0}\) that satisfies \(x(t)\in U\) for \(t\in[0,T]\), is continuous in \([0,T]\), and differentiable in \((0,T)\).
The following lemma shows the continuous dependence of the solutions of such ODEs on their initial values. To be more explicit, the exponential dependence of the Lipschitz constant on the time parameter allows temporal composition of the bound. The proof can be found in standard textbooks on ODEs (Henderson, 1999, Theorem 2.8).
**Lemma 1**.: _Let \(U\subseteq\mathds{R}^{n}\) be an open set and let \(f:\mathds{R}\times U\to\mathds{R}^{n}\) be Lipschitz continuous with Lipschitz constant \(K\) for \(t\in[0,T]\) with \(T>0\), and let \(x,y:[0,T]\to U\) be continuous functions that are differentiable on \((0,T)\) such that \(\frac{d}{dt}\,x(t)=f(t,x(t))\) and \(\frac{d}{dt}\,y(t)=f(t,y(t))\) for all \(t\in(0,T)\). Then, \(\|x(t)-y(t)\|\leq e^{tK}\|x(0)-y(0)\|\) for all \(t\in[0,T]\)._
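For completeness, the standard Grönwall-type argument behind this bound can be sketched as follows (a sketch of the usual textbook reasoning, not a verbatim quotation of the cited proof): since \(x\) and \(y\) solve the same ODE,
\[\|x(t)-y(t)\|\leq\|x(0)-y(0)\|+\int_{0}^{t}\|f(s,x(s))-f(s,y(s))\|\,ds\leq\|x(0)-y(0)\|+K\int_{0}^{t}\|x(s)-y(s)\|\,ds,\]
and Grönwall's inequality applied to \(g(t)=\|x(t)-y(t)\|\) yields \(g(t)\leq e^{tK}g(0)\).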
A _step function_ is a right-continuous function with left limits, i.e., \(\lim_{t\to t_{0}^{+}}s(t)=s(t_{0})\) and \(\lim_{t\to t_{0}^{-}}s(t)\) exists for all \(t_{0}\) in its domain. A _binary signal_ \(s\) is a step function \(s:[0,T]\to\{0,1\}\); a _mode-switch signal_ \(a\) is a step function \(a:[0,T]\to F\), \(t\mapsto a_{t}\).
Given a mode-switch signal \(a\), a _matching output signal_ for \(a\) is a function \(x_{a}:[0,T]\to U\) that satisfies
1. \(x_{a}(0)=x_{0}\),
2. the function \(x_{a}\) is continuous,
3. for all \(t\in(0,T)\), if \(a\) is continuous at \(t\), then \(x_{a}\) is differentiable at \(t\) and \(\frac{d}{dt}\,x_{a}(t)=a_{t}(t,x_{a}(t))\).
For (iii), recall that the codomain of \(a\) is \(F\).
**Lemma 2.1** (Existence and uniqueness of matching output signal).: _Given a mode-switch signal \(a\), the matching output signal \(x_{a}\) for \(a\) exists and is unique._
Proof.: \(x_{a}\) can be constructed inductively, by pasting together the solutions \(x_{t_{j}}\) of \(\frac{d}{dt}\,x_{t_{j}}(t)=a_{t_{j}}(t,x_{t_{j}}(t))\), where \(t_{0}=0\) and \(t_{1}<t_{2}<\dots\) are \(a\)'s switching times in \(S_{a}\): For the induction basis \(j=0\), we define \(x_{a}(t)=x_{t_{0}}(t)\) with initial value \(x_{t_{0}}=x_{t_{0}}(t_{0}):=x_{0}\) for \(t\in[0,t_{1}]\). Obviously, (i) holds by construction, and the continuity and differentiability of \(x_{t_{0}}(t)\) at other times ensures (ii) and (iii).
For the induction step \(j\to j+1\), we assume that we have constructed \(x_{a}(t)\) already for \(0\leq t\leq t_{j}\). For \(t\in[t_{j},t_{j+1}]\), we define \(x_{a}(t):=x_{t_{j+1}}(t)\) with initial value \(x_{t_{j+1}}(t_{j}):=x_{a}(t_{j})=x_{t_{j}}(t_{j})\). Continuity of \(x_{a}(t)\) at \(t=t_{j}\) follows by construction, and the continuity and differentiability of \(x_{t_{j+1}}(t)\) again ensures (ii) and (iii).
Given two mode-switch signals \(a\), \(b\), we define their distance as
\[d_{T}(a,b)=\lambda\big{(}\{t\in[0,T]\mid a_{t}\neq b_{t}\}\big{)} \tag{1}\]
where \(\lambda\) is the Lebesgue measure on \(\mathds{R}\). The distance function \(d_{T}\) is a metric on the set of mode-switch signals.
The following Theorem 2 shows that the mapping \(a\mapsto x_{a}\) is continuous.

**Theorem 2**.: _Let \(K\geq 1\) be a common Lipschitz constant for all functions in \(F\) and let \(M\) be a real number such that \(\|f(t,x)\|\leq M\) for all \(f\in F\), all \(x\in U\), and all \(t\in[0,T]\). Then, for all mode-switch signals \(a\) and \(b\), if \(x_{a}\) is the output signal for \(a\) and \(x_{b}\) is the output signal for \(b\), then \(\|x_{a}-x_{b}\|_{\infty}\leq 2Me^{TK}d_{T}(a,b)\). Consequently, the mapping \(a\mapsto x_{a}\) is continuous._
Proof.: Let \(S=\{t\in(0,T)\mid a\text{ or }b\text{ is discontinuous at }t\}\cup\{0,T\}\) be the set of switching times of \(a\) and \(b\). The set \(S\) must be finite, since both \(a\) and \(b\) are right-continuous on a compact interval. Let \(0=s_{0}<s_{1}<s_{2}<\dots<s_{m}=T\) be the increasing enumeration of \(S\).
We show by induction on \(k\) that
\[\forall t\in[0,s_{k}]\colon\quad\|x_{a}(t)-x_{b}(t)\|\leq 2Me^{tK}d_{t}(a,b) \tag{2}\]
for all \(k\in\{0,1,2,\dots,m\}\). The base case \(k=0\) is trivial. For the induction step \(k\mapsto k+1\), we distinguish the two cases \(a_{s_{k}}=b_{s_{k}}\) and \(a_{s_{k}}\neq b_{s_{k}}\).

If \(a_{s_{k}}=b_{s_{k}}\), then we have \(a_{t}=b_{t}\) for all \(t\in[s_{k},s_{k+1})\) and hence \(d_{t}(a,b)=d_{s_{k}}(a,b)\) for all \(t\in[s_{k},s_{k+1}]\). Moreover, we can apply Lemma 1 and obtain
\[\forall t\in[s_{k},s_{k+1}]\colon\quad\|x_{a}(t)-x_{b}(t)\|\leq e^{(t-s_{k})K} \|x_{a}(s_{k})-x_{b}(s_{k})\|\enspace. \tag{3}\]
Plugging in (2) for \(t=s_{k}\) reveals that (2) holds for all \(t\in[s_{k},s_{k+1}]\) as well.
If \(a_{s_{k}}\neq b_{s_{k}}\), then \(x_{a}\) and \(x_{b}\) follow different differential equations in the interval \(t\in[s_{k},s_{k+1}]\). We can, however, use the mean-value theorem for vector-valued functions (Henderson, 1999, Theorem 5.19) to obtain
\[\forall t\in[s_{k},s_{k+1}]\colon\quad\|x_{a}(t)-x_{a}(s_{k})\|\leq M(t-s_{k}) \enspace\text{and} \tag{4}\]
\[\forall t\in[s_{k},s_{k+1}]\colon\quad\|x_{b}(t)-x_{b}(s_{k})\|\leq M(t-s_{k}). \tag{5}\]
This, combined with the induction hypothesis, the equality \(d_{t}(a,b)=d_{s_{k}}(a,b)+(t-s_{k})\), and the inequalities \(1\leq e^{tK}\) and \(e^{s_{k}K}\leq e^{tK}\)
implies
\[\|x_{a}(t)-x_{b}(t)\| \leq\|x_{a}(t)-x_{a}(s_{k})\|\] \[\quad+\|x_{a}(s_{k})-x_{b}(s_{k})\|+\|x_{b}(s_{k})-x_{b}(t)\|\] \[\leq 2M(t-s_{k})+2Me^{s_{k}K}d_{s_{k}}(a,b)\] \[\leq 2Me^{tK}(t-s_{k})+2Me^{tK}d_{s_{k}}(a,b)\] \[=2Me^{tK}\big{(}d_{t}(a,b)-d_{s_{k}}(a,b)\big{)}+2Me^{tK}d_{s_{k}}(a,b)\] \[=2Me^{tK}d_{t}(a,b)\]
for all \(t\in[s_{k},s_{k+1}]\). This concludes the proof.
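The bound of Theorem 2 is easy to probe numerically. The following Python sketch integrates two assumed modes under two mode-switch signals that differ only on an interval of length \(0.1\) and compares the resulting supremum distance with \(2Me^{TK}d_{T}(a,b)\); the dynamics and all constants are invented for illustration.

```python
import numpy as np

# Numerical illustration of Theorem 2 with assumed dynamics:
# two modes with common Lipschitz constant K = 1 and bound M = 1.
f_up   = lambda t, x: np.clip(1.0 - x, -1.0, 1.0)
f_down = lambda t, x: np.clip(-x, -1.0, 1.0)

def integrate(modes, x0, dt):
    x = np.empty(len(modes) + 1)
    x[0] = x0
    for k, f in enumerate(modes):
        x[k + 1] = x[k] + dt * f(k * dt, x[k])       # explicit Euler step
    return x

dt, T = 1e-4, 2.0
t = np.arange(0.0, T, dt)
a = np.where(t < 1.0, 0, 1)                          # switches to the "up" mode at t = 1.0
b = np.where(t < 1.1, 0, 1)                          # switches to the "up" mode at t = 1.1
xa = integrate([f_up if m else f_down for m in a], 0.5, dt)
xb = integrate([f_up if m else f_down for m in b], 0.5, dt)

M, K, d_T = 1.0, 1.0, 0.1                            # d_T = measure of {t : a_t != b_t}
print("sup |x_a - x_b|           =", np.max(np.abs(xa - xb)))
print("bound 2 M e^{TK} d_T(a,b) =", 2 * M * np.exp(T * K) * d_T)
```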
We conclude this section with the remark that the (proof of the) continuity property of Theorem 2 is very different from the standard (proof of the) continuity property of controlled variables in closed thresholded hybrid systems. Mode switches in such systems are caused by the time evolution of the system itself, e.g., when some controlled variable exceeds some value. Consequently, such systems can be described by means of a _single_ ODE system with discontinuous right-hand side (Bartos et al., 2016).
By contrast, in our hybrid systems, the mode switches are solely caused by changes of digital inputs that are _externally_ controlled: For every possible pattern of the digital inputs, there is a dedicated ODE system that controls the analog output. Consequently, the time evolution of the output now also depends on the time evolution of the inputs. Proving the continuity of the (discretized) output w.r.t. different (but close, w.r.t. some metric) digital input signals requires relating the output of _different_ ODE systems.
### Continuity of thresholding
For a real number \(\xi\in\mathds{R}\) and a function \(x:[a,b]\to\mathds{R}\), denote by \(\Theta_{\xi}(x)\) the thresholded version of \(x\) defined by
\[\Theta_{\xi}(x):[a,b]\to\{0,1\},\quad\Theta_{\xi}(x)(t)=\begin{cases}0&\text{ if }x(t)\leq\xi,\\ 1&\text{ if }x(t)>\xi.\end{cases} \tag{6}\]
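On sampled trajectories, the thresholding operator and the \(1\)-norm distance used in the lemmas below can be approximated as in the following small sketch (the grid and the trajectories are purely illustrative):

```python
import numpy as np

def theta(x, xi):
    """Thresholded version of a sampled trajectory: 0 where x <= xi, 1 where x > xi."""
    return (np.asarray(x) > xi).astype(float)

def dist_1(u, v, dt):
    """Approximate 1-norm distance of two sampled binary signals."""
    return float(np.sum(np.abs(u - v)) * dt)

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
x = t                     # strictly increasing reference trajectory
y = t + 0.01              # a uniformly close perturbation (delta = 0.01)
print(dist_1(theta(x, 0.5), theta(y, 0.5), dt))   # small, as the lemmas below predict
```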
**Lemma 3**.: _Let \(\xi\in\mathds{R}\) and let \(x:[a,b]\to\mathds{R}\) be a continuous strictly monotonic function with \(x(b)=\xi\). Then, for every \(\varepsilon>0\), there exists a \(\delta>0\) such that, for every continuous function \(y:[a,b]\to\mathds{R}\), the condition \(\|x-y\|_{\infty}<\delta\) implies \(\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{1}<\varepsilon\)._
Proof.: We show the lemma for the case that \(x\) is strictly increasing. The proof for strictly decreasing \(x\) is analogous.
Set \(\chi=x(a)\). Since \(x\) is bijective onto the interval \([\chi,\xi]\), it has an inverse function \(x^{-1}:[\chi,\xi]\to[a,b]\). The inverse function \(x^{-1}\) is continuous because the domain \([a,b]\) is compact (Theorem 4.17).
The relation \(t\leq x^{-1}(\xi-\delta)\) implies \(x(t)+\delta\leq\xi\). Hence, if \(\|x-y\|_{\infty}<\delta\), then \(y(t)\leq x(t)+\delta\leq\xi\) for all \(t\leq x^{-1}(\xi-\delta)\). This means that \(\Theta_{\xi}(y)(t)=0\) for all \(t\leq x^{-1}(\xi-\delta)\), so \(t>x^{-1}(\xi-\delta)\) for every \(t\in[a,b]\) where \(\Theta_{\xi}(y)(t)=1\).
By assumption, we have \(\Theta_{\xi}(x)(t)=0\) for all \(t\in[a,b]\). Thus,
\[\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{1} =\lambda\big{(}\{t\in[a,b]\mid\Theta_{\xi}(y)(t)=1\}\big{)}\] \[=\lambda\big{(}\{t\in[a,b]\mid y(t)>\xi\}\big{)} \tag{7}\] \[\leq b-x^{-1}(\xi-\delta).\]
Note that continuity of \(y\) is sufficient to ensure that the set in (7) is measurable. Since \(x^{-1}\) is continuous, we have \(x^{-1}(\xi-\delta)\to x^{-1}(\xi)=b\) as \(\delta\to 0\). In particular, for every \(\varepsilon>0\), there exists a \(\delta>0\) such that \(b-x^{-1}(\xi-\delta)<\varepsilon\). This concludes the proof.
The following Lemma 4 shows that we can drop the assumption \(x(b)=\xi\) in Lemma 3:
**Lemma 4**.: _Let \(\xi\in\mathds{R}\) and let \(x,y:[a,b]\to\mathds{R}\) be two continuous functions where \(x\) is strictly monotonic. Then, for every \(\varepsilon>0\), there exists a \(\delta>0\) such that \(\|x-y\|_{\infty}<\delta\) implies \(\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{1}<\varepsilon\)._
Proof.: We again show the lemma for the case that \(x\) is strictly increasing. The proof for strictly decreasing \(x\) is analogous.
Let \(\varepsilon>0\). We distinguish three cases:
(i) If \(x(b)<\xi\), then we have \(\Theta_{\xi}(x)(t)=0\) for all \(t\in[a,b]\). Choosing \(\delta=\xi-x(b)\), we deduce \(y(t)<x(t)+\delta\leq x(b)+\xi-x(b)=\xi\) for all \(t\in[a,b]\) whenever \(\|x-y\|_{\infty}<\delta\). Hence, we get \(\Theta_{\xi}(y)(t)=0\) for all \(t\in[a,b]\) and thus \(\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{1}=0<\varepsilon\).
(ii) If \(x(a)>\xi\), then we can choose \(\delta=x(a)-\xi\) and get \(\Theta_{\xi}(y)(t)=\Theta_{\xi}(x)(t)=1\) for all \(t\in[a,b]\) whenever \(\|x-y\|_{\infty}<\delta\). In particular, \(\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{1}=0<\varepsilon\).
(iii) If \(x(a)\leq\xi\leq x(b)\), then there exists a unique \(c\in[a,b]\) with \(x(c)=\xi\). Applying Lemma 3 on the restriction of \(x\) on the interval \([a,c]\), we get the existence of a \(\delta_{1}>0\) such that \(\|x-y\|_{[a,c],\infty}<\delta_{1}\) implies \(\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{[a,c],1}<\varepsilon/2\); herein, \(\|\cdot\|_{[a,c],\infty}\) and \(\|\cdot\|_{[a,c],1}\) denote the supremum-norm and the 1-norm on the interval \([a,c]\), respectively. Applying Lemma 3 on the restriction of \(x\) on the interval \([c,b]\) after the coordinate transformation \(t\mapsto-t\) yields the existence of a \(\delta_{2}>0\) such that \(\|x-y\|_{[c,b],\infty}<\delta_{2}\) implies \(\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{[c,b],1}<\varepsilon/2\). Setting \(\delta=\min\{\delta_{1},\delta_{2}\}\), we thus get \(\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{[a,b],1}=\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\| _{[a,c],1}+\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{[c,b],1}<\varepsilon/2+ \varepsilon/2=\varepsilon\) whenever \(\|x-y\|_{[a,b],\infty}<\delta\).
The following Theorem 5 shows that the mapping \(x\mapsto\Theta_{\xi}(x)\) is continuous for a given function \(x\), provided that \(x\) has only finitely many local optima, i.e., points where \(x^{\prime}(t)=0\):
**Theorem 5**.: _Let \(\xi\in\mathds{R}\) and let \(x,y:[0,T]\to\mathds{R}\) be two differentiable functions. Assume that \(x\) has only finitely many local optima. Then, for every \(\varepsilon>0\), there exists a \(\delta>0\) such that \(\|x-y\|_{\infty}<\delta\) implies \(\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{1}<\varepsilon\). Consequently, the mapping \(x\mapsto\Theta_{\xi}(x)\) is continuous._
Proof.: Let \(\mathcal{N}=\{t\in[0,T]\mid x\text{ has a local optimum at }t\}\cup\{0,T\}\), which is finite by assumption, and \(t_{0}<t_{1}<t_{2}<\cdots<t_{m}\) be the increasing enumeration of \(\mathcal{N}\). By the mean-value theorem, the function \(x\) is strictly monotonic in every interval \([t_{k},t_{k+1}]\) for \(k\in\{0,1,2,\ldots,m-1\}\).
Let \(\varepsilon>0\). Applying Lemma 4 to the restriction of \(x\) on each of the intervals \([t_{k},t_{k+1}]\), we get the existence of \(\delta_{k}>0\) such that \(\|x-y\|_{[t_{k},t_{k+1}],\infty}<\delta_{k}\) implies \(\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{[t_{k},t_{k+1}],1}<\varepsilon/m\) for each \(k\in\{0,1,2,\ldots,m-1\}\). Setting \(\delta=\min\{\delta_{0},\delta_{1},\delta_{2},\ldots,\delta_{m-1}\}\), we thus obtain
\[\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{[0,T],1}=\sum_{k=0}^{m-1}\|\Theta_{\xi}(x)-\Theta_{\xi}(y)\|_{[t_{k},t_{k+1}],1}<\sum_{k=0}^{m-1}\varepsilon/m=\varepsilon \tag{8}\]
whenever \(\|x-y\|_{\infty}<\delta\). This concludes the proof.
## 3. Continuity of digitized hybrid gate
To prepare for our general result about the continuity of hybrid gate models, we will first (re)prove the continuity of IDM channels as shown in Fig. 2, which has been established by a quite tedious direct proof in (Becker, 2017). In our notation, an IDM channel consists of:
* A nonnegative minimum delay \(\delta_{\min}\geq 0\) and a delay function \(\Delta_{\delta_{\min}}(s)\) that maps the binary input signal \(i_{a}\), augmented with the left-sided limit \(i_{a}(0-)\) as the _initial value1_ that can be different from \(i_{a}(0)\), to the binary signal \(i_{d}=\Delta_{\delta_{\min}}(i_{a})\), defined by Footnote 1: In (Becker, 2017), this initial value of a signal was encoded by extending the time domain to the whole R and using \(i_{a}(-\infty)\). \[\Delta_{\delta_{\min}}(i_{a})(t)=\begin{cases}i_{a}(0-)&\text{if }t<\delta_{\min}\\ i_{a}(t-\delta_{\min})&\text{if }t\geq\delta_{\min}\end{cases}.\] (9)
* An open set \(U\subseteq\mathds{R}^{n}\), where \(\pi_{1}[U]\) represents the analog output signal and \(\pi_{k}[U]\), \(k=\{2,3,\ldots,n\}\), specifies the internal state variables of the model. In this fashion,2 we presume that \(\pi_{1}[U]=(0,1)\), i.e., the range of output signals is contained in the interval \((0,1)\). Footnote 2: In real circuits, the interval \((0,1)\) typically needs to be replaced by \((0,V_{\text{DD}})\).
* Two bounded functions \(f_{\uparrow},f_{\downarrow}:\mathds{R}\times U\to\mathds{R}^{n}\) with the following properties:
* \(f_{\uparrow},f_{\downarrow}\) are continuous for \((t,x)\in[0,T]\times U\), for any \(0<T<\infty\), and Lipschitz continuous in \(U\), which entails that every trajectory \(x\) of the ODEs \(\frac{d}{dt}\,x(t)=f_{\uparrow}(t,x(t))\) and \(\frac{d}{dt}\,x(t)=f_{\downarrow}(t,x(t))\) with any initial value \(x(0)\in U\) satisfies \(x(t)\in U\) for all \(t\in[0,T]\), recall Section 2.1.
* for no trajectory \(x\) of the ODEs \(\frac{d}{dt}\,x(t)=f_{\uparrow}(t,x(t))\) and \(\frac{d}{dt}\,x(t)=f_{\downarrow}(t,x(t))\) with initial value \(x(0)\in U\) does \(\pi_{1}\circ x\) have infinitely many local optima, i.e., critical points with \((\pi_{1}\circ x)^{\prime}(t)=0\).
* An initial value \(x_{0}\in U\), which corresponds to the mode \(f_{\uparrow}\) if \(i_{a}(0-)=1\) and to the mode \(f_{\downarrow}\) if \(i_{a}(0-)=0\).
* A mode-switch signal \(a:[0,T]\to\{f_{\uparrow},f_{\downarrow}\}\) defined by setting \(a(t)=f_{\uparrow}\) if \(i_{d}(t)=1\) and \(a(t)=f_{\downarrow}\) if \(i_{d}(t)=0\).
* The analog output signal \(o_{a}=x_{a}\), i.e., the output signal for \(a\) and initial value \(x_{0}\).
* A threshold voltage \(\bar{\xi}=V_{th}\in(0,1)\) for the comparator that finally produces the binary output signal \(o_{d}=\Theta_{\bar{\xi}}(o_{a})\).
By combining the results from Section 2.1 and 2.2, we obtain:
Theorem 6 ().: _The channel function of an IDM channel, which maps from the input signal \(i_{a}\) to the output signal \(o_{d}\), is continuous with respect to the \(1\)-norm on the interval \([0,T]\)._
Proof.: The mapping from \(i_{a}\) to \(o_{d}\) is continuous as the concatenation of continuous mappings:
* The mapping from \(i_{a}\mapsto i_{d}\) is continuous since \(\Delta_{\delta_{\min}}\) is trivially continuous for input and output binary signals with the \(1\)-norm.
* The mapping \(i_{d}\mapsto a\) is a continuous mapping from the set of signals equipped with the \(1\)-norm to the set of mode-switch signals equipped with the metric \(d_{T}\), since the points of discontinuity of \(a\) are the points where \(i_{d}\) is discontinuous.
* By Theorem 2, the mapping \(a\mapsto x_{a}\) is a continuous mapping from the set of mode-switch signals equipped with the metric \(d_{T}\) to the set of piecewise differentiable functions \([0,T]\to U\) equipped with the supremum norm. Since \(\|\pi_{1}(x)\|\leq\|x\|\) for every \(x\in U\), the mapping \(x_{a}\mapsto\pi_{1}\circ x_{a}\) is continuous as well.
* By Theorem 5, the mapping \(\pi_{1}\circ x_{a}\mapsto\Theta_{\bar{\xi}}(\pi_{1}\circ x_{a})\) is a continuous mapping from the set of piecewise differentiable functions \([0,T]\to(0,1)\) equipped with the supremum norm to the set of binary signals equipped with the \(1\)-norm.
General digitized hybrid gates have \(c\geq 1\) binary input signals \(i_{a}=(i_{a}^{1},\ldots,i_{a}^{c})\), augmented with _initial values_\((i_{a}^{1}(0-),\ldots,i_{a}^{c}(0-))\), and a single binary output signal \(o_{d}\), and are specified as follows:
Definition 3.1 (Digitized hybrid gate).: A digitized hybrid gate with \(c\) inputs consists of:
* \(c\) delay functions \(\Delta_{\delta_{j}}(s)\) with \(\delta_{j}\geq 0\), \(1\leq j\leq c\), that map the binary input signal \(i_{a}^{j}\) with initial value \(i_{a}^{j}(0-)\) to the binary signal \(i_{d}^{j}=\Delta_{\delta_{j}}(i_{a}^{j})\), defined by \[\Delta_{\delta_{j}}(i_{a}^{j})(t)=\begin{cases}i_{a}^{j}(0-)&\text{if }t<\delta_{j}\\ i_{a}^{j}(t-\delta_{j})&\text{if }t\geq\delta_{j}\end{cases}.\] (10)
* An open set \(U\subseteq\mathds{R}^{n}\), where \(\pi_{1}[U]\) represents the analog output signal and \(\pi_{k}[U]\), \(k=\{2,3,\ldots,n\}\), specifies the internal state variables of the model.
* A set \(F\) of bounded functions \(f:\mathds{R}\times U\to\mathds{R}^{n}\), with the following properties:
* every \(f\in F\) is continuous for \((t,x)\in[0,T]\times U\), for any \(0<T<\infty\), and Lipschitz continuous in \(U\), with a common Lipschitz constant, which entails that every trajectory \(x\) of the ODE \(\frac{d}{dt}\,x(t)=f(t,x(t))\) with any initial value \(x(0)\in U\) satisfies \(x(t)\in U\) for all \(t\in[0,T]\).
* for no trajectory \(x\) of the ODEs \(\frac{d}{dt}\,x(t)=f(t,x(t))\), \(f\in F\), with initial value \(x(0)\in U\) does \(\pi_{1}\circ x\) have infinitely many local optima, i.e., critical points with \((\pi_{1}\circ x)^{\prime}(t)=0\).
* A mode-switch signal \(a:[0,T]\to F\), which is obtained by a continuous choice function \(a_{c}\) acting on \(i_{d}^{1}(t),\ldots,i_{d}^{c}(t)\), i.e., \(a(t)=a_{c}(i_{d}^{1}(t),\ldots,i_{d}^{c}(t))\).
* An initial value \(x_{0}\in U\), which must correspond to the mode selected by \(a_{c}(i_{a}^{1}(0-),\ldots,i_{a}^{c}(0-))\).
* The analog output signal \(o_{a}=x_{a}\), i.e., the output signal for \(a\) and initial value \(x_{0}\).
* A threshold voltage \(\bar{\xi}=V_{th}\in(0,1)\) for the comparator that finally produces the binary output signal \(o_{d}=\Theta_{\bar{\xi}}(o_{a})\).
By essentially the same proof as for Theorem 6, we obtain:
Theorem 7 ().: _The gate function of a digitized hybrid gate with \(c\) inputs, which maps from the vector of input signals \(i_{a}=(i_{a}^{1},\ldots,i_{a}^{c})\) to the output signal \(o_{d}\), is continuous with respect to the \(1\)-norm on the interval \([0,T]\)._
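To illustrate the structure of Definition 3.1, the following Python sketch implements a digitized hybrid gate with \(c=2\) inputs: per-input pure delays, a choice function selecting one ODE right-hand side per digital input state, Euler integration of the active mode, and a comparator. The NOR-like placeholder modes and all parameters are assumptions made for illustration; Section 5 discusses physically motivated instantiations.

```python
import numpy as np

# Sketch of a digitized hybrid gate with c = 2 inputs (cf. Definition 3.1).
def simulate_gate(i_a, dt, delays=(0.1, 0.1), v_th=0.5, x0=1.0):
    n = i_a.shape[1]
    i_d = np.empty_like(i_a)
    for j, dj in enumerate(delays):                   # per-input pure delays, Eq. (10)
        s = int(round(dj / dt))
        i_d[j] = np.concatenate([np.full(s, i_a[j, 0]), i_a[j, :n - s]])
    def mode(bits):                                   # choice function a_c (placeholder modes)
        return (lambda t, x: 1.0 - x) if bits == (0, 0) else (lambda t, x: -x)
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        f = mode((int(i_d[0, k]), int(i_d[1, k])))
        x[k] = x[k - 1] + dt * f(k * dt, x[k - 1])    # Euler step in the current mode
    return (x > v_th).astype(int)                     # comparator output o_d

if __name__ == "__main__":
    dt = 1e-3
    t = np.arange(0.0, 4.0, dt)
    A = ((t > 1.0) & (t < 2.0)).astype(float)
    B = ((t > 1.5) & (t < 2.5)).astype(float)
    print("output high fraction:", simulate_gate(np.vstack([A, B]), dt).mean())
```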
## 4. Composing gates in circuits
In this section, we will first compose digital circuits from digitized hybrid gates and reason about their executions. More specifically, it will turn out that, under certain conditions ensuring the causality of every composed gate, the resulting circuit will exhibit a unique execution, for every given execution of its inputs. This uniqueness is mandatory for building digital dynamic timing simulation tools.
Moreover, we adapt the proof that no circuit with IDM channels can solve the bounded SPF problem utilized in (Brock et al., 2017) to our setting: Using the continuity result of Theorem 7, we will prove that no circuit with digitized hybrid gates can solve bounded SPF. Since unbounded SPF can be solved with IDM channels, which are simple instances of digitized hybrid gate models, faithfulness follows.
### Executions of circuits
**Circuits**. Circuits are obtained by interconnecting a set of input ports and a set of output ports, forming the external interface of a circuit, and a finite set of digitized hybrid gates. We constrain the way components are interconnected in a natural way, by requiring that any gate input, channel input and output port is attached to only one input port, gate output or channel output, respectively. Formally, a _circuit_ is described by a directed graph where:
1. A vertex \(\Gamma\) can be either a _circuit input port_, a _circuit output port_, or a digitized hybrid _gate_.
2. The _edge_\((\Gamma,I,\Gamma^{\prime})\) represents a \(0\)-delay connection from the output of \(\Gamma\) to a fixed input \(I\) of \(\Gamma^{\prime}\).
3. Circuit input ports have no incoming edges.
4. Circuit output ports have exactly one incoming edge and no outgoing one.
5. A \(c\)-ary gate \(G\) has a single output and \(c\) inputs \(I_{1},\ldots,I_{c}\), in a fixed order, fed by incoming edges from exactly one gate output or input port.
**Executions**.: An _execution_ of a circuit \(\mathcal{C}\) is a collection of binary signals \(s_{\Gamma}\) defined on \([0,\infty)\) for all vertices \(\Gamma\) of \(\mathcal{C}\) that respects all the gate functions and input port signals. Formally, the following properties must hold:
1. If \(i\) is a circuit input port, there are no restrictions on \(s_{i}\).
2. If \(o\) is a circuit output port, then \(s_{o}=s_{G}\), where \(G\) is the unique gate output connected to \(o\).
3. If vertex \(G\) is a gate with \(c\) inputs \(I_{1},\ldots,I_{c}\), ordered according to the fixed order condition C5), and gate function \(f_{G}\), then \(s_{G}=f_{G}(s_{\Gamma_{1}},\ldots,s_{\Gamma_{c}})\), where \(\Gamma_{1},\ldots,\Gamma_{c}\) are the vertices the inputs \(I_{1},\ldots,I_{c}\) of \(G\) are connected to via edges \((\Gamma_{1},I_{1},G),\cdots,(\Gamma_{c},I_{c},G)\).
The above definition of an execution of a circuit is "existential", in the sense that it only allows checking for a given collection of signals whether it is an execution or not: For every hybrid gate in the circuit, it specifies the gate output signal, given a _fixed_ vector of input signals, all defined on the time domain \(t\in[0,\infty)\). A priori, this does not give an algorithm to construct executions of circuits, in particular, when they contain feedback loops. Indeed, the parallel composition of general hybrid automata may lead to non-unique executions and bizarre timing behaviors known as _Zeno_, where an infinite number of transitions may occur in finite time (Forde et al., 2017).
To avoid such behaviors in our setting, we require all digitized hybrid gates in a circuit to be _strictly causal_:
Definition 4.1 (Strict causality).: A digitized hybrid gate \(G\) with \(c\) inputs is strictly causal, if the pure delays \(\delta_{j}\) for every \(1\leq j\leq c\) are positive. Let \(\delta^{\mathcal{C}}_{\min}>0\) be the minimal pure delay of any input of any gate in circuit \(C\).
We proceed with defining input-output causality for gates, which is based on signal transitions. Every binary signal can equivalently be described by a sequence of transitions: A _falling transition_ at time \(t\) is the pair \((t,0)\), a _rising transition_ at time \(t\) is the pair \((t,1)\).
Definition 4.2 (Input-output causality).: The output transition \((t,.)\in s_{G}\) of a gate \(G\) is _caused_ by the transition \((t^{\prime},.)\in s^{j}_{G}\) on input \(I_{j}\) of \(G\), if \((t,.)\) occurs in the mode \(a_{c}(i^{1}_{d}(t^{+}),\ldots,i^{c}_{d}(t^{+}))\), where \(i^{j}_{d}(t^{+})\) is the pure-delay shifted input signal at input \(I_{j}\) at the last mode switching time \(t^{+}\leq t\) (see (10)) and \((t^{\prime},.)\) is the last transition in \(s^{j}_{G}\) before or at time \(t^{+}-\delta_{j}\), i.e., there is no \((t^{\prime\prime},.)\in s^{j}_{G}\) with \(t^{\prime}<t^{\prime\prime}\leq t^{+}-\delta_{j}\).
We also assume that the output transition \((t,.)\in s_{G}\)_causally depends_ on every transition in \(s^{j}_{G}\) before or at time \(t^{+}-\delta_{j}\).
Strictly causal gates satisfy the following obvious property:
Lemma 4.3 ().: _If some output transition \((t,.)\in s_{G}\) of a strictly causal digitized hybrid gate \(G\) in a circuit \(C\) causally depends on its input transition \((t^{\prime},.)\in s^{j}_{G}\), then \(t-t^{\prime}\geq\delta_{j}\)._
The following Theorem 4.4 shows that every circuit made up of strictly causal gates has a unique execution, defined for \(t\in[0,\infty)\).
Theorem 4.4 (Unique execution).: _Every circuit \(C\) made up of finitely many strictly causal digitized hybrid gates has a unique execution, which either consists of finitely many transitions only or else requires \([0,\infty)\) as its time domain._
Proof.: We will inductively construct this unique execution by a sequence of iterations \(\ell\geq 1\) of a simple deterministic simulation algorithm, which determines the prefix of the sought execution up to time \(t_{\ell}\). Iteration \(\ell\) deals with transitions occurring at time \(t_{\ell}\), starting with \(t_{1}=0\). To every transition \(e\) generated throughout its iterations, we also assign a _causal depth_\(d(e)\) that gives the maximum causal distance to an input port: \(d(e)=0\) if \(e\) is a transition at some input port, and \(d(e)\) is the maximum of \(1+d(e^{j})\), \(1\leq j\leq c\), for every transition added at the output of a \(c\)-ary gate caused by transitions \(e^{j}\) at its inputs.
Induction basis \(\ell=1\): At the beginning of iteration \(1\), which deals with all transitions occurring at time \(t_{1}=0\), all gates are in their initial mode, which is determined by the initial values of their inputs. They are either connected to input ports, in which case \(s_{i}(0-)\) is used, or to the output port of some gate \(G\), in which case \(s_{G}(0)\) (determined by the initial mode of \(G\)) is used. Depending on whether \(s_{i}(0-)=s_{i}(0)\) or not, there is also an input transition \((0,s_{i}(0))\in s_{i}\) or not. All transitions in the so generated execution prefix \([0,t_{1}]=[0,0]\) have a causal depth of \(0\).
Still, the transitions that have happened by time \(t_{1}\) may cause additional _potential future transitions_. They are called future transitions, because they occur only after \(t_{1}\), and potential because they need not occur in the final execution. In particular, if there is an input transition \((0,s_{i}(0))\in s_{i}\), it may cause a mode switch of every
gate \(G\) that is connected to the input port \(i\). Due to Lemma 4.3, however, such a mode switch, and hence each of the output transitions \(e\) that may occur during that new mode (which all are assigned a causal depth \(d(e)=1\)), of \(G\) can only happen at or after time \(t_{1}+\delta_{\min}^{C}\). In addition, the initial mode of any gate \(G\) that is not mode switched may also cause output transitions \(e\) at arbitrary times \(t>0\), which are assigned a causal depth \(d(e)=0\). Since at most finitely many critical points may exist for every mode's trajectory, it follows that at most _finitely_ many such future potential transitions could be generated in each of the finitely many gates in the circuit. Let \(t_{2}>t_{1}\) denote the time of the closest transition among all input port transitions and all the potential future transitions just introduced.
Induction step \(\ell\to\ell+1\): Assume that the execution prefix for \([0,t_{\ell}]\) has already been constructed in iterations \(1,\ldots,\ell\), with at most finitely many potential future transitions occurring after \(t_{\ell}\). If the latter set is empty, then the execution of the circuit has already been determined completely. Otherwise, let \(t_{\ell+1}>t_{\ell}\) be the closest future transition time.
During iteration \(\ell+1\), all transitions occurring at time \(t_{\ell+1}\) are dealt with, exactly as in the base case: Any transition \(e\), with causal depth \(d(e)\), happening at \(t_{\ell+1}\) at a gate output or at some input port may cause a mode switch of every gate \(G\) that is connected to it. Due to Lemma 4.3, such a mode switch, and hence each of the at most finitely many output transitions \(e^{\prime}\) occurring during that new mode (which all are assigned a causal depth \(d(e^{\prime})=d(e)+1\)), of \(G\) can only happen at or after time \(t_{\ell+1}+\delta_{\min}^{C}\). In addition, the at most finitely many potential future transitions w.r.t. \(t_{\ell}\) of all gates that have not been mode-switched and actually occur at times greater than \(t_{\ell+1}\) are retained, along with their assigned causal depth, as potential future transitions w.r.t. \(t_{\ell+1}\). Overall, we again end up with at most finitely many potential future transitions, which completes the induction step.
To complete our proof, we only need to argue that \(\lim_{\ell\to\infty}t_{\ell}=\infty\) for the case where the iterations do not stop at some finite \(\ell\). This follows immediately from the fact that, for every \(k\geq 1\), there must be some iteration \(\ell\geq 1\) such that \(t_{\ell}\geq k\delta_{\min}^{C}\). If this was not the case, there must be some iteration after which no further mode switch of any gate takes place. This would cause the set of potential future transitions to shrink in every subsequent iteration, however, and thus the simulation algorithm to stop, which provides the required contradiction.
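The proof is constructive and amounts to an event-driven simulation loop. The Python skeleton below sketches that loop only; the circuit representation and the `react` callback are invented abstractions, and, for brevity, superseded potential future transitions are not retracted here as they are in the proof.

```python
import heapq

# Skeleton of the deterministic simulation loop from the proof of Theorem 4.4.
# `gates[g] = (inputs, delta_min, react)`, where react(time, input_values) returns the
# potential future output transitions (time, value) of the mode entered at `time`.
def simulate(input_transitions, gates, fanout, horizon):
    queue = list(input_transitions)              # (time, component, value) tuples
    heapq.heapify(queue)
    executions = {}                              # component -> committed transitions
    while queue:
        t, comp, val = heapq.heappop(queue)
        if t > horizon:
            break
        executions.setdefault(comp, []).append((t, val))
        for g in fanout.get(comp, []):           # every gate driven by this component
            inputs, delta_min, react = gates[g]
            current = {i: executions.get(i, [(0.0, 0)])[-1][1] for i in inputs}
            for t_out, v_out in react(t + delta_min, current):
                # strict causality: output transitions occur at least delta_min later
                heapq.heappush(queue, (max(t_out, t + delta_min), g, v_out))
    return executions

if __name__ == "__main__":
    # toy usage: one buffer-like gate "G" with pure delay 0.5, driven by input port "I"
    react = lambda t, vals: [(t, vals["I"])]
    print(simulate([(0.0, "I", 0), (1.0, "I", 1), (3.0, "I", 0)],
                   {"G": (["I"], 0.5, react)}, {"I": ["G"]}, horizon=10.0))
```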
From the execution construction, we also immediately get:
**Lemma 4.5**: _For all \(\ell\geq 1\), (a) the simulation algorithm never assigns a causal depth larger than \(\ell\) to a transition generated in iteration \(\ell\), and (b) at the end of iteration \(\ell\) the sequence of causal depths of transitions in \(s_{\Gamma}\) for \(t\in[0,t_{\ell}]\) is nondecreasing for all components \(\Gamma\)._
### Impossibility of short-pulse filtration
The results of the previous subsection allow us to adapt the impossibility proof of [8] to our setting. We start with the definition of the SPF problem:
**Short-Pulse Filtration.** A signal _contains a pulse_ of length \(\Delta\) at time \(T_{0}\), if it contains a rising transition at time \(T_{0}\), a falling transition at time \(T_{0}+\Delta\), and no transition in between. The _zero signal_ has the initial value \(0\) and does not contain any transition. A circuit _solves Short-Pulse Filtration (SPF)_, if it fulfills all of:
1. The circuit has exactly one input port and exactly one output port. _(Well-formedness)_
2. If the input signal is the zero signal, then so is the output signal. _(No generation)_
3. There exists an input pulse such that the output signal is not the zero signal. _(Nontriviality)_
4. There exists an \(\varepsilon>0\) such that for every input pulse the output signal never contains a pulse of length less than or equal to \(\varepsilon\). _(No short pulses)_
We allow the circuit to behave arbitrarily if the input signal is not a single pulse or the zero signal.
A circuit _solves bounded SPF_ if additionally:
5. There exists a \(K>0\) such that for every input pulse the last output transition is before time \(T_{0}+\Delta+K\), where \(T_{0}\) is the time of the first input transition. _(Bounded stabilization time)_
A circuit is called a _forward circuit_ if its graph is acyclic. Forward circuits are exactly those circuits that do not contain feedback loops. Equipped with the continuity of digitized hybrid gates and the fact that the composition of continuous functions is continuous, it is not too difficult to prove that the inherently discontinuous SPF problem cannot be solved with forward circuits.
**Theorem 4.6**: _No forward circuit solves bounded SPF._
Proof.: Suppose that there exists a forward circuit that solves bounded SPF with stabilization time bound \(K\). Denote by \(s_{\Delta}\) its output signal when feeding it a \(\Delta\)-pulse at time \(0\) as the input. Because \(s_{\Delta}\) in forward circuits is a finite composition of continuous functions by Theorem 7, \(\|s_{\Delta}\|_{[0,T],1}\) depends continuously on \(\Delta\), for any \(T\).
By the nontriviality condition (F3) of the SPF problem, there exists some \(\Delta_{0}\) such that \(s_{\Delta_{0}}\) is not the zero signal. Set \(T=2\Delta_{0}+K\).
Let \(\varepsilon>0\) be smaller than both \(\Delta_{0}\) and \(\|s_{\Delta_{0}}\|_{[0,T],1}\). We show a contradiction by finding some \(\Delta\) such that \(s_{\Delta}\) either contains a pulse of length less than \(\varepsilon\) (contradiction to the no short pulses condition (F4)) or contains a transition after time \(\Delta+K\) (contradicting the bounded stabilization time condition (F5)).
Since \(\|s_{\Delta}\|_{[0,T],1}\to 0\) as \(\Delta\to 0\) by the no generation condition (F2) of SPF, there exists a \(\Delta_{1}<\Delta_{0}\) such that \(\|s_{\Delta_{1}}\|_{[0,T],1}=\varepsilon\) by the intermediate value property of continuity. By the bounded stabilization time condition (F5), there are no transitions in \(s_{\Delta_{1}}\) after time \(\Delta_{1}+K\). Hence, \(s_{\Delta_{1}}\) is \(0\) after this time because otherwise it is \(1\) for the remaining duration \(T-(\Delta_{1}+K)>\Delta_{0}>\varepsilon\), which would mean that \(\|s_{\Delta_{1}}\|_{[0,T],1}>\varepsilon\). Consequently, there exists a pulse in \(s_{\Delta_{1}}\) before time \(\Delta_{1}+K\). But any such pulse is of length at most \(\varepsilon\) because \(\|s_{\Delta_{1}}\|_{[0,\Delta_{1}+K],1}\leq\|s_{\Delta_{1}}\|_{[0,T],1}=\varepsilon\). This is a contradiction to the no short pulses condition (F4).
We next show how to simulate (part of) an execution of an arbitrary circuit \(\mathcal{C}\) by a forward circuit \(\mathcal{C}^{\prime}\) generated from \(\mathcal{C}\) by the unrolling of feedback loops. Intuitively, the deeper the unrolling, the longer the time \(\mathcal{C}^{\prime}\) behaves as \(\mathcal{C}\).
**Definition 4.7**.: Let \(\mathcal{C}\) be a circuit, \(V\) a vertex of \(\mathcal{C}\), and \(k\geq 0\). We define the \(k\)_-unrolling of \(\mathcal{C}\) from \(V\)_, denoted by \(\mathcal{C}_{k}(V)\), to be a directed acyclic graph with a single sink, constructed as follows:
The unrolling \(\mathcal{C}_{k}(I)\) from input port \(I\) is just a copy of that input port. The unrolling \(\mathcal{C}_{k}(O)\) from output port \(O\) with incoming channel \(C\) and predecessor \(V\) comprises a copy of the output port \(O^{(k)}\) and the unrolled circuit \(\mathcal{C}_{k}(V)\) with its sink connected to \(O^{(k)}\) by an edge.
The \(0\)-unrolling \(\mathcal{C}_{0}(B)\) from hybrid gate \(B\) is a trivial Boolean gate \(X_{v}\) without inputs and with constant output value \(v\in\{0,1\}\) equal to \(B\)'s initial digitized output value. For \(k>0\), the \(k\)-unrolling \(\mathcal{C}_{k}(B)\) from gate \(B\) comprises an exact copy of that gate \(B^{(k)}\). Additionally, for every incoming edge of \(B\) from \(V\) in \(\mathcal{C}\), it contains the circuit \(\mathcal{C}_{k-1}(V)\) with its sink connected to \(B^{(k)}\). Note that all copies of the same input port are considered to be the same.
To each component \(\Gamma\) in \(\mathcal{C}_{k}(V)\), we assign a value \(z(\Gamma)\in\mathbb{N}_{0}\cup\{\infty\}\) as follows: \(z(\Gamma)=\infty\) if \(\Gamma\) has no predecessor (in particular, is an input port) and \(\Gamma\notin\{X_{0},X_{1}\}\). Moreover, \(z(X_{0})=z(X_{1})=0\), \(z(V)=z(U)\) if \(V\) is an output port connected by an edge to \(U\), and \(z(B)=\min_{c\in E^{B}}\{1+z(c)\}\) if \(B\) is a gate with its inputs connected to the components in the set \(E^{B}\). Fig. 3 shows an example of a circuit and an unrolled circuit with its \(z\) values.
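The \(z\)-values can be computed by a straightforward recursion over the (acyclic) unrolled circuit. The sketch below assumes an adjacency-list representation (`preds` and `kind` are invented names) and makes the shortest-path reading of \(z\) explicit.

```python
import math
from functools import lru_cache

# z-values of an unrolled (acyclic) circuit: distance to the nearest X_0/X_1 gate.
# `preds[v]` lists the components feeding v; `kind[v]` is "input", "const", "output", or "gate".
def z_values(preds, kind):
    @lru_cache(maxsize=None)
    def z(v):
        if kind[v] == "const":                 # an X_0 / X_1 gate
            return 0
        if not preds[v]:                       # no predecessor, e.g. an input port
            return math.inf
        if kind[v] == "output":                # output ports inherit their driver's value
            return z(preds[v][0])
        return min(1 + z(u) for u in preds[v]) # gates: shortest distance over all inputs
    return {v: z(v) for v in kind}

preds = {"I": [], "X0": [], "G1": ["I", "X0"], "G2": ["G1", "I"], "O": ["G2"]}
kind = {"I": "input", "X0": "const", "G1": "gate", "G2": "gate", "O": "output"}
print(z_values(preds, kind))   # {'I': inf, 'X0': 0, 'G1': 1, 'G2': 2, 'O': 2}
```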
Noting that, for every component \(\Gamma\) in \(\mathcal{C}_{k}(V)\), \(z(\Gamma)\) is the number of gates on the shortest path from an \(X_{0}\) node to \(\Gamma\), or \(z(\Gamma)=\infty\) if no such path exists, we immediately get:
**Lemma 4.8**.: _The \(z\)-value assigned to the sink vertex \(V^{(k)}\) of a \(k\)-unrolling \(\mathcal{C}_{k}(V)\) of \(\mathcal{C}\) from \(V\) satisfies \(z(V^{(k)})\geq k\)._
Recalling the causal depths assigned to transitions during the execution construction in Theorem 4.4, we are now in the position to prove the result for a circuit simulated by an unrolled circuit.
**Theorem 4.9**.: _Let \(\mathcal{C}\) be a circuit with input port \(I\) and output port \(O\) that solves bounded SPF. Let \(\mathcal{C}_{k}(O)\) be an unrolling of \(\mathcal{C}\), \(\Gamma\) a component in \(\mathcal{C}\), and \(\Gamma^{\prime}\) a copy of \(\Gamma\) in \(\mathcal{C}_{k}(O)\). For all input signals \(s_{I}\) on \(I\), if a transition \(e\) is generated for \(\Gamma\) by the execution construction algorithm run on circuit \(\mathcal{C}\) with input signal \(s_{I}\) and \(d(e)\leq z(\Gamma^{\prime})\), then \(e\) is also generated for \(\Gamma^{\prime}\) by the algorithm run on circuit \(\mathcal{C}_{k}(O)\) with input signal \(s_{I}\); and vice versa._
Proof.: Assume that \(e\) is the first transition violating the theorem. The input signal is the same for both circuits, and the initial digitized values of gates in \(\mathcal{C}\) and both their copies in \(\mathcal{C}_{k}(O)\) and the \(X_{0}\) gates resulting from their \(0\)-unrolling are equal as well. Hence, \(e\) cannot be any such transition (added in iteration \(1\) only).
If \(e\) was added to the output of a gate \(B\) in either circuit, the transition \(e^{\prime}\) resp. \(e^{\prime\prime}\) at one of its inputs that caused \(e\) in \(\mathcal{C}\) resp. \(\mathcal{C}_{k}(V)\) must have been different. These transitions \(e^{\prime}\) resp. \(e^{\prime\prime}\) must come from the output of some other gate \(B_{1}\), and causally precede \(e\). Hence, by Definition 4.2, \(d(e)=d(e^{\prime})+1\), and by Lemma 4.5, \(d(e)\geq d(e^{\prime\prime})\). Also by definition, \(z(B)=z(B_{1})+1\) in \(\mathcal{C}_{k}(O)\). Since \(d(e)\leq z(B)\) by assumption, we find \(d(e^{\prime})\leq z(B_{1})\) and \(d(e^{\prime\prime})\leq z(B)\), so applying our theorem to \(e^{\prime}\) and \(e^{\prime\prime}\) yields a contradiction to \(e\) being the first violating transition.
We can finally prove that bounded SPF is not solvable, even with non-forward circuits.
**Theorem 4.10**.: _No circuit solves bounded SPF._
Proof.: We first note that the impossibility of bounded SPF also implies the impossibility of bounded SPF when restricting pulse lengths to be at most some \(\Delta_{0}>0\).
Since all transitions generated in the execution construction of Theorem 4.4 up to any bounded time \(t_{\ell}\) have bounded causal depth, let \(\zeta\) be an upper bound on the causal depth of transitions up to the SPF stabilization time bound \(\Delta_{0}+K\). Then, by Theorem 4.9 and Lemma 4.8, the \(\zeta\)-unrolled circuit \(\mathcal{C}_{\zeta}(O)\) has the same output transitions as the original circuit \(\mathcal{C}\) up to time \(\Delta_{0}+K\), and hence, by definition of bounded SPF, the same transitions for all times. But since \(\mathcal{C}_{\zeta}(O)\) is a forward circuit, it cannot solve bounded SPF by Theorem 4.6, i.e., neither can \(\mathcal{C}\).
## 5. Applications
We next discuss three examples of thresholded mode-switched ODE systems. For all non-closed systems, the proven continuity shows that similar digital inputs lead to similar digital outputs.
We start with an introductory example from control theory, the bang-bang heating controller for thermodynamic systems. Following (Burgurg et al., 2017), let \(x(t)\) be the system's temperature at time \(t\) and \(h(t)\) be the mode of the binary heating signal that can be off (0) or on (1). With a pure delay \(\delta>0\) for the heating to take effect, we assume that the heat flow is described as
\[\dot{x}=\begin{cases}-0.1x(t)&\text{if }h(t-\delta)=0\\ 5-0.1x(t)&\text{if }h(t-\delta)=1\end{cases} \tag{11}\]
for the heating being off or on, respectively: the temperature falls to \(0\) in the former case and approaches \(50\) degrees in the latter.
The heating signal is controlled by a bang-bang controller (with hysteresis) with two threshold temperatures, \(19\) and \(21\) degrees. It could be implemented by an ideal SR-latch, with pure delay \(\delta\), where the Set port (S) is driven by the inverted threshold signal \(\neg\Theta_{19}(x)\), the Reset port (R) is driven by \(\Theta_{21}(x)\), and the output of the latch controls the heating mode signal \(h\).
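A direct simulation of this closed loop is straightforward; the sketch below integrates Eq. (11) with explicit Euler steps and models the latch by the two thresholds \(\Theta_{19}\) and \(\Theta_{21}\) described above. The initial temperature, step size, and delay value are illustrative assumptions.

```python
import numpy as np

# Bang-bang temperature control with hysteresis, cf. Eq. (11).  Parameters are illustrative.
dt, t_end, delta = 0.01, 200.0, 1.0
n = int(t_end / dt)
shift = int(delta / dt)                   # pure delay of the heating actuation
x = np.empty(n); x[0] = 15.0              # initial temperature
h = np.zeros(n, dtype=int)                # heater mode signal produced by the latch
for k in range(1, n):
    h_delayed = h[max(k - 1 - shift, 0)]  # h(t - delta)
    dx = (5.0 - 0.1 * x[k - 1]) if h_delayed else (-0.1 * x[k - 1])
    x[k] = x[k - 1] + dt * dx
    if x[k] < 19.0:                       # Set: temperature below the lower threshold
        h[k] = 1
    elif x[k] > 21.0:                     # Reset: temperature above the upper threshold
        h[k] = 0
    else:                                 # hysteresis: keep the previous latch state
        h[k] = h[k - 1]
print("steady-state temperature range:", x[n // 2:].min(), "-", x[n // 2:].max())
```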
In fact, digital circuits are a particularly rich and interesting source of application examples in general. We will demonstrate this by means of two hybrid gate models for a CMOS NOR gate (see Fig. 4 for the schematics), namely, the simple model proposed in (Burg et al., 2017) (as an instance of an autonomous ODE model) and the advanced model presented in (Burg et al., 2017) (as an instance of a non-autonomous ODE model). The SR-latch from the previous example can be implemented via two cross-coupled NOR gates.
### Simple Hybrid Model
The _simple hybrid gate model_ proposed in (Bartos et al., 2017) replaces all transistors in Fig. 4 by ideal zero-time switches, which are switched on and off at the relevant input threshold voltage \(V_{th}=V_{DD}/2\) crossing times. More precisely, depending on whether the corresponding input is 1 or 0, every pMOS transistor is removed (\(R=\infty\)) resp. replaced by a fixed resistor \(R<\infty\), and vice versa for an nMOS transistor. This leads to the following system of coupled autonomous first-order ODEs governing the analog trajectories of the gate's output in the respective mode:
* System \((1,1)\): \(V_{A}=1\), \(V_{B}=1\): If inputs \(A\) and \(B\) are 1, both nMOS transistors are conducting and thus replaced by resistors, causing the output \(O\) to be discharged in parallel. By contrast, \(N\) is completely isolated and keeps its value. This leads to the following ODEs: \[\left(\begin{array}{c}\frac{\mathrm{d}}{\mathrm{d}t}V_{int}(t)\\ \frac{\mathrm{d}}{\mathrm{d}t}V_{out}(t)\end{array}\right)=\begin{pmatrix}f_{1}(V_{int}(t),V_{out}(t))\\ f_{2}(V_{int}(t),V_{out}(t))\end{pmatrix}=\begin{pmatrix}0\\ -\left(\frac{1}{CR_{3}}+\frac{1}{CR_{4}}\right)V_{out}(t)\end{pmatrix}\]
* System \((1,0)\): \(V_{A}=1\), \(V_{B}=0\): Since \(T_{1}\) and \(T_{4}\) are open, node \(N\) is connected to \(O\), and \(O\) to _GND_. Both capacitors have to be discharged over resistor \(R_{3}\), resulting in less current that is available for discharging \(C\). One obtains: \[\left(\begin{array}{c}\frac{\mathrm{d}}{\mathrm{d}t}V_{int}(t)\\ \frac{\mathrm{d}}{\mathrm{d}t}V_{out}(t)\end{array}\right)=\begin{pmatrix}f_{3}(V_{int}(t),V_{out}(t))\\ f_{4}(V_{int}(t),V_{out}(t))\end{pmatrix}=\begin{pmatrix}-\frac{V_{int}(t)}{C_{int}R_{2}}+\frac{V_{out}(t)}{C_{int}R_{2}}\\ \frac{V_{int}(t)}{CR_{2}}-\left(\frac{1}{CR_{2}}+\frac{1}{CR_{3}}\right)V_{out}(t)\end{pmatrix}\]
* System \((0,1)\): \(V_{A}=0\), \(V_{B}=1\): Opening transistors \(T_{2}\) and \(T_{3}\) again decouples the nodes \(N\) and \(O\). We thus get \[\left(\begin{array}{c}\frac{\mathrm{d}}{\mathrm{d}t}V_{int}(t)\\ \frac{\mathrm{d}}{\mathrm{d}t}V_{out}(t)\end{array}\right)=\begin{pmatrix}f_{ 5}(V_{int}(t),V_{out}(t))\\ f_{6}(V_{int}(t),V_{out}(t))\end{pmatrix}=\begin{pmatrix}-\frac{V_{int}(t)}{C_ {int}R_{1}}+\frac{V_{DD}}{C_{int}R_{1}}\\ -\frac{V_{out}(t)}{CR_{4}}\end{pmatrix}\]
* System \((0,0)\): \(V_{A}=0\), \(V_{B}=0\): Closing both pMOS transistors causes both capacitors to be charged over the same resistor \(R_{1}\), similarly to system \((1,0)\). Thus \[\left(\begin{array}{c}\frac{\mathrm{d}}{\mathrm{d}t}V_{int}(t)\\ \frac{\mathrm{d}}{\mathrm{d}t}V_{out}(t)\end{array}\right)=\begin{pmatrix}f_{7}(V_{int}(t),V_{out}(t))\\ f_{8}(V_{int}(t),V_{out}(t))\end{pmatrix}=\begin{pmatrix}-\left(\frac{1}{C_{int}R_{1}}+\frac{1}{C_{int}R_{2}}\right)V_{int}(t)+\frac{V_{out}(t)}{C_{int}R_{2}}+\frac{V_{DD}}{C_{int}R_{1}}\\ \frac{V_{int}(t)}{CR_{2}}-\frac{V_{out}(t)}{CR_{2}}\end{pmatrix}\] Every \(f_{i}\), \(i\in\{1,\ldots,8\}\), is a mapping from \(U=(0,1)^{2}\subseteq\mathds{R}^{2}\) to \(\mathds{R}\), where the state vector in \(U\) collects the voltages at the nodes \(N\) and \(O\) in Fig. 4. Solving the above ODEs provides analytic expressions for these voltage trajectories, which can even be inverted to obtain the relevant gate delays. As it turned out in (Bartos et al., 2017), although the model perfectly covers the MIS effects in the case of falling output transitions, it fails to do so in the rising output transitions case. Nevertheless, despite this accuracy shortcoming, the results of the present paper imply that the model is faithful. More specifically, we obtain the following theorem:
Theorem 5.1 ().: _For any \(i\in\{1,\ldots,8\}\), the mapping \(f_{i}\), defined above, is Lipschitz continuous._
Consequently, we can instantiate Definition 3.1 with
\[a_{c}(i_{d}^{A},i_{d}^{B})=\begin{cases}\begin{pmatrix}f_{1}(V_{int}(t),V_{out}(t))\\ f_{2}(V_{int}(t),V_{out}(t))\end{pmatrix}&(i_{d}^{A},i_{d}^{B})=(1,1)\\ \begin{pmatrix}f_{3}(V_{int}(t),V_{out}(t))\\ f_{4}(V_{int}(t),V_{out}(t))\end{pmatrix}&(i_{d}^{A},i_{d}^{B})=(1,0)\\ \begin{pmatrix}f_{5}(V_{int}(t),V_{out}(t))\\ f_{6}(V_{int}(t),V_{out}(t))\end{pmatrix}&(i_{d}^{A},i_{d}^{B})=(0,1)\\ \begin{pmatrix}f_{7}(V_{int}(t),V_{out}(t))\\ f_{8}(V_{int}(t),V_{out}(t))\end{pmatrix}&(i_{d}^{A},i_{d}^{B})=(0,0)\end{cases}\]
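A minimal numerical realization of this mode-switched system is sketched below; the input waveforms and all parameter values (\(C\), \(C_{int}\), \(R_{1},\ldots,R_{4}\), normalized to \(V_{DD}=1\)) are invented for illustration, and the per-input pure delays are omitted for brevity.

```python
import numpy as np

# Sketch of the simple hybrid NOR model: one linear ODE system per digital input state.
C, C_int, R1, R2, R3, R4, VDD = 1.0, 0.5, 1.0, 1.0, 1.0, 1.0, 1.0

def rhs(mode, v_int, v_out):
    if mode == (1, 1):
        return 0.0, -(1/(C*R3) + 1/(C*R4)) * v_out
    if mode == (1, 0):
        return (-v_int + v_out)/(C_int*R2), v_int/(C*R2) - (1/(C*R2) + 1/(C*R3)) * v_out
    if mode == (0, 1):
        return (-v_int + VDD)/(C_int*R1), -v_out/(C*R4)
    # mode (0, 0): both capacitors charged via R1
    return (-(1/R1 + 1/R2)*v_int + v_out/R2 + VDD/R1)/C_int, (v_int - v_out)/(C*R2)

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
A = ((t > 2.0) & (t < 6.0)).astype(int)
B = ((t > 4.0) & (t < 8.0)).astype(int)
v_int, v_out = VDD, VDD                   # start in the steady state of mode (0, 0)
trace = np.empty_like(t)
for k in range(len(t)):
    d_int, d_out = rhs((A[k], B[k]), v_int, v_out)
    v_int, v_out = v_int + dt*d_int, v_out + dt*d_out
    trace[k] = v_out
print("fraction of time with Theta_{1/2}(V_out) = 1:", float(np.mean(trace > 0.5)))
```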
### Advanced Hybrid Model
Unlike the simple hybrid model (Bartos et al., 2017) outlined in the previous section, the _advanced hybrid gate model_ proposed in (Bartos et al., 2017) covers all MIS delay behaviors properly. It can be viewed as a generalization of the simple model, in which switching-on the pMOS transistors is not instantaneous but instead governed by a simple time evolution function representing the Shichman-Hodges transistor model (Shichman and Hodges, 2018). To be more specific, the idea is to replace the transistors with time-variant resistors (see Fig. 4(b)), so that the values of \(R_{i}(t)\), \(i\in\{1,\ldots,4\}\), vary between some fixed on-resistance \(R_{i}\) and the off-resistance \(\infty\), according to the following functions:
\[R_{i}^{on}(t)=\frac{a_{i}}{t-t^{on}}+R_{i};\ t\geq t^{on}, \tag{12}\] \[R_{i}^{off}(t)=\infty;\ t\geq t^{off}. \tag{13}\]
Herein, \(a_{i}\) [\(\Omega\) s] is a constant slope parameter and \(R_{i}\) [\(\Omega\)] the respective on-resistance; \(t^{on}\) resp. \(t^{off}\) represent the time when the respective transistor is switched on resp. off. The switching-on of the nMOS transistors happens instantaneously also here, so \(a_{3}=a_{4}=0\).
Figure 4. Transistor level implementation of the NOR gate.
Figure 5. Implementations of a CMOS NOR gate.
Applying Kirchhoff's rules to Fig. 5 leads to \(C\,\frac{\mathrm{d}V_{out}}{\mathrm{d}t}=\frac{V_{DD}-V_{out}}{R_{1}(t)+R_{2}(t)}-\frac{V_{out}}{R_{3}(t)\parallel R_{4}(t)}\), which can be transformed into the non-homogeneous non-autonomous ODE with non-constant coefficients
\[\frac{\mathrm{d}V_{out}}{\mathrm{d}t}=f(t,V_{out}(t))=-\frac{V_{out}(t)}{C\,R_{\mathcal{G}}(t)}+U(t), \tag{14}\]
where \(\frac{1}{R_{\mathcal{G}}(t)}=\frac{1}{R_{1}(t)+R_{2}(t)}+\frac{1}{R_{3}(t)}+\frac{1}{R_{4}(t)}\) and \(U(t)=\frac{V_{DD}}{C(R_{1}(t)+R_{2}(t))}\).
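Numerically, Eq. (14) can be integrated mode by mode once the time-variant resistances are in place. The sketch below covers only the case where both pMOS transistors are switching on (both inputs have fallen) while both nMOS transistors are off; switching times, slope parameters, and on-resistances are invented for illustration.

```python
import numpy as np

# Sketch for Eq. (14) with time-variant pMOS resistances per Eqs. (12)/(13).
C, VDD = 1.0, 1.0
a = {1: 0.5, 2: 0.5}        # slope parameters of the two pMOS transistors (illustrative)
R_on = {1: 1.0, 2: 1.0}     # their on-resistances (illustrative)

def R_pmos(i, t, t_on):
    # switching-on resistance a_i/(t - t_on) + R_i; off-resistance is infinite
    return a[i] / (t - t_on) + R_on[i] if t > t_on else np.inf

def dVout_dt(t, v_out, t_on_1, t_on_2):
    R12 = R_pmos(1, t, t_on_1) + R_pmos(2, t, t_on_2)
    inv_RG = 1.0 / R12      # the 1/R_3 and 1/R_4 terms vanish (nMOS switched off)
    U = VDD / (C * R12)
    return -v_out * inv_RG / C + U                     # Eq. (14)

dt = 1e-4
t = np.arange(0.0, 10.0, dt)
v = np.empty_like(t); v[0] = 0.0                       # output initially low
for k in range(1, len(t)):
    v[k] = v[k - 1] + dt * dVout_dt(t[k - 1], v[k - 1], t_on_1=1.0, t_on_2=2.0)
print("output crosses V_th = 0.5 at t =", t[np.argmax(v > 0.5)])
```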
As comprehensively described in (Covington et al., 2016), depending on each particular resistor's mode in each input state transition, different expressions for \(R_{\mathcal{G}}(t),U(t)\) and thus for \(f(t,V_{out}(t))\) are obtained. They are summarized in Table 1. Note that we have used the notation \(R_{1}=R_{P_{A}}\), \(R_{2}=R_{P_{B}}\) with abbreviations \(2R=R_{P_{A}}+R_{P_{B}}\), \(R_{3}=R_{n_{A}}\), and \(R_{4}=R_{n_{B}}\) for the two nMOS transistors \(T_{3}\) and \(T_{4}\). Due to the symmetry, we end up with only six different functions.
The following theorem shows that they are continuous in the first argument and Lipschitz continuous in the second argument.
Theorem 5.2 ().: _Let \(F=\{f_{1},\ldots,f_{6}:\mathds{R}\times(0,1)\rightarrow\mathds{R}\}\) be the set of all functions described in Table 1, modulo symmetry. Every \(f_{i}\in F\), where \(i\in\{1,\ldots,6\}\), is continuous and Lipschitz continuous in the second argument \(V_{out}\left(t\right)\)._
Defining \(s(t)=(i^{A}_{d}(t^{+}),i^{B}_{d}(t^{+}))\) and \(s_{\mathcal{G}}(t)=(i^{A}_{d}(t),i^{B}_{d}(t))\), we can again instantiate Definition 3.1 by the choice function
\[a_{c}(s(t))=\begin{cases}f_{1}(t,V_{out}(t))&s(t)=(1,0)\\ f_{2}(t,V_{out}(t))&s(t)=(0,1)\\ f_{3}(t,V_{out}(t))&s(t)=(0,0),s_{\mathcal{G}}(t)=(1,0)\\ f_{4}(t,V_{out}(t))&s(t)=(0,0),s_{\mathcal{G}}(t)=(0,1)\\ f_{5}(t,V_{out}(t))&s(t)=(0,0),s_{\mathcal{G}}(t)=(1,1)\\ f_{6}(t,V_{out}(t))&s(t)=(1,1)\end{cases}\]
which, according to (14), results in \(dV_{out}(t)/dt\) being
\[\begin{cases}\frac{-V_{out}(t)}{C_{out}}&s(t)=(1,0)\\ -V_{out}(t)&s(t)=(0,1)\\ \frac{-V_{out}(t)+V_{OD}}{C\left(2R^{2}+(a_{1}a_{2}+2\lambda_{R})t+a_{1} \lambda_{L}\right)}&s(t)=(0,0),s_{\mathcal{G}}(t)=(1,0)\\ \frac{-V_{out}(t)+V_{OD}}{C\left(2R^{2}+(a_{1}a_{2}+2\lambda_{R})t+a_{2} \lambda_{L}\right)}&s(t)=(0,0),s_{\mathcal{G}}(t)=(0,1)\\ \frac{-V_{out}(t)+V_{OD}}{C\left(2R+a_{1}+a_{2}\right)}&s(t)=(0,0),s_{\mathcal{G }}(t)=(1,1)\\ \frac{-V_{out}(t)+V_{OD}}{C\left(R_{n_{A}}+R_{n_{B}}\right)}&s(t)=(1,1).\end{cases}\]
## 6. Conclusions
We presented a general continuity proof for a broad class of first-order thresholded hybrid models, as they arise naturally in digital circuits. We showed that, under mild conditions regarding causality, digitized hybrid gates could be composed to form circuits with unique and well-behaved executions. We concluded with concrete gate model instantiations of our model.
|
2309.01378 | Solving the Naturalness Problem with Feeble Coupled Sectors | The discovery of a light Higgs boson means that whatever form new physics
takes, it should keep stable the Higgs mass. Besides the well-known solutions
to the naturalness problem (Supersymmetry, Conformal symmetry, Compositeness,
etc), models that include heavy particles with feeble couplings to the Standard
Model (SM) can be considered natural, since the corrections to the Higgs mass
remain of the order of the electroweak (EW) scale. This solution can be used
for model building too, with realizations that include the see-saw mechanism
for neutrino masses and FIMP dark matter models, but it also holds for generic
sectors that have Planck-suppressed couplings with the SM. One can also
incorporate this solution within the SMEFT framework; the corresponding
higher-dimensional operators induce small corrections to both the Higgs mass
and its self-coupling, a prediction that can be tested at a future Higgs
factory. We present a natural extension of the SMEFT that describes corrections
to the SM, while also including a Feeble Coupled Sector aimed to account for
the dark cosmos, with predictions for new signals that can be tested too. | J. Lorenzo Diaz-Cruz | 2023-09-04T06:03:55Z | http://arxiv.org/abs/2309.01378v1 | # Solving the Naturalness Problem with Feeble Coupled Sectors
###### Abstract
The discovery of a light Higgs boson means that whatever form new physics takes, it should keep stable the Higgs mass. Besides the well-known solutions to the naturalness problem (Supersymmetry, Conformal symmetry, Compositeness, etc), models that include heavy particles with feeble couplings to the Standard Model (SM) can be considered natural, since the corrections to the Higgs mass remain of the order of the electroweak (EW) scale. This solution can be used for model building too, with realizations that include the see-saw mechanism for neutrino masses and FIMP dark matter models, but it also holds for generic sectors that have Planck-suppressed couplings with the SM. One can also incorporate this solution within the SMEFT framework; the corresponding higher-dimensional operators induce small corrections to both the Higgs mass and its self-coupling, a prediction that can be tested at a future Higgs factory. We present a natural extension of the SMEFT that describes corrections to the SM, while also including a Feeble Coupled Sector aimed to account for the dark cosmos, with predictions for new signals that can be tested too.
**1. Introduction.** Finding a solution to the hierarchy or naturalness problem, and exploring the corresponding models, have played an important role in the development of particle physics. The identification of quadratic divergences as a potential peril for elementary scalars [1] motivated the search for new theories in which such a problem could be eliminated. The importance of those solutions, such as supersymmetry, composite and technicolor models, should not be underestimated, both for conceptual and for experimental reasons. In the first place, they provided roads to look for and explore models that addressed the problem; furthermore those models, regardless of their failure or success, justified the experimental search for the Higgs boson, which many doubted actually existed after the Standard Model (SM) was proposed. The development of radiative corrections helped to corner the Higgs in a mass region within the reach of collider experiments: from LEP we learned that the Higgs mass has to be above 105 GeV, while the Tevatron helped to exclude the range around the ZZ threshold. Finally, the debate about whether nature likes elementary scalars or not was closed after the discovery of the Higgs boson at the CERN LHC, with a mass \(m_{h}=125\) GeV and properties in agreement with the SM.
Further conceptual developments helped to better understand the nature of quadratic divergences. In particular, we learned that conformal symmetry [2] could help to understand the origin of the Higgs mass as a soft-breaking effect, and therefore the SM itself can be considered as natural. Furthermore, after having provided a method to identify quadratic divergences within the dimensional regularization method [3], Veltman departed from the common lore and claimed that there are no quadratic divergences in the SM [4]. Thus, it is appropriate to say that the SM is natural, as the radiative contribution to the Higgs mass coming from its heaviest particle, the top quark, keeps the Higgs mass of the order of the EW scale, suggesting the absence of the hierarchy problem within the SM [6, 5].
The real problem appears for extensions of the SM that include heavy states, with mass and couplings given by \(M_{X}\) and \(g_{x}\), respectively; after integrating them out, they leave a potentially large correction to the Higgs mass (\(m_{h}\)), which is of the order \(\delta m_{h}^{2}\simeq g_{x}M_{X}^{2}\), so for \(M_{X}\gg m_{W}\) there appears to be a large correction to the Higgs mass. The traditional solutions to this problem assume that the coupling constant \(g_{X}\) of the new heavy particles has a value similar to the other couplings in the SM. In SUSY models, for instance, one has both fermions and sfermions [7], and their contributions to the Higgs mass have opposite signs and thus cancel each other. In the conformal solution [8], the tree-level contribution vanishes, and the Higgs mass itself is generated as a radiative effect. But we also know that, apart from the naturalness problem, there are other problems in the SM (flavor, generations, unification), as well as the evidence for neutrino masses and the dark cosmos, i.e. dark matter and dark energy, together with early-universe inflation and the matter-antimatter asymmetry, none of which can be explained by the SM. Thus, it seems there are good reasons to expect that some form of new physics should exist. One could go ahead studying models that address some Beyond the SM issues, without caring about the solution to the naturalness problem, hoping that some additional new physics will bring a solution to it. Alternatively, we can take to heart the issue of naturalness and consider only extended models that incorporate a solution to the problem, as was done in the past with supersymmetry or other BSM scenarios.
So far we have a few experimental facts that could guide us in the search for BSM physics, namely, the detection of a light Higgs, the absence of BSM signals at LHC and the lack of direct
evidence for dark matter WIMPs. In this paper we shall try to incorporate these three hints into a single framework. We shall assume that there exist heavy particles beyond the SM, which play a role in the solution of some of its open problems and which induce a correction to the Higgs mass of the order \(\delta m_{h}^{2}\simeq g_{x}M_{X}^{2}\); this is natural because, rather than having a coupling of O(1), the coupling \(g_{x}\) is feeble, such that \(g_{x}M_{X}^{2}\simeq m_{h}^{2}\), which can be considered as a solution to the naturalness problem. We argue that this solution with a natural Feeble Coupled Sector (FECOS) should be considered on the same footing as the other ones, or perhaps as even more solid, because it is based on the current observations; furthermore, it provides a framework for model building, like SUSY or Technicolor. The FECOS solution can occur in many models, such as the Type-I see-saw model for neutrino masses, or in models where a Feebly Interacting Massive Particle (FIMP) is the candidate for dark matter; moreover it can also work for generic models that include heavy states with suppressed couplings to the SM, including models with the gravity portal.
We can consider a generic FECOS model and notice that, when its heavy modes are integrated out, it should leave an effective Lagrangian extension of the SM, which includes a whole tower of higher-dimensional operators suppressed by a scale \(\Lambda\simeq M_{X}\). Within this effective Lagrangian, there should be no problem with quadratic divergences, and the corrections to the Higgs mass should be at most of the order \(O(v/\Lambda)\), and similarly for the Higgs self-coupling, a prediction that can be tested at a future Higgs factory. Furthermore, one can build a specific model that includes a FECOS aimed at describing aspects of the Dark Cosmos (DC), for instance by including extra scalars and fermions to account for dark matter and the inflaton. This model, which we call the DC-SMEFT, includes renormalizable Lagrangians for both the SM and DC sectors, but it also contains higher-dimensional operators for both sectors, including their mixture. This model provides some distinctive signatures that could be probed on diverse fronts, ranging from colliders to cosmology.
**2. Natural models with Feeble Coupled Sector.** To discuss the problem of naturalness, let us consider a theory that includes an SM-like Higgs boson (\(h\)) which interacts with a heavy scalar \(S\), a gauge field \(V\) and a fermion \(F\). The corresponding expression for the associated quadratic divergences is:
\[m_{h}^{2}(\Lambda,\mu)=m_{h}^{2}(\Lambda)+\sum_{X=S,V,F}(-1)^{2J_{x}}(2J_{x}+1 )\frac{g_{x}}{16\pi^{2}}[\Lambda^{2}-m_{h}^{2}(\Lambda)\log\frac{\Lambda^{2}} {\mu^{2}}] \tag{1}\]
where \(\mu\) denotes the renormalization scale, and \(\Lambda\) denotes the momentum cutoff. Nowadays we understand better the meaning of this equation. Namely, when \(\Lambda\) corresponds to the UV cutoff, one simply has to renormalize this effect on the Higgs mass. Given that the SM is a renormalizable theory, even when one takes \(\Lambda\rightarrow\infty\), no observable effect is left. However, the real problem arises when \(\Lambda\) represents the mass of a heavy particle, i.e. \(\Lambda=M\): when it is integrated out, it leaves a correction to the Higgs mass proportional to the scale \(\Lambda=M\), which could be very large. Still, having a large but finite value for \(\Lambda\) is different from having to deal with a divergent term, even when it is of the order of the Planck mass, i.e. \(\Lambda<M_{pl}\); this was used in the past to constrain the Higgs boson mass [9]. In order to cancel the correction to the Higgs mass, which has to be of the order of the EW scale, one can have some relationship among the couplings or the masses [10]. The traditional solutions to this problem assume that the coupling constant \(g_{X}\) has a value similar to the other couplings in the SM, and imply the existence of new heavy particles, which can be searched for at colliders or in cosmology. SUSY [7] corresponds to the first case, and conformal symmetry is of the second type; these are two well-known possibilities to cancel the quadratic divergence.
Here we want to argue that there is also another possibility to bring the naturalness problem under control. We shall assume that some generic extension of the SM includes heavy particles, which induce a natural correction to the Higgs mass, of the order \(O(m_{W})\), because rather than having a coupling \(g_{x}\) of O(1), we assume that this coupling is feeble, such that \(g_{x}M_{X}^{2}\simeq m_{h}^{2}\) even for \(M_{X}\gg m_{W}\), which solves the naturalness problem. We argue that this resolution of the naturalness problem with Feeble-Coupled Sectors (FECOS) should be considered on the same footing as the other solutions, or perhaps as even better motivated, because it is based on the current observations. One example that proves this mechanism is feasible corresponds to the SM augmented with neutrino masses obtained through the see-saw mechanism [11]. In this case the correction to the Higgs mass is given by \(\delta m_{h}^{2}=y_{\nu}^{2}M^{2}/16\pi^{2}\), with the neutrino Yukawa coupling given by the see-saw relation \(y_{\nu}^{2}=m_{\nu}M/v^{2}\); for \(m_{\nu}=0.05\) eV, \(\delta m_{h}^{2}\) is of the order of the EW scale even for \(M\simeq 10^{8}\) GeV, and the \(\nu SM\) is natural even when it contains a heavy RH neutrino. Another class of models that falls within the FECOS category is the so-called Feebly Interacting Massive Particle (FIMP) scenario, which is a viable solution to the dark matter problem [12, 13]. The viability of this scenario is based on a different calculation of the DM relic density, the freeze-in mechanism, which relies on the feeble coupling of the FIMP candidate.
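As a rough numerical check of this estimate, the short Python sketch below evaluates \(\delta m_{h}^{2}=y_{\nu}^{2}M^{2}/16\pi^{2}\) with the see-saw relation \(y_{\nu}^{2}=m_{\nu}M/v^{2}\) for a few values of \(M\); the inputs \(v=246\) GeV and \(m_{\nu}=0.05\) eV are assumed reference values.

```
import math

v = 246.0          # electroweak vev [GeV] (assumed)
m_nu = 0.05e-9     # light neutrino mass, 0.05 eV expressed in GeV (assumed)

for M in (1e6, 1e7, 1e8):              # heavy RH neutrino mass [GeV]
    y_nu_sq = m_nu * M / v**2          # see-saw relation y_nu^2 = m_nu M / v^2
    dmh_sq = y_nu_sq * M**2 / (16 * math.pi**2)
    print(f"M = {M:.0e} GeV: y_nu^2 = {y_nu_sq:.2e}, "
          f"delta m_h = {math.sqrt(dmh_sq):.1f} GeV")
```

With these assumed inputs the correction stays comparable to the EW scale up to \(M\simeq 10^{7}\)-\(10^{8}\) GeV, illustrating how the feeble Yukawa coupling keeps the heavy state from destabilizing \(m_{h}\).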
Furthermore, the FECOS solution to the naturalness problem can be a generic one: it is of a bottom-up type, it accommodates a light Higgs boson, and within specific models it can explain neutrino masses and dark matter. In FECOS models we can have heavy particles, but with very small couplings, even suppressed by the Planck mass, \(m^{2}/M_{pl}\), as occurs in supergravity. We can also consider models with a gravitational sector that includes other FECOS fields that couple gravitationally with the SM, like the heavy graviton that appears in extra-dimensional models [14], with couplings suppressed by the volume of the extra-dimensional space, or by the curvature of the extra dimension in Randall-Sundrum 5D models, where the SM resides on a EW brane while the graviton and other fields propagate in the bulk. It is also interesting to discuss the effect of gravity on quadratic divergences, but as shown in [15], these corrections do not affect the Higgs mass.
**3. Naturalness in the SMEFT and the Dark Cosmos.** We would like to apply the FECOS solution of naturalness to discuss some aspects of the Dark Cosmos (DC), in particular dark matter. Our complete model will contain the SM sector, as well as a new DC sector of the FECOS type. One can look directly at the effects of the new sector, for instance in neutrino physics or dark matter searches. It is also possible to calculate their effects on the properties of the SM particles, in particular the Higgs boson, which can be described with an effective Lagrangian. In fact, we should notice that the level of divergence in the fundamental theory could be less severe than one would expect from the effective Lagrangian [16]. The effective Lagrangian includes renormalizable parts, both for the SM and for the new particles of the FECOS type, but it contains as well higher-dimensional operators for both sectors. For instance, the renormalizable part can include a mixing term of the form \((\Phi^{\dagger}\Phi)S^{2}\), between the SM Higgs doublet \(\Phi\) and an extra singlet \(S\). Then, after integrating out the singlet \(S\), one gets a new operator of the form \((\Phi^{\dagger}\Phi)^{3}\)
which modifies the Higgs properties. However, this operator comes from a sector with suppressed coupling and will be highly suppressed. Thus, within FECOS models the modification of the Higgs properties, such as the trilinear self-coupling, will only show tiny deviations from the SM, probably unobservable. However, besides the naturalness problem, there are other open issues in the SM that also demand some physics beyond the SM, such as the origin of flavor, CPV, unification, etc. As we do not have a solution to these problems, for the sake of generality, we shall consider an effective Lagrangian that describes the effects of those new particles and their interactions with the SM particles. This model could include extra fermions \(\psi_{a}\) and some scalars \(S_{i}\), within a minimal setting of the FECOS type. So far we do not have enough information to include gauge interactions for these particles, unlike the SM, so we can start by including a number of fields that play a role in the resolution of the problems of dark matter, inflation, and possibly the matter-antimatter asymmetry, similar to the \(\nu SM\) [17]. We call this model the Dark Cosmos extension of the SMEFT (DC-SMEFT), and its Lagrangian has the general form:
\[{\cal L}_{DC-SMEFT}={\cal L}_{SM}+{\cal L}_{DC}+\sum_{i,d}[\frac{\alpha_{i,d}}{\Lambda^{d-4}}O^{sm}_{d,i}+\frac{\beta_{i,d}}{\Lambda^{d-4}}O^{dc}_{d,i}] \tag{2}\]
Here \({\cal L}_{SM}\) and \({\cal L}_{DC}\) represent the renormalizable Lagrangians for the SM and the Dark Cosmos; in \({\cal L}_{DC}\) we can include kinetic terms, Yukawa interactions and a scalar potential. The higher \(d\)-dimensional operators \(O^{SM}_{i,d}\) involve only the SM fields, while \(O^{DC}_{i,d}\) involve the extra fields, as well as their mixing with the SM sector. For \(O^{SM}_{i,6}\) we can include the whole list of dimension-six operators [18], and as mentioned before, we are interested in the operators that modify the Higgs properties, which are discussed for the complete set of associated operators in [19]. One possible realization of this approach is the specific FIMP model presented in [20], where the DC sector includes three right-handed neutrinos (\(N_{i}\)) and a real scalar (\(S\)); two of the RH neutrinos participate in the see-saw mechanism to generate light neutrino masses, while the third one is the DM candidate. The vev of the scalar singlet generates the light neutrino masses through dimension-5 operators, and several scenarios with a stable FIMP are found to satisfy current constraints.
The associated phenomenology of these models can be studied, for instance, by looking for small deviations from the SM predictions. However, it could be even more interesting to look for rare effects that are included in the higher-dimensional operators. For instance, one can look at the effects of the operators included in the model of [20], but for an unstable FIMP particle, which can provide a distinctive signature of the model. In particular, one has a dimension-five term of the form \(L\tilde{\Phi}N_{i}S\), and after both the SM Higgs doublet \(\Phi\) and the singlet \(S\) acquire a vev, there is a mixing between the third RH neutrino, which is assumed to be the FIMP DM, and the light neutrinos, which will induce decays of the dark matter. The corresponding lifetime should be larger than the age of the universe for this scenario to be viable, but even after such a constraint is satisfied, it is possible to observe effects of decaying DM. One could also decouple the neutrino mass problem from the dark matter issue by adding an extra fermion, different from the RH neutrinos, which will play the role of FIMP dark matter, while the singlet scalar could still participate in the generation of the see-saw mechanism. A detailed analysis of the model is under study. A different model that also shows the viability of the scenario with a decaying FIMP is presented in [21].
**4. Conclusions.** We presented a discussion of the naturalness problem, and argued that
besides the well-known solutions to the naturalness problem (such as supersymmetry, conformal symmetry, compositeness, etc.), there is also another solution, namely the case when a new sector with heavy particles has feeble couplings with the Standard Model (SM), which induce natural corrections to the Higgs mass. This solution with Feeble Coupled Sectors (FECOS) can be used for model building too, for instance in the see-saw mechanism for neutrino masses, or in the FIMP dark matter model, but it is a generic one, and one can include models with heavy states that have Planck-suppressed couplings with the SM. The FECOS solution to the naturalness problem is of a bottom-up type: it accommodates a light Higgs boson, and within specific models it can explain neutrino masses and dark matter, while being in agreement with the absence of signals in colliders and in direct searches for DM.
Further aspects of naturalness can be discussed within the SMEFT framework, with higher-dimensional operators that induce small corrections to both the Higgs mass and its self-coupling; measuring the latter provides a strong motivation for a future Higgs factory. We presented a natural extension of the SMEFT that includes a FIMP dark matter candidate, which can provide new signals that test this whole framework. In particular, the decay of the DM particle could be a distinctive signature of this model.
|
2301.02053 | Max-Min Diversification with Fairness Constraints: Exact and
Approximation Algorithms | Diversity maximization aims to select a diverse and representative subset of
items from a large dataset. It is a fundamental optimization task that finds
applications in data summarization, feature selection, web search, recommender
systems, and elsewhere. However, in a setting where data items are associated
with different groups according to sensitive attributes like sex or race, it is
possible that algorithmic solutions for this task, if left unchecked, will
under- or over-represent some of the groups. Therefore, we are motivated to
address the problem of \emph{max-min diversification with fairness
constraints}, aiming to select $k$ items to maximize the minimum distance
between any pair of selected items while ensuring that the number of items
selected from each group falls within predefined lower and upper bounds. In
this work, we propose an exact algorithm based on integer linear programming
that is suitable for small datasets as well as a
$\frac{1-\varepsilon}{5}$-approximation algorithm for any $\varepsilon \in (0,
1)$ that scales to large datasets. Extensive experiments on real-world datasets
demonstrate the superior performance of our proposed algorithms over existing
ones. | Yanhao Wang, Michael Mathioudakis, Jia Li, Francesco Fabbri | 2023-01-05T13:02:35Z | http://arxiv.org/abs/2301.02053v1 | # Max-Min Diversification with Fairness Constraints:
###### Abstract
Diversity maximization aims to select a diverse and representative subset of items from a large dataset. It is a fundamental optimization task that finds applications in data summarization, feature selection, web search, recommender systems, and elsewhere. However, in a setting where data items are associated with different groups according to sensitive attributes like sex or race, it is possible that algorithmic solutions for this task, if left unchecked, will under- or over-represent some of the groups. Therefore, we are motivated to address the problem of _max-min diversification with fairness constraints_, aiming to select \(k\) items to maximize the minimum distance between any pair of selected items while ensuring that the number of items selected from each group falls within predefined lower and upper bounds. In this work, we propose an exact algorithm based on integer linear programming that is suitable for small datasets as well as a \(\frac{1-\epsilon}{5}\)-approximation algorithm for any \(\epsilon\in(0,1)\) that scales to large datasets. Extensive experiments on real-world datasets demonstrate the superior performance of our proposed algorithms over existing ones.
**Keywords:** max-min diversification, algorithmic fairness
## 1 Introduction
In recent years, algorithms have been increasingly used for data-driven automated decision-making in many domains of everyday life. This has raised concerns about the possibility that algorithms may produce unfair and discriminatory decisions for specific population groups, particularly in sensitive socio-computational domains such as voting, hiring, banking, education, and criminal justice [12, 25]. To alleviate such concerns, there has been a lot of research devoted to incorporating fairness into the algorithms for automated decision tasks, including classification [14], clustering [10], ranking [24, 32], matching [28], and data summarization [8, 20].
This paper considers the diversity maximization problem and addresses its fairness-aware variant. The problem consists in selecting a diverse subset of items from a given dataset and is encountered in data summarization [8, 23], web search [2], recommendation [21], feature selection [31], and elsewhere [34]. Existing literature on the problem of diversity maximization primarily focuses on two objectives, namely _max-min diversification_ (MMD), which aims to maximize the minimum distance between any pair of selected items, and _max-sum diversification_ (MSD), which seeks to maximize the sum of pairwise distances between selected items. As shown in Figure 1, MMD tends to cover the data range uniformly, while MSD tends to pick "outliers" and may include highly similar items in the solution. Since the notion of diversity captured by MMD better represents the property that data summarization, feature selection, and many other tasks target with their solutions, we will only consider MMD in this paper. To be precise, given a set \(V\) of \(n\) items in a metric space and a positive integer \(k\leq n\), MMD asks for a size-\(k\) subset \(S\) of \(V\) to maximize the minimum pairwise distance within \(S\).
In particular, we study the _fair max-min diversification_ (FMMD) problem, a variant of MMD that aims not only to maximize the diversity measure defined above but also to guarantee the satisfaction of group fairness constraints as described below. Let all the items in \(V\) be divided into \(C\) disjoint groups \(V_{1},\ldots,V_{C}\) by a sensitive attribute such as sex or race. To ensure a fair representation, the number of items selected from each group \(V_{c}\), where \(c\in[1,\ldots,C]\), is limited to be between lower and upper bounds specified as input. This definition of group fairness constraints captures and generalizes several existing notions of _fairness_ for groups, including proportional representation [7, 17], equal representation [19, 20], and statistical parity [14, 33], and has been widely used in optimization problems such as top-\(k\) ranking [9], submodular maximization [17], and multiwinner voting [7].
### Related Work
Erkut [15] proved that the MMD problem is NP-hard in metric spaces. Ravi _et al._[26] proposed a \(\frac{1}{2}\)-approximation greedy algorithm [16] for MMD and proved that no polynomial algorithm could achieve a better approximation factor unless P=NP. Recently, many different algorithms have been proposed for MMD in various settings. Indyk _et al._[18] proposed a \(\frac{1}{3}\)-approximation distributed algorithm for MMD based on the notion of _coresets_. Drosou and Pitoura [13] designed a \(\frac{b-1}{2b^{2}}\)-approximation cover tree-based algorithm for MMD on dynamic data, where \(b\) is the base of the cover tree. Ceccarello _et al._[6] proposed \((\frac{1}{2}-\varepsilon)\)-approximate MapReduce and streaming algorithms for MMD in metric spaces of bounded doubling dimension. Borassi _et al._[5] proposed a sliding-window algorithm for MMD. Nevertheless, none of the above algorithms are applicable to FMMD because they cannot guarantee the fulfillment of fairness constraints.
Moumoulidou _et al._[23] first proposed approximation algorithms for the fair variant of MMD. Addanki _et al._[1] improved the approximation ratios of the algorithms in [23]. Wang _et al._[30] proposed two streaming algorithms for the fair variant of MMD. However, these algorithms work for exact-size group fairness constraints, a special case of our bounded-size group fairness constraints. Moreover, as shown empirically, these algorithms provide lower-quality solutions than ours.
Besides MMD, many other optimization problems have similar group fairness-aware variants - e.g., determinantal point processes [8], \(k\)-centers [11, 19, 20], top-\(k\) ranking [9], submodular maximization [17, 29], and multiwinner voting [7]. However, since their objectives differ from MMD, the algorithms proposed for their fair variants are not directly applicable to FMMD.
### Our Results
The main results of this paper are two novel algorithms for the _fair max-min diversification_ (FMMD) problem, which selects a size-\(k\) subset \(S\) from a dataset \(V\) that maximizes the diversity value while satisfying group-fairness constraints.
We first propose \(\mathsf{FMMD}\)-E, an exact algorithm that is suitable for solving FMMD on small datasets, despite the NP-hardness of the problem. This algorithm exploits the connection between the MMD and maximum independent set (MIS) problems. It formulates FMMD as the problem of finding an independent set of vertices with group fairness constraints on an undirected graph. Then, the optimal solution of FMMD can be obtained in \(O(n^{k}\log n)\) time by solving the reduced problem via integer-linear programming (ILP).
Since \(\mathsf{FMMD}\)-E cannot scale to large datasets, we propose \(\mathsf{FMMD}\)-S, a more scalable approximation algorithm for FMMD. Specifically, for any \(\varepsilon\in(0,1)\), \(\mathsf{FMMD}\)-S provides \(\frac{1-\varepsilon}{5}\)-approximate solutions for FMMD in \(O\big{(}Ckn+C^{k}\log\frac{1}{\varepsilon}\big{)}\) time. Under the assumptions that \(k=o(\log n)\) and \(C=O(1)\), the time complexity of \(\mathsf{FMMD}\)-S is reduced to \(O\big{(}n(k+\log\frac{1}{\varepsilon})\big{)}\). The basic idea of \(\mathsf{FMMD}\)-S is, in the first step, to limit the computation to a considerably smaller subset of the original dataset (i.e., _coreset_) by running the greedy algorithm [16, 26] and, in the second step, to use an ILP-based method similar to \(\mathsf{FMMD}\)-E to obtain an approximate solution to FMMD from the subset.
Finally, we compare the performance of our algorithms with the state-of-the-art algorithms in [1, 23, 30] for the FMMD problem on real-world datasets. The results show that _i)_\(\mathsf{FMMD}\)-E provides exact solutions in reasonable time on small datasets (e.g., \(n=1,000\)); _ii)_\(\mathsf{FMMD}\)-S returns solutions of higher quality than existing approximation algorithms in comparable time while scaling to large datasets with millions of items.
Figure 1: An illustration of max-min diversification (MMD) vs. max-sum diversification (MSD). The items in the solutions are marked in blue color.
## 2 Preliminaries
In this section, we first formally define the FMMD problem, a fairness-aware variant of max-min diversification (MMD), and then provide its hardness result.
**Max-Min Diversification (MMD).** Let \(V\) be a set of \(n\) items and \(d:V\times V\rightarrow\mathbb{R}_{\geq 0}\) be a distance metric that captures the dissimilarities between items in \(V\). We remind that, by definition, \(d(\cdot,\cdot)\) satisfies the following properties for any \(u,v,w\in V\): _i)_\(d(u,v)=0\Leftrightarrow u=v\) (identity of indiscernibles); _ii)_\(d(u,v)=d(v,u)\) (symmetry); _iii)_\(d(u,v)+d(v,w)\geq d(u,w)\) (triangle inequality). For MMD, the diversity value \(div(S)\) of a subset \(S\subseteq V\) is defined as the minimum among all pairwise distances between distinct items in \(S\) - i.e., \(div(S)=\min_{u,v\in S\,:\,u\neq v}d(u,v)\). Given a set \(V\), a distance function \(d(\cdot,\cdot)\), and a positive integer \(k\leq n\), the MMD problem asks for a size-\(k\) subset \(S\) of \(V\) such that \(div(S)\) is maximized.
**Fair Max-Min Diversification (FMMD).** Let the set \(V\) be divided into \(C\) disjoint groups \(V_{1},\ldots,V_{C}\) by a sensitive categorical attribute, such as sex or race. For FMMD, the fairness-aware variant of MMD, the group fairness constraints restrict the selection of items from each group \(V_{c}\) for \(c\in[C]\) so that the number of items selected from \(V_{c}\) lies within a range of values from \(l_{c}\) to \(h_{c}\) (both inclusive). Meanwhile, it also requires that the total number of selected items is \(k\). Formally, the collection \(\mathcal{F}\) of all feasible solutions for FMMD is
\[\mathcal{F}=\{S\subseteq V:|S|=k\wedge l_{c}\leq|S\cap V_{c}|\leq h_{c}, \forall c\in[C]\}\]
and, to discard from consideration trivially empty sets \(\mathcal{F}\), we will further assume that \(l_{c}\leq h_{c}\leq|V_{c}|\) and \(\sum_{c=1}^{C}l_{c}\leq k\leq\sum_{c=1}^{C}h_{c}\). The FMMD problem asks for a subset \(S\) of \(V\) so that \(S\) satisfies the group fairness constraints (i.e., \(S\in\mathcal{F}\)) and \(div(S)\) is maximized, or formally, \(S^{*}=\operatorname*{arg\,max}_{S\in\mathcal{F}}div(S)\), where \(S^{*}\) and \(\texttt{OPT}=div(S^{*})\) denote the optimal solution of FMMD and its diversity value, respectively.
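As a concrete reference for these definitions, the Python sketch below computes \(div(S)\) and checks whether a candidate set belongs to \(\mathcal{F}\); the function signatures and the group encoding (a dictionary mapping items to group labels) are illustrative assumptions rather than part of any implementation discussed in the paper.

```
from itertools import combinations

def div(S, dist):
    # Minimum pairwise distance within S (the MMD/FMMD objective)
    return min(dist(u, v) for u, v in combinations(S, 2))

def is_feasible(S, group_of, k, lb, ub):
    # S is in F iff |S| = k and l_c <= |S ∩ V_c| <= h_c for every group c
    if len(S) != k:
        return False
    counts = {}
    for item in S:
        counts[group_of[item]] = counts.get(group_of[item], 0) + 1
    return all(lb[c] <= counts.get(c, 0) <= ub[c] for c in lb)
```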
**Hardness of FMMD.** By using a reduction from the Clique problem, MMD is proven to be NP-hard for general metric spaces and cannot be approximated within any factor greater than \(\frac{1}{2}\) unless P=NP [15, 26]. Nevertheless, a greedy algorithm provides the best possible \(\frac{1}{2}\)-approximate solution in \(O(kn)\) time [16]. Although the greedy algorithm does not work for FMMD directly, as it may provide solutions that do not fall within \(\mathcal{F}\) (i.e., are not "fair"), it will be used as a subroutine for our FMMD-S algorithm in Section 3.2 for data reduction. Since MMD is a special case of FMMD when \(C=1\) and \(l_{1}\leq k\leq h_{1}\), the hardness result for MMD can be generalized to FMMD as follows:
**Theorem 2.1**: _FMMD is NP-hard and cannot be approximated by a factor of \(\frac{1}{2}+\varepsilon\) for any parameter \(\varepsilon>0\) unless P=NP._
## 3 Algorithms
In this section, we describe our proposed algorithms for FMMD. First, we propose FMMD-E, an exact algorithm that runs in \(O(n^{k}\log n)\) time (Section 3.1). Second, we propose FMMD-S, a \(\frac{1-\varepsilon}{5}\)-approximation algorithm that runs in \(O\big{(}Ckn+C^{k}\log\frac{1}{\varepsilon}\big{)}\) time for any error parameter \(\varepsilon\in(0,1)\) (Section 3.2).
### An Exact ILP-Based Algorithm
To build an exact algorithm for FMMD, we use ideas similar to [3] for the reduction from MMD to maximum independent set (MIS). Given the set of feasible solutions \(\mathcal{F}\) and a positive real number \(\delta\), the decision version of FMMD asks whether there is a set \(S\subseteq V\) such that \(S\in\mathcal{F}\) and \(div(S)\geq\delta\). Given an instance of the FMMD decision problem, we build an undirected graph \(G=(V,E)\) as follows: the set of vertices in \(G\) is identical to \(V\) and there is an edge between two vertices \(u,v\in V\) if and only if \(d(u,v)<\delta\). We remind that a vertex set \(S\) is called an _independent set_ iff no two vertices in \(S\) are adjacent. Moreover, we define the _Fair Independent Set_ (FIS) problem that determines whether there exists an independent vertex set \(S\in\mathcal{F}\) on the graph \(G\). Based on the above definitions, the lemma below asserts the equivalence between FMMD and FIS.
**Lemma 3.1**: _FMMD is equivalent to FIS._
Proof.: In the one direction, assume that the answer to FMMD is '_yes_' - i.e., there is a subset \(S\in\mathcal{F}\) of \(V\) with \(div(S)\geq\delta\). Then, we have \(d(u,v)\geq\delta\) for any \(u,v\in S\). Thus, by construction, there is no edge \((u,v)\in E\), and \(S\) is an independent vertex set of \(G\). Therefore, the answer to FIS is '_yes_' as well. In the opposite direction, assume that the answer to FIS is '_yes_' - i.e., \(S\in\mathcal{F}\) is an independent set. By definition, there is no edge between any of
its vertices in \(G\), which by construction means that \(d(u,v)\geq\delta\) for any \(u,v\in S\) and, therefore, we have \(div(S)\geq\delta\) for the given \(S\in\mathcal{F}\). Therefore, the answer to FMMD is also '_yes_'. We thus prove that the answer to FMMD is '_yes_' if and only if the answer to FIS is '_yes_', which concludes the proof.
Additionally, we have two observations for FMMD, which are easy to verify from its definition.
**Fact 1**: (Monotonicity) _If there exists a set \(S\in\mathcal{F}\) with \(div(S)\geq\delta\), then such a set will exist for any \(\delta^{\prime}\leq\delta\); If there does not exist any set \(S\in\mathcal{F}\) with \(div(S)\geq\delta\), then such a set will not exist for any \(\delta^{\prime}\geq\delta\)._
**Fact 2**: (Discontinuity) _The optimal diversity value_ OPT _for FMMD is always equal to the distance \(d(u,v)\) between some pair of items \(u,v\in V\)._
From all the above results, the following theorem asserts that FMMD is reducible to FIS.
**Theorem 3.1**: _An exact solution of FMMD is obtained by solving \(O(\log n)\) FIS instances._
Let us consider the following algorithm. First, compute and sort the distances between all pairs of items in \(V\). Then, use a binary search on the sorted array of pairwise distances to find the largest \(d^{*}\) such that the answer to its corresponding FIS instance is '_yes_'. The binary search finds \(d^{*}\) in \(O(\log n)\) steps, as the number of pairwise distances is \(O(n^{2})\). It also holds that \(div(S^{*})\geq d^{*}\) from Lemma 3.1. Fact 1 guarantees that there does not exist any \(S\in\mathcal{F}\) with \(div(S)>d^{*}\) due to the maximality of \(d^{*}\). Fact 2 ensures that \(d^{*}\) is exactly equal to OPT. Thus, the above procedure identifies the exact solution to FMMD.
**The ILP Formulation of FMMD.** In light of Theorem 3.1, what remains to obtain an exact algorithm for FMMD is to design an exact algorithm for FIS. We note that FIS without fairness constraints is equivalent to the _maximum independent set_ (MIS) problem. We thus adapt the edge-based integer-linear programming (ILP) formulation of MIS by adding fairness constraints to define an FIS instance, as shown in Eq. 3.1-3.5.
\[\max z=\sum_{i=1}^{n}x_{i} \tag{3.1}\] \[\text{s.t.} x_{i}+x_{j}\leq 1,\forall(v_{i},v_{j})\in E\] (3.2) \[\sum_{i=1}^{n}x_{i}\leq k\] (3.3) \[l_{c}\leq\sum_{v_{i}\in V_{c}}x_{i}\leq h_{c},\forall c\in[C]\] (3.4) \[x_{i}\in\{0,1\},\forall i\in[n] \tag{3.5}\]
Figure 2: Example for the reduction from Fair Max-Min Diversification (FMMD) to Fair Independent Set (FIS) on a dataset with \(n=10\) points and \(C=2\) groups in blue and red. An FMMD instance with \(k=5\), \(l_{c}=2\) and \(h_{c}=3\) for \(c=1,2\) is reduced to an FIS instance where a fair independent set of vertices is represented by triangles.
where \(x_{i}\) is a binary variable to indicate whether \(v_{i}\in V\) is included in the solution or not, the objective function in Eq. 3.1 and the first constraint in Eq. 3.2 are the same as the edge-based ILP formulation of MIS, the second constraint in Eq. 3.3 limits the solution size to at most \(k\), and the third constraint in Eq. 3.4 is on the upper and lower bounds of the number of items chosen from each group \(V_{c}\). By solving the ILP in Eq. 3.1-3.5 optimally, we will either find a fair independent set \(S=\{v_{i}\in V:x_{i}=1,i\in[n]\}\) of \(G\) if \(z=k\) or confirm that there does not exist such a set if \(z<k\).
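As an illustration of how this decision subproblem can be handed to an off-the-shelf solver, the sketch below encodes Eq. 3.1-3.5 with gurobipy (the authors report using the Gurobi optimizer); the function name, argument layout, and data structures are our own assumptions.

```
import gurobipy as gp
from gurobipy import GRB

def solve_fis(dist, groups, k, lb, ub, delta):
    """Solve the FIS instance of Eq. 3.1-3.5 for threshold delta; return chosen indices."""
    n = len(dist)
    m = gp.Model("FIS")
    m.Params.OutputFlag = 0                                  # silence solver output
    x = m.addVars(n, vtype=GRB.BINARY, name="x")             # Eq. 3.5
    m.setObjective(gp.quicksum(x[i] for i in range(n)), GRB.MAXIMIZE)   # Eq. 3.1
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i][j] < delta:                           # edge (v_i, v_j) of G
                m.addConstr(x[i] + x[j] <= 1)                # Eq. 3.2
    m.addConstr(gp.quicksum(x[i] for i in range(n)) <= k)    # Eq. 3.3
    for c, members in groups.items():                        # Eq. 3.4
        m.addConstr(gp.quicksum(x[i] for i in members) >= lb[c])
        m.addConstr(gp.quicksum(x[i] for i in members) <= ub[c])
    m.optimize()
    if m.Status != GRB.OPTIMAL:
        return []
    return [i for i in range(n) if x[i].X > 0.5]
```

If the returned set has exactly \(k\) items, it certifies a feasible solution with \(div(S)\geq\delta\); a smaller set means no fair independent set of size \(k\) exists for this threshold.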
**Algorithm Description and Complexity.** By combining the constructive proof of Theorem 3.1 and the ILP formulation of FMMD, we obtain \(\mathsf{FMMD}\)-E, an exact algorithm for FMMD, as presented in Algorithm 1. First, it computes the distances of all \(\frac{n(n-1)}{2}\) pairs of distinct items in \(V\) in \(O(n^{2})\) steps and sorts them in ascending order in an array \(D[1,\ldots,\frac{n(n-1)}{2}]\) in \(O(n^{2}\log n)\) steps. Then, a binary search is performed on \(D\) to find \(\mathsf{OPT}\) in \(O(\log n)\) steps. For each guess \(D[cur]\) of \(\mathsf{OPT}\), it builds an undirected graph \(G\) in \(O(n^{2})\) steps and finds a set \(S\) by solving the ILP in Eq. 3.1-3.5 in \(\binom{n}{k}=O(n^{k})\) steps. If \(|S|=k\), then \(S\in\mathcal{F}\) and \(div(S)\geq D[cur]\). In this case, the search space is narrowed to the upper half to check whether there is a better solution. Otherwise, i.e., if \(|S|<k\), then \(\mathsf{OPT}<D[cur]\) and the search space is narrowed to the lower half. Finally, when the binary search is terminated, the algorithm has found the exact solution \(S^{*}\) to FMMD. The time complexity of \(\mathsf{FMMD}\)-E is \(O(n^{k}\log n)\). Moreover, since \(|D|=|E|=O(n^{2})\), its space complexity is \(O(n^{2})\).
```
Input: Dataset \(V=\bigcup_{c=1}^{C}V_{c}\) with \(n=|V|\); lower and upper bounds \(l_{c},h_{c}\in\mathbb{Z}^{+}\) for \(c\in[C]\); size constraint \(k\in\mathbb{Z}^{+}\).
Output: A set \(S^{*}\subseteq V\) such that \(S^{*}\in\mathcal{F}\).
1: Compute the distances of all pairs of items in \(V\) and sort them ascendingly as \(D[1,\ldots,\frac{n(n-1)}{2}]\)
2: Let \(L\gets 1\), \(H\leftarrow\frac{n(n-1)}{2}\), \(cur\leftarrow\frac{L+H}{2}\), \(S^{*}\leftarrow\emptyset\)
3: while \(H>L\) do
4:   Build an undirected graph \(G=(V,E)\) where \(E=\{(u,v)\in V\times V\mid d(u,v)<D[cur]\}\)
5:   Compute the solution \(\mathbf{x}\) of the ILP in Eq. 3.1-3.5
6:   Find a set \(S=\{v_{i}\in V:x_{i}=1,i\in[n]\}\) based on \(\mathbf{x}\)
7:   if \(|S|=k\) then
8:     If \(div(S)>div(S^{*})\) or \(S^{*}=\emptyset\), then \(S^{*}\gets S\)
9:     Let \(L\gets cur+1\) and \(cur\leftarrow\frac{L+H}{2}\)
10:  else
11:    Let \(H\gets cur-1\) and \(cur\leftarrow\frac{L+H}{2}\)
12: return \(S^{*}\)
```
**Algorithm 1**\(\mathsf{FMMD}\)-E
### A More Scalable Approximation Algorithm
The main drawback of \(\mathsf{FMMD}\)-E is that it cannot handle large datasets due to exponential complexity. Standard optimization libraries, such as CPLEX1 and Gurobi2, can only solve ILPs with up to several thousand variables optimally in a reasonable time. A natural approach to addressing this challenge is to identify a "_coreset_", i.e., a small subset of the original dataset on which the exact algorithm is run to look for approximate solutions. Formally, a subset \(V^{\prime}\subseteq V\) is called an \(\alpha\)-coreset (\(0\leq\alpha\leq 1\)) of \(V\) for FMMD if \(\mathsf{OPT}[V^{\prime}]\geq\alpha\cdot\mathsf{OPT}\), where \(\mathsf{OPT}[V^{\prime}]\) is the optimal diversity value for FMMD on \(V^{\prime}\).
Footnote 1: www.ibm.com/products/ilog-cplex-optimization-studio
Footnote 2: www.gurobi.com/products/gurobi-optimizer/
It now remains to answer _i)_ how such a coreset is built and _ii)_ what approximation factor is obtained. For _i)_, we are inspired by the notion of _composable coresets_[18, 31] for MMD in streaming and distributed settings. The basic idea is first to partition the dataset and run the greedy algorithm of [16] on each partition to obtain a partial solution and then compute a final solution from the union of partial solutions. In the context of FMMD, the dataset is naturally divided into \(C\) groups. Thus, we first find a solution from each group, then consider the union of all group-specific solutions as our _coreset_, and finally use \(\mathsf{FMMD}\)-E to obtain a solution from the coreset, which is feasible since the coreset size is small. We refer to the resulting algorithm as \(\mathsf{FMMD}\)-S. For _ii)_, we prove that the obtained solution offers an approximation factor of \(\frac{1-\varepsilon}{5}\) for any \(\varepsilon\in(0,1)\).
**Algorithm Description.**\(\mathsf{FMMD}\)-S is described in Algorithm 2. Initially, it invokes the greedy algorithm on \(V\) without fairness constraints to compute an initial solution \(U\) (Lines 1-3). Note that the greedy algorithm is \(\frac{1}{2}\)-approximate for MMD, and any feasible solution of FMMD must also be feasible for MMD. Therefore, the optimal
diversity OPT of FMMD is bounded by \(2\cdot div(U)\). Subsequently, the algorithm divides \(U\) by group into \(U_{1},\ldots,U_{C}\) and guesses OPT as its upper bound \(d^{\prime}=2\cdot div(U)\) (Line 4). For each \(c\in[C]\), it runs the greedy algorithm to add new items from \(V_{c}\) to \(U_{c}\) until \(|U_{c}|=k\) or there does not exist any \(v\in V_{c}\) to make \(div(U_{c}\cup\{v\})\geq d^{\prime}\) (Lines 5-9). At this point, each \(U_{c}\) is a partial group-specific solution, and the union \(V^{\prime}=\bigcup_{c}U_{c}\) of partial solutions is the _coreset_. Next, using a similar procedure to FMMD-E, it builds a graph \(G\) on \(V^{\prime}\) with diversity threshold \(\frac{d^{\prime}}{2}\) and solves the ILP of Eq. 3.1-3.5 on \(G\) to obtain a solution \(S\) (Lines 11-12). Finally, if \(|S|=k\), we have got a solution \(S\in\mathcal{F}\) with \(div(S)\geq\frac{d^{\prime}}{2}\) and \(S\) will be returned as the final solution; otherwise, \(d^{\prime}\) is decreased by a factor of \(1-\varepsilon\), where \(\varepsilon\in(0,1)\) is an error parameter, and the above procedure is executed again for the smaller \(d^{\prime}\) until a feasible solution \(S\) is found (Lines 13-16).
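For reference, the classical greedy (farthest-point) subroutine invoked in Lines 1-3 and reused per group in Lines 5-9 can be sketched in Python as follows; the signature and the choice of the starting item are illustrative assumptions.

```
import numpy as np

def greedy_mmd(points, k, dist, start=0):
    # Gonzalez-style greedy: repeatedly add the item farthest from the current
    # solution; gives a 1/2-approximation for (unconstrained) MMD.
    n = len(points)
    solution = [start]
    d_to_sol = np.array([dist(points[start], points[i]) for i in range(n)])
    while len(solution) < k:
        nxt = int(np.argmax(d_to_sol))      # farthest remaining item
        solution.append(nxt)
        d_to_sol = np.minimum(
            d_to_sol,
            np.array([dist(points[nxt], points[i]) for i in range(n)]))
    return solution
```

In \(\mathsf{FMMD-S}\), the per-group runs additionally stop as soon as no remaining item of \(V_{c}\) is at distance at least \(d^{\prime}\) from the current partial solution, which is what makes the union \(V^{\prime}\) of the partial solutions a coreset.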
**Theoretical Analysis.** Next, we give the complexity and approximation factor of FMMD-S.
**Theorem 3.2**.: \(\mathsf{FMMD-S}\) _is a \(\frac{1-\varepsilon}{5}\)-approximation algorithm for FMMD running in \(O\big{(}Ckn+C^{k}\log\frac{1}{\varepsilon}\big{)}\) time._
Proof.: If there is any set \(S^{\prime}\subseteq V^{\prime}\) s.t. \(S^{\prime}\in\mathcal{F}\) and \(div(S^{\prime})\geq\frac{d^{\prime}}{2}\), then \(\mathsf{FMMD-S}\) identifies such \(S^{\prime}\) (Line 12) from the exact solution of the ILP in Eq. 3.1-3.5. In addition, since the greedy algorithm (Lines 1-3) is \(\frac{1}{2}\)-approximate [26], the initial value of \(d^{\prime}\) is at least \(2\cdot\frac{\mathsf{OPT}}{2}=\mathsf{OPT}>\frac{2}{5}\cdot\mathsf{OPT}\). Therefore, to prove the approximation factor, it suffices to show that there exists some \(S^{\prime}\subseteq V^{\prime}\) s.t. \(S^{\prime}\in\mathcal{F}\) and \(div(S^{\prime})\geq\frac{d^{\prime}}{2}\) when \(d^{\prime}\in[\frac{2(1-\varepsilon)}{5}\cdot\mathsf{OPT},\frac{2}{5}\cdot\mathsf{OPT}]\).
Towards this end, we next construct such a set \(S^{\prime}\) from \(V^{\prime}\). Let \(S^{*}\) be the optimal solution for FMMD on \(V\), and \(S^{*}_{c}=S^{*}\cap V_{c}\) be its subset from group \(c\). First, we initialize \(S^{\prime}=\emptyset\). Then, we consider two cases for different groups. We consider first the groups of Case #1 in arbitrary order, then those of Case #2 in arbitrary order, and select \(|S^{*}_{c}|\) items from each group \(c\) into \(S^{\prime}\).
Case #1 (\(|U_{c}|<k\)): Let \(f:V_{c}\to U_{c}\) map each item \(v\in V_{c}\) to its nearest neighbor \(f(v)\) in \(U_{c}\). Note that the condition in Line 7 ensures that \(d(v,f(v))<d^{\prime}\) for every \(v\in V_{c}\). For each item \(s_{c,i}\in S^{*}_{c}\), we add item \(f(s_{c,i})\) into \(S^{\prime}\). We now show that the added items are distinct. Indeed, if \(f(s_{c,i})\equiv f(s_{c,j})\) for \(i\neq j\), then the triangle inequality would give \(d(s_{c,i},s_{c,j})\leq d(s_{c,i},f(s_{c,i}))+d(s_{c,j},f(s_{c,j}))=d(s_{c,i},f(s_{c,i}))+d(s_{c,j},f(s_{c,i}))<2\cdot d^{\prime}\leq\frac{4}{5}\cdot\mathsf{OPT}<\mathsf{OPT}\); however, at the same time we have \(d(s_{c,i},s_{c,j})\geq\mathsf{OPT}\) because \(s_{c,i},s_{c,j}\in S^{*}\), which leads to a contradiction. Moreover, because we have identified for each \(s_{c,i}\in S^{*}_{c}\) one distinct item in \(U_{c}\), we have \(|U_{c}|\geq|S^{*}_{c}|\). After processing all the groups in Case #1, we have \(d(f(s^{*}_{i}),f(s^{*}_{j}))\geq d(s^{*}_{i},s^{*}_{j})-d(s^{*}_{i},f(s^{*}_{i}))-d(s^{*}_{j},f(s^{*}_{j}))>\mathsf{OPT}-2d^{\prime}\geq\frac{\mathsf{OPT}}{5}\) for any \(f(s^{*}_{i}),f(s^{*}_{j})\in S^{\prime}\) and thus \(div(S^{\prime})>\frac{\mathsf{OPT}}{5}\).
Case #2 (\(|U_{c}|=k\)): Let \(g:U_{c}\to S^{\prime}\) map each item \(u\in U_{c}\) to its nearest neighbor \(g(u)\) in the current instance of \(S^{\prime}\). We remove from \(U_{c}\) every \(u\in U_{c}\) with \(d(u,g(u))<\frac{d^{\prime}}{2}\). Because the condition of Line 7 ensures \(d(u_{c,i},u_{c,j})\geq d^{\prime}\) for any \(i\neq j\), there is at most one item removed for each item in \(S^{\prime}\) - otherwise, the triangle
inequality would give \(d(u_{c,i},u_{c,j})<d^{\prime}\), thus leading to a contradiction. Therefore, at least \(k-|S^{\prime}|\) items remain in \(U_{c}\). Moreover, \(|S^{*}_{c}|\leq k-|S^{\prime}|\) because \(S^{\prime}\) always contains the same number of items from each considered group as \(S^{*}\) throughout the construction process. We pick \(|S^{*}_{c}|\) items from the remaining ones and add them to \(S^{\prime}\). After this operation, we still have \(div(S^{\prime})\geq\frac{d^{\prime}}{2}\geq\frac{1-\varepsilon}{5}\cdot \texttt{OPT}\) since our earlier removal of items from \(U_{c}\) ensured \(d(u_{c,i},g(u_{c,i}))\geq\frac{d^{\prime}}{2}\) for each added \(u_{c,i}\). Finally, after processing all groups in Case #2, we get a set \(S^{\prime}\) that contains the same number of items from each group \(c\in[C]\) as \(S^{*}\), which implies that \(S^{\prime}\in\mathcal{F}\), and \(div(S^{\prime})\geq\frac{1-\varepsilon}{5}\cdot\texttt{OPT}\). Therefore, we conclude that \(\texttt{FMMD-S}\) is a \(\frac{1-\varepsilon}{5}\)-approximation algorithm for FMMD.
Since it takes \(O(nk)\) time to compute \(U\) as well as \(U_{c}\) for each \(c\in[C]\), the total time to compute \(V^{\prime}\) is \(O(Ckn)\) and \(|V^{\prime}|\leq Ck\). Then, the time to solve the ILP in Eq. 3.1-3.5 for \(\texttt{FMMD-S}\) is \(O(C^{k})\) because there are at most \(\binom{Ck}{k}=O(C^{k})\) possible solutions to consider. Moreover, the number of iterations for \(d^{\prime}\) is \(O(\log\frac{1}{\varepsilon})\) since the ratio between the first and last values of \(d^{\prime}\) is \(O(1)\). Thus, the time complexity of \(\texttt{FMMD-S}\) is \(O\big{(}Ckn+C^{k}\log\frac{1}{\varepsilon}\big{)}\). When \(k=o(\log n)\) and \(C=O(1)\), its time complexity is reduced to \(O\big{(}n(k+\log\frac{1}{\varepsilon})\big{)}\). Additionally, its space complexity is \(O(n+C^{2}k^{2})\) since the number of edges in \(G\) is \(O(C^{2}k^{2})\).
## 4 Experimental Evaluation
### Experimental Setup
In this section, we conduct extensive experiments to evaluate the performance of our proposed algorithms, i.e., \(\texttt{FMMD-E}\) and \(\texttt{FMMD-S}\). We compare them with the state-of-the-art FMMD algorithms, including \(\texttt{FairSwap}\), \(\texttt{FairFlow}\), and \(\texttt{FairGMM}\) in [23], \(\texttt{FairGreedyFlow}\) in [1], and \(\texttt{SFDM1}\) and \(\texttt{SFDM2}\) in [30]. We implemented all the above algorithms in Python 3 using the NetworkX library for building and manipulating graphs and the Gurobi optimizer for solving ILPs. All the experiments were carried out on a desktop with an Intel Core i5-9500 3.0GHz processor and 32GB RAM running Ubuntu 20.04.3 LTS. Each algorithm was run on a single thread. All data and code are publicly available at [https://osf.io/te34m/](https://osf.io/te34m/).
We use four public real-world datasets listed in Table 1, where \(dim\) is the dimensionality of the feature vector. The detailed information and preprocessing procedures on each dataset are described in Appendix A. The fairness constraints in our experiments are defined according to the _proportional representation_[7, 17]: For each group \(c\in[C]\), we set \(l_{c}=\max(1,(1-\alpha)k\cdot\frac{|V_{c}|}{n})\) and \(h_{c}=(1+\alpha)k\cdot\frac{|V_{c}|}{n}\) with \(\alpha=0.2\) in \(\texttt{FMMD-E}\) and \(\texttt{FMMD-S}\) and \(k_{c}=\lceil k\cdot\frac{|V_{c}|}{n}\rceil\) or \(\lfloor k\cdot\frac{|V_{c}|}{n}\rfloor\) in all other algorithms. All the algorithms were executed ten times in each experiment. We report the average running time and average diversity value of the solutions provided by each algorithm. We use 'N/A' to indicate that an algorithm either does not find a solution within one day or does not work when \(C>2\) (i.e., \(\texttt{FairSwap}\) and \(\texttt{SFDM1}\)). In the preliminary experiments (see Appendix B), we find that the solution quality of FMMD-S hardly improves when \(\varepsilon\) is decreased below 0.05 and so we fix \(\varepsilon=0.05\) for FMMD-S in all the remaining experiments.
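For reference, the proportional-representation bounds used above can be computed directly from the group sizes as in the short Python sketch below; rounding the bounds to integers (floor for \(l_{c}\), ceiling for \(h_{c}\)) is our own assumption, since the text only states the formulas \(l_{c}=\max(1,(1-\alpha)k\cdot|V_{c}|/n)\) and \(h_{c}=(1+\alpha)k\cdot|V_{c}|/n\).

```
import math
from collections import Counter

def proportional_bounds(group_labels, k, alpha=0.2):
    # Per-group lower/upper bounds under proportional representation
    n = len(group_labels)
    sizes = Counter(group_labels)
    lb = {c: max(1, math.floor((1 - alpha) * k * sz / n)) for c, sz in sizes.items()}
    ub = {c: math.ceil((1 + alpha) * k * sz / n) for c, sz in sizes.items()}
    return lb, ub
```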
### Experimental Results
Table 2 shows the diversity achieved by different algorithms on "small" datasets, i.e., datasets that were obtained by sampling 1,000 items uniformly at random from each full dataset. Figures 3-4 illustrate the performance of different algorithms on small datasets with varying \(k\). Note that \(\texttt{FairSwap}\) and \(\texttt{SFDM1}\) are specific for the case of \(C=2\) and \(\texttt{FairGMM}\) fails to finish within one day when \(C>3\) or \(k>10\) since it has to enumerate \(\binom{kC}{k}\) sets for solution computation. They are ignored in subsequent tables and figures when they cannot provide valid solutions.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Dataset** & **Group** & \(C\) & \(n\) & \(dim\) & **Distance Metric** \\ \hline \multirow{3}{*}{Adult} & Sex & 2 & & & \\ & Race & 5 & 48,842 & 6 & \(l_{2}\)-distance \\ & S+R & 10 & & & \\ \hline \multirow{3}{*}{CelebA} & Sex & 2 & & & \\ & Age & 2 & 202,599 & 25,088 & \(l_{1}\)-distance \\ & S+A & 4 & & & \\ \hline \multirow{3}{*}{Census} & Sex & 2 & & & \\ & Age & 7 & 2,426,116 & 25 & \(l_{1}\)-distance \\ & S+A & 14 & & & \\ \hline Twitter & Sex & 3 & 18,836 & 1,024 & Angular distance \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of datasets used in our experiments
In general, \(\mathsf{FMMD-E}\) always provides optimal solutions for FMMD within the time limit (i.e., 24 hours) when \(n=1,000\). The _price of fairness_, measured by the decrease in diversity due to the fairness constraints, is marginal on all datasets except _Adult_ with \(C\geq 5\). \(\mathsf{FMMD-S}\) shows much higher solution quality (up to \(3.9\times\) greater in diversity value) than all approximation algorithms except \(\mathsf{FairGMM}\). Although \(\mathsf{FairGMM}\) sometimes provides slightly better solutions than \(\mathsf{FMMD-S}\), it runs more than two orders of magnitude slower. Moreover, the diversity values of all algorithms drop with \(k\) because \(div\) is a monotonically non-increasing function. The running time is independent of \(k\) for \(\mathsf{FMMD-E}\), grows exponentially with \(k\) for \(\mathsf{FairGMM}\), and increases linearly with \(k\) for \(\mathsf{FMMD-S}\) and other algorithms, which all follow from their time complexities.
Table 3 presents the diversity values and running time of different algorithms for solution size \(k=50\) on all the full datasets. The performance of different algorithms by varying the solution size \(k\) from 10 to 100 on full datasets (with all the items) is presented in Figures 5-6. In general, the running time of all algorithms increases substantially with \(n\) and \(dim\). \(\mathsf{FairGMM}\) and \(\mathsf{FMMD-E}\) fail to finish within one day and thus are omitted from Table 3. \(\mathsf{FMMD-S}\) provides better solutions (up to \(7.2\times\) higher in diversity value) than all the baselines in most cases. The only exception is that \(\mathsf{FMMD-S}\) shows slightly lower solution quality than \(\mathsf{FairSwap}\) and \(\mathsf{SFDM1}\) on _CelebA_ when \(C=2\). This is because of the extremely high dimensionality of _CelebA_ (\(d=25,088\)), where the distances between different pairs of points are less distinguishable. In such cases, the thresholding method in \(\mathsf{FMMD-S}\) is inferior to the local search methods in \(\mathsf{FairSwap}\) and \(\mathsf{SFDM1}\). Moreover, the time efficiency of \(\mathsf{FMMD-S}\) is lower than the baselines, as solving ILPs is often time-consuming. Nevertheless, on _Census_, i.e., the largest dataset with more than two million items, \(\mathsf{FMMD-S}\) still finishes the computation within 3 hours.
To evaluate the scalability of different algorithms, we vary the number \(C\) of groups and the number \(n\) of points on synthetic datasets. In particular, each dataset consists of ten two-dimensional Gaussian isotropic blobs
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline
**Dataset** & **Group** & \(\mathsf{FairSwap}\) & \(\mathsf{FairFlow}\) & \(\mathsf{FairGMM}\) & \(\mathsf{FairGreedyFlow}\) & \(\mathsf{SFDM1}\) & \(\mathsf{SFDM2}\) & \(\mathsf{FMMD-E}\) & \(\mathsf{FMMD-S}\) & \(\mathsf{OPT^{*}}\) \\ \hline \multirow{3}{*}{Adult} & Sex & 4.51 & 3.24 & 4.81 & 2.18 & 4.03 & 4.18 & **5.30** & 4.64 & \\ & Race & N/A & 1.73 & N/A & 1.33 & N/A & 2.83 & **4.54** & 4.01 & 5.30 \\ & S+R & N/A & 0.84 & N/A & 0.99 & N/A & 2.04 & **3.12** & 2.88 & \\ \hline \multirow{3}{*}{CelebA} & Sex & 101457.3 & 55540.7 & 127354.6 & 46266.9 & 94873.3 & 93216.6 & **129818.2** & 106959.3 & \\ & Age & 110098.1 & 54649.6 & 127871.2 & 46312.3 & 102762.9 & 91578.2 & **129871.5** & 116701.6 & 129871.5 \\ \cline{1-1} & S+A & N/A & 42412.7 & N/A & 39967.2 & N/A & 88026.7 & **127974.6** & 108055.3 & \\ \hline \multirow{3}{*}{Census} & Sex & 28.4 & 15.8 & 29.8 & 14.7 & 27.2 & 28.0 & **34.0** & 30.3 & 35.0 \\ \cline{1-1} & Age & N/A & 7.6 & N/A & 9.3 & N/A & 15.7 & **34.0** & 30.3 & \\ \hline Twitter & Sex & N/A & 1.23 & 1.44 & 1.23 & N/A & 1.39 & **1.51** & 1.46 & 1.51 \\ \hline \end{tabular}
\end{table}
Table 2: Diversity values of the solutions returned by different algorithms on small datasets for solution size \(k=10\). The optimums \(\mathsf{OPT^{*}}\) without fairness constraints are reported to show “the price of fairness”.
Figure 3: Diversity values of the solutions of different algorithms with varying solution size \(k\) on small datasets.
with random centers in \([-10,10]^{2}\) and identity covariance matrices. Each point is assigned to one of the \(C\) groups uniformly at random. The Euclidean distance is used as the distance metric. For fixed \(C=2\) or \(10\), we obtain six datasets with \(n=10^{2},10^{3},\ldots,10^{7}\); and for fixed \(n=1,000\), ten datasets with \(C=2,4,\ldots,20\).
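A minimal sketch of this data-generation procedure (our own illustration in Python, not the authors' code; the per-blob point counts are not specified in the text, so below each point is assigned to a blob uniformly at random):

```python
import numpy as np

def synthetic_dataset(n, C, seed=0):
    """n points from 10 isotropic 2-D Gaussian blobs with random centers in
    [-10, 10]^2 and identity covariance; each point gets a group label drawn
    uniformly at random from {0, ..., C-1}."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-10.0, 10.0, size=(10, 2))
    blob = rng.integers(0, 10, size=n)            # blob membership of each point
    X = centers[blob] + rng.normal(size=(n, 2))   # identity-covariance noise
    groups = rng.integers(0, C, size=n)           # uniform random group labels
    return X, groups                              # Euclidean distance is used on X

X, groups = synthetic_dataset(n=1_000, C=4)
```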
The performance of different algorithms by varying \(n\) and \(C\) on synthetic datasets for solution size \(k=20\) is presented in Figure 7. In terms of solution quality, the diversity values are steady for FMMD-E and FMMD-S but significantly drop for all other algorithms when \(C\) increases. In terms of efficiency, the running time of FMMD-E is hardly affected by \(C\). Other algorithms run slower when \(C\) is larger. Nevertheless, FMMD-S runs faster than any other algorithm when \(C\geq 8\), and its advantages in time efficiency become more significant with increasing \(C\). Finally, all algorithms' diversity values and running time grow with the dataset size \(n\). FMMD-E cannot scale to large datasets due to its exponential time complexity. All in all, FMMD-S outperforms all the other approximation algorithms in terms of solution quality for different \(C\) or \(n\), and its advantages become more apparent when \(C\) or \(n\) is larger. These results confirm the scalability of FMMD-S concerning the group size \(C\) and dataset size \(n\).
## 5 Conclusion
We investigated the problem of max-min diversification with fairness constraints (FMMD) in this paper. We proposed an exact ILP-based algorithm for this problem on small datasets. We further designed a scalable \(\frac{1-\varepsilon}{5}\)-approximation algorithm, where \(\varepsilon\in(0,1)\), on massive datasets based on our exact algorithm and the notion of _coresets_. Extensive experimental results on four real-world datasets confirmed the effectiveness, efficiency, and scalability of our proposed algorithms.
Figure 4: Running time of different algorithms with varying solution size \(k\) on small datasets.
\begin{table}
\begin{tabular}{c l r r r r r r r r r r r r} \hline \hline
\multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Group**} & \multicolumn{2}{c}{FairSwap} & \multicolumn{2}{c}{FairFlow} & \multicolumn{2}{c}{FairGreedyFlow} & \multicolumn{2}{c}{SFDM1} & \multicolumn{2}{c}{SFDM2} & \multicolumn{2}{c}{FMMD-S} \\
 & & **diversity** & **time(s)** & **diversity** & **time(s)** & **diversity** & **time(s)** & **diversity** & **time(s)** & **diversity** & **time(s)** & **diversity** & **time(s)** \\ \hline
\multirow{3}{*}{Adult} & Sex & 2.55 & 28.45 & 2.10 & 26.47 & 1.46 & 99.61 & 2.86 & 8.17 & 3.22 & 13.71 & **3.56** & 46.38 \\
 & Race & \multicolumn{2}{c}{N/A} & 1.43 & 28.47 & 0.78 & 91.24 & \multicolumn{2}{c}{N/A} & 2.86 & 15.81 & **3.56** & 48.10 \\
 & S+R & \multicolumn{2}{c}{N/A} & 0.99 & 32.64 & 0.50 & 148.12 & \multicolumn{2}{c}{N/A} & 2.55 & 23.35 & **3.61** & 45.51 \\ \hline
\multirow{3}{*}{CelebA} & Sex & 125865.8 & 3093.9 & 101303.9 & 2624.1 & 57696.8 & 3558.0 & **128450.2** & 1473.4 & 117342.5 & 1354.4 & 123639.0 & 4536.4 \\
 & Age & 112387.6 & 4141.2 & 74679.4 & 2598.8 & 55470.0 & 2260.8 & **126446.9** & 1024.0 & 121056.9 & 1222.3 & 113798.5 & 4626.6 \\
 & S+A & \multicolumn{2}{c}{N/A} & 29278.4 & 2589.5 & 34066.4 & 3871.3 & \multicolumn{2}{c}{N/A} & 118598.7 & 1141.7 & **129866.3** & 4752.5 \\ \hline
\multirow{3}{*}{Census} & Sex & 22.8 & 1533.3 & 16.8 & 1250.4 & \multicolumn{2}{c}{N/A} & 22.3 & 328.7 & 24.4 & 459.14 & **28.6** & 3268.8 \\
 & Age & \multicolumn{2}{c}{N/A} & 5.2 & 1461.5 & \multicolumn{2}{c}{N/A} & \multicolumn{2}{c}{N/A} & 12.5 & 662.76 & **16.0** & 2728.2 \\
 & S+A & \multicolumn{2}{c}{N/A} & 3.4 & 1723.5 & \multicolumn{2}{c}{N/A} & \multicolumn{2}{c}{N/A} & 11.1 & 170.98 & **15.0** & 9224.8 \\ \hline
Twitter & Sex & \multicolumn{2}{c}{N/A} & 1.08 & 120.85 & 1.04 & 783.7 & \multicolumn{2}{c}{N/A} & 1.33 & 85.56 & **1.38** & 208.52 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Results of different algorithms on full datasets for solution size \(k=50\). FairGMM and FMMD-E are omitted because they cannot provide any solution within the time limit (i.e., 24 hours).
While our work represents a step forward in both the theoretical and experimental aspects of algorithms for the fair variant of diversity maximization, it leaves many open problems for future exploration. A natural question is whether there is any polynomial-time \(O(1)\)-approximation algorithm for FMMD: our \(\mathsf{FMMD}\)-S algorithm has an approximation factor of \(\frac{1-\varepsilon}{5}\) but runs in polynomial time only when \(C=O(1)\) and \(k=o(\log n)\), whereas the best-known polynomial-time algorithm in [1] only achieves an approximation factor of \(\frac{1}{C+1}\). Moreover, it would also be interesting to study the fairness-aware variants of other diversity measures (e.g., the ones in [4, 18]).
|
2304.09900 | Constraints on physical computers in holographic spacetimes | Within the setting of the AdS/CFT correspondence, we ask about the power of
computers in the presence of gravity. We show that there are computations on
$n$ qubits which cannot be implemented inside of black holes with entropy less
than $O(2^n)$. To establish our claim, we argue computations happening inside
the black hole must be implementable in a programmable quantum processor, so
long as the inputs and description of the unitary to be run are not too large.
We then prove a bound on quantum processors which shows many unitaries cannot
be implemented inside the black hole, and further show some of these have short
descriptions and act on small systems. These unitaries with short descriptions
must be computationally forbidden from happening inside the black hole. | Aleksander M. Kubicki, Alex May, David Pérez-Garcia | 2023-04-19T18:00:50Z | http://arxiv.org/abs/2304.09900v2 | # Constraints on physical computers in holographic spacetimes
###### Abstract
Within the setting of the AdS/CFT correspondence, we ask about the power of computers in the presence of gravity. We show that there are computations on \(n\) qubits which cannot be implemented inside of black holes with entropy less than \(O(2^{n})\). To establish our claim, we argue computations happening inside the black hole must be implementable in a programmable quantum processor, so long as the inputs and description of the unitary to be run are not too large. We then prove a bound on quantum processors which shows many unitaries cannot be implemented inside the black hole, and further show some of these have short descriptions and act on small systems. These unitaries with short descriptions must be computationally forbidden from happening inside the black hole.
## 1 Introduction
Complexity theory deals with the power of mathematical models of computation. It is generally believed that these models capture the computational abilities of physical computers, but making this connection precise is difficult. For instance, considering a quantum circuit model we may be tempted to equate circuit depth with the time needed to implement the computation on a physical computer. By assuming a bound on energy, that connection can be made precise via the Margolus-Levitin theorem [1]. For any given unitary however, a Hamiltonian can be constructed which implements that unitary arbitrarily quickly, even at bounded energy [2]. This means that in this Hamiltonian model of computation, an energy bound doesn't suffice to relate computational and physical notions of time. Observations such as this one leave it unclear how to connect the limits of physical computers and mathematical models of computation.
In this article we make a preliminary step towards understanding the limits of physical computers. To consider the full set of constraints on physical computers, and the full physical setting that can be exploited by a computer, we consider computation in the context of quantum gravity. We work within the framework of AdS/CFT, which claims an equivalence between quantum gravity in asymptotically anti de Sitter (AdS) spaces and a purely quantum mechanical theory (a conformal field theory, the CFT) living at the boundary of that spacetime. Our main result is a construction of a family of unitaries that a computer operating inside of a black hole with entropy \(S_{bh}\) cannot perform, where the
computation is on \(n\) qubits with \(\log S_{bh}\leq n\ll S_{bh}\) and the family we construct is of size \(2^{o(S_{bh})}\). Because \(n\ll S_{bh}\), the inputs to the computation do not themselves couple strongly to gravity. Instead, it must be the computation on these small inputs that is restricted.
While we are ultimately interested in the physical limits of computers in our universe, working within the context of the AdS/CFT correspondence gives us a precise framework for quantum gravity. As well, a fundamental observation in computer science is that the power of computers is robust to "reasonable" changes in the details of the computing model: classical computers can be described in terms of Turing machines, uniform circuits, etc. and the resources needed to solve a given computational problem will change only polynomially. Quantum computers are similarly robust. This robustness suggests understanding the power of computers in AdS is likely to yield insights that apply more broadly.
Naively, the AdS/CFT duality between a bulk quantum gravity theory and a quantum mechanical boundary suggests that the power of computers in quantum gravity should be equivalent in some way to that of quantum computers. We can imagine simulating the CFT on a quantum computer, and thereby producing the outcomes of any computations run in the dual bulk picture. This approach is complicated however by the possibility that the mapping between the boundary CFT description and the bulk gravity description is exponentially complex [3; 4; 5; 6]. Consequently, determining the result of the bulk computation from the boundary simulation may itself be highly complex, allowing for a discrepancy in efficiencies between the bulk and boundary. An intriguing observation is that this leaves open the possibility of a quantum gravity computer being much more powerful than a quantum computer [7].
In this work, we give a strategy to restrict bulk computation using the existence of the boundary quantum mechanical description. The crucial property of the bulk to boundary map we assume is state independence, which we have in AdS/CFT when reconstructing suitably small bulk subsystems. We also use that this map is isometric.1 The state independence of the bulk to boundary map allows us to relate bulk computation to programmable quantum processors, a well studied notion in quantum information theory. Using tools from functional analysis, we give a bound on the average case behaviour of programmable processors.
Footnote 1: For experts, one comment is that in our context we are assured that the relevant bulk states are physical states in the fundamental description. We are _not_ claiming that the entire bulk effective field theory (EFT) Hilbert space can be recovered from the CFT. In fact, the map from the EFT Hilbert space to the CFT Hilbert space is expected to be non-isometric [8].
Beyond the quantum processor bound, we use additional input from quantum gravity: we assume that we cannot pass more than a black hole's area worth of qubits into the black hole (a special case of the covariant entropy bound), and we use that the boundary CFT has a "short" description.2 To reach the strongest version of our result, we will also make an assumption that a computation which is forbidden from happening inside a black hole also cannot be implemented inside of a smaller one.
Footnote 2: In particular we argue the CFT data can be specified in \(O(\log(L_{AdS}/G_{N}))=O(\log(c))\) bits.
Before proceeding, we note that another strategy to constrain bulk computation using the boundary description was suggested in [9], and similar ideas appear in [10; 11]. That strategy involves noting that bulk computations are supported, in a sense that can be
made precise, by boundary entanglement. The finite entanglement between distant boundary subregions can then be used to place constraints on the size of inputs for some bulk computations, and it has been further suggested that better understanding of entanglement requirements in non-local computation may lead to computational constraints.
**Summary of our thought experiment and result**
The basic setting in which we constrain computation is shown in figure 1, where we consider a two sided black hole. A quantum system \(A\) is recorded into bulk degrees of freedom \(a\) and thrown into the black hole from the left asymptotic boundary, and a second system \(P\) is recorded into bulk degrees of freedom \(p\) and thrown in from the right. System \(A\) initially holds a state \(\ket{\psi}_{A}\), and \(P\) holds a description of a unitary that needs to be performed, along with any computing device to be used to perform it. We will impose that the computer is built from a much smaller number of degrees of freedom than the black hole we are throwing it into, so that \(n_{p}\ll S_{bh}\).3 Otherwise, we can remain agnostic as to the design and functioning of this computer -- it might exploit some exotic quantum gravitational effects in performing its computation. We aim to have the computer produce the state \(\mathbf{U}\ket{\psi}_{a}\), which will be stored somewhere in the black hole. We assume that a global reconstruction of the \(\mathcal{H}_{a}\) Hilbert space from the joint Hilbert space of both CFT's exists, and we require the reconstruction procedure is independent of the unitary to be performed.4 Thus there is some isometry \(\mathbf{R}\) that maps \(\mathcal{H}_{A}\otimes\mathcal{H}_{CFT}\rightarrow\mathcal{H}_{A}\otimes \mathcal{H}_{E}\), where \(\mathcal{H}_{A}\) holds the state \(\mathbf{U}\ket{\psi}_{A}\) if the bulk computation has succeeded.
Footnote 3: In the main text we will relax this to allow \(n_{p}\leq S_{bh}\), which is enforced in the bulk by the covariant entropy bound. To do so and still find an interesting constraint, we will need to invoke the additional assumption that going to a smaller black hole never adds computational power. We consider the simpler setting where \(n_{p}\ll S_{bh}\) in the introduction.
Footnote 4: As we will review, this is justified when \(n_{a}+n_{p}\ll S_{bh}\).
To relate this setting to quantum information theory, consider the notion of a quantum programmable processor. An exact programmable processor is an isometry \(\mathbf{T}\) which acts according to
\[\mathbf{T}_{AP\to AE}(\ket{\psi}_{A}\ket{\phi_{\mathbf{U}}}_{P})=( \mathbf{U}_{A}\ket{\psi}_{A})\ket{\phi^{\prime}_{\mathbf{U}}}_{E}. \tag{1}\]
Figure 1: A two sided black hole, with systems \(a\) and \(p\) falling in from opposite sides. The state on \(P\) describes a unitary, which should be applied to the state on \(A\).
We will also consider approximate notions of a quantum processor. The \(P\) Hilbert space holds a state \(|\phi_{\bf U}\rangle\) which we call a program state, and which specifies a unitary \({\bf U}\) to be applied. We will consider non-universal programmable processors, which have program states for only some finite set of unitaries.
Returning to our black hole, we note that we can view the insertion of the relevant degrees of freedom, time evolution, and the recovery operation as the action of a quantum processor. This is because once the program state is prepared, the remaining operations used to carry out the computation -- inserting these systems into the bulk, allowing the black hole to time evolve, then recovering the output system -- are all independent of the program state, and can be viewed as a particular choice of isometry \({\bf T}\) that acts according to equation 1. We discuss the definition of \({\bf T}\) in more detail later on, but note here that it is fixed by the description of the CFT and of the initial state of the black hole.
Quantum processors are subject to constraints. Consider processors that implement a family of diagonal unitaries on \(n_{A}\) qubits,
\[{\cal E}=\{{\bf U}^{\varepsilon}:{\bf U}^{\varepsilon}={\rm diag}(\varepsilon_ {1},\varepsilon_{2},...,\varepsilon_{2^{n_{A}}}),\varepsilon_{i}\in\pm 1\}. \tag{2}\]
For this family, one can show that an isometry \({\bf T}\) performs poorly, on average, at implementing a randomly chosen unitary \({\bf U}^{\varepsilon}\in{\cal E}\) whenever the number of qubits in the program state is sub-exponential in the number of data qubits. In particular, we will show that the probability \(p({\bf T},{\bf U}^{\varepsilon})\) of successfully applying the unitary5 satisfies the bound
Footnote 5: We define this probability more precisely in the main text.
\[\mathbb{E}_{\varepsilon}\,p({\bf T},{\bf U}^{\varepsilon})\leq C\frac{n_{P}} {2^{n_{A}}} \tag{3}\]
where \(n_{P}\) is the number of qubits in the program state, the average \(\mathbb{E}_{\varepsilon}\) is over all values of \(\varepsilon\), and \(C\) is a constant.
Returning to the holographic setting, take
\[n_{P}\ll S_{bh},\qquad\log CS_{bh}\leq n_{A}\ll S_{bh}. \tag{4}\]
The upper bound on \(n_{P}\) is our imposition that we are considering a computer built of many fewer degrees of freedom than the black hole. We are free to choose \(n_{A}\) as we like, and take \(n_{A}\ll S_{bh}\) to ensure the inputs to the computation fit easily into the black hole. The lower bound on \(n_{A}\) ensures \(Cn_{P}/2^{n_{A}}\) will be small and our processor bound consequently non-trivial. Inside this regime, the bound 3 implies that some unitaries \({\bf U}^{\varepsilon}\) can be implemented in the bulk only with probability less than 1. By itself this is no surprise: to specify an arbitrary \({\bf U}^{\varepsilon}\) requires \(2^{n_{A}}\) bits (the signs \(\varepsilon_{i}\)), so for some \({\bf U}^{\varepsilon}\) the program state of \(n_{P}\ll S_{bh}\leq 2^{n_{A}}/C\) qubits contains too few qubits to specify the unitary, preventing the bulk computer from applying it.
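As a rough numerical illustration (the parameter values here are our own choice, made only for definiteness), taking the smallest allowed input size \(n_{A}=\log(CS_{bh})\) already gives \(\mathbb{E}_{\varepsilon}\,p({\bf T},{\bf U}^{\varepsilon})\leq Cn_{P}/2^{n_{A}}=n_{P}/S_{bh}\ll 1\): for a black hole with \(S_{bh}\sim 10^{60}\) this corresponds to an input register of only a few hundred qubits, and the average success probability is already far below one.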
More surprising is that there are also unitaries with short descriptions that cannot be implemented in the bulk. To construct one, notice that the \({\bf U}^{\varepsilon}\) inherit an ordering from the strings \(\varepsilon\). Choosing some threshold \(\delta<1\), we have from the bound 3 that some unitaries cannot be completed with probability higher than \(\delta\). We define \({\bf U}^{\overline{\varepsilon}}\) as the first unitary
which the processor \({\bf T}\) defined by our setting can't complete with probability more than \(\delta\). In the main text we argue the CFT and the initial state can be efficiently described, using in particular \(O(\log S_{bh})\) qubits, which means the description of these forbidden unitaries is small enough to be recorded into \(n_{P}\). Thus inside the black hole the computer holds a complete description of the unitary \({\bf U}^{\overline{\varepsilon}}\) to be applied, but by construction the computer must fail to apply \({\bf U}^{\overline{\varepsilon}}\), since otherwise the programmable processor \({\bf T}\) would succeed.
This construction shows that there are at least some computations which cannot be performed inside the black hole, despite there being no information theoretic reason they shouldn't be (i.e. the unitary is fully specified, and the inputs are available). Consequently, it is a computational restriction that forbids these unitaries from happening -- we have shown that the bulk quantum gravity computer cannot implement arbitrary computations, and in particular cannot implement the explicit computation we constructed.
To better understand the workings of our bulk computer, it is interesting to ask how hard it is to implement the computations we've shown to be forbidden. In particular, what is their complexity, when considering for example a quantum circuit model of computation? We argue that in the regime 1.4, the computation that implements the needed unitary requires circuits with memory at least \(CS_{bh}\) and depth at least \(2^{S_{bh}}\). Assuming the physical computer has similar space and time requirements would suffice as a bulk explanation for why these computations are forbidden.
**Summary of notation**
We briefly recall some asymptotic notation used in computer science and employed here. We will use
\[f(x)=O(g(x)) \iff\lim_{x\to\infty}\frac{f(x)}{g(x)}<\infty,\] \[f(x)=o(g(x)) \iff\lim_{x\to\infty}\frac{f(x)}{g(x)}=0,\] \[f(x)=\omega(g(x)) \iff\lim_{x\to\infty}\frac{f(x)}{g(x)}=\infty,\] \[f(x)=\Theta(g(x)) \iff 0<\lim_{x\to\infty}\frac{f(x)}{g(x)}<\infty.\]
In words, big \(O\) means \(f(x)\) grows not much faster than \(g(x)\), little \(o\) means \(f(x)\) grows more slowly than \(g(x)\), little \(\omega\) means \(f(x)\) grows faster than \(g(x)\), and \(\Theta\) means \(g(x)\) and \(f(x)\) grow at the same rate. Some other notation:
* We use capital Latin letters for quantum systems \(A\), \(B\),..., except when they are bulk subsystems, in which case we use lower case Latin letters \(a,b,...\), etc.
* We use bold face capital Latin letters for unitaries and isometries, \({\bf T}\), \({\bf U}\), etc.
## 2 Programmable processors
In this section we define the notion of a programmable processor more carefully, then give a bound on a particular class of processors.
### Universal and non-universal quantum processors
A classical computer functions according to the following basic structure. We input some data recorded in a string, call it \(x\), and a program, call it \(P\). Then the computer applies the program to the input data, producing output \(P(x)\). When any program can be input to the computer in this way, we say the computer is universal.
In the quantum context the analogue is known as a universal processor. In this setting a program amounts to a specification of a unitary, and the input data is a quantum state. The overall action of a processor is given by an isometry \(\mathbf{T}_{AP\to AE}\), which satisfies
\[\mathbf{T}_{AP\to AE}(\ket{\psi}_{A}\otimes\ket{\phi_{\mathbf{U}}}_{P})=( \mathbf{U}_{A}\ket{\psi}_{A})\ket{\phi^{\prime}_{\mathbf{U}}}_{E}. \tag{1}\]
In [12], the notion of universal quantum processor was defined, and it was shown that for each distinct unitary (up to a phase) the processor can implement, an orthogonal program state is needed. Because there are an infinite number of distinct unitaries, no universal processor can exist in the exact setting.
Giving up on a universal quantum processor we can consider similar but weaker objects that might be possible to construct. One possibility is to consider approximate universal processors, allowing for some error tolerance in applying the unitary \(\mathbf{U}\). Such approximate universal processors can be constructed [13], and it is known that any such construction needs the dimension of the program Hilbert space to scale exponentially with the dimension of the input Hilbert space [14]. Another route is to consider finite families of unitaries, and look for processors that apply only elements of this family, either exactly or approximately.
In this work, we will make use of results on this second notion of a quantum processor, which we now define more fully.
**Definition 1**.: A quantum processor \(\mathbf{T}:\mathcal{H}_{A}\otimes\mathcal{H}_{P}\rightarrow\mathcal{H}_{A} \otimes\mathcal{H}_{E}\) is said to implement the family of unitaries \(\mathcal{U}\) if for each \(\mathbf{U}\in\mathcal{U}\) there is a state \(\ket{\phi_{\mathbf{U}}}\in\mathcal{H}_{P}\) such that
\[\mathrm{tr}_{E}\,\mathbf{T}(\ket{\psi}\!\bra{\psi}_{A}\otimes\ket{\phi_{ \mathbf{U}}}\!\bra{\phi_{\mathbf{U}}}_{P})\mathbf{T}^{\dagger}=\mathbf{U}\ket{ \psi}\!\bra{\psi}\mathbf{U}^{\dagger} \tag{2}\]
holds for all \(\ket{\psi}\). We also call such a construction a \(\mathcal{U}\)-processor.
To define a notion of an approximate \(\mathcal{U}\)-processor, one approach would be to require 2.2 holds approximately for all \(\mathbf{U}\). Instead, we will define a quantity which captures how close to a \(\mathcal{U}\)-processor an isometry is in an averaged sense.
**Definition 2**.: **(Processor testing scenario)** Consider an isometry \(\mathbf{T}:\mathcal{H}_{A}\otimes\mathcal{H}_{P}\rightarrow\mathcal{H}_{A} \otimes\mathcal{H}_{E}\). The \(\mathcal{U}\)-processor testing scenario is as follows.
1. Choose \(\mathbf{U}_{A}\in\mathcal{U}\) uniformly and at random.
2. Choose a state \(\ket{\phi_{\mathbf{U}}}_{P}\in\mathcal{H}_{P}\). Apply \(\mathbf{T}\) to \(\ket{\Psi}_{\overline{A}A}\otimes\ket{\phi_{\mathbf{U}}}_{P}\), where \(\overline{A}\) is a reference system and \(\ket{\Psi}_{\overline{A}A}\) is the maximally entangled state.
3. Measure the POVM \(\{\mathbf{U}\ket{\Psi}\!\bra{\Psi}\mathbf{U}^{\dagger},\mathcal{I}-\mathbf{U} \ket{\Psi}\!\bra{\Psi}\mathbf{U}^{\dagger}\}\).
The probability of passing this test is, using the optimal choice of program state, given by
\[p(\mathbf{T},\mathcal{U})\equiv\mathbb{E}_{\mathbf{U}\in\mathcal{U}}\sup_{|\phi_{\mathbf{U}}\rangle}\text{tr}\Big{(}\mathbf{U}\,|\Psi\rangle\!\langle\Psi|\,\mathbf{U}^{\dagger}\mathbf{T}(|\Psi\rangle\!\langle\Psi|\otimes|\phi_{\mathbf{U}}\rangle\!\langle\phi_{\mathbf{U}}|)\mathbf{T}^{\dagger}\Big{)}. \tag{3}\]
The quantity \(p(\mathbf{T},\mathcal{U})\) gives our quantification of how close to a \(\mathcal{U}\)-processor \(\mathbf{T}\) is.
### Lower bounds on quantum processors
Below, we will show that \(\mathcal{U}\)-processors are constrained by the size of their program Hilbert spaces. We will be interested in processors implementing the family of unitaries
\[\mathcal{E}=\{\mathbf{U}^{\varepsilon}:\mathbf{U}^{\varepsilon}=\text{diag}(\varepsilon_{1},\varepsilon_{2},...,\varepsilon_{d_{A}}),\varepsilon_{i}\in\pm 1\}. \tag{4}\]
This family is of particular interest because it can be related to the notion of type constants in the theory of Banach spaces, which will be the technical tool that eventually leads to our bound.
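For orientation, note that an exact \(\mathcal{E}\)-processor does exist if one is willing to pay an exponentially large program register: storing the sign string in \(d_{A}\) program qubits and applying each sign controlled on the corresponding program bit implements every \(\mathbf{U}^{\varepsilon}\) exactly, with \(\log d_{P}=d_{A}\). The following minimal numpy sketch (ours, added for illustration and not taken from the references) checks this construction on a small example; since the program states are computational basis states and the processor is diagonal, the program register can be carried along classically.

```python
import numpy as np
from itertools import product

n_A = 2
d_A = 2 ** n_A          # data dimension; the program register has d_A (qu)bits

def processor(psi, b):
    # Action of T on |psi>_A |b>_P with b a computational-basis program string:
    # |i>_A |b>_P -> (-1)^{b_i} |i>_A |b>_P, so the diagonal unitary U^eps with
    # eps_i = (-1)^{b_i} is applied to the data register and |b>_P is unchanged.
    signs = np.array([(-1.0) ** bit for bit in b])
    return signs * psi, b

rng = np.random.default_rng(0)
psi = rng.normal(size=d_A) + 1j * rng.normal(size=d_A)
psi /= np.linalg.norm(psi)

for eps in product([1, -1], repeat=d_A):
    b = [(1 - e) // 2 for e in eps]                # program string encoding eps
    out, _ = processor(psi, b)
    assert np.allclose(out, np.array(eps) * psi)   # equation 2.2 holds exactly
print("exact E-processor with log d_P = d_A =", d_A, "program qubits")
```

Consistent with the theorem below, achieving success probability one here required \(\log d_{P}=d_{A}\), i.e. a program register exponentially large in the number of data qubits.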
We now state the main claim of this section.
**Theorem 3**.: _(Bound on \(\mathcal{E}\)-processors) Given an isometry \(\mathbf{T}:\mathcal{H}_{A}\otimes\mathcal{H}_{P}\rightarrow\mathcal{H}_{A} \otimes\mathcal{H}_{E}\), we have_
\[p(\mathbf{T},\mathcal{E})\leq\frac{C\log d_{P}}{d_{A}} \tag{5}\]
_where \(C\) is a constant._
This will be the technical statement used in the next section, and the reader uninterested in the proof may skip ahead. In the rest of this section we explain some tools needed and then give the proof. Note that this result is similar to the bound given in [14], both in the techniques we will use to prove it and in the statement. The only distinction is that in [14] they give a lower bound on the dimension of the program space in terms of a measure of the worst case performance of the processor. We can read the above as a lower bound on \(d_{P}\) in terms of the performance of the processor on a particular state, the maximally entangled one, which can also be related to the average case performance of the processor.
The central mathematical structure we will exploit is the notion of a Banach space, and the theory of type constants. A Banach space \(\mathcal{B}\) is a vector space equipped with a norm \(||\cdot||_{\mathcal{B}}\), and which is complete under that norm. This can be compared to the more familiar notion of Hilbert space, which is a vector space with an inner product \(\langle\cdot,\cdot\rangle\), and which is complete under the norm induced by that inner product \(||x||=\sqrt{\langle x,x\rangle}\). Notice that every Hilbert space is also a Banach space, but the reverse is not true.
Type constants are certain numerical values associated with a given Banach space \(\mathcal{B}\) that characterize, in a sense we explain, how far from being a Hilbert space \(\mathcal{B}\) is. In particular, if a norm is defined by an inner product, it carries with it additional structure beyond what is usually given by a norm. For example, in a Hilbert space we have
\[||x+y||^{2}+||x-y||^{2}=2||x||^{2}+2||y||^{2}. \tag{6}\]
How badly a Banach space can violate this equality then gives some notion of how far it is from being a Hilbert space. This motivates the following definition, which follows [15]. We will only exploit the type 2 constants, but give a more general definition for completeness.
**Definition 4**.: Let \(\mathcal{B}\) be a Banach space and let \(1\leq p\leq 2\). We say \(\mathcal{B}\) is of type \(p\) if there exists a positive constant \(t\) such that for every natural number \(n\) and every sequence \(\{x_{i}\}_{i=1}^{n}\), \(x_{i}\in\mathcal{B}\) we have
\[\left(\mathbb{E}_{\varepsilon}\left[\left|\left|\sum_{i=1}^{n}\varepsilon_{i} x_{i}\right|\right|_{\mathcal{B}}^{2}\right]\right)^{1/2}\leq t\left(\sum_{i=1}^{n} \left|\left|x_{i}\right|\right|_{\mathcal{B}}^{p}\right)^{1/p}. \tag{7}\]
The infimum of the constants \(t\) that satisfy this condition is the type \(p\) constant of \(\mathcal{B}\), which we denote \(t_{\mathcal{B},p}\).
Note that in a Hilbert space \(\mathcal{H}\), we always have \(t_{\mathcal{H},2}=1\).
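This follows by expanding the square and using that the signs are independent with \(\mathbb{E}[\varepsilon_{i}\varepsilon_{j}]=\delta_{ij}\):

\[\mathbb{E}_{\varepsilon}\left|\left|\sum_{i=1}^{n}\varepsilon_{i}x_{i}\right|\right|^{2}=\sum_{i,j=1}^{n}\mathbb{E}_{\varepsilon}\left[\varepsilon_{i}\varepsilon_{j}\right]\langle x_{i},x_{j}\rangle=\sum_{i=1}^{n}||x_{i}||^{2},\]

so the defining inequality holds with \(t=1\) for \(p=2\), and considering a single vector shows no smaller constant is possible.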
It is also helpful to introduce the Banach space formed by linear operators acting on a Hilbert space. Given an operator \(\mathcal{O}:\mathcal{H}\rightarrow\mathcal{H}^{\prime}\) define the operator norm,
\[||\mathcal{O}||_{\infty}=\sup_{\left|\psi\right\rangle\in\operatorname{Ball}( \mathcal{H})}||\mathcal{O}\left|\psi\right\rangle||_{\mathcal{H}^{\prime}} \tag{8}\]
where \(\operatorname{Ball}(\mathcal{H})\) is the unit ball in Hilbert space \(\mathcal{H}\). Then \(\mathcal{L}(\mathcal{H}^{\prime},\mathcal{H})\), the space of linear operators mapping \(\mathcal{H}\) into \(\mathcal{H}^{\prime}\) which also have bounded operator norm, forms a Banach space. Considering the case of finite dimensional spaces, the type 2 constant of \(\mathcal{L}(\mathcal{H}^{\prime},\mathcal{H})\) can be bounded above according to [15, 16]
\[t_{\mathcal{L}(\mathcal{H}^{\prime},\mathcal{H}),2}\leq C\sqrt{\min\{\log\dim \mathcal{H},\log\dim\mathcal{H}^{\prime}\}}. \tag{9}\]
With these ingredients, we are able to give the proof of theorem 3.
Proof.: **(Of theorem 3)** We introduce the notation
\[\left|\Psi_{\varepsilon}\right\rangle_{AR}\equiv\mathbf{U}_{A}^{\varepsilon} \left|\Psi\right\rangle_{AR}\]
and will denote the choice of program states by \(\left|\phi_{\varepsilon}\right\rangle\). The success probability \(p(\mathbf{T},\mathcal{E})\) is expressed as
\[p(\mathbf{T},\mathcal{E})=\mathbb{E}_{\varepsilon}\sup_{\left|\phi_{ \varepsilon}\right\rangle}\operatorname{tr}\left[\left|\Psi_{\varepsilon} \right\rangle\!\!\left\langle\Psi_{\varepsilon}\right|(\mathbf{T}\left|\Psi \right\rangle\!\!\left\langle\Psi\right|\otimes\left|\phi_{\varepsilon} \right\rangle\!\!\left\langle\phi_{\varepsilon}\right|\mathbf{T}^{\dagger}) \right]=\mathbb{E}_{\varepsilon}\sup_{\left|\phi_{\varepsilon}\right\rangle}|| \left\langle\Psi_{\varepsilon}\right|\mathbf{T}(\left|\Psi\right\rangle\otimes \left|\phi_{\varepsilon}\right\rangle)||_{E}^{2} \tag{10}\]
where \(||\left|\psi\right\rangle_{E}||_{E}=\sqrt{\left\langle\psi\right|\psi\right\rangle}\) is the usual norm on the Hilbert space \(\mathcal{H}_{E}\). Using that \(\left|\Psi\right\rangle_{AR}\) is the maximally entangled state, and that
\[\left|\Psi_{\varepsilon}\right\rangle_{AR}=\frac{1}{\sqrt{d_{A}}}\sum_{i=1}^{ d_{A}}\varepsilon_{i}\left|i\right\rangle_{A}\left|i\right\rangle_{R} \tag{11}\]
we obtain
\[p(\mathbf{T},\mathcal{E})=\frac{1}{d_{A}^{2}}\mathbb{E}_{\varepsilon} \sup_{\left|\phi_{\varepsilon}\right\rangle}\left|\left|\sum_{i=1}^{d_{A}} \varepsilon_{i}\left\langle i\right|_{A}\mathbf{T}(\left|i\right\rangle_{A} \otimes\left|\phi_{\varepsilon}\right\rangle_{P})\right|\right|_{E}^{2}. \tag{12}\]
Define \(\mathbf{T}_{i}\equiv\left\langle i\right|_{A}\mathbf{T}\left|i\right\rangle_{A}\), which is a linear map from \(P\) to \(E\). Then the above becomes
\[p(\mathbf{T},\mathcal{E})=\frac{1}{d_{A}^{2}}\mathbb{E}_{\varepsilon}\sup_{ \left|\phi_{\varepsilon}\right\rangle}\left|\left|\sum_{i=1}^{d_{A}} \varepsilon_{i}\mathbf{T}_{i}\left|\phi_{\varepsilon}\right\rangle_{P}\right| \right|_{E}^{2}=\frac{1}{d_{A}^{2}}\mathbb{E}_{\varepsilon}\left|\left|\sum_{ i=1}^{d_{A}}\varepsilon_{i}\mathbf{T}_{i}\right|\right|_{\infty}^{2}.\]
The last norm is on the Banach space of bounded linear operators from \(\mathcal{H}_{P}\) to \(\mathcal{H}_{E}\). Our choice of the family of unitaries \(\mathcal{E}\) has conveniently led to the final expression here being exactly the quantity controlled by the type 2 constant. Using the result 9 for the upper bound on the type 2 constant of this Banach space, we obtain
\[p(\mathbf{T},\mathcal{E})\leq C\frac{\log d_{P}}{d_{A}^{2}}\sum_{i=1}^{d_{A}} \left|\left|\mathbf{T}_{i}\right|\right|_{\infty}^{2}\leq C\frac{\log d_{P}}{ d_{A}} \tag{13}\]
where we used that \(||\mathbf{T}_{i}||_{\infty}\leq 1\) in the last inequality. This is exactly equation 5.
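As a quick numerical sanity check (our own addition, not part of the original argument), one can draw a random isometry \(\mathbf{T}\), form the blocks \(\mathbf{T}_{i}=\langle i|_{A}\mathbf{T}|i\rangle_{A}\), and evaluate \(\frac{1}{d_{A}^{2}}\mathbb{E}_{\varepsilon}||\sum_{i}\varepsilon_{i}\mathbf{T}_{i}||_{\infty}^{2}\) directly, estimating the expectation by sampling sign strings; the supremum over program states is simply the spectral norm. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d_A, d_P, d_E = 8, 4, 16        # any d_E >= d_P admits an isometry A(x)P -> A(x)E

# Random isometry T: orthonormal columns from a QR decomposition of a complex
# Gaussian matrix, restricted to the first d_A*d_P columns.
dim_in, dim_out = d_A * d_P, d_A * d_E
z = rng.normal(size=(dim_out, dim_out)) + 1j * rng.normal(size=(dim_out, dim_out))
q, _ = np.linalg.qr(z)
T = q[:, :dim_in]

# Blocks T_i = <i|_A T |i>_A (each d_E x d_P); tensor ordering is A (x) P and A (x) E,
# so basis index i*d_P + p on the input and i*d_E + e on the output.
blocks = [T[i * d_E:(i + 1) * d_E, i * d_P:(i + 1) * d_P] for i in range(d_A)]

# Estimate p(T, E) = (1/d_A^2) E_eps || sum_i eps_i T_i ||_inf^2 by sampling eps;
# np.linalg.norm(., 2) returns the largest singular value.
vals = []
for _ in range(200):
    eps = rng.choice([1.0, -1.0], size=d_A)
    M = sum(e * B for e, B in zip(eps, blocks))
    vals.append(np.linalg.norm(M, 2) ** 2 / d_A ** 2)
print("estimated p(T, E):", np.mean(vals), " reference log2(d_P)/d_A:", np.log2(d_P) / d_A)
```

For such random instances the estimate can be compared against the \(\log d_{P}/d_{A}\) scale appearing in theorem 3; since the constant \(C\) is not specified there, this is only a consistency check rather than a verification of the constant.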
## 3 Forbidden computations for physical computers
In this section we relate bounds on programmable processors to computation in holographic spacetimes. Then, we comment on the interpretation of the resulting constraints from a bulk perspective. We begin however with a very brief review of some needed results in AdS/CFT related to reconstructing states in the bulk from the boundary.
### The reconstruction wedge
A basic element in the understanding of AdS/CFT is the Ryu-Takayanagi formula, and its various generalizations and restatements. One form of the modern statement reads [17]
\[S(A)=\min\,\text{ext}_{\gamma\in\text{Hom}(A)}\left(\frac{\text{area}(\gamma)}{4G_{N}}+S_{bulk}(E_{\gamma})\right). \tag{3.1}\]
The area plus entropy term inside the brackets is often called the generalized entropy. The extremization is over surfaces \(\gamma\) which are homologous to \(A\), which means that there exists a codimension 1 surface \(E_{\gamma}\) such that
\[\partial E_{\gamma}=A\cup\gamma. \tag{3.2}\]
The term \(S_{bulk}(E_{\gamma})\) counts the entropy inside the region \(E_{\gamma}\). When there are multiple candidate extremal surfaces homologous to \(A\), the final minimization picks out the one with least generalized entropy. The minimal extremal surface picked out by the optimization procedure in the RT formula will be labelled \(\gamma_{A}\). This formula receives leading order
corrections in some regimes, as understood in [18], but the form 3.1 will suffice for our application.6
Footnote 6: Very roughly, in [18] it was understood that this formula breaks down when there are bulk states whose smooth max or min entropy differs at \(O(1/G_{N})\) from the von Neumann entropy, which won’t occur here.
Given a subregion of the boundary \(A\), it is natural to ask if a subregion of the bulk is recorded into \(A\). To make this question more precise, we should introduce a choice of bulk subspace, which we refer to as the code-space and label \(\mathcal{H}_{code}\). The subspace \(\mathcal{H}_{code}\) might for instance be specified by a particular choice of bulk geometry, along with some qubits distributed spatially across the bulk. Then, assume we are told the bulk degrees of freedom are in a state within \(\mathcal{H}_{code}\), and we are given the degrees of freedom on subregion \(A\). What portion of the bulk degrees of freedom can we recover?
Answering this question is closely related to the RT formula. In particular, the portion of the bulk we can recover, knowing only that the bulk state lies in \(\mathcal{H}_{code}\), is given by [19; 20]
\[E_{A}\equiv\bigcap_{\psi\in\mathcal{H}_{code}}E_{\gamma_{A}}. \tag{3.3}\]
That is, for each state in the code space we find where the RT surface \(\gamma_{A}\) sits, and define the corresponding bulk subregion \(E_{\gamma_{A}}\). Then, we define the intersection of all such regions, considering all states in the code-subspace. Note that in this procedure we should include mixed states of the code-space. The resulting region is the portion of the bulk degrees of freedom we can recover if we know nothing about which state in the code-space the full bulk is in. This region is sometimes referred to as the _reconstruction wedge_ of region \(A\), defined relative to the code-space \(\mathcal{H}_{code}\).
Given that it is possible to recover information inside the reconstruction wedge, we can also ask what explicit operation recovers the code space from the CFT degrees of freedom. Given a global map from the bulk subspace \(\mathcal{H}_{code}\) to the boundary Hilbert space, it was understood in [21] how to construct such a recovery channel. Note that in this construction, a single choice of recovery channel works correctly for the entire code-space.
We will apply the notion of the reconstruction wedge with the region \(A\) taken to be the entire boundary CFT. In this setting we might expect the reconstruction wedge is always the entire bulk, but if we choose too large a code space it is possible for this to break down. In particular, a non-trivial minimal extremal surface can appear in equation 3.1 which cuts out a portion of the bulk. While this incurs an area term with a cost like \(L_{AdS}/G_{N}\) in the generalized entropy, if we take \(\mathcal{H}_{code}\) large enough this can reduce the generalized entropy and be favoured. For this reason it will be necessary to keep our code spaces sufficiently small.
### Holographic thought experiment with the game \(G_{\mathcal{E}}\)
Let's return to the setting of the thought experiment presented in the introduction. Our goal will be to construct a unitary acting on a small system that is forbidden from being completed in the black hole interior. It will also be important that the unitary have a short description: if even specifying the unitary requires an exponential number of bits,
bringing this description into the region may itself induce a large backreaction and cause the experiment to fail.
To make the notion of an efficient description more precise, we recall the definition of Kolmogorov complexity, also known as _descriptive complexity._ Intuitively, the descriptive complexity counts the minimal number of bits needed to describe a given string. Somewhat more formally, we make the following definition, which follows [22].
**Definition 5**.: The **shortest description** of a string \(x\) is the shortest string \(\left\langle M,w\right\rangle\), where \(M\) is a Turing machine and \(w\) is an input string for that Turing machine, such that \(M(w)\) outputs \(x\). The **descriptive complexity** of \(x\), which we denote \(d(x)\) is the length of the shortest description.
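For example, the string consisting of \(N\) zeros has length \(N\) but descriptive complexity only \(O(\log N)\): it suffices to specify \(N\) together with a fixed machine that prints that many zeros. This is the sense in which the unitaries constructed below will have short descriptions even though the sign strings specifying them are exponentially long.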
Returning to holography, we consider two copies of a holographic CFT placed in the thermofield double state, so that the bulk description is a two sided black hole. We consider a one parameter family of black holes parameterized by their entropy \(S_{bh}\).7 We could realize this by, for instance, considering a family of CFT's parameterized by the central charge \(c\), which is proportional to the black hole entropy. Our argument however is agnostic to how we realize this family, which we could also realize by adjusting the black hole temperature.
Footnote 7: In our asymptotic notation, e.g. \(o(\cdot)\), the asymptotic parameter will always be \(S_{bh}\).
We are interested in putting constraints on what can be computed within an AdS space dual to a holographic CFT. Before proceeding, we should make some comments on what is meant by having performed a computation. Given some input system \(\mathcal{H}_{a}\), we usually say that we have performed some computation \(\mathbf{U}_{a}^{\varepsilon}\) (which will here be unitary) if the state on \(\mathcal{H}_{a}\) transforms according to \(\left|\psi\right\rangle_{a}\rightarrow\mathbf{U}_{a}^{\varepsilon}\left|\psi \right\rangle_{a}\). In quantum mechanics this is unambiguous, since the Hilbert space \(\mathcal{H}_{a}\) is defined at all times. In field theory, we only have subregions of the spacetime, and a priori it's not clear what "the same" Hilbert space \(\mathcal{H}_{a}\) at different times means. Unlike in quantum mechanics, we have different Hilbert spaces \(\mathcal{H}_{a}\) and \(\mathcal{H}_{a^{\prime}}\) at early and late times, and some identification of bases in the two spaces. In practice we routinely identify persistent Hilbert spaces: for example we can track a given particle through spacetime, and call the Hilbert space describing its spin degree of freedom \(\mathcal{H}_{a}\), but implicitly we have some basis information we are identifying across the early and late times.
In our context it will suffice to say that computation \(\mathbf{U}^{\varepsilon}\) has been completed if we can identify in a "sufficiently simple" way a Hilbert space \(\mathcal{H}_{a^{\prime}}\) and an identification of basis elements between \(\mathcal{H}_{a}\) and \(\mathcal{H}_{a^{\prime}}\) such that the transformation \(\left|\psi\right\rangle_{a}\rightarrow\mathbf{U}^{\varepsilon}\left|\psi \right\rangle_{a^{\prime}}\) has been implemented. For us "sufficiently simple" will mean that the \(\mathcal{H}_{a^{\prime}}\) and the identification of bases can be specified using a number of bits small compared to other parameters in the problem. This agrees with the usual setting in quantum mechanics where \(\mathcal{H}_{a^{\prime}}\) is trivial to identify, and avoids some trivial ways of "performing" an arbitrarily high complexity computation, for instance by absorbing the computation into the basis identification. As an example, considering our particle moving through spacetime, we might identify the early and late time Hilbert spaces by specifying the background metric and parallel transporting a set of axes along the particle trajectory.
With this background on what we mean by a computation happening in a spacetime, let's proceed to understand the claimed constraints. We consider three agents, whom we
call Alice, Bob, and the referee. The referee decides on some input size for the computation, call it \(n_{A}=\log d_{A}\). We then play the following game.
**Definition 6**.: **(Diagonal unitary game G\({}_{\mathcal{E}}\))**
* Alice prepares a randomly chosen string \(\varepsilon\in\{\pm 1\}^{d_{A}}\).
* Based on the value of \(\varepsilon\), Alice prepares a state \(\ket{\phi_{\varepsilon}}_{P}\in\mathcal{H}_{P}\) and acts on CFT\({}_{R}\) so as to record the state on \(P\) into bulk degrees of freedom \(p\), and throws this state into the black hole.
* The referee prepares the state \(\ket{\Psi}_{\overline{A}A}=\frac{1}{\sqrt{d_{A}}}\sum_{i=1}^{d_{A}}\ket{i}_{ \overline{A}}\ket{i}_{A}\) and gives the \(A\) system to Bob. Bob acts on CFT\({}_{L}\) so as to record the state on \(A\) into bulk degrees of freedom \(a\), and throws this state into the black hole.
* Alice gives CFT\({}_{R}\) to the referee, Bob gives CFT\({}_{L}\) to the referee.
* The referee applies a global reconstruction procedure on \(\mathcal{H}_{L}\otimes\mathcal{H}_{R}\) to recover the state on the \(a^{\prime}\) system, which he records into \(\mathcal{H}_{A}\). The Hilbert spaces \(\mathcal{H}_{a}\) and \(\mathcal{H}_{a^{\prime}}\) should be identified as discussed above. The referee then measures the POVM \(\{\mathbf{U}_{A}^{\varepsilon}\ket{\Psi}\!\!\bra{\Psi}_{\overline{A}A}\left( \mathbf{U}_{A}^{\varepsilon}\right)^{\dagger}\!,\mathcal{I}-\mathbf{U}_{A}^{ \varepsilon}\ket{\Psi}\!\!\bra{\Psi}_{\overline{A}A}\left(\mathbf{U}_{A}^{ \varepsilon}\right)^{\dagger}\}\).
If the referee obtains the measurement outcome \(\mathbf{U}_{A}^{\varepsilon}\ket{\Psi}\!\!\bra{\Psi}_{\overline{A}A}\left( \mathbf{U}_{A}^{\varepsilon}\right)^{\dagger}\), we declare Alice and Bob to have won the diagonal unitary game.
The steps in this procedure are summarized in figure 2.
In the reconstruction step, the referee applies a map \(\mathbf{R}\) to the Hilbert space \(\mathcal{H}_{A}\otimes\mathcal{H}_{L}\otimes\mathcal{H}_{R}\). We claim this map can be made independent of \(\varepsilon\) and isometric. To understand why, recall from the last section that we can reconstruct \(\mathcal{H}_{a^{\prime}}\) in a state independent way if we take our code space to be the full Hilbert space of states that can depend on \(\varepsilon\), since then the reconstruction procedure is independent of \(\varepsilon\). Thus we should take \(\mathcal{H}_{code}\) to include all of those states obtained by inserting any state in \(\mathcal{H}_{p}\) and time evolving forward to the point where we do the reconstruction. If we would also like to reconstruct without holding the reference system \(\overline{A}\), which we will need to apply our processor bound,8 we should add in the \(n_{A}\) qubits worth of states. Thus state independent reconstruction is possible when \(n_{A}+n_{P}\) is much smaller than \(S_{bh}\), so that the bulk entropy term never competes with the area of the black hole in finding the minimal extremal surface in equation 3.1. Concretely, it suffices to impose that
Footnote 8: This is because our processor bound is proven in the setting where \(\mathbf{T}\) acts on \(A\) and \(P\) but not on the reference \(\overline{A}\).
\[n_{A}+n_{P}=o(S_{bh}). \tag{3.4}\]
We will need to ensure we work in this regime.
The claim that \(\mathbf{R}\) is isometric is easy to misunderstand in light of another set of ideas in AdS/CFT. Often it is useful to discuss the Hilbert space of an effective field theory that
lives on the bulk geometry. In the context of black holes, this EFT Hilbert space is thought to map non-isometrically into the CFT Hilbert space [8]. Said another way, the EFT Hilbert space of black holes is too big, and some of its states do not have corresponding states in the fundamental (CFT) description. In our context we never introduce the larger bulk EFT Hilbert space. Instead, we begin with some CFT state dual to the two sided black hole, then act on the CFT to introduce the inputs to our computation. Thus our bulk state is necessarily a state in the fundamental description.
If indeed we can ensure \(\mathbf{R}\) is state independent, we can notice that after the initial preparation of \(\varepsilon\) all the steps in the protocol are independent of \(\varepsilon\), and form an isometry. In fact, looking at the circuit diagram of figure 2 we see that the protocol is described by an isometry \(\mathbf{T}_{AP\to AE}\) and a state preparation of \(\ket{\phi_{\varepsilon}}_{P}\), which is then input to \(\mathbf{T}_{AP\to AE}\). Thus the overall action is described by a map
\[\mathbf{T}_{AP\to AE}(\ket{\Psi}_{\overline{A}A}\ket{\phi_{ \varepsilon}}_{P})=\ket{\Psi^{\prime}}_{\overline{A}A}\ket{\phi^{\prime}_{ \varepsilon}}_{E}. \tag{3.5}\]
This is exactly the action of a quantum programmable processor, so we have from theorem 3 that
\[p(\mathbf{T},\mathcal{E})\leq C\frac{n_{P}}{2^{n_{A}}}. \tag{3.6}\]
Figure 2: Circuit describing Alice and Bob’s procedure to carry out the diagonal unitary game. Unitary \(\mathbf{V}_{L}\) acts on \(AL\), and corresponds in the holographic picture to recording the state on the \(A\) system into bulk degrees of freedom \(a\) sitting in the left asymptotic region. Unitary \(\mathbf{V}_{R}\) acts on \(RP\) and in the bulk picture corresponds to recording \(P\) into degrees of freedom \(p\) in the right asymptotic region. We allow the two CFT’s to time evolve, which we absorb into \(\mathbf{V}_{L}\) and \(\mathbf{V}_{R}\), which in the bulk picture allows \(a\) to interact with \(p\). The isometry \(\mathbf{R}\) extracts the \(a\) system from the bulk and records it back into \(A\). The state \(\ket{\phi_{\varepsilon}}_{P}\) is prepared based on the string \(\varepsilon\). The full circuit can be viewed as an isometry \(\mathbf{T}_{AP\to AE}\).
If we put appropriate constraints on \(n_{P}\), \(n_{A}\) this bound will lead to constraints on computation happening inside the black hole.
The value of \(n_{P}\) we would like to have constrained physically, rather than as a choice we put in -- \(n_{P}\) controls the size of the computer, and we want to allow Alice and Bob to exploit the action of any physically allowed computer. A natural constraint on \(n_{P}\) is given by the covariant entropy bound (CEB) [23; 24; 25], which applied to the black hole horizon reads
\[n_{P}\leq\frac{A_{bh}}{4G_{N}}=S_{bh}. \tag{3.7}\]
Unfortunately, at the upper limit of allowed values given by this bound we violate 3.4, and lose our guarantee of state independent recovery. To continue our argument in light of this, we introduce an assumption, which is that if a computation is forbidden inside of a black hole with entropy \(S_{bh}^{\prime}\), then it is also forbidden inside of a black hole with entropy \(S_{bh}\) with \(S_{bh}=o(S_{bh}^{\prime})\). That is, we will restore state independent recovery in the diagonal unitary game by allowing Alice and Bob an apparently more powerful resource, the geometry of a larger black hole, and assume this doesn't weaken their computational power.9
Footnote 9: One way to argue for this is to consider that if the computation can be run inside the smaller black hole, we could take that black hole and throw it into the larger black hole, apparently running the same computation in the larger hole.
Now with the diagonal unitary game in the larger black hole in mind, consider the value of \(n_{A}\). The value of \(n_{A}\) is something we choose: we can decide to ask for a unitary on \(n_{A}\) qubits to be applied inside the black hole, for whatever value of \(n_{A}\). We will choose \(n_{A}\) such that it is much smaller than \(S_{bh}\), and so can be brought into the original black hole. Further, we will need to make \(n_{A}\) large enough for equation 3.6 to be a meaningful constraint. Summarizing all the needed constraints, we consider running the diagonal unitary game inside of a black hole with entropy \(S_{bh}^{\prime}\), with \(n_{P}\), \(n_{A}\) satisfying
\[n_{P}\leq S_{bh}=o(S_{bh}^{\prime}),\qquad\log(CS_{bh})<n_{A} \leq S_{bh}. \tag{3.8}\]
In this regime, the constraint 3.4 is satisfied and the map \(\mathbf{T}\) (which acts on the CFT state describing the larger black hole) is a state independent isometry. Consequently, the bound 3.6 applies, and using that \(n_{P}\leq S_{bh}<2^{n_{A}}/C\) we have that the average success probability of the diagonal unitary game will be below 1.
Now revisit the bound 3.6. Define the success probability of the processor \(\mathbf{T}\) on value \(\varepsilon\) as
\[p(\mathbf{T},\mathcal{E}|\varepsilon)=\sup_{|\phi_{\varepsilon}\rangle}\mathrm{tr}\left[|\Psi_{\varepsilon}\rangle\!\langle\Psi_{\varepsilon}|\,( \mathbf{T}\,|\Psi\rangle\!\langle\Psi|\otimes|\phi_{\varepsilon}\rangle\! \langle\phi_{\varepsilon}|\,\mathbf{T}^{\dagger})\right] \tag{3.9}\]
so that the processor bound 2.5 is expressed as
\[p(\mathbf{T},\mathcal{E})=\mathbb{E}_{\varepsilon}\,p(\mathbf{T},\mathcal{E}|\varepsilon)\leq C\frac{n_{P}}{2^{n_{A}}}. \tag{3.10}\]
Setting some threshold probability \(\delta\) with \(Cn_{P}/2^{n_{A}}<\delta<1\), we define the set
\[\mathcal{P}(\mathbf{T},\mathcal{E})=\{\varepsilon:p(\mathbf{T}, \mathcal{E}|\varepsilon)\leq\delta\}. \tag{3.11}\]
We refer to elements in this set as _forbidden unitaries_. From 3.10, this set will be of size at least
\[|\mathcal{P}(\mathbf{T},\mathcal{E})|\geq 2^{2^{n_{A}}}\left(1-\frac{Cn_{P}}{2^{n_{A}}\delta}\right) \tag{3.12}\]
which is doubly exponentially large in our parameter regime.
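For completeness, 3.12 follows by applying Markov's inequality to 3.10: the fraction of the \(2^{2^{n_{A}}}\) strings \(\varepsilon\) with \(p(\mathbf{T},\mathcal{E}|\varepsilon)>\delta\) satisfies

\[\Pr_{\varepsilon}\left[p(\mathbf{T},\mathcal{E}|\varepsilon)>\delta\right]\leq\frac{\mathbb{E}_{\varepsilon}\,p(\mathbf{T},\mathcal{E}|\varepsilon)}{\delta}\leq\frac{Cn_{P}}{2^{n_{A}}\delta}.\]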
To understand the meaning of these forbidden unitaries, first notice that \(n_{A}\) grows more slowly than \(S_{bh}\). This means that the needed unitaries are not being forbidden because the CEB limits the size of the systems they act on. Looking at \(n_{P}\) however, we can notice that since \(n_{P}\leq S_{bh}<2^{n_{A}}/C\), and \(\varepsilon\) consists of \(2^{n_{A}}\) bits, it is not possible to fit a complete description of an arbitrary \(\varepsilon\) into \(n_{P}\) qubits. If we can't even bring a specification of the unitary \(\mathbf{U}^{\varepsilon}\) into the black hole, there's no surprise we can't implement it there -- it's not possible to do so on information theoretic grounds. While this does explain why many unitaries are forbidden, we claim there are also some forbidden unitaries whose description can be compressed to fewer than \(n_{P}\) bits. Consequently information theoretic constraints don't suffice to explain why those unitaries are forbidden.
We now define a unitary which both cannot be implemented in the bulk region, and has a short description.
**Definition 7**.: Define the unitary \(\mathbf{U}^{\overline{\varepsilon}^{0}}\) to be the first element of \(\mathcal{P}(\mathbf{T},\mathcal{E})\), where the ordering is the one induced by interpreting the string \(\varepsilon\) as a binary number.
Notice that from equation 3.12 the set \(\mathcal{P}(\mathbf{T},\mathcal{E})\) is non-empty and thus this unitary exists. Also observe that the above definition uniquely specifies this unitary.
We claim that \(\mathbf{U}^{\overline{\varepsilon}^{0}}\) can be specified using \(n_{P}\) bits, with \(n_{P}\) inside of the regime 3.8. The definition above is a \(\Theta(1)\) length string, plus the descriptive lengths of \(\mathbf{T}_{AP\to AE}\) and \(\mathcal{E}\). Let's consider the length of a description of each of these objects in turn.
* To describe \(\mathcal{E}\), we need some \(\Theta(1)\) description plus the value of \(n_{A}\), which fixes the size of the unitaries in the set, which we can specify in \(O(\log n_{A})\) bits.
* To describe \(\mathbf{T}_{AP\to AE}\) we need to specify \(\mathbf{R}\) and the initial state in \(\mathcal{H}_{L}\otimes\mathcal{H}_{R}\) appearing in figure 2.
* To define the initial state of the two CFT's, we need to specify which CFT we are discussing, and the one parameter describing the black hole, for which we use the entropy \(S^{\prime}_{bh}\). Considering the description of the CFT, we assume there is a family of CFT's parameterized by the central charge \(c\). Then to describe the CFT requires some \(\Theta(1)\) data to specify which family we are considering, plus \(\Theta(\log c)=\Theta(\log S^{\prime}_{bh})\) bits to specify the member of that family. To specify \(S^{\prime}_{bh}\) requires at most \(\log S^{\prime}_{bh}\) bits.
* Consider the map \(\mathbf{R}\). This is fixed by defining the choice of CFT, the initial state of the CFT, and the choice of subspace \(\mathcal{H}_{a^{\prime}}\). The choice of CFT and initial state was already specified above. To specify the subspace \(\mathcal{H}_{a^{\prime}}\), recall that we defined having completed a computation to mean recording the output into a
Hilbert space that can be described in a small number of bits. In the black hole context, we take this as meaning that we need far fewer than \(S^{\prime}_{bh}\) bits. We will allow in particular \(\log S^{\prime}_{bh}\) bits to specify the subspace.
The last point regarding the number of bits to specify \(\mathcal{H}_{a^{\prime}}\) is worth a few more comments. While we allow for \(\log S^{\prime}_{bh}\) bits, in the argument below anything smaller than \(S^{\prime}_{bh}\) bits will lead to forbidden bulk computations. Our specific choice of \(\log S^{\prime}_{bh}\) bits is motivated by considering the setting where, at the time of recovery, the bulk is described geometrically, and the output is recorded into some localized degree's of freedom. In this case we can specify the subspace using \(O(\log S^{\prime}_{bh})\) bits, since \(S^{\prime}_{bh}\) controls the size of the black hole and we would need to specify where in the black hole those bits are stored.
The full accounting then is that the descriptive length \(d(\cdot)\) of \(\mathbf{U}^{\overline{\varepsilon}^{0}}\) is
\[d(\mathbf{U}^{\overline{\varepsilon}^{0}})=O(\log\bigl{(}S^{\prime}_{bh} \bigr{)}+\log n_{A})=O(\log S^{\prime}_{bh}). \tag{3.13}\]
The second equality follows from our choice of parameter regime. From this equation, we see that we can describe \(\overline{\varepsilon}^{0}\) using a state on \(n_{P}\) bits whenever \(\log S^{\prime}_{bh}<S_{bh}\), which we can easily take while being consistent with \(S_{bh}=o(S^{\prime}_{bh})\). Notice also that we can define \(\mathbf{U}^{\overline{\varepsilon}^{m}}\) as the \(m\)th element of \(\mathcal{P}(\mathbf{T},\mathcal{E})\), in which case we use \(k=\log m\) additional bits. So long as we keep \(k=o(S_{bh})\), this allows us to construct a family of unitaries of size \(2^{k}\) which are similarly describable inside the black hole but forbidden from being implemented by the processor \(\mathbf{T}\).
Let's summarize now our holographic thought experiment. On the right, Alice prepares a randomly drawn string. Consider a case where she obtains a string describing a unitary in the set \(\{\mathbf{U}^{\overline{\varepsilon}^{m}}\}_{m\leq 2^{k}}\). In this case, she can record a description of the chosen unitary into no more than \(S_{bh}\) bits. Doing so, and sending these bits into the black hole with larger entropy \(S^{\prime}_{bh}\), a complete description of that unitary is inside the black hole. However, by construction these unitaries cannot be completed with probability more than \(\delta\) in our thought experiment. Thus performing these unitaries inside the black hole must be forbidden in the black hole of entropy \(S^{\prime}_{bh}\), and hence by our assumption forbidden inside the smaller black hole of entropy \(S_{bh}\). In that setting, \(n_{P}\) (the size of the computer) may be taken to be as large as the black hole entropy, \(n_{A}\) (the size of the inputs) is still much smaller than the black hole, and the description of the forbidden unitaries is much smaller than \(S_{bh}\), so can easily be brought into the black hole. Thus, the computation is forbidden from happening inside the smaller black hole using any physically allowed computer, even while the information needed to implement it is stored there -- these forbidden computations must then be computationally forbidden. Further, there are at least \(2^{k}\) such unitaries, with \(k=o(S_{bh})\).
### 3.3 Bulk interpretation of forbidden unitaries
It is generally expected that the widely studied models of computation -- classical Turing machines or quantum circuits -- capture the power of physical computers. To make the connection between models of computation and physical computers, many authors have looked to gravitational constraints. This is because within quantum mechanics it does not
seem possible to find a fundamental unit of time, or fundamental constraint on the memory held in a physical region.
As one example, Lloyd [26] offered a plausible gravity argument that, considering a circuit model of computation, the number of gates that can be performed in a given time is limited by the available energy. He then argues the available energy should be bounded above by the energy of a black hole, putting an apparent speed limit on computation. However, working with a Hamiltonian description of the computation one can evade this bound [2], doing arbitrarily complex operations arbitrarily quickly, and at arbitrarily low energy. While the needed Hamiltonians are likely unphysical, this construction shows that it remains unclear how to obtain a precise bound on computation from a direct gravity perspective.
Our construction of forbidden unitaries takes a very preliminary step towards connecting physical computers and models of computation: it at least shows that some computations cannot happen in certain finite spacetime regions. A natural question is how high the complexity of our forbidden computations is, and whether this high complexity offers some plausible physical reason, from a bulk perspective, why these unitaries should be forbidden.
We can make a few comments about the complexity of our forbidden computations. The needed computation is to, given the compressed description of \(\overline{\varepsilon}^{0}\) and input system \(A\), apply \(\mathbf{U}^{\overline{\varepsilon}^{0}}\). One route to doing this is to first decompress \(\overline{\varepsilon}^{0}\), then apply \(\mathbf{U}^{\overline{\varepsilon}^{0}}\) based on the value of the uncompressed string. To decompress \(\overline{\varepsilon}^{0}\) from its compressed description, we need to find the first value \(\varepsilon\) where the function \(p(\mathbf{T},\mathcal{E}|\varepsilon)\) is smaller than \(\delta\). A naive classical algorithm then to decompress the description of \(\overline{\varepsilon}^{0}\) is the following.
\(\varepsilon^{\prime}=0\)
While \(\varepsilon^{\prime}\leq 2^{2^{n_{A}}}\)
If \(p(\mathbf{T},\mathcal{E}|\varepsilon^{\prime})\leq\delta\),
Return \(\varepsilon^{\prime}\)
Else
\(\varepsilon^{\prime}=\varepsilon^{\prime}+1\)
Assuming computing \(p(\mathbf{T},\mathcal{E}|\varepsilon^{\prime})\) takes \(O(1)\) steps (it is likely longer) this runs in \(O(2^{2^{n_{A}}})\) steps. From 3.8 we see that this gives a number of steps in this algorithm of \(2^{CS_{bh}}\). Further, notice that the memory needed to run this algorithm is at least the memory needed to store \(\varepsilon^{\prime}\), which is length \(2^{n_{A}}\), so can be made as small as \(CS_{bh}\) bits. In appendix A, we give a heuristic argument that it is not possible to significantly improve on the memory usage and number of steps used in this algorithm, even using a quantum circuit model of computation.
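As a purely illustrative sketch (not part of the original argument), the naive loop can be written as follows, with the success probability \(p(\mathbf{T},\mathcal{E}|\varepsilon)\) supplied as an opaque, hypothetical oracle `p_oracle`; the query counter makes the \(O(2^{2^{n_{A}}})\) worst-case step count explicit.

```python
# A minimal sketch (illustrative only): the naive decompression loop above,
# with p(T, E | eps) treated as an opaque oracle supplied by the caller.

def decompress_first_forbidden(p_oracle, n_A, delta):
    """Return the first eps (an integer encoding a 2^n_A-bit string) with
    p(T, E | eps) <= delta, together with the number of oracle queries used."""
    num_strings = 2 ** (2 ** n_A)      # one eps per Boolean function on n_A bits
    queries = 0
    for eps in range(num_strings):     # worst case: O(2^(2^n_A)) oracle calls
        queries += 1
        if p_oracle(eps) <= delta:
            return eps, queries        # storing eps alone already takes 2^n_A bits
    return None, queries               # unreachable if the processor bound holds
```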
The 'central dogma' of black hole physics states that black holes can be described as quantum mechanical systems with dimension \(2^{S_{bh}}\). If we assume this, and assume a quantum circuit model captures the power of the bulk computer, this provides one plausible explanation for why these unitaries are forbidden in the bulk: the best quantum algorithm seems to require memory \(CS_{bh}>S_{bh}\), so can't run inside the black hole.
We can also discuss the relationship between the number of computational steps needed to perform our unitary and the time available inside the black hole. Recall that we considered running our diagonal unitary game in the larger black hole of entropy \(S^{\prime}_{bh}\), where we first showed the computation was forbidden, assuming
\[n_{P}\leq S_{bh}=o(S^{\prime}_{bh}). \tag{3.14}\]
Setting the above constraint amounts to a constraint on the choice of computing device thrown into the black hole, imposing that it is sufficiently small compared to the black hole entropy. Why does this constrained computer fail to implement the given computation in the larger black hole? Notice that the memory usage of the naive algorithm above is \(2^{n_{A}}=CS_{bh}=o(S^{\prime}_{bh})\), which is now much smaller than the black hole entropy. The number of computational steps of the naive algorithm now presents the most plausible computational restriction: the number of steps is \(2^{2^{n_{A}}}\) which is much larger than \(S^{\prime}_{bh}\) if \(\omega(\log S^{\prime}_{bh})=n_{A}\), which we are indeed guaranteed by our parameter regime 3.8. If we suppose a computational step takes some finite time, and that the naive algorithm above cannot be significantly improved in run time, this suffices as a bulk explanation for why our (restricted) computer cannot perform the needed computation. Because this seems to be the needed explanation in the context of the larger black hole, we might take this as evidence that the run time is also the relevant constraint in the (unconstrained) computer in the smaller black hole, although as noted above in that setting the memory is also larger than is available, again assuming a circuit model.
In fact, it is interesting to push this restriction on the size of the computer as far as possible and understand the number of computational steps needed in the resulting problem. Suppose we take \(n_{P}=\log S^{\prime}_{bh}\). This is the smallest we can take it while still allowing a description of \({\bf T}\) to be fit into \(n_{P}\) bits. Then, we can have \(n_{A}=\log(C\log S^{\prime}_{bh})\) and still get a non-trivial bound from our processor bound. This leads to unitaries that are forbidden from happening inside of the black hole using a computer built from \(n_{P}\) qubits. The memory then needed to run our naive algorithm is \(\log S^{\prime}_{bh}\), while the run-time is \(S^{\prime}_{bh}\). Thus the run-time of this small computer still seems to explain its inability to perform the computation inside the black hole.10
Footnote 10: That our lower bound on the number of computational steps is exactly linear in the black hole entropy shouldn’t be taken too seriously: since we are in an oracle model where evaluating \(p({\bf T},{\cal E}|\varepsilon)\) takes one step, we are probably underestimating the run-time.
If we are placing constraints on the size of the computer by hand, there is in fact no longer any need to consider the black hole setting, which was used to give a natural surface on which to invoke the CEB. In the next section we consider restrictions on small computers in more general settings.
## 4 Forbidden computations for small computers
Given our construction in section 3 of constrained computations, we should ask to what extent our argument can be generalized away from the black hole setting, and away from AdS/CFT.
Towards making a more general statement, consider the following setting. We have a quantum mechanical system described by Hilbert space \({\cal H}={\cal H}_{A}\otimes{\cal H}_{P}\otimes{\cal H}_{E}\) and evolving
under Hamiltonian \(\mathbf{H}\), where we refer to the \(A\) system as the data Hilbert space, the \(P\) system as the program space, and \(E\) as the environment. Given a unitary \(\mathbf{U}_{A}\), Alice prepares \(\mathcal{H}_{P}\) in a state recording the unitary to be applied, or description of a program to apply it, along with any computing device prepared to apply it. She may use arbitrarily complex computations in preparing this state. Then, Bob prepares some state on the \(A\) system. Further, the \(E\) system is put in an arbitrary state \(\left|\psi\right\rangle_{E}\) which we take to be initially pure, so that the environment is initially unentangled with the data and program spaces. The full Hilbert space is then allowed to evolve under time evolution given by the Hamiltonian \(\mathbf{H}\). After some amount of time \(t\), a measurement is made on the \(A\) subsystem testing if \(\mathbf{U}_{A}\) has been applied. This setting closely models the basic computational setting we find in the real world: we can prepare our computer which holds the program, insert the data, and then the computer runs -- it evolves in this case under the Hamiltonian describing our universe.
In the black hole setting there is a natural bound on \(n_{P}\), the number of qubits in the program space, which is imposed physically. In this scenario, we restrict \(n_{P}\) arbitrarily -- consequently, we are deriving here constraints on how fast _small_ computers can perform computations, but not on all physically allowed computers. Also, note that in that setting the role of the environment Hilbert space \(\mathcal{H}_{E}\) was played by the combined Hilbert spaces of the two CFT's.
Our processor bound 2.5 leads to a constraint on how quickly some unitaries can be performed in this scenario. In particular we have again that, after the system \(P\) is put into the program state, the remaining action of the computer is described by an isometry independent of the unitary. In particular, the remaining action is just time evolution under \(\mathbf{H}\). The description of \(\mathbf{H}\), initial state of the environment \(\left|\psi\right\rangle_{E}\), and amount of time we evolve for \(t\) then defines a processor, which we label \(\mathbf{T}\). Considering the family of unitaries 2.4, we can apply the processor bound 2.5, finding that
\[p(\mathbf{T},\mathcal{E})\leq C\frac{n_{P}}{2^{n_{A}}}. \tag{4.1}\]
Given an allowed program space of \(n_{P}\) qubits, we choose the family of computations \(\mathcal{E}\) such that \(n_{A}\) is large enough, satisfying in particular
\[n_{P}\leq 2^{n_{A}}/C \tag{4.2}\]
so that \(p(\mathbf{T},\mathcal{E})\) is less than 1. Given a value of \(t\) and choice of Hamiltonian \(\mathbf{H}\), we can then define a forbidden unitary in a way analogous to definition 7, which we do next.
Define the set of unitaries with low success probability
\[\mathcal{P}(\mathbf{T},\mathcal{E})=\{\varepsilon:p(\mathbf{T},\mathcal{E}| \varepsilon)\leq\delta\}. \tag{4.3}\]
and then define a unitary which has a short description and is forbidden.
**Definition 8**.: Let \(\mathbf{U}_{\overline{\varepsilon}^{0}}\) be the first unitary in the set \(\mathcal{P}(\mathbf{T},\mathcal{E})\), where we order the set \(\mathcal{E}\) by interpreting the strings \(\varepsilon\) as binary numbers.
As before, we can also extend this to a family of unitaries.
How long is the description of \(\mathbf{U}_{\overline{\varepsilon}^{0}}\)? Importantly, it must be short enough to be written into \(n_{P}\) qubits while maintaining \(n_{P}\leq 2^{n_{A}}/C\). Notice that this definition consists of the \(O(1)\) string given explicitly, plus a description of \(\mathbf{T}\) and the parameter \(n_{A}\) describing the set \(\mathcal{E}\). Thus, if we have
\[d(\mathbf{H})+d(\left|\psi\right\rangle_{E})+\log t+\log n_{A} \leq n_{P} \tag{4.4}\]
for \(d(\mathbf{H})\) and \(d(\left|\psi\right\rangle)\) the descriptive lengths of \(\mathbf{H}\) and \(\left|\psi\right\rangle\), there will exist forbidden unitaries which have descriptions fitting inside the program state, and hence must be computationally forbidden. We can always adjust our chosen value of \(n_{P}\) to ensure this is the case.
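As a purely illustrative numerical check (with made-up descriptive lengths, not taken from the text), the following sketch verifies that a window of admissible \(n_{P}\) values satisfying both 4.2 and 4.4 exists for a sample choice of parameters:

```python
import math

# Illustrative numbers only: check that some n_P satisfies both eq. 4.2 and eq. 4.4.
C = 4                    # assumed value of the constant in the processor bound
d_H = 1_000              # d(H): bits to describe the Hamiltonian (made up)
d_psi = 1_000            # d(|psi>_E): bits for the environment state (made up)
t = 10**12               # evolution time, in whatever fixed units are used
n_A = 40                 # size of the data register in qubits

needed = d_H + d_psi + math.log2(t) + math.log2(n_A)   # left-hand side of eq. 4.4
n_P_max = 2 ** n_A // C                                 # upper limit from eq. 4.2

# Any n_P in [needed, n_P_max] admits forbidden unitaries with short descriptions.
assert needed <= n_P_max, "no admissible n_P for these parameters"
print(f"choose n_P between {math.ceil(needed)} and {n_P_max}")
```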
The requirement above is essential to the physical consistency of our construction. One way this manifests is that we have
\[\log t\leq n_{P}\leq 2^{n_{A}}/C \tag{4.5}\]
so that we cannot construct forbidden unitaries for arbitrarily small \(n_{A}\) compared to \(t\), which means the complexity of the computation cannot be made small compared to the time \(t\). As an interesting case consider the setting where \(\log t\) is much larger than the other parameters in the description of the isometry, in particular we allow a long enough time that
\[\log t\gg d(\mathbf{H}),\log n_{A},d(\left|\psi\right\rangle_{E}). \tag{4.6}\]
Going to this setting, and using 4.5, we see that forbidden unitaries occur only for times shorter than \(t\sim 2^{2^{n_{A}}}\). Recall that \(2^{2^{n_{A}}}\) is exactly the scaling of the number of steps needed to decompress the forbidden \(\varepsilon\). Thus our forbidden computations remain complex enough to ensure the number of steps it takes to implement them scales like the physical time needed to implement them on a computer.
Another comment is that we expect that for a given computation we can always find a \(t\) large enough that our dynamical evolution implements the computation. Indeed our construction doesn't violate this, as it requires we first choose \(t\), then can construct a unitary that cannot be implemented within time \(t\). In particular we emphasize that for larger \(t\) the value of \(n_{A}\) must be chosen suitably large. A similar comment arises in comparing to the construction of Jordan [2]. Given a unitary, Jordan constructs a Hamiltonian that completes the unitary in an arbitrarily short time. In contrast, our ordering is different: we fix a Hamiltonian and a choice of time \(t\) and then show there are computations that cannot be run by this Hamiltonian within that time. Since we expect there is ultimately one Hamiltonian describing our universe, this reversed statement seems sufficient to find physically unrealizable computations.
## 5 Discussion
In this work we have constructed computations which cannot be implemented inside of a black hole with entropy \(S_{bh}\), despite the inputs to these computations being small, and the
description of the computation being easily fit inside the black hole. We've argued that these computations are high complexity, which may explain why they are forbidden. Regardless of the explanation for why these computations are forbidden, our construction unambiguously establishes that at least some computations are forbidden from being implemented inside the black hole.
Moving forward, it would be interesting to understand general properties of unitaries that restrict their bulk implementation. To do this, we have two alternative approaches by which we can proceed. As we've done here, we can exploit the view of bulk computation in terms of programmable processors. Alternatively, following [9; 10; 27; 28], we can relate bulk computation to non-local quantum computation.11 So far, the constraints coming from non-local computation have been complementary to the ones derived from programmable processors. Perhaps one of these techniques, or some synthesis of the two, will allow further progress in the understanding of the limits of computation in the presence of gravity.
Footnote 11: In an upcoming work [29], we adapt that discussion to the setting of two sided black holes. Doing so removes some assumptions made in [9], allowing constraints on non-local computation to be applied more rigorously to constrain the bulk. Another perspective on removing this assumption was presented in [11].
Before making a few comments on the connections between this work and others, we summarize the basic conceptual tension underlying our construction. A universal computer can follow instructions and, given an unbounded number of steps, perform any computation. Taking an outside view, and assuming our system is quantum mechanical, any computer evolves under the time evolution of some fixed Hamiltonian. This time evolution can be viewed as the action of a programmable quantum processor. Programmable processors are limited in the computations they can perform, while universal computers are apparently unrestricted, setting up a tension between the two perspectives. The naive resolution is that the programmable processor is only limited when the program states are small, restricting us from specifying most computations, thereby explaining on information theoretic grounds why the universal computer fails. For universal processors with simple descriptions however the tension becomes sharper -- the universal computer can be input a description of the processor, which allows efficient descriptions of programs the computer can't itself run. Now, the way out of the tension is a computational restriction on the universal computer.
Our construction is similar to the diagonalization technique as used in computer science, in that the universal computer is being fed a description of the dynamics by which it is itself governed. A key new ingredient however is the universal processor bound, which ties our argument to a physical setting. In particular, the length of the description of the processor, which relates to physical parameters (e.g. the time or black hole entropy), constrains the \(n_{P}\) and \(n_{A}\) parameters which then enter the processor bound. In this way physical data is brought into the diagonalization argument.
We conclude with a few comments on related topics.
**Quantum extended Church Turing thesis**
The quantum extended Church Turing thesis states that any physically realizable computer can be efficiently simulated by a quantum Turing machine. Recently, Susskind [7] proposed an interesting tension between this thesis and a thought experiment in the setting of
a two sided black hole. He argues that an observer who jumps into the black hole can compute certain functions efficiently that an observer who instead holds the two CFT's cannot. We find this thought experiment suggestive that a notion of an observer is needed in the statement of the extended Church Turing thesis, and the statement should only apply when two observers may separate for a time and then meet again and compare the efficiency of their computations.
While broadly this work and ours are both interested in the computational abilities of computers in the presence of gravity, we should be careful to distinguish between the two settings. Note that we never compare observers outside and inside the black hole and ask about their relative ability to perform some computation. Instead, we ask only about the computational abilities of the observer inside the hole. The boundary perspective is exploited to relate bulk computation to quantum processors.
**Complexity of the AdS/CFT dictionary**
Recently, there have been discussions around the complexity of the operations needed to recover bulk data from the boundary [3, 6]. We emphasize that our argument does not rely on this map being low or high complexity. Instead, we only rely on this map being state independent within some appropriate, and small, subspace of states.
**Bulk computation as non-local computation**
Our results are interesting in light of a conjecture made in the context of non-local computation and its relationship to AdS/CFT. Non-local computation implements unitaries on two, separated, subsystems using an entangled resource state and a single round of communication. In [9], the authors state that at least one of the following must be true:
1. All computations can be performed with linear entanglement.
2. Gravity places constraints on bulk computation.
They also argue that not 1) implies 2). That work conjectured that 1) is false and consequently 2) is true. This work establishes that 2) is true in AdS/CFT, without resolving 1).
**Understanding of the black hole interior**
In [8], the authors discuss a puzzle in the physics of black holes. The central dogma of black hole physics states that a black hole can be described by a number of degrees of freedom given by its entropy. The description of the black hole using \(S_{bh}\) degrees of freedom is referred to as the fundamental description. Additionally, we can describe the black hole within effective field theory, within some background set by the appropriate solution to Einstein's equations. In the effective description, and at late times, the black hole interior volume can be very large. Thus the number of low energy degrees of freedom in the effective description will exceed \(S_{bh}\). A puzzle then is to understand how the effective description, with a large number of apparent degrees of freedom, is embedded into the fundamental description with fewer degrees of freedom. Necessarily many of the states in the effective
description will not be realizable states of the black hole, since most states cannot map to a state in the fundamental description.
To understand this, the authors of [8] argue that it is the low complexity states in the black hole interior that are mapped to the fundamental description. They show that even while the effective black hole interior is exponentially larger than the fundamental description, a subspace in the effective description large enough to contain all the low complexity states can be mapped to states in the fundamental description, and this map can approximately preserve orthogonality.
Our results support this perspective, in that they suggest high complexity unitaries are restricted in the bulk. In particular, the variation on our thought experiment most relevant to this discussion involves taking the computer to consist of \(n_{P}=o(S^{\prime}_{bh})\) qubits and considering the diagonal unitary game in the larger black hole, with entropy \(S^{\prime}_{bh}\). Then, the computer state is a state in the effective description of the black hole. Our argument then shows there are high complexity states the computer cannot evolve dynamically into, in line with the proposal of [8]. Said differently, our results support the idea that boundary time evolution, which must take fundamental states into fundamental states, also preserves a low-complexity set of states in the bulk.
**An end to time**
Among the strangest properties of black holes is that time in the interior comes to an apparent end at the singularity, at least within the classical description of the black hole. Understanding how this can arise from a quantum mechanical theory, in which time does not end, seems to be a basic challenge in understanding how gravitational physics can emerge from quantum mechanics. Our results support the idea that the finite bulk time corresponds, in some sense to be made precise, to limits on bulk complexity enforced by the boundary theory: the bulk geometrizes the limits on complexity enforced by the boundary by having an end to time at the singularity.12
Footnote 12: We thank Steve Shenker for making a similar remark to us.
**Acknowledgements:** We thank Michelle Xu, Patrick Hayden, Shreya Vardhan, Jonathan Oppenheim, Chris Akers, Jinzhao Wang, Raghu Mahadjan, Arvin Shahbazi Moghaddam, Steve Shenker and Toby Cubitt for helpful discussions. AM is supported by the Simons Foundation It from Qubit collaboration, a PDF fellowship provided by Canada's National Science and Engineering Research council, and by Q-FARM. AMK and DPG are supported by the European Union (Horizon 2020 ERC Consolidator grant agreement No. 648913), the Spanish Ministry of Science and Innovation ("Severo Ochoa Programme for Centres of Excellence in R&D" CEX2019-000904-S, and grant PID2020-113523GB-I00), Comunidad de Madrid (QUITEMAD-CM P2018/TCS-4342), and the CSIC Quantum Technologies Platform PTI-001.
## A Optimality of the naive algorithm
Recall that the full computation we wish to perform inside of the black hole is to apply \(\mathbf{U}^{\overline{\epsilon}^{0}}\), given \(\mathcal{H}_{A}\) and the compressed description of \(\overline{\epsilon}^{0}\) as input. One method to do this is to first compute the full description of \(\overline{\epsilon}^{0}\), then use this to apply the unitary. We will focus on algorithms of this form. Notice that within algorithms of this form we will always need memory of at least \(2^{n_{A}}\), since that is the number of bits needed to store \(\overline{\epsilon}^{0}\). A lower bound on the number of computational steps needed to compute \(\overline{\epsilon}^{0}\) from its compressed description is less clear immediately, but we argue for one here.
The naive classical algorithm discussed in section 3.3 for decompressing the description of the forbidden unitary \(\mathbf{U}^{\overline{\epsilon}^{0}}\) is as follows.
\(\varepsilon^{\prime}=0\)
While \(\varepsilon^{\prime}\leq 2^{2^{n_{A}}}\)
If \(p(\mathbf{T},\mathcal{E}|\varepsilon^{\prime})\leq\delta\),
Return \(\varepsilon^{\prime}\)
Else
\(\varepsilon^{\prime}=\varepsilon^{\prime}+1\)
This uses \(\Omega(2^{2^{n_{A}}})\) steps, and memory \(\Omega(2^{n_{A}})\). One would additionally need the steps and memory necessary to then apply \(\mathbf{U}^{\overline{\epsilon}^{0}}\).
Can we improve on this naive number of computational steps needed? We've seen that the function \(p(\mathbf{T},\mathcal{E}|\varepsilon)\) has at least some structure: equation 3.12 bounds the number of input values on which \(p(\mathbf{T},\mathcal{E}|\varepsilon)\) is less than \(\delta\), and shows that in fact the fraction of values where this function is less than \(\delta\) is nearly one, being \(1-\Theta(n_{P}/2^{n_{A}})\). Let's assume for a moment that this function has no additional structure aside from this condition. Thus, \(p(\mathbf{T},\mathcal{E}|\varepsilon)\) is treated as an oracle, with the promise that some fraction of its inputs return a function value less than \(\delta\).13 We count one call to this oracle as a single computational step. How complex then is it to find the first input such that \(p(\mathbf{T},\mathcal{E}|\varepsilon)\leq\delta\)?
Footnote 13: Note that, by equation 3.9, \(p(\mathbf{T},\mathcal{E}|\varepsilon)\) amounts to calculating the largest eigenvalue of the map \(\langle\Psi|\,\mathbf{T}^{\dagger}\,|\Psi_{\varepsilon}\rangle\!\langle\Psi_{ \varepsilon}|\,\mathbf{T}\,|\Psi\rangle:\mathcal{H}_{P}\to\mathcal{H}_{P}\). Hence, given access to the matrix elements \(\langle i|_{A}\,\langle j|_{P}\,\mathbf{T}^{\dagger}\,|k\rangle\!\langle\ell| _{A}\,\mathbf{T}\,|r\rangle_{A}\,|s\rangle_{P}\) one could calculate \(p(\mathbf{T},\mathcal{E}|\varepsilon)\) in time polynomial in \(d_{A}\) and \(d_{P}\). Those matrix elements may be more costly to evaluate however.
To study this, define the Boolean function
\[f(\varepsilon)=\begin{cases}1&\text{ if }\,p(\mathbf{T},\mathcal{E}|\varepsilon)< \delta\\ 0&\text{ if }\,p(\mathbf{T},\mathcal{E}|\varepsilon)\geq\delta\end{cases}\] (A.1)
Let the number of inputs where \(p(\mathbf{T},\mathcal{E}|\varepsilon)\geq\delta\) be \(N_{s}\). This can be as large as
\[N_{s}=2^{2^{n_{A}}}\frac{n_{P}}{2^{n_{A}}}\] (A.2)
while maintaining consistency with equation 3.12, so let's assume this equality holds and that this is the only structure present in \(f\). That is, the set of Boolean functions given by
equation A.1 is assumed to be the set of all Boolean functions with exactly \(N_{s}\) non-satisfying assignments. Restrict the function \(f(\varepsilon)\) to its first \(N_{s}\) possible inputs, so that now it is a function on \(\log N_{s}\) bits, and call this function \(\hat{f}(\varepsilon)\). Notice that finding the first \(\varepsilon\) where \(p(\mathbf{T},\mathcal{E}|\varepsilon)<\delta\) is at least as hard as finding a satisfying solution to \(\hat{f}(\varepsilon)\). But now the set of functions \(\hat{f}\) is precisely the set of all possible Boolean functions on \(\log N_{s}\) bits. Note that, unlike \(f(\varepsilon)\), the function \(\hat{f}(\varepsilon)\) has no restrictions on the number of inputs where it is \(1\) -- it could have any number of satisfying inputs, including zero. Thus finding the first \(\varepsilon\) such that \(p(\mathbf{T},\mathcal{E}|\varepsilon)<\delta\) is at least as hard as the unstructured search problem with the oracle defined by \(\hat{f}(\varepsilon)\). Using a quantum computer one can do no better than making \(\sqrt{N_{s}}=\Omega(2^{2^{n_{A}-1}})\) oracle calls [30].
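The reduction above can also be phrased operationally. The following minimal sketch (illustrative only, with \(f\) supplied as a hypothetical oracle) restricts \(f\) to its first \(N_{s}\) inputs and searches for a satisfying assignment by brute force, counting oracle calls; classically the worst case is \(N_{s}\) calls, while the quoted quantum lower bound is of order \(\sqrt{N_{s}}\).

```python
# Illustrative sketch only: restrict the Boolean oracle f to its first N_s inputs
# and search for a satisfying assignment, counting oracle queries.

def restrict(f, N_s):
    """The restricted oracle f_hat, meant to be queried on 0, 1, ..., N_s - 1."""
    return lambda x: f(x)

def first_satisfying(f_hat, N_s):
    queries = 0
    for x in range(N_s):        # classical worst case: N_s oracle calls;
        queries += 1            # a quantum search still needs ~sqrt(N_s) calls [30]
        if f_hat(x) == 1:
            return x, queries
    return None, queries        # f_hat may have no satisfying input at all
```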
We can also briefly comment further on the memory usage needed in this problem. As mentioned at the beginning of this section, in any algorithm where we first compute \(\overline{\varepsilon}^{0}\) and then apply \(\mathbf{U}^{\overline{\varepsilon}^{0}}\), we need \(2^{n_{A}}\) bits of memory. However, we can also consider strategies that use some algorithm for applying \(\mathbf{U}^{\overline{\varepsilon}^{0}}\) that computes each bit of \(\overline{\varepsilon}^{0}\) as it needs them, and erases each computed bit after it is used. Typically, finding such memory efficient algorithms comes at a cost to the number of steps, since many bits may need to be re-computed several times. Notice that in our setting to improve on the \(2^{n_{A}}\) memory cost one would actually also have to do better in computational steps: any algorithm using \(2^{2^{n_{A}}}\) steps will need \(2^{n_{A}}\) bits of memory, since otherwise the algorithm will revisit one of its previous configurations and the computation will fall into an infinite loop. If we believe we cannot improve on the number of steps used in the naive algorithm, then we also cannot improve on the memory.
|
2303.13727 | A Survey on Secure and Private Federated Learning Using Blockchain:
Theory and Application in Resource-constrained Computing | Federated Learning (FL) has gained widespread popularity in recent years due
to the fast booming of advanced machine learning and artificial intelligence
along with emerging security and privacy threats. FL enables efficient model
generation from local data storage of the edge devices without revealing the
sensitive data to any entities. While this paradigm partly mitigates the
privacy issues of users' sensitive data, the performance of the FL process can
be threatened and reached a bottleneck due to the growing cyber threats and
privacy violation techniques. To expedite the proliferation of FL process, the
integration of blockchain for FL environments has drawn prolific attention from
the people of academia and industry. Blockchain has the potential to prevent
security and privacy threats with its decentralization, immutability,
consensus, and transparency characteristic. However, if the blockchain
mechanism requires costly computational resources, then the
resource-constrained FL clients cannot be involved in the training. Considering
that, this survey focuses on reviewing the challenges, solutions, and future
directions for the successful deployment of blockchain in resource-constrained
FL environments. We comprehensively review variant blockchain mechanisms that
are suitable for FL process and discuss their trade-offs for a limited resource
budget. Further, we extensively analyze the cyber threats that could be
observed in a resource-constrained FL environment, and how blockchain can play
a key role to block those cyber attacks. To this end, we highlight some
potential solutions towards the coupling of blockchain and federated learning
that can offer high levels of reliability, data privacy, and distributed
computing performance. | Ervin Moore, Ahmed Imteaj, Shabnam Rezapour, M. Hadi Amini | 2023-03-24T00:40:08Z | http://arxiv.org/abs/2303.13727v1 | A Survey on Secure and Private Federated Learning Using Blockchain: Theory and Application in Resource-constrained Computing
###### Abstract
Federated Learning (FL) has gained widespread popularity in recent years due to the fast booming of advanced machine learning and artificial intelligence along with emerging security and privacy threats. FL enables efficient model generation from local data storage of the edge devices without revealing the sensitive data to any entities. While this paradigm partly mitigates the privacy issues of users' sensitive data, the performance of the FL process can be threatened and reached a bottleneck due to the growing cyber threats and privacy violation techniques. To expedite the proliferation of FL process, the integration of blockchain for FL environments has drawn prolific attention from the people of academia and industry. Blockchain has the potential to prevent security and privacy threats with its decentralization, immutability, consensus, and transparency characteristic. However, if the blockchain mechanism requires costly computational resources, then the resource-constrained FL clients cannot be involved in the training. Considering that, this survey focuses on reviewing the challenges, solutions, and future directions for the successful deployment of blockchain in resource-constrained FL environments. We comprehensively review variant blockchain mechanisms that are suitable for FL process and discuss their trade-offs for a limited resource budget. Further, we extensively analyze the cyber threats that could be observed in a resource-constrained FL environment, and how blockchain can play a key role to block those cyber attacks. To this end, we highlight some potential solutions towards the coupling of blockchain and federated learning that can offer high levels of reliability, data privacy, and distributed computing performance.
Federated Learning, Blockchain, Security, Privacy, Resource Limitations, Optimization.
## I Introduction
### _Motivation, Comparison, and Contributions_
Data-driven technologies can be limited by factors such as constrained computing resources or the need for large amounts of quality data. Earlier online methodologies transferred raw data, which created potential data privacy risks. Organizations also form data silos that are inaccessible to other departments and often incompatible with datasets within the same organization. FL is introduced as a solution for privacy preservation because it applies differential privacy to communicated data; in FL, machine learning parameters are exchanged instead of raw data. Centralized computing architectures can be directly targeted and denied because of their single-point-of-failure vulnerability. Blockchains, being a decentralized approach, can improve system resilience against adversaries. A combination of FL and blockchain has increased robustness in comparison to earlier methodologies whose performance diminished due to features such as a lack of privacy preservation. This survey examines blockchain-based FL (BCFL) as a solution for privacy preservation and secure resource management. Recent blockchain and FL surveys did not comprehensively analyze combining the two to provide a secure learning setting for a resource-constrained environment.
### _Organization of the Survey_
The rest of this paper is organized as follows. In Section **II**, we present an overview and taxonomy of FL with a comprehensive list of existing studies. In Section **III**, we review distributed optimization and ML approaches. Section **IV** presents a detailed analysis of the major challenges of FL when applied to resource-constrained devices, which is followed by Section **V**, where we discuss the potential solutions to those emerging challenges. In Section **VI**, we present the existing FL applications, and in Section **VII**, we highlight the future research directions in the FL-based IoT domain. Finally, Section **VIII** concludes the paper.
## II Background Study
Online computing environments can put private information at risk during communication. Ensuring data privacy throughout communications is important and can be approached with FL, which transmits user data through machine learning updates instead of raw data. Classical data-sharing techniques can be improved by FL and its privacy protection mechanisms. Blockchain, a decentralized approach, offers additional security and resources for applications such as FL. BCFL is an appropriate solution for resource-constrained computing environments due to its peer-to-peer computing, network participation incentives, validation protocols, and other resource-preserving mechanisms. The following section details mechanisms within FL and blockchain.
### _Brief Introduction to Federated Learning_
FL, proposed by Google in 2016, protects users' privacy by allowing devices to collaboratively train a machine learning model using local data. FL can process model updates synchronously or asynchronously. FL is a collaborative machine-learning architecture that keeps data on the device; storing data locally reduces the need for it to be stored online in a cloud. Sensitive user data is protected by adding noise to personally identifiable information. The goal is for data to be unidentifiable while preserving the qualities needed for machine learning model optimization, and finding this balance improves performance. FL, as a form of distributed machine learning, can significantly preserve clients' private data from being exposed to external adversaries [1].
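As a minimal sketch of how one synchronous FL round proceeds (illustrative only; the toy local update and the variable names are our own, not from any specific FL framework), clients train on their local data and the server aggregates only the resulting parameters, weighted by local dataset size:

```python
import numpy as np

# Minimal sketch (illustrative only) of one FL round: clients train locally and
# only model parameters, never raw data, are shared with the central server.

def local_update(global_weights, local_data, lr=0.1):
    """Placeholder local training step; a real client would run SGD on its data."""
    gradient = np.mean(local_data, axis=0) - global_weights   # toy 'gradient'
    return global_weights + lr * gradient

def federated_averaging(global_weights, client_datasets):
    """Server aggregates client models, weighted by local dataset size (FedAvg-style)."""
    total = sum(len(d) for d in client_datasets)
    updates = [local_update(global_weights, d) for d in client_datasets]
    return sum((len(d) / total) * w for w, d in zip(updates, client_datasets))

# Example: three clients with different amounts of data, a 2-parameter model.
clients = [np.random.randn(5, 2), np.random.randn(8, 2), np.random.randn(3, 2)]
weights = np.zeros(2)
for _ in range(10):                 # repeated synchronous rounds, as in Figure 1
    weights = federated_averaging(weights, clients)
```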
Data structures encourage distinct FL implementations for optimal data management. Each FL structure looks at feature spaces and datasets differently. Three different types of FL implementations include Horizontal FL, Vertical FL, and Federated Transfer Learning.
* _Horizontal FL_ - Data distribution contains a consistent set of features and different samples. For example, a database containing predefined rows of relevant user data has different entries for each unique user. Some examples of Horizontal FL are predicting smartphone user behavior, personalized recommendations, identifying risk factors or predicting patient diseases, optimizing manufacturing product outcomes, and recognizing fraudulent transactions [2].
* _Vertical FL_ - Data distribution contains recurring samples with different features. For instance, consider a bank and a superstore in the same area. Most of their customers may be the same, but their business structure, i.e., the feature space, is different, and thus the user-space intersection is quite large [3]. Some examples of vertical FL are predicting the likelihood of patient admission considering different types of health records, lab results in hospitals, customer shopping behavior, and likelihood to make purchases analyzing heterogeneous data types of multiple retail stores [4].
* _Federated Transfer Learning (FTL)_ - Combination of horizontal and vertical FL that contains different features and samples. The knowledge of an existing machine learning model is transferred to another model for improved performance. Transfer learning reuses learned lessons and re-purposes information towards a related problem. FTL currently has applications in wearable healthcare [5], autonomous driving, classification of EEG signals [6], industrial fault diagnostics [7], and image steganalysis [8, 9].
Traditional FL has a single point of failure limitation. Figure 1 showcases an uninterrupted online learning environment, although a denial of service could halt the repeated process. Internet of Things (IoT) devices differ in performance and reliability. The FL training process could be computationally expensive depending on available resources, which may require devices to drop out from the learning process. Devices may need motivation to participate in FL training, which can be encouraged by incentives. Many of the pitfalls of FL can be mitigated by blockchain mechanisms.
### _Brief Introduction to Blockchain_
Blockchain technology connects data blocks into a digital ledger similar to a database. The blockchain ledger is distributed throughout the network and considered decentralized; thus, third-party intermediaries are not required for blockchain processes. Each block in a blockchain contains relevant transaction history, such as unique identifiers, block computing costs, and other network-related information. Network transactions are directly recorded into the blockchain once authenticated. Peer-to-peer transactions allow workloads to share resources and information. Peers can participate as suppliers and consumers of resources. The authors in [10] proposed that the nodes or agents involved within a blockchain network are called participants and miners. The participants are the agents who perform any transaction, and the miners are responsible for validating or rejecting a block [10]. In a blockchain, network processing happens simultaneously, creating a large optimization surface area. Resources such as processing power, storage space, and available network bandwidth all have to be considered for resource-constrained computing environments.
Many processes operate at the same time within a blockchain. When blocks are being generated at a high rate, performance can be lower. Merge issues can appear when two or more blocks with the same hash information are successfully mined simultaneously. Merge issues can create forks in the blockchain, causing alternative chains to emerge until solved by consensus. Having a proper consensus protocol can discourage multiple versions of a blockchain. A rule of thumb is to reference the largest chain of blocks as the main blockchain to deter conflicts. Protecting the integrity of blockchain allows historical data to sync efficiently.
Blockchain architectures contain several layers:
* The Hardware layer is necessary for hosting a blockchain architecture. The core infrastructure requires computers, graphics processing units (for miners), and miscellaneous data storage. Computing resources can be emulated through virtual machines as a process.
* The Network layer is responsible for peer-to-peer communication. Block transaction data is publicly visible in a public ledger. As a result, public blockchain architectures can share processes with unknown participants that can directly interact with one another.

Fig. 1: Introduction to FL cycle.
* The Consensus layer authenticates transactions based on agreed-upon protocols. This layer is responsible for validating transactions. If a fork in the blockchain emerges, the consensus layer handles disputes using protocol logic.
* The Data layer includes blocks and block transactional records. Blocks specifically contain information about: the previous block hash, the timestamp of the block creation, the generated block hash, and the transaction nonce, which is used for verifying transactions.
* The Application layer contains the front end of the blockchain and is considered the visible layer for the user. This layer contains an application programming interface (API), displays smart contracts, and includes basic configuration for quickly exchanging information with other blockchains. Blockchain interoperability allows similar blockchain structures to communicate efficiently.
#### II-B1 Blockchain Fundamentals
Blockchain architectures use cryptographic hash functions for transactions. Miners solve cryptographic puzzles to generate blocks and may form mining pools to boost performance. Nonces (numbers only used once) are values that miners must calculate before a cryptographic block is solved. These hash puzzles are computationally demanding, requiring efficient computing techniques to solve in a reasonable time. The consensus layer authenticates that block conditions are met before integrating blocks into the blockchain. Blockchain agreements are transacted through smart contracts, rule-based digital agreements between two parties (i.e., miners and participants) that automatically execute when conditions are met. Once a block is authenticated, a new block is sought for mining. The authors in [10] proposed that typically in a blockchain environment, constant creation of new blocks, even with no transaction, is crucial for maintaining security as it prevents malevolent users from creating a longer, tampered blockchain [10].
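As a minimal sketch of the nonce search described above (illustrative only, not tied to any particular blockchain implementation), a miner repeatedly hashes the previous block hash, the transaction data, and a candidate nonce until the hash satisfies a difficulty condition:

```python
import hashlib

# Minimal proof-of-work sketch (illustrative only): a miner searches for a nonce
# such that the block hash has a required number of leading zeros; the nonce and
# the previous block hash are then recorded in the new block.

def mine_block(previous_hash: str, transactions: str, difficulty: int = 4):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        payload = f"{previous_hash}|{transactions}|{nonce}".encode()
        block_hash = hashlib.sha256(payload).hexdigest()
        if block_hash.startswith(prefix):      # consensus condition met
            return {"prev": previous_hash, "nonce": nonce, "hash": block_hash}
        nonce += 1                             # otherwise try the next nonce

genesis = mine_block("0" * 64, "genesis transactions")
block_1 = mine_block(genesis["hash"], "client A -> server: model update")
```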
#### II-B2 Blockchain Categories
Generally, there are three types of blockchains: public, private, and consortium. Each type of blockchain involves different permissions and participation specifics. Different types of consensus and authority mechanisms are included in each blockchain type.
* The most well-known blockchain is the public blockchain, which is permissionless and open to all participants. Public blockchain ledgers are publicly available. Each participant has equal permissions in this peer-to-peer networking environment. Open participation in blockchains has been known to affect available resources in many ways. Cases where open participation leads to a large number of dishonest participants can cause blockchain performance to diminish. Examples of popular public blockchains include cryptocurrency platforms such as Ethereum or Bitcoin.
* Private blockchains are commonly referred to as permissioned blockchains. Only authorized participants are allowed to join and view private blockchain ledgers. The central entity of the private blockchain has permission to alter and manage protocols. The authors in [10] mention that the central entity has the power of validation and can change any rule of the blockchain (e.g., block consensus) [10].
* Consortium blockchains include public and private blockchain qualities. Consortium blockchains are partially decentralized, permissioned and transparent. Selected authorities have increased permissions compared to the public. Increased permissions for consortium authorities include the ability to participate in block validation. The authors in [11] utilized a consortium blockchain for pre-selecting a limited number of trusted miners to maintain the distributed ledger of an intelligent transportation system.
Consortium or permissioned blockchains are suitable when the integrity of participants is in question. Dishonest participants may sabotage others to compete for incentives or resources. Table I summarizes various papers that consider participation incentives and resource constraints. The following section examines potential attacks, vulnerabilities, and threats found in BCFL environments.

Fig. 2: Blockchain architecture layers.
### _Threat Models and Integration Motivation_
FL includes privacy protection mechanisms that deter unwanted access to data. Meticulous attack attempts can breach FL security in different ways. Attacks can occur during device training, parameter upload and download, central aggregation, and post-aggregation phases. A range of various attacks can exploit FL vulnerabilities during each stage:
#### II-C1 Inference attacks
During inference attacks, an adversarial agent attempts to capture sensitive information throughout training. Training data, participant data, and label data are recorded without permission in hopes of correlating encrypted information with real values. Generative Adversarial Networks (GANs) can create powerful inference attacks. Inference attacks fall into five general categories:
1. Membership inference attack - Attackers determine if an individual was present in the training dataset. The authors in [26] constructed a membership inference attack that exploits the observation that machine learning models often behave differently on the data they were trained on versus the data they see for the first time [26]. Adversaries attempt to infer from specific samples that can be captured from model outputs. The objective is for adversaries to determine if a specific record exists in the model training dataset.
2. Data properties inference attack - Data properties of a machine learning model are inferred using the parameters shared during model training. The particular data property is fully investigated to learn the frequency of the property within training data. For example, the authors in [27] demonstrated that a classifier that recognizes smiling faces also leaks information about the relative attractiveness of the individuals in its training set [27]. For example, a data properties inference attack on a model trained to screen FL participants by threshold may mistakenly reveal hardware or host types within the FL architecture.
3. Data samples and labels inference attack - An adversary uses inference to capture targeted data samples and labels. FL model classes and participant training input labels can be reconstructed for inference. A malicious participant calculates distributed gradients to infer private information about other participants or model parameters. The authors in [28] demonstrated a label-only inference attack that could capture private information from machine learning models without access to confidence scores.
4. Model inversion attack - Model inversion attacks aim to reconstruct private information from training data. Attackers with access to model parameters can look at the model's confidence score of predicted classes for inference. Successful model inversion attacks infer realistic representations from training data. The authors in [29] demonstrated the applicability of model inversion attacks on decision trees for lifestyle surveys as used on machine-learning-as-a-service systems and neural networks for facial recognition [29].
5. Model extraction attacks - An adversarial attack that aims to steal exact model parameters. Hu and Pang [30] mention how model extraction attacks aim to duplicate a machine learning model through query access to a target model. Hu and Pang studied two attacks on GANs: fidelity extraction attacks and accuracy extraction attacks [30]. A strategic position can be extracted from learned functionality. Transfer learning attacks can occur when attacks are trained on vulnerabilities of similar models with the same framework, then transferred to attack the target model. Extracting knowledge of the target model's training data and functionality increases future attack effectiveness.
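As a minimal sketch of the model extraction idea in item 5 (illustrative only; the target model and surrogate here are toy stand-ins, not the attacks of [30]), an adversary with query access collects input-output pairs and fits a surrogate that mimics the target's behavior:

```python
import numpy as np

# Schematic sketch (illustrative only) of a model extraction attack: the adversary
# queries a black-box target model and fits a surrogate on the (query, label) pairs.

def target_model(x):
    """Stand-in for the victim model exposed only through query access."""
    return (x @ np.array([1.5, -2.0]) > 0).astype(int)

queries = np.random.randn(500, 2)          # adversary-chosen query inputs
labels = target_model(queries)             # only the outputs are observed

# Fit a simple surrogate (nearest class centroid) that mimics the target's behavior.
centroids = {c: queries[labels == c].mean(axis=0) for c in (0, 1)}

def surrogate(x):
    dists = {c: np.linalg.norm(x - mu) for c, mu in centroids.items()}
    return min(dists, key=dists.get)
```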
#### II-C2 Poisoning attacks
Poisoning attacks include misleading data injected by adversaries as inputs. Poisoning attacks can be classified into two categories: (1) model poisoning and (2) data poisoning attacks.
1. Model poisoning attacks - In model poisoning attacks, the attacker reduces the model's performance on targeted sub-tasks (e.g., classifying planes as birds) by uploading "poisoned" updates [31]. Poisoning attacks change the model's weights and biases, leading to misclassifying data. The model's confidence decreases when interacting with poisoned data.
* Transmitted gradients and parameters are manipulated throughout the FL training process to reduce global model performance. As a result, global model accuracy is reduced by adversaries injecting malicious updates. In addition, misclassification can result from gradient manipulation, allowing adversaries to increase the success rate of their attacks.
* An adversary attempts to minimize the difference between correct and incorrect training updates during training. The strength of this attack increases when multiple participants become dishonest and poison their parameters. Additionally, adversaries may bribe or collude with multiple participants to increase misclassification and lower detection mechanisms. The objective is for eventual harmful updates to go undetected based on the similarity of correct and incorrect training updates.

Fig. 3: Classifying FL threats and attacks.
2. Data poisoning attacks - Training data is altered to impact the FL model negatively. During training, contaminated data is introduced to corrupt the central aggregator. Data poisoning attacks inject malicious data into the training dataset before the learning process starts [32]. Such an attack allows adversaries to disguise harmful data, and eventually, the large amounts of poisoned data increase training times.
* Label manipulation attacks cause the model to mislabel training data. Label flipping includes training sample labels being flipped around to lower the judgment of the model. Classification errors can result from label flipping. Label flipping is an example of dirty-label attacks. In contrast, clean-label attacks are created when an adversarial makes poisoned training data indistinguishable from non-poisoned training data. Clean-label attacks are difficult but extremely powerful.
* A backdoor attack tricks the model into associating a backdoor pattern with a specific target label so that, whenever this pattern appears, the model predicts the target label, otherwise, behaves normally [33]. Backdoor attacks are also referred to as Trojan attacks. Adversaries inject clean or dirty backdoor updates into data samples in the training phase. When the global model performs aggregation, the model performance on the specific input is reduced depending on the number of undetected backdoors. When the backdoor is triggered, incorrect predictions occur on the targeted task while the main task performance appears untampered. Backdoor attacks compromise a subset of samples exchanged with adversaries during local training.
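As a minimal sketch of the two data-poisoning variants just described (illustrative only; the trigger pattern and parameters are our own choices), label flipping rewrites a fraction of source-class labels, while a backdoor stamps a trigger pattern onto inputs and relabels them with the attacker's target class:

```python
import numpy as np

# Schematic sketch (illustrative only) of label flipping and a simple backdoor
# trigger injected into a client's local training set before FL training starts.

def flip_labels(labels, source=0, target=1, fraction=0.3, rng=np.random):
    """Dirty-label attack: flip a fraction of 'source' labels to 'target'."""
    poisoned = labels.copy()
    idx = np.where(labels == source)[0]
    chosen = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    poisoned[chosen] = target
    return poisoned

def add_backdoor(samples, labels, trigger_value=9.9, target=1):
    """Backdoor attack: stamp a trigger pattern on inputs and relabel them."""
    triggered = samples.copy()
    triggered[:, -1] = trigger_value        # trigger pattern in the last feature
    return triggered, np.full(len(labels), target)
```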
#### II-C3 Evasion attacks
Evasion attacks - An adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples [34]. Evasion attacks can bypass detection with carefully manipulated attack samples; the input looks unaltered to humans but is deceiving to the machine learning model. Evasion attacks contaminate training data during aggregation with undetectable misleading data.
#### II-C4 Byzantine attacks
Byzantine attacks - In a Byzantine attack, a malicious device is processed alongside honest devices. Byzantine attacks aim to harm consensus and decrease model performance. In FL, a malicious attacker may control multiple clients, known as Byzantine users [35]. Byzantine users can upload fake data due to unreliable communication channels, corrupted hardware, or malicious attacks. This leads to the global model being manipulated by attackers and failing to converge [35]. Larger numbers of Byzantine users increase the effectiveness of Byzantine attacks. In the worst case, a 51% attack could occur when the majority of participants are dishonest.
#### II-C5 Free-riding attacks
Free-riding attacks - Within FL environments, the global model is distributed to all participants regardless of their contributions. "Free-riders" may emerge that do not contribute their fair share during the training process. Free-riders create free-rider attacks that lower model performance due to fake model updates being exchanged. A dishonest participant may become a free rider to withhold private information, reduce resource costs, or receive participation rewards while lacking participation requirements. The authors in [36] mention two types of free-riders. Plain free-riders, which do not update the local parameters during the iterative federated optimization, and disguised free-riders, which employ sophisticated disguising techniques relying on stochastic perturbations of the parameters [36].
### _Threats during FL phases_
A range of various attacks can compromise FL security. FL attacks can occur during four distinct FL phases: the training phase, parameters exchange phase, parameters aggregation phase, and prediction phase. Each phase has a different attack surface, most susceptible to model poisoning and inference attacks. While FL performs continuous learning, each phase is repeated in multiple iterations. As a result, undetected attacks strengthen as resources are drained from the environment. Four distinct FL phases and potential threats:
1. Training phase - During the training phase, training data is sent to participants for learning. Local training data is not validated during this phase, allowing adversarial attacks. Early attacks in the training phase occur when the participant's reputation is unavailable. Malicious participants can pollute the training phase by conducting poisoning attacks. Adversaries may target the model performance for misclassification or overall training times through misleading data. Increasing amounts of adversarial data negatively impact model effectiveness.
2. Parameters exchange phase - The parameter exchange phase consists of A) participants downloading the global model parameters or B) participants uploading the local model parameters. External adversaries may attempt to capture sensitive information during parameter exchange through inference attacks. Inference attacks during this phase can steal parameters (model extraction), combine data to reconstruct the dataset, and estimate the distribution of continuously exchanged data. In addition, eavesdropping attacks can occur when communications are intercepted and potentially compromised online.
3. Parameters aggregation phase - During the parameters aggregation phase, the central server performs a weighted
average of participant parameters. Misleading parameters during this phase affect system-wide performance. If the central model architecture is well understood, adversaries can target the central server directly. The central server coordinates the aggregation of online learning and may learn to become malicious due to negative influences such as poisoning attacks. Fig. 4 showcases the danger of adversarial updates being successfully processed into the parameter aggregation phase, thus lowering central server performance. Preserving central server effectiveness is important for the health of the FL architecture. Honest-but-curious central servers attempt to learn all possible information, overstepping boundaries through conducting inference or model poisoning attacks. Inefficient central servers incorrectly redistribute information to participants.
4. Prediction phase - The prediction phase occurs after the global model has been deployed to devices. Adversaries may continually obtain the deployed global model to perform a combination of evasion and inference attacks. Evasion attacks deceive end devices, causing predictions to be inaccurate. Inference attacks on deployed predictions attempt to extract private information such as participant data, model parameter data, and training data.
### _Frog-boiling attacks in online environments_
Frog-boiling attacks reveal the limitations of anomaly detection in online environments. The authors in [38] proposed the frog-boiling attack, where an adversary disrupts the network while consistently operating within the threshold of rejection [38]. Frog-boiling refers to the phenomenon that a frog placed in hot water will instantly jump out. In contrast, a frog placed in slightly warm water that gradually heats up will remain in the water and eventually boil to death. Adversaries in online environments may continually attempt smaller attacks to learn the system's state and eventually disrupt the network. The authors in [38] mention three variants to the frog-boiling attack: the basic-targeted attack, the network-partition attack, and the closest-node attack [38]. Frog-boiling attacks can avoid outlier detection and increase network-wide latency.
#### Iii-E1 Insider and outsider attacks
FL environments can be threatened by insider and outsider attacks. Insider attacks can be launched by either the FL server or the participants in the FL system [39]. Insider attacks affect global model aggregation performance and can infer real values from noisy data. On the inside, a dishonest participant may attempt to decode FL updates through malicious tactics. Honest participants on the inside may be impersonated or bribed by external adversaries. In comparison, outsider attacks do not have direct access to the internal communications of the network. From the outside, adversaries may eavesdrop on communications between the client and the server. Data transferred wirelessly can be intercepted, leading to compromised FL updates.
#### Iii-E2 Device latency
IoT devices are heterogeneous and fluctuate in resource constraints. Each device may have different amounts of noise depending on the environment. An FL central server waiting for participants to complete the training phase can be delayed by slower devices. Devices may have unreliable memory bandwidth or unsatisfactory hardware. Delays at the device level can cause delays throughout a synchronous FL architecture. According to Issa et al. [17], the speed of rounds in synchronous FL is restricted to the speed of the slowest device. This causes a "straggler effect", which can cause inefficient processing [17].
#### Iii-E3 Consensus and dishonesty
Consensus performance is affected by the number of dishonest participants. The authors in [40] stated that the blockchain network security level is directly proportional to the amount of hash computing power that supports the blockchain. As the miners increase in the mining process, it becomes more difficult for an attacker to attack the blockchain [40]. The amount of honest and dishonest participants directly influences consensus mechanisms in FL and blockchain. Adversaries may employ Sybil attacks, where multiple fake identities are created to throw off consensus mechanisms. The weight of dishonest participants can cause consensus to malfunction. In the worst case, a 51% attack can control network consensus.
#### Iii-E4 Additional noise against adversaries
Additional noise creates a trade-off between levels of security and central server performance. The authors in [13] proposed that Federated learning intrinsically protects the data stored on each device by sharing model updates, e.g., gradient information, instead of the original data. However, model updates, which are based on original data, can reveal sensitive information [13]. Adding additional noise can increase the uncertainty of adversaries and reduce adversarial attack effectiveness. Jia et al. [41] proposed to add a carefully crafted noise vector to a confidence score vector to turn it into an adversarial example that misleads the attacker's classifier [41].
Table III contains details on how columns in the expanded table of FL attacks and costs were evaluated and scored. Interoperability was evaluated as the compatibility of attacks in an FL environment. In most cases, adversaries use high amounts of resources to conduct attacks. Some attacks explicitly drain the resources of the central server, while other attacks avoid server resource consumption to evade detection. The prevention column compares detection levels and proposed defenses for specified attacks. The impact column considers
Fig. 4: Potential threat observations within the four phases of federated learning.
the importance of privacy preservation and architecture robustness. Well-planned combinations of attacks increase attack effectiveness.
### _Vulnerabilities of FL and BCFL_
The attack surface of FL can be reduced by combining architectures with blockchains. FL has a large attack surface because the central server is a single point of failure. An adversary could directly attack the central server to cause instability throughout the FL network. System-wide latency occurs when communications between devices and the central server are unreliable. Blockchain-based FL offers devices distributed reliability for computing performance. When a device signals availability, the application layer of the blockchain automates a smart contract for a conditional agreement. Smart contracts on blockchains are immutable, so once created, a smart contract cannot be modified. Imperfect smart contracts may therefore contain unwanted loopholes or vagueness.
Distributed machine learning architectures such as BCFL are vulnerable to outsider attacks while communicating over internet connections. Communication attacks such as eavesdropping attacks, man-in-the-middle attacks, and spoofing attacks are potential threats to the exchange of private information. A denial of service on network communications can be highly disruptive. Flooding a network with malicious data is resource-draining for distributed computing architectures.
Adversaries can commit poisoning attacks that reduce performance by sending deceptive inputs to mislead calculations. Dishonest participants redirect optimal resource allocations by providing inaccurate data. Byzantine users who upload fake data can purposely reduce global model performance. A 51% attack occurs when the number of dishonest participants outweighs the number of honest participants; thus, most dishonest participants can gain control of an unbalanced network.
The authors in [42] mention that the cost of creating a 51% attack is surprisingly low if hash power is abundantly available [42]. In FL, a 51% attack can mislead FL training when the majority of participant data is untrustworthy. In blockchain, a 51% attack can control the consensus mechanisms that govern transaction authentication. Blockchain transaction authentication can be delayed when conflicting transactions are completed at the same time. A race attack creates a potential fork in the blockchain within the short window of time required to authenticate transactions. Selfish mining pools may purposely generate or withhold blocks simultaneously to gain an advantage over other miners. Transaction reversal is a dishonest mining tactic in which the processing of previously generated blocks is reversed to cause blockchain delays.
### _Integration Motivation of FL and Blockchain_
Blockchains have inherent interoperability, allowing blockchains to communicate with other blockchains efficiently. Blockchain interoperability increases robustness when useful information is transferred between blockchains. For example, if a blockchain layer undergoes a denial of service attack, a similar blockchain could offer alternative services to supplement security. Similarly, participant data such as reputation can be communicated between blockchains to increase scalability. Blockchain can solve the problem of trust establishment among distributed systems through distributed node verification, and consensus mechanisms [43][44].
Optimal device selection reduces the dangers posed by adversaries. A device selection phase can reduce resource costs associated with unfavorable behavior. Ideally, the behavioral patterns of an honest and a dishonest participant are largely different. Behavioral auditing of malicious patterns from questionable participants can increase security. The authors in [45] proposed the Reject On Negative Impact (RONI) defense to determine whether a candidate training instance is malicious: if adding the candidate instance to a training set causes the resulting classifier to produce substantially more classification errors, the instance is rejected as detrimental in its effect [45].
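The following Python sketch illustrates the RONI idea under stated assumptions: the classifier (logistic regression), the validation split, and the tolerance threshold are illustrative choices of ours, not taken from [45].

```
# Minimal RONI-style check: reject a candidate instance if it degrades
# validation accuracy by more than a chosen tolerance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def roni_accepts(X_train, y_train, x_cand, y_cand, X_val, y_val, tol=0.01):
    base = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    base_acc = accuracy_score(y_val, base.predict(X_val))

    X_aug = np.vstack([X_train, [x_cand]])
    y_aug = np.append(y_train, y_cand)
    aug = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
    aug_acc = accuracy_score(y_val, aug.predict(X_val))

    # Accept only if the candidate does not hurt validation accuracy by more than tol.
    return (base_acc - aug_acc) <= tol
```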
## III Learning on Resource-Constrained Devices
Latency on IoT devices can increase with the distance to data centers. Solutions to device latency include moving cloud resources closer to devices, as seen in mobile edge computing (MEC). The authors in [21] determined that MEC servers are now becoming a weak point due to data privacy concerns and high data communication overheads. Mobile devices may be subject to attack if not properly secured and may require additional security when deploying important communications in MEC. The authors in [43] mention how blockchain can solve the security problem of edge computing.
As we know, IoT devices may have limited resources, and quantifying the expected resource usage can help reduce excess resource expenses [46]. If a device is showing
Fig. 5: Various vulnerabilities found in FL and BCFL.
signs of being a straggler, device dropout can be applied to reduce communication costs. However, if there is a large number of stragglers, simply dropping them can have a negative impact on the overall learning process. The authors in [47] developed the FedAR algorithm, which selects only effective and trustworthy agents for the learning process. In addition, the FedPARL framework is specifically designed for resource-constrained FL devices; it can prune large models and allows feasible local task allocation for the edge devices [48]. To deal with the heterogeneous resources of the agents, the authors in [49] proposed an FL model that can generate multiple global models considering the resource status of the agents and accelerate the learning process. Moving forward, increasing device participation can cause diminishing returns if devices
perform poorly during training. Adaptive methodologies, such as adaptive FL, can help balance resource costs. The authors in [24] proposed an adaptive FL control algorithm based on gradient descent for reducing computational resource budgets. The proposed control algorithm determines the best trade-off between local updates and global parameter aggregation to minimize the loss function under a given resource budget [24]. The authors in [12] proposed AFAFed, an adaptive FL solution for IoT devices affected by packet-loss communication. One of the key features of AFAFed is the implementation of distributed sets of local adaptive tolerance thresholds and global centralized adaptive fairness coefficients. AFAFed's features allow the algorithm to find the right personalization vs. fairness trade-off in various resource-constrained computing environments.
### _Verification of Local Model Updates_
Verifying local model updates helps prevent resource-consuming1 data from being aggregated into the global model. The authors in [1] found that blockchain-assisted decentralized FL frameworks can prevent malicious clients from poisoning the learning process and thus provide a self-motivated and reliable learning environment for clients [1]. Blockchain consensus mechanisms strengthen network agreement, thus deterring unreasonable data from being aggregated. The authors in [50] mention four different blockchain consensus protocols for confirming the correctness of a global model and executing global model aggregation. Blockchain consensus protocols include Proof of Work (PoW), Proof of Stake (PoS), Raft, and practical Byzantine fault tolerance [50].
Footnote 1: Resource-consuming can be considered negative, harmful, or malicious data.
* Proof of Work (PoW) discourages harmful blocks from being added to the network by requiring miners to complete a cryptographic puzzle before verification; a minimal sketch of this nonce search is given after this list. The authors in [51] note that PoW is the most famous algorithm for Bitcoin. Although it works effectively for protection from Sybil attacks and data manipulation, it is expensive because of the required hash power and long block interval [51].
* Proof of Stake (PoS) randomly selects data validators based on a total stake or overall impact on the network. Individuals with high amounts of holdings are given the opportunity to validate and approve new blocks being added. Validators in PoS create validator nodes to assist in network consensus. The validator's assets are locked while staking to help support network consensus. Validators receive incentives for vouching for the legitimacy of transactions.
* Raft supports consensus within decentralized blockchain environments. Raft improves the understandability of verification by breaking the consensus problem down through decomposition. According to Ongaro and Ousterhout [52], Raft separates the key elements of consensus, such as leader election, log replication, and safety, and it enforces a stronger degree of coherency to reduce the number of states that must be considered [52].
* Practical Byzantine fault tolerance (PBFT) ensures a decentralized network continues operating even if a portion of nodes fail or act maliciously. The authors in [53] mention that PBFT is an optional consensus protocol for consortium blockchains scenarios, where strong consistency is required [53]. Consortium BCFL can prevent the straggler effect in FL, where the speed of the slowest device causes network-wide delays.
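A minimal Python sketch of the PoW puzzle referenced above: a miner repeatedly hashes the block payload with a nonce until the digest meets a difficulty target. The payload string and difficulty value are illustrative assumptions, not parameters of any particular BCFL system.

```
# Minimal proof-of-work sketch: search for a nonce whose SHA-256 digest
# starts with `difficulty` zero hex digits. Verification needs a single hash.
import hashlib

def mine(block_payload, difficulty=4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_payload}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev_hash|model_update_hash|timestamp")  # illustrative payload
```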
### _Global Model Aggregation_
Global model aggregation continually occurs online. Blockchains can provide decentralized applications, such as smart contracts, to achieve global model aggregation without a central server. The distributed training of the global model can encourage dishonest participants to collude with others and launch a coordinated attack. The authors in [54] suggested that past clients can coordinate with current or future participants to participate in attacks against current or future updates to the global model [54]. Global model updates may not change much depending on context. While global model parameters are not changing much, those parameters can be frozen to reduce communication costs. The authors in [55] found that it is unnecessary to always synchronize the full FL model in the entire training process because many parameters gradually stabilize prior to the ultimate model convergence [55]. Devices with low bandwidth benefit from frozen parameters, which reduce device resource costs. Yang et al. [50] propose a decentralized
Fig. 6: Condensed workflow with integrated blockchain-based FL.
blockchain-based FL architecture that can resist failures or attacks on servers and devices by building trustworthy global model aggregation, with secure model aggregation performed among multiple servers through a blockchain consensus protocol.
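As a point of reference for this aggregation step, the following Python sketch shows a FedAvg-style weighted average of client parameter vectors, with each client weighted by its local sample count. The variable names and values are illustrative; in a BCFL deployment this computation would be executed or validated by the consensus layer rather than by a single central server.

```
# Minimal sketch of weighted global model aggregation (FedAvg-style).
import numpy as np

def aggregate(client_params, client_sizes):
    """client_params: list of 1-D parameter vectors; client_sizes: samples per client."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_params)              # shape: (num_clients, num_params)
    return np.tensordot(weights, stacked, axes=1)  # weighted average over clients

global_params = aggregate(
    client_params=[np.array([0.2, -1.0]), np.array([0.4, -0.8])],  # illustrative updates
    client_sizes=[120, 380],
)
```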
### _Incentive Mechanism_
Standard federated learning does not inherently reward local device participation. Blockchain can offer an incentive mechanism to improve participation. The authors in [56] mention that there can be heterogeneous devices with different computational and data resources in an FL system. Therefore, participants with better resources should receive extra benefits compared to participants with little contribution [56]. Honest contribution is advantageous to all participants and should be rewarded appropriately. In addition, the authors in [57] mention how the blockchain's incentive mechanism can track the contribution of each data provider towards the globally optimized model so that participants can be treated fairly, thereby attracting more data sharers. Blockchain-based federated learning architectures such as BlockFL [19] enable exchanging devices' local model updates while verifying them and providing the corresponding rewards [19]. The authors in [58] found that BCFL enables all clients to verify the learning results recorded on the blockchain, whereby distributed clients can be rewarded with incentives to participate and untrusted learning models can be detected.
### _Privacy and Security Protection_
Adding noise to data can reduce overall efficiency. Too much noise diminishes performance, while too little noise leaves the system potentially insecure. The level of noise therefore carries a trade-off that should be considered. FL can ensure data privacy by exchanging only model parameters with devices. Local device data is never communicated online, boosting the security of the data. Multiple approaches examine this noise trade-off. Truex et al. [23] proposed an FL approach that utilizes both differential privacy and secure multiparty computation (SMC). [23] combined differential privacy with secure multiparty computation so that the growth of noise injection can be reduced as the number of parties increases, without sacrificing privacy.
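The trade-off can be made concrete with a small Python sketch: each local update is clipped and perturbed with Gaussian noise before being shared, in the spirit of the differentially private FL approach discussed above. The clipping norm and noise multiplier are illustrative assumptions; calibrating them to a formal (epsilon, delta) privacy budget is beyond this sketch.

```
# Minimal sketch: clip a local update and add Gaussian noise before sharing it.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=0.8, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise  # larger noise_multiplier -> more privacy, lower utility
```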
Security measures include anticipating anomalous behavior. Typical technologies for defending FL security include malicious participant detection, and malicious impact mitigation [32]. Irregular behavior can be indicative of an adversary or potential machine learning attack. The authors in [59] mention six anomaly detection models for effectively detecting anomalous behavior: generative architectures, classification-based models, clustering-based models, nearest neighbor models, statistical & analytical models, and reinforcement learning-based models [59]. In addition, participant contribution can be measured to classify behavior between honest vs. dishonest participation.
## IV Existing Research Challenges and Future Research Directions
### _Potential Solutions of Existing Research Challenges_
Prior works demand high resource consumption in large-scale data-driven environments. FL, as a form of distributed computing, can process participation from different IoT devices, thus allowing a variety of resourceful participation. Resourceful participation faces three main challenges:
1. High communication costs can discourage participation when networks are unreliable. Communication delays from service providers may require a device with slower speeds to drop out. The authors in [15] examine compressed communication to reduce communication overheads in BCFL. Cui et al. [15] created a communication-efficient framework that could reduce the training time by about 95% without compromising model accuracy. Approaches that reduce communication overheads are advantageous for lowering participation requirements.
2. Trade-offs between security and performance exist. Increasing noise against participants boosts security but also lowers performance. Choosing the best devices for participation can reduce the dependence on trade-offs between security and performance. Consortium blockchains are present solutions for the possible diminishing returns of open participation. Consortium blockchains screen participating devices to encourage optimal participation selection of trusted devices. The authors in [60] propose a budgeted number of candidate clients chosen from the best candidate clients in terms of test accuracy to participate in the training process. Optimal selection strategies can verify participants are resourceful and honest before inviting these participants into training.
3. Dishonest participation records are not shared between blockchains. Communicating participation records between blockchains can improve scalability. Records of dishonest behavior can warn similar blockchains about a participant's integrity. For example, a dishonest mining pool with evidence of performing mining attacks against a blockchain can be blacklisted. Blockchain interoperability can thus allow participants to be chosen thoughtfully.
BCFL can improve resource-constrained environments, although being considerate of optimal participation further increases overall resource effectiveness. Li et al. [20] suggest that blockchain can further improve FL security and performance, besides increasing its scope of applications.
Fig. 7: Example BCFL figure with example incentive mechanism for residential buildings.
### _Future Research Directions_
Future research can be conducted in the areas of BCFL scalability, quantum resilience and AI alignment. The aforementioned directions have challenges that may need to be addressed in the future. Figure 8 displays a timeline of important BCFL research directions. Each direction can disrupt resource management, requiring revolutionary system designs:
* Scalability can cause diminishing returns when resource requirements exceed reasonable blockchain computing limits. The authors in [10] state that as the number of transactions increases, it requires a larger blockchain size to store those transactions. However, mining a large blockchain may require more resources, which would be difficult for IoT devices [10]. Dimensional reduction for unreasonably large blockchains can prevent future mining participants from requiring supercomputers, compared to current GPUs.
* Post-quantum cryptography algorithms may be required for resilience from quantum attacks. The authors in [61] mention how the fast progress of quantum computing has opened the possibility of performing attacks based on Grover's and Shor's algorithms. Such algorithms threaten public-key cryptography, and hash functions [61]. Blockchain may require a cryptographic redesign to combat quantum attacks. Quantum computers bring unique threats to classical data management techniques.
* AI alignment is important for building a favorable machine learning model and central server. Corrigibility, the capability of being repaired, can be considered when the machine learning model learns unfavorable behavior. The model may cease to optimize due to model convergence. Further research could evaluate a model's curiosity and its relation to model convergence, when significant performance changes are relatively stable. An honest-but-curious central server may or may not be reparable.
## V Conclusion
In this paper, we present clear insights into blockchain-based FL from crucial and timely perspectives. Blockchain provides additional security in a decentralized format that protects FL from various real-world security and privacy issues. We categorize various threat models and point out the leading vulnerabilities that could be observed during various stages of the FL process and in online environments of blockchain-enabled FL settings. To this end, we show a clear direction on how we can leverage secure and private learning on resource-constrained devices, covering feasible verification of local model updates, secure global model aggregation, the design of fair incentive schemes, and strengthened security and privacy protection. Finally, we present the existing blockchain-based FL applications and highlight potential solutions to the existing research challenges in the relevant domains. We anticipate that this survey will be helpful for researchers, practitioners, and scientists developing robust blockchain-enabled FL systems, choosing suitable consensus mechanisms, identifying security and privacy pitfalls, motivating coherent formations, and following the promising future directions presented in this paper.
## VI acknowledgment
This material is based upon Ervin Moore's work supported by the U.S. Department of Homeland Security under Grant Award Number, 2017-ST-062-000002. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.
|
2306.16081 | Graph neural networks for sound source localization on distributed
microphone networks | Distributed Microphone Arrays (DMAs) present many challenges with respect to
centralized microphone arrays. An important requirement of applications on
these arrays is handling a variable number of input channels. We consider the
use of Graph Neural Networks (GNNs) as a solution to this challenge. We present
a localization method using the Relation Network GNN, which we show shares many
similarities to classical signal processing algorithms for Sound Source
Localization (SSL). We apply our method for the task of SSL and validate it
experimentally using an unseen number of microphones. We test different feature
extractors and show that our approach significantly outperforms classical
baselines. | Eric Grinstein, Mike Brookes, Patrick A. Naylor | 2023-06-28T10:27:53Z | http://arxiv.org/abs/2306.16081v1 | # Graph Neural Networks for Sound Source Localization on Distributed Microphone Networks
###### Abstract
Distributed Microphone Arrays (DMAs) present many challenges with respect to centralized microphone arrays. An important requirement of applications on these arrays is handling a variable number of input channels. We consider the use of Graph Neural Networks (GNNs) as a solution to this challenge. We present a localization method using the Relation Network GNN, which we show shares many similarities to classical signal processing algorithms for Sound Source Localization (SSL). We apply our method for the task of SSL and validate it experimentally using an unseen number of microphones. We test different feature extractors and show that our approach significantly outperforms classical baselines.
Eric Grinstein 1, Mike Brookes, Patrick A. Naylor Department of Electrical and Electronic Engineering, Imperial College London, U.K.
Footnote 1: Contact: [email protected]
Footnote 2: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 956369
## 1 Introduction
Distributed Microphone Array (DMA) signal processing [1] is an active field in the acoustic signal processing community, with important applications in speech enhancement, noise reduction and Sound Source Localization (SSL) [1, 2, 3]. In contrast to centralized microphone arrays [4], DMAs may be created through the wireless connection of multiple distributed devices such as cell phones, laptops and virtual assistants. In this context, they are also frequently referred to as Ad-hoc microphone arrays, or Wireless Acoustic Sensor Networks (WASNs).
Although DMAs bring advantages in terms of acoustic coverage with respect to centralized arrays, they also bring challenges. One such challenge forms the focus of this paper, namely, having a dynamic number of input microphone channels, as a DMA may be created using the devices present in a dynamic scene. This number may change in runtime due to many reasons, including software or hardware failures of individual devices, battery depletion, or the device being removed from the scene. This restricts the application of many of the deep learning methods that have been successfully applied to centralized microphone networks such as [5, 6], which require a static input size. Conversely, classical SSL approaches such as [7] are able to function on an arbitrary number of microphones.
In this work, we propose the use of Graph Neural Networks (GNNs) [8, 9, 10] as a suitable way of processing DMA signals for the task of SSL. We adopt a GNN variant called the Relation network (RelNet) [10]. We validate our approach for the task of localizing a single static source in multiple scenarios, showing it to outperform the baselines. The main contribution of this work is the first application of GNNs for the task of SSL, allowing our method to handle a variable number of microphone channels. Furthermore, our approach can work on unseen microphone coordinates and room dimensions through a metadata fusion procedure.
This paper continues by providing a problem statement in Sec. 2. Sec. 3 includes a review of related work on DMAs using deep learning, as well as a review of classical SSL methods and the RelNet GNN, which serve as building blocks for our model. In Sec. 4, we describe our proposed approach. Sec. 5 describes our experimental validation, and Sec. 6 presents the results and Sec. 7 concludes the paper.
Figure 1: (a): Example of a graph of distributed microphones. (b): Representation of the GNN-SLF model for three microphones. The computation of the heatmaps is described in Sec. 4.
## 2 Problem Statement
Our goal is to estimate the 2D coordinates \(\hat{\mathbf{p}}_{s}\) of a sound source located at \(\mathbf{p}_{s}=[p_{s}^{x}\,p_{s}^{y}]^{T}\) within a reverberant room of known dimensions \(\mathbf{d}=[d^{x}\,d^{y}\,d^{z}]^{T}\). The source emits a signal \(s(t)\) at instant \(t\). Besides the source, \(M\) microphones are present in the room, where microphone \(m\) has a known position \(\mathbf{p}_{m}=[p_{m}^{x}\,p_{m}^{y}\,p_{m}^{z}]^{T}\), and receives a signal \(x_{m}(t)\) modeled as
\[x_{m}(t)=a_{m}s(t-\tau_{m})+\epsilon_{m}(t), \tag{1}\]
where \(a_{m}\) is a scaling factor representing the attenuation suffered by the wave propagating from \(\mathbf{p}_{s}\) to \(\mathbf{p}_{m}\). \(\tau_{m}\) represents the time delay taken for a sound wave to propagate from the source to the microphone, and \(\epsilon_{m}\) models the noise and reverberation. We assume \(\tau_{m}\) to be equal to \(c^{-1}\|\mathbf{p}_{m}-\mathbf{p}_{s}\|_{2}\), the distance between the source and the microphone divided by the speed of sound \(c\).
In our method and baselines, the microphone signals are sampled and processed in frames of size \(L\), defined as \(\mathbf{x}_{m}(t)=[x_{m}(t-(L-1)T_{s})...x_{m}(t)]^{T}\), where \(T_{s}\) is the sample period. Finally, we also define a metadata vector \(\mathbf{\phi}\) as
\[\mathbf{\phi}=[p_{1}^{x}\,p_{1}^{y}...d^{y}\,d^{z}]^{T}, \tag{2}\]
which serves as a secondary input to our method, allowing it to function on any room dimensions and microphone coordinates.
## 3 Related Work
### Classical SSL methods
Our proposed method can be seen as a generalization of classical grid-based SSL methods such as the Time-Difference-of-Arrival (TDOA) [11, 12], Spatial Likelihood Function (SLF) [7, 13] and energy-based [14] approaches. These approaches share many similarities, which are summarized by their shared behaviour described in Alg. 1.
```
function estimate_source_location(\(\mathbf{X},\mathbf{\phi}\))
    \(\mathbf{u}\leftarrow\mathbf{0}\)
    for each \(i\in[1..M]\) do
        for each \(j\in[(i+1)..M]\) do
            \(\mathbf{u}\leftarrow\mathbf{u}+\mathcal{F}(\mathbf{x}_{i},\mathbf{x}_{j};\mathbf{\phi}(i,j))\)
    return \(\mathcal{G}(\mathbf{u})\)
```
**Algorithm 1** Classical SSL methods
Alg. 1 starts with the creation of an empty grid \(\mathbf{u}\), which we assume to be a flattened 2D grid for our applications. The next step consists of computing a _relation_\(\mathcal{F}\) between each pair of microphones \((i,j)\) available, using their signals \((\mathbf{x}_{i},\mathbf{x}_{j})\) as well as the available _metadata_\(\mathbf{\phi}\), consisting of the microphone positions, the room dimensions, and the speed of sound. These relations consist of assigning, for each cell within the grid, a value expressing how likely a source is to be in that particular grid cell.
The relations between all pairs are aggregated through summation (or multiplication, see [13]) to generate a heatmap gathering all pairwise information. Depending on whether the problem is formulated using a Least-Squares (LS) or Maximum Likelihood (ML) approach, the minimum or maximum value of the grid will respectively correspond to the location of the source [11]. \(\mathcal{G}\) is therefore a peak-picking function, whose goal is to select the grid cell where the source is located.
The TDOA, SLF and energy-based methods differ mainly by the function \(\mathcal{F}\) computed. Each cell within the grid represents a candidate source location which has a theoretical TDOA between the two microphones. In the TDOA method, each grid cell is assigned the distance between its theoretical TDOA and the measured TDOA, computed by picking the peak of the generalized cross-correlation function between the microphones' signals, typically computed using the Generalized Cross-Correlation with Phase Transform (GCC-PHAT) [15].
In the SLF method, each cell receives the cross-correlation value at the lag corresponding to its TDOA. SLF is shown to be equivalent to Steered Response Power (SRP) [16]. Finally, the energy-based method uses a metric based on the ratio of the two microphone signals' energies. In Fig. 0(a), the edges of the graph represent maps computed using the SLF method.
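As an illustration of these pairwise relations, the following Python sketch computes GCC-PHAT for one microphone pair and projects it onto a 2-D candidate grid in the way the SLF method prescribes. The grid size, sample rate, and geometry are illustrative values rather than the paper's experimental settings.

```
# Minimal sketch: GCC-PHAT for one microphone pair, projected onto a grid (SLF).
import numpy as np

def gcc_phat(x1, x2, fs):
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12                 # PHAT weighting
    cc = np.roll(np.fft.irfft(cross, n), n // 2)   # move negative lags to the front
    lags = (np.arange(n) - n // 2) / fs
    return cc, lags

def slf_map(x1, x2, mic1, mic2, room_dim, fs, grid_n=25, c=343.0):
    cc, lags = gcc_phat(x1, x2, fs)
    xs = np.linspace(0, room_dim[0], grid_n)
    ys = np.linspace(0, room_dim[1], grid_n)
    heat = np.zeros((grid_n, grid_n))
    for i, px in enumerate(xs):
        for j, py in enumerate(ys):
            p = np.array([px, py])
            tdoa = (np.linalg.norm(p - mic1) - np.linalg.norm(p - mic2)) / c
            heat[i, j] = cc[np.argmin(np.abs(lags - tdoa))]
    return heat  # summing such maps over all microphone pairs yields the SLF heatmap
```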
### Neural network methods for DMA signal processing
Classical SSL methods normally do not account for room reverberation, which may divert the heatmap's peak from the true source location, or reduce its sharpness. Neural networks can become robust to reverberation if trained on suitable scenarios. Here we review works on neural networks for DMAs.
In [17], an attention-based neural network capable of handling connection failures is proposed for the task of speech enhancement. Unlike our method, this network is limited to a maximum number of input microphone channels. In [18] and [19], variable-input processing is achieved through a global average pooling scheme.
Two works have explored GNNs for acoustic signal processing. In [20], a GNN is used to profile noise within a railway setting. However, their work requires the source signal to be known beforehand, limiting its application in many scenarios. This restriction is not present in our proposed approach. In [2], a Graph Convolutional Network (GCN) [21] is used in conjunction with an encoder-decoder network for the task of speech enhancement. Conversely, we do not use an encoder-decoder and explore the Relation Network GNN, which we show to be well suited for the task of SSL.
### Relation Networks
We choose the Relation network (RelNet) [10] as our graph network architecture due its conceptual similarities to classical SSL methods. RelNets were introduced in the context of visual question answering. The input of the network consists of a set of _nodes_, represented by feature vectors \(\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{M}\}\). The network \(\mathcal{RN}\) may be summarized as
\[\hat{\mathbf{y}}=\mathcal{RN}(\mathbf{X})=\mathcal{G}\bigg{(}\sum_{i\neq j}\mathcal{F }(\mathbf{x}_{i},\mathbf{x}_{j})\bigg{)}, \tag{3}\]
where, in (3), \(\mathcal{F}\) generates a _relation_ between nodes \((i,j)\). These relations are summed together, and this sum is the input to \(\mathcal{G}\), which produces the answer \(\hat{\mathbf{y}}\) to the target question. The nodes \(\mathbf{x}_{i}\) and the relations \(\mathcal{F}(\mathbf{x}_{i},\mathbf{x}_{j})\) can be seen as a complete undirected graph \(\mathbf{G}=(\{\mathbf{x}_{i}\},\{\mathcal{F}(\mathbf{x}_{i},\mathbf{x}_{j})\})\). As in [10], we implement both \(\mathcal{F}\) and \(\mathcal{G}\) as Multi-layer Perceptrons (MLPs), trained jointly using backpropagation.
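A minimal PyTorch sketch of Eq. (3) is given below: \(\mathcal{F}\) is applied to every pair of node feature vectors, the pairwise outputs are summed, and \(\mathcal{G}\) maps the sum to the answer. The layer widths and output size are illustrative assumptions, not the configuration used later in the paper; the loop runs over unordered pairs, as in Alg. 1.

```
# Minimal Relation Network sketch (PyTorch): pairwise relations F, summed, then G.
import itertools
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, node_dim, hidden=128, out_dim=625):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.g = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, nodes):                      # nodes: tensor of shape (M, node_dim)
        pair_sum = torch.zeros(self.g[0].in_features)
        for i, j in itertools.combinations(range(nodes.shape[0]), 2):
            pair_sum = pair_sum + self.f(torch.cat([nodes[i], nodes[j]]))
        return self.g(pair_sum)                    # e.g. a flattened localization grid
```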
## 4 Method
A diagram of our proposed network is shown in Fig. 1. Using a RelNet allows our approach to first process pairs of microphone signals into features, and later combine them through summation. This allows it to function on a variable number of input microphones. Furthermore, our method can operate on unknown room dimensions and microphone coordinates by combining this metadata \(\mathbf{\phi}\) before estimating the source location.
The input to our method consists of the set of \(M\) microphone signal frames \(\{\mathbf{x}_{m}\}\), where \(\mathbf{x}_{m}\) is a vector of size \(L\) representing a frame of recordings, and a metadata vector \(\mathbf{\phi}\) containing relevant information such as the microphone coordinates and room dimensions. We define the relation function \(\mathcal{F}\) as
\[\mathcal{F}(\mathbf{x}_{i},\mathbf{x}_{j};\mathbf{\phi})=\text{MLP}(\mathcal{H}(\mathbf{x}_{i },\mathbf{x}_{j};\mathbf{\phi})), \tag{4}\]
where MLP is a multi-layer perceptron and \(\mathcal{H}\) is a preprocessing or feature extraction function. The inclusion of a preprocessing function allows us to use classical features such as GCC-PHAT or SLF. Conversely, post-processing these features using an MLP allows us to improve them by introducing learned rules, as we will show for the application of SSL.
In turn, the relation fusion function is chosen as \(\mathcal{G}(\mathbf{u})=\text{MLP}(\mathbf{u})\), where \(\mathbf{u}\) represents the sum of all pairs of relations as in Alg. 1. This function is a substitution of the peak-picking algorithm in Alg. 1, expanding its functionality for other possible applications.
As in [10], we train the weights \(\mathbf{w}_{\mathcal{F}}\) and \(\mathbf{w}_{\mathcal{G}}\) of the MLPs in \(\mathcal{F}\) and \(\mathcal{G}\) jointly through a gradient-based procedure, by minimizing an application-specific loss function \(\mathcal{L}(y,\hat{y})\) between the network output \(\hat{y}\) and target \(y\):
\[\begin{split}\mathbf{w}_{\mathcal{F}}&=\mathbf{w}_{ \mathcal{F}}-\lambda_{\mathcal{F}}\frac{\partial\mathcal{L}(\mathbf{y},\hat{\mathbf{y} })}{\partial\mathbf{w}_{\mathcal{F}}}\\ \mathbf{w}_{\mathcal{G}}&=\mathbf{w}_{\mathcal{G}}-\lambda_{ \mathcal{G}}\frac{\partial\mathcal{L}(\mathbf{y},\hat{\mathbf{y}})}{\partial\mathbf{w}_{ \mathcal{G}}},\end{split} \tag{5}\]
where \((\lambda_{\mathcal{F}},\lambda_{\mathcal{G}})\) are the learning rates, usually defined by the optimizer used, such as Adam [22].
We experiment with two preprocessing functions \(\mathcal{H}\) for our relation function \(\mathcal{F}\). The first is the cross-correlation between the two microphones, computed using the GCC-PHAT method. In this case, the network needs to learn to map time lags into space. As an alternative, we project the cross-correlation into space using the SLF method. The output of this method is a flattened \(N\times N\) grid or a \(N^{2}\) vector. In this case, the network needs to learn to denoise the maps which may have been corrupted by reverberation.
A final step in the feature extraction step is concatenating the microphone coordinates of the pair as well as its room dimensions into the features. This is especially important for the GCC-PHAT feature extractor, as the network must learn how to project the temporal information into space.
The target of the MLP of function \(\mathcal{G}\) is to further enhance the summed maps produced by \(\mathcal{F}\). Its output has the same size as \(\mathcal{F}\), representing a flattened \(N\times N\) grid of cells centered at coordinates \(\{\mathbf{p}_{u,v}\}\) within the room. The target value \(y(u,v)\) of each grid cell \((u,v)\) is computed as
\[y(u,v)=e^{-\|\mathbf{p}_{u,v}-\mathbf{p}_{s}\|_{2}}, \tag{6}\]
where \(\mathbf{p}_{s}\) is the target source location. Note the maximum value of 1 occurs when \(\mathbf{p}_{u,v}=\mathbf{p}_{s}\) and approaches 0 exponentially as the distance between \(\mathbf{p}_{u,v}\) and \(\mathbf{p}_{s}\) increases. We use the mean absolute error between the network output and target as our loss function. This formulation allows for detection of multiple sources, which can be extracted through peak-picking. However, in this work, we focus on the detection of a single source.
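A small Python sketch of the training target in Eq. (6) follows; the assumption that the grid cells uniformly span the room's floor plan is ours, made for illustration.

```
# Minimal sketch of the Eq. (6) target: an N x N grid decaying exponentially
# with the distance from each cell centre to the source position.
import numpy as np

def target_heatmap(source_xy, room_dim, n=25):
    xs = np.linspace(0, room_dim[0], n)
    ys = np.linspace(0, room_dim[1], n)
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    dist = np.hypot(gx - source_xy[0], gy - source_xy[1])
    return np.exp(-dist)   # peaks at 1 on the cell closest to the source
```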
## 5 Experimentation
This section describes our experiments with our proposed network for SSL described in the previous section. We refer to our proposed methods as GNN-GCC for the network using the GCC-PHAT feature
extractor and GNN-SLF for the one using the SLF extractor. We compare our approach with two baselines, the classical Time-Difference-of-Arrival (TDOA)-based and Spatial Likelihood Function (SLF)-based approaches, as described in Sec. 3. We provide a public repository containing all methods on Github 1
Footnote 1: [https://github.com/egrinstein/gnn_ssl](https://github.com/egrinstein/gnn_ssl)
### Dataset
We test our approach using synthetically generated data using the image source method [23], generated using the Pyroomacoustics library [24]. To demonstrate that our approach is able to operate with a different number of microphones than it was trained on, the training set for our GNN uses training examples containing \(\{5,7\}\) microphones, while the test set examples contain \(\{4,5,6,7\}\) microphones.
For each dataset sample, we randomly select two numbers from a uniform distribution in the interval [3, 6] m representing the room's width and length. The room's height is uniformly selected from the interval [2, 4] m. The room's reverberation time is sampled uniformly from the interval [0.3, 0.6] s using Eyring's formula [25]. We place the microphones and source randomly within the room, with the restriction of each device being at least 0.5 m from each other and the room's walls. Each source is set to play a speech sample from the VCTK corpus [26]. The Signal-to-Noise Ratio (SNR) in each microphone is set at 30 dB, simulated by adding White Gaussian Noise (WGN) independently to each channel to the auralizations generated using the image source method. The training, validation and test datasets contain respectively 15,000, 5000 and 10,000 examples.
### Method hyperparameters
We train the networks for a maximum of 100 epochs with early stopping if the validation loss stops increasing after 3 epochs. We employ a learning rate of 0.0005 using the Adam optimizer [22]. We use a batch size of 32. These parameters were chosen empirically. All grids used are of dimensions \(25\times 25\). Our input frame size used is L=500 ms. For the GCC-PHAT method, we use a Discrete Fourier Transform (DFT) of \(1,024\) samples. Since the maximum TDOA value is bounded by the room's diagonal, we only select the central 200 correlation bins, similar to [27]. In our proposed method, our relation function's MLP contains 3 layers, each of output size 625. The function \(\mathcal{G}\)'s MLP consists of 3 layers, all with an output size of 625 neurons. We use a ReLU activation function for all layers except for the output, which uses no activation.
The grids computed in the SLF and TDOA baselines as well as by the feature extractor in the GNN-SLF method have a size of \(25\times 25\). The source estimation procedure in the baselines and proposed methods consists of picking the location of the highest value in the SLF method, and of the lowest value in the TDOA method.
## 6 Results
The metric used to evaluate the methods consists of the mean euclidean distance between the estimated and true source location on the test set. The results are shown in Fig. 2. Note that although we test all methods on unseen simulations containing \(\{4,5,6,7\}\) microphones, our method was only trained using examples containing \(\{5,7\}\) microphones. To ensure a fair comparison, the networks were trained multiple times. The black bars show their standard deviation.
We can see that the GNN-SLF method outperforms all others, demonstrating the effectiveness of the approach. The biggest relative improvement of 29% with respect to classical SLF is observed for four microphones. An explanation is that when there are fewer measurements available, improving or discarding them becomes crucial, which may be the operation being performed by the network. We also see that GNN-GCC performed poorly, only surpassing the TDOA baseline. This indicates that requiring the network to learn to map time delays to spatial position is a more demanding task than dealing with the already spatialized information.
## 7 Conclusion and Future Work
We applied the RelNet, a type of GNN, to the task of SSL on distributed microphone arrays. Our results show the RelNet is able to significantly improve the localization performance over classical localization algorithms, achieving a 29% improvement in the case of 4 microphones. We also show the method generalizing to an unseen number of microphones. Future directions include testing the approach for localizing multiple sources and learning graph topologies other than the complete graph.
Figure 2: Localization error for our proposed methods and baselines. |
2302.04172 | Non Fermi liquid behavior at flat hot spots from quantum critical
fluctuations at the onset of charge- or spin-density wave order | We analyze quantum fluctuation effects at the onset of charge or spin density
wave order with a $2k_F$ wave vector $\mathbf{Q}$ in two-dimensional metals --
for the special case where $\mathbf{Q}$ connects a pair of hot spots situated
at high symmetry points of the Fermi surface with a vanishing Fermi surface
curvature. We compute the order parameter susceptibility and the fermion
self-energy in one-loop approximation. The susceptibility has a pronounced peak
at $\mathbf{Q}$, and the self-energy displays non-Fermi liquid behavior at the
hot spots, with a linear frequency dependence of its imaginary part. The real
part of the one-loop self-energy exhibits logarithmic divergences with
universal prefactors as a function of both frequency and momentum, which may be
interpreted as perturbative signatures of power laws with universal anomalous
dimensions. As a result, one obtains a non-Fermi liquid metal with a vanishing
quasiparticle weight at the hot spots, and a renormalized dispersion relation
with anomalous algebraic momentum dependencies near the hot spots. | Lukas Debbeler, Walter Metzner | 2023-02-08T16:23:14Z | http://arxiv.org/abs/2302.04172v1 | Non Fermi liquid behavior at flat hot spots from quantum critical fluctuations at the onset of charge- or spin-density wave order
###### Abstract
We analyze quantum fluctuation effects at the onset of charge or spin density wave order with a \(2k_{F}\) wave vector \(\mathbf{Q}\) in two-dimensional metals - for the special case where \(\mathbf{Q}\) connects a pair of hot spots situated at high symmetry points of the Fermi surface with a vanishing Fermi surface curvature. We compute the order parameter susceptibility and the fermion self-energy in one-loop approximation. The susceptibility has a pronounced peak at \(\mathbf{Q}\), and the self-energy displays non-Fermi liquid behavior at the hot spots, with a linear frequency dependence of its imaginary part. The real part of the one-loop self-energy exhibits logarithmic divergences with universal prefactors as a function of both frequency and momentum, which may be interpreted as perturbative signatures of power laws with universal anomalous dimensions. As a result, one obtains a non-Fermi liquid metal with a vanishing quasiparticle weight at the hot spots, and a renormalized dispersion relation with anomalous algebraic momentum dependencies near the hot spots.
## I Introduction
Quantum fluctuations at and near quantum critical points (QCP) in metallic electron systems naturally lead to non-Fermi liquid behavior with unconventional temperature, momentum, and frequency dependencies of physical observables [1]. The fluctuation effects are most pronounced in low dimensional systems. In view of non-Fermi or "strange metal" behavior observed in various layered compounds, such as the high-\(T_{c}\) cuprates, two-dimensional systems have attracted particular interest. Due to the complex interplay of critical order parameter fluctuations with gapless electronic excitations, the theory of such systems is notoriously difficult.
Metals at the onset of charge or spin-density wave order provide a vast playground of
quantum critical non-Fermi liquids with many distinct universality classes. The most intensively studied case of a Neel antiferromagnet is just one example [2; 3; 4; 5]. A particularly intriguing situation arises when the wave vector \({\bf Q}\) of the density wave is a _nesting vector_ of the Fermi surface, that is, when it connects Fermi points with collinear Fermi velocities [6]. Charge and spin susceptibilities exhibit a singularity at such wave vectors due to an enhanced phase space for low-energy particle-hole excitations. Since the nesting vectors in continuum systems are related to the Fermi momentum \(k_{F}\) by the simple relation \(|{\bf Q}|=2k_{F}\), one may refer to such nesting vectors also as "\(2k_{F}\)" vectors [7]. The wave vector \((\pi,\pi)\) of a Neel state (in two dimensions) is a nesting vector only for special electron densities. Quantum critical fluctuations lead to an enhanced quasiparticle decay rate in this case, but not to a breakdown of Fermi liquid theory [8; 9].
Recently, non-Fermi liquid behavior at the onset of charge- or spin-density wave order with _incommensurate_[10] nesting wave vectors \({\bf Q}\) in two-dimensional metals has been analyzed in a series of papers. In a one-loop calculation of the fermionic self-energy, a breakdown of Fermi liquid behavior was obtained at the _hot spots_ on the Fermi surface connected by the ordering wave vector [11]. If the ordering wave vector \({\bf Q}\) connects only a single pair of hot spots, in axial or diagonal direction, the frequency dependence of the one-loop self-energy at the hot spots obeys a power-law with exponent \(\frac{2}{3}\). If \({\bf Q}\) connects two pairs of hot spots, the imaginary part of the real frequency one-loop self-energy exhibits a linear frequency dependence. In none of these two cases the perturbative solution is self-consistent, and the feedback of the non-Fermi liquid self-energy seems to push the ordering wave vector away from the nesting point [12; 13]. Actually it was argued already long ago, for the case of a single hot spot pair, that quantum fluctuations spoil the QCP in favor of a first order transition [7]. However, a flattening of the Fermi surface at the hot spots might save the QCP [12], and this scenario was supported by a systematic \(\epsilon\)-expansion around the critical dimension \(\frac{5}{2}\)[14]. For the two hot-spot pair case, a self-consistent solution with a stable QCP was found numerically [13]. While fluctuations are naturally stronger in two dimensional systems, quantum fluctuation effects at the onset of density wave order with a \(2k_{F}\) wave vector are special and intriguing also in three dimensions [15].
In this paper we analyze quantum fluctuations and non-Fermi liquid behavior at the onset of density wave order in a two-dimensional system for a case where the nesting vector connects _flat_ hot spots on a mirror symmetry axis, where the Fermi surface curvature vanishes
already in the non-interacting reference system, that is, before fluctuations are taken into account. Such a situation may arise at special electron filling factors. For example, for a tight-binding model with nearest and next-nearest neighbor hopping on a square lattice, the Fermi surface exhibits zero curvature points along the Brillouin zone diagonal for a specific choice of the Fermi level (corresponding to a special filling factor), as illustrated in Fig. 1. Using relative momentum coordinates \(k_{r}\) and \(k_{t}\) in normal and tangential directions, respectively, with respect to the Fermi surface at a hot spot, the dispersion relation near the hot spot has the form
\[\xi_{\bf k}=\epsilon_{\bf k}-\epsilon_{F}=v_{F}k_{r}+bk_{t}^{4}\,, \tag{1}\]
to leading order in \(k_{r}\) and \(k_{t}\). Here \(v_{F}\) is the Fermi velocity at the hot spot, and \(b\) is a real constant, which is positive (negative) if the Fermi surface is convex (concave) at the hot spot. Due to the mirror symmetry with respect to the Brillouin zone diagonal there is no term of order \(k_{t}^{3}\). Hence, this case differs from inflection points on the Fermi surface, where the curvature vanishes, too, but the leading tangential momentum dependence is of cubic order.
Figure 1: Hot spots with vanishing curvature on the Fermi surface for a tight-binding model with nearest and next-nearest neighbor hopping amplitudes \(t\) and \(t^{\prime}\), respectively, on a square lattice. The ratio of hopping amplitudes has been chosen as \(t^{\prime}/t=-0.35\), and the Fermi level leading to flat hot spots is \(\epsilon_{F}=8t^{\prime}[1-2(t^{\prime}/t)^{2}]=-2.114t\).
As in the above-mentioned case of Neel order with nested hot spots, a QCP with flat hot spots connected by an incommensurate wave vector requires tuning to a specific particle density. In addition, another parameter must be tuned such that the system is situated at the onset of charge or spin density wave order. In solids, the density of electrons in layered compounds can be varied over a broad range by chemical substitution or gate potentials. The tuning of other parameters such as the ratio of Coulomb to kinetic energy is in principle possible by pressure, but in practice limited to a narrow regime. Alternatively, a QCP with flat hot spots may be realized by cold fermionic atoms in optical lattices, where the particle density, interaction strength, and hopping amplitudes can be tuned at will [16; 17].
We compute the order parameter susceptibility, the effective interaction, and the fermion self-energy at the onset of incommensurate charge or spin density wave order with flat nested hot spots in a one-loop approximation. The susceptibility and the effective interaction exhibit pronounced peaks at the nesting vector. Both the momentum and frequency dependencies of the self-energy develop logarithmic divergencies, signalling non-Fermi liquid power-laws with universal critical exponents.
The remainder of the paper is structured as follows. In Sec. II we compute the order parameter susceptibility and the effective interaction at the QCP. The momentum and frequency dependence of the fermion self-energy is evaluated in Sec. III. A conclusion in Sec. IV closes the presentation.
## II Susceptibility and Effective Interaction
We consider a one-band system of interacting fermions with a bare single-particle energy-momentum relation \(\epsilon_{\mathbf{k}}\). We are dealing exclusively with ground state properties, that is, the temperature is fixed to \(T=0\). The bare fermion propagator has the form
\[G_{0}(\mathbf{k},ik_{0})=\frac{1}{ik_{0}-\xi_{\mathbf{k}}}\,, \tag{2}\]
where \(k_{0}\) denotes the (continuous) imaginary frequency, and \(\xi_{\mathbf{k}}=\epsilon_{\mathbf{k}}-\mu\). At zero temperature, the chemical potential \(\mu\) is equal to the Fermi level \(\epsilon_{F}\). We assume that, in mean-field theory, the system undergoes a charge or spin-density wave transition with an incommensurate and nested wave vector \(\mathbf{Q}\), which connects a pair of hot spots on the Fermi surface. We further assume that the dispersion relation in the vicinity of the hot spots has a quartic
tangential momentum dependence of the form given in Eq. (1).
In random phase approximation (RPA, that is, at one-loop level), the order parameter susceptibility has the form
\[\chi(\mathbf{q},iq_{0})=\frac{\chi_{0}(\mathbf{q},iq_{0})}{1+g\chi_{0}(\mathbf{ q},iq_{0})}\,, \tag{3}\]
where \(g<0\) is the coupling constant parametrizing the interaction in the instability channel. The bare charge or spin susceptibility \(\chi_{0}\) is related to the particle-hole bubble \(\Pi_{0}\) by \(\chi_{0}(\mathbf{q},iq_{0})=-N\Pi_{0}(\mathbf{q},iq_{0})\), where \(N\) is the spin multiplicity, and [18]
\[\Pi_{0}(\mathbf{q},iq_{0})=\int_{\mathbf{k}}\int_{k_{0}}G_{0}(\mathbf{k},ik_{0 })\,G_{0}(\mathbf{k}-\mathbf{q},ik_{0}-iq_{0})\,. \tag{4}\]
Here and in the following \(\int_{\mathbf{k}}\) is a shorthand notation for \(\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}}\), and \(\int_{k_{0}}\) for \(\int\frac{dk_{0}}{2\pi}\). While keeping the spin multiplicity \(N\) as a general parameter in our equations, we choose \(N=2\), corresponding to spin-\(\frac{1}{2}\) fermions, in all numerical results. Continuing \(\Pi_{0}(\mathbf{q},iq_{0})\) analytically to the real frequency axis from the upper complex frequency half-plane yields the retarded polarization function \(\Pi_{0}(\mathbf{q},\omega)\). In Fig. 2 we show the static (zero frequency) bare susceptibility \(\chi_{0}(\mathbf{q},0)=-2\Pi_{0}(\mathbf{q},0)\) for a tight-binding model with parameters as in Fig. 1[19]. Pronounced peaks are visible at the nesting vectors connecting the four flat hot spots on the Fermi surface.
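For orientation, the static bubble underlying Fig. 2 can be evaluated numerically by a brute-force momentum sum; a Python sketch is given below. It assumes the standard tight-binding convention \(\epsilon_{\mathbf{k}}=-2t(\cos k_{x}+\cos k_{y})-4t^{\prime}\cos k_{x}\cos k_{y}\), which reproduces the Fermi level quoted in the caption of Fig. 1. A small finite temperature regularizes the \(T=0\) step functions, and the grid size, temperature, and per-site normalization are numerical convenience choices of ours.

```
# Minimal sketch: static bare susceptibility chi_0(q, 0) = -N * Pi_0(q, 0) for the
# tight-binding dispersion of Fig. 1, via a direct sum over a discretized Brillouin zone.
import numpy as np
from scipy.special import expit          # numerically stable Fermi function

t, tp = 1.0, -0.35
mu = 8 * tp * (1 - 2 * (tp / t) ** 2)    # Fermi level of Fig. 1 (= -2.114 t)
N_spin, T, nk = 2, 0.02, 400             # spin multiplicity, temperature, k-grid points

k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
kx, ky = np.meshgrid(k, k, indexing="ij")

def xi_of(px, py):
    return -2 * t * (np.cos(px) + np.cos(py)) - 4 * tp * np.cos(px) * np.cos(py) - mu

xi = xi_of(kx, ky)
f = expit(-xi / T)

def chi0_static(q):
    """chi_0(q, 0) per lattice site for q = (qx, qy)."""
    xi_q = xi_of(kx - q[0], ky - q[1])
    f_q = expit(-xi_q / T)
    den = xi - xi_q
    safe_den = np.where(np.abs(den) > 1e-9, den, 1.0)
    ratio = np.where(np.abs(den) > 1e-9, (f - f_q) / safe_den, -f * (1 - f) / T)
    return -N_spin * ratio.mean()

q_nest = 2 * np.arccos(-2 * tp / t)      # diagonal hot spots sit at (p0, p0) with cos p0 = -2t'/t
print(chi0_static((q_nest, q_nest)))     # peak along the zone diagonal, cf. Fig. 2
```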
The RPA susceptibility diverges when \(g\chi_{0}(\mathbf{Q},0)=-1\), signalling the onset of charge or spin density wave order with the nesting wave vector \(\mathbf{Q}\). To analyze the behavior of the
Figure 2: Static bare susceptibility \(\chi_{0}(\mathbf{q},0)\) as a function of \(\mathbf{q}\) for a tight-binding model of spin-\(\frac{1}{2}\) fermions (\(N=2\)) with parameters as in Fig. 1.
susceptibility near the singularity, we expand
\[\delta\Pi_{0}({\bf q},\omega)=\Pi_{0}({\bf q},\omega)-\Pi_{0}({\bf Q},0) \tag{5}\]
for \({\bf q}\) near \({\bf Q}\) and small \(\omega\). Momenta near \({\bf Q}\) are parametrized by relative momentum coordinates \(q_{r}\) and \(q_{t}\), parallel and perpendicular to \({\bf Q}\), respectively. The leading contributions to \(\delta\Pi_{0}({\bf q},\omega)\) come from fermionic momenta near the hot spots connected by \({\bf Q}\), where the dispersion relations in Eq. (4) can be expanded as in Eq. (1), that is, \(\xi_{\bf k}=v_{F}k_{r}+bk_{t}^{4}\) and \(\xi_{\bf k-q}=-v_{F}(k_{r}-q_{r})+b(k_{t}-q_{t})^{4}\). In the following we assume that \(b\) is positive. Our derivations and results can be easily adapted to negative \(b\). The integrals over \(k_{r}\), \(k_{t}\), and \(k_{0}\) are evaluated in Appendix A.
For \(q_{t}=0\), the integral in Eq. (4) is elementary, yielding
\[\delta\Pi_{0}(q_{r},0,\omega)=\frac{1}{4\pi v_{F}(2b)^{1/4}}\left[(1-i)\sqrt[4 ]{\omega+i0^{+}-v_{F}q_{r}}+(1+i)\sqrt[4]{-\omega-i0^{+}-v_{F}q_{r}}\,\right]\,. \tag{6}\]
In the static limit \(\omega\to 0\), one obtains
\[\delta\Pi_{0}(q_{r},0,0)=\left\{\begin{array}{ll}(v_{F}|q_{r}|)^{1/4}/[2\pi v _{F}(2b)^{1/4}]&\mbox{for $q_{r}<0$}\,,\\ (v_{F}q_{r})^{1/4}/[\sqrt{2}\pi v_{F}(2b)^{1/4}]&\mbox{for $q_{r}>0$}\,. \end{array}\right. \tag{7}\]
The static particle-hole bubble thus exhibits a cusp with infinite slope as a function of \(q_{r}\) at \(q_{r}=q_{t}=0\).
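As a quick consistency check, Eq. (7) can be verified numerically from Eq. (6) with standard complex arithmetic. The sketch below assumes that principal fourth roots realize the \(i0^{+}\) prescription and uses arbitrary units for \(v_{F}\) and \(b\):

```python
import numpy as np

vF, b = 1.0, 1.0                     # arbitrary units
eps = 1e-12                          # numerical i0^+
pref = 1.0/(4*np.pi*vF*(2*b)**0.25)

def delta_Pi0(qr, omega):
    """delta Pi_0(q_r, 0, omega) from Eq. (6), with principal fourth roots."""
    return pref*((1 - 1j)*( omega + 1j*eps - vF*qr)**0.25
               + (1 + 1j)*(-omega - 1j*eps - vF*qr)**0.25)

for qr in (-0.3, 0.3):
    static = delta_Pi0(qr, 0.0).real
    if qr < 0:
        expected = (vF*abs(qr))**0.25/(2*np.pi*vF*(2*b)**0.25)
    else:
        expected = (vF*qr)**0.25/(np.sqrt(2)*np.pi*vF*(2*b)**0.25)
    print(qr, static, expected)      # the two columns should agree, cf. Eq. (7)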
For \(q_{t}\neq 0\), the particle-hole bubble can be expressed in a scaling form as (see Appendix A)
\[\delta\Pi_{0}(q_{r},q_{t},\omega)=\frac{|q_{t}|}{4v_{F}}\left[I\Big{(}\frac{ \omega-v_{F}q_{r}}{bq_{t}^{4}}\Big{)}+I^{*}\Big{(}\frac{-\omega-v_{F}q_{r}}{bq _{t}^{4}}\Big{)}\right], \tag{8}\]
Figure 3: Real and imaginary parts of the scaling function \(I(x)\).
where
\[I(x)=\int_{0}^{\infty}\frac{d\tilde{k}_{0}}{2\pi}\left[\frac{1+i}{2(i\tilde{k}_{0})^{3/4}}-\frac{\frac{1}{\sqrt{-\frac{3}{4}+\sqrt{\frac{1}{2}(1+x+2i\tilde{k}_{0})}}}+\frac{1}{\sqrt{-\frac{3}{4}-\sqrt{\frac{1}{2}(1+x+2i\tilde{k}_{0})}}}}{\sqrt{2(1+x+2i\tilde{k}_{0})}}\right] \tag{9}\]
is a dimensionless scaling function - shown graphically in Fig. 3. The graph of \(I(x)\) exhibits a cusp at \(x=\frac{1}{8}\), and the imaginary part of \(I(x)\) vanishes for all \(x\leq\frac{1}{8}\). While \(I(x)\) cannot be expressed by elementary functions, \(I(0)\) and \(I\left(\frac{1}{8}\right)\) are given by the simple numbers \(\sqrt{2}/\pi\) and \(\sqrt{\frac{3}{2}}/\pi\), respectively. Since \(I(0)\) is finite, \(\delta\Pi_{0}(0,q_{t},0)\) is linear in \(|q_{t}|\). For \(|x|\to\infty\), the scaling function behaves asymptotically as
\[I(x)\sim\left\{\begin{array}{ll}\frac{1-i}{2^{1/4}\pi}\,x^{1/4}\ +\frac{3\cdot 2 ^{1/4}(1+i)}{8\pi}\,x^{-1/4}\ \ \mbox{for}\ x>0\,,\\ \frac{2^{1/4}}{\pi}\,|x|^{1/4}\ +\frac{3}{4\cdot 2^{1/4}\pi}\,|x|^{-1/4}\ \ \ \mbox{ for}\ x<0\,.\end{array}\right. \tag{10}\]
Inserting the leading asymptotic behavior into Eq. (8) one recovers the result Eq. (6) for \(q_{t}=0\). The next to leading order yields the leading \(q_{t}\) dependence for \(b|q_{t}|^{4}\ll|\pm\omega-v_{F}q_{r}|\),
\[\Pi_{0}(q_{r},q_{t},\omega)-\Pi_{0}(q_{r},0,\omega)=\frac{3(2b)^{1/4}q_{t}^{2 }}{32\pi v_{F}}\left(\frac{1+i}{\sqrt[4]{\omega+i0^{+}-v_{F}q_{r}}}+\frac{1-i }{\sqrt[4]{-\omega-i0^{+}-v_{F}q_{r}}}\right)+\mathcal{O}(q_{t}^{4})\,. \tag{11}\]
Hence, for \(\omega\neq\pm v_{F}q_{r}\), the leading \(q_{t}\) dependence is quadratic in \(q_{t}\).
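The scaling function \(I(x)\) defined in Eq. (9) can also be evaluated by direct numerical quadrature. The sketch below assumes that the principal branches of the complex roots implement the branch choices made in Appendix A; the quoted values \(I(0)=\sqrt{2}/\pi\) and \(I(1/8)=\sqrt{3/2}/\pi\) serve as a sanity check, and the output should be compared against Fig. 3:

```python
import numpy as np
from scipy.integrate import quad

def integrand(k0, x):
    """Integrand of Eq. (9); principal complex branches assumed throughout."""
    w = np.sqrt(0.5*(1 + x + 2j*k0))
    counter = (1 + 1j)/(2*(1j*k0)**0.75)
    full = (1/np.sqrt(-0.75 + w) + 1/np.sqrt(-0.75 - w))/np.sqrt(2*(1 + x + 2j*k0))
    return (counter - full)/(2*np.pi)

def I(x):
    # the k0^(-3/4) endpoint singularity is integrable; split the range for robustness
    def piece(f):
        return quad(f, 0.0, 1.0, limit=200)[0] + quad(f, 1.0, np.inf, limit=200)[0]
    return piece(lambda k0: integrand(k0, x).real) + 1j*piece(lambda k0: integrand(k0, x).imag)

if __name__ == "__main__":
    print(I(0.0),   np.sqrt(2.0)/np.pi)      # quoted value sqrt(2)/pi
    print(I(0.125), np.sqrt(1.5)/np.pi)      # cusp location x = 1/8
    print(I(-4.0))                           # Im I(x) should vanish for x <= 1/8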
The RPA effective interaction is given by
\[D({\bf q},iq_{0})=\frac{g}{1+g\chi_{0}({\bf q},iq_{0})} \tag{12}\]
on the imaginary frequency axis, and by the same expression with \(iq_{0}\to\omega\) on the real frequency axis. At the QCP, \(g\chi_{0}({\bf Q},0)\) is equal to minus one, so that
\[D({\bf q},\omega)=-\frac{1}{N\delta\Pi_{0}({\bf q},\omega)}\,. \tag{13}\]
Hence, the effective interaction at the QCP does not depend on the coupling constant \(g\).
## III Fermion self-energy
To leading order in the effective interaction \(D\), the fermion self-energy is given by the one-loop integral
\[\Sigma({\bf k},ik_{0})=-M\int_{\bf q}\int_{q_{0}}D({\bf q},iq_{0})\,G_{0}({\bf k }-{\bf q},ik_{0}-iq_{0})\,, \tag{14}\]
with \(M=1\) for a charge density and \(M=3\) for a spin-density instability [12]. Analytic continuation of this expression to the real frequency axis yields [20]
\[\Sigma({\bf k},\omega+i0^{+}) = -\frac{M}{\pi}\int d\nu\int_{\bf q}\left[b(\nu){\rm Im}D({\bf q}, \nu+i0^{+})\,G_{0}({\bf k}-{\bf q},\nu+\omega+i0^{+})\right. \tag{15}\] \[\left.-\,f(\nu)\,D({\bf q},\nu-\omega-i0^{+})\,{\rm Im}G_{0}({\bf k }-{\bf q},\nu+i0^{+})\right],\]
where \(b(\nu)=[e^{\beta\nu}-1]^{-1}\) and \(f(\nu)=[e^{\beta\nu}+1]^{-1}\) are the Bose and Fermi functions, respectively. At zero temperature (\(\beta=\infty\)) these functions become step functions \(b(\nu)=-\Theta(-\nu)\) and \(f(\nu)=\Theta(-\nu)\). In the following we denote \(\Sigma({\bf k},\omega+i0^{+})\), \(G({\bf k},\omega+i0^{+})\), and \(D({\bf q},\nu+i0^{+})\) by \(\Sigma({\bf k},\omega)\), \(G({\bf k},\omega)\), and \(D({\bf q},\nu)\), respectively. Note, however, the negative infinitesimal imaginary part in one of the frequency arguments in Eq. (15).
We analyze \(\Sigma({\bf k},\omega)\) at the QCP for low frequencies \(\omega\) and momenta \({\bf k}\) near one of the hot spots on the Fermi surface, which we denote as \({\bf k}_{H}\). The effective interaction \(D\) at the QCP is given by Eq. (13) with \(\delta\Pi_{0}\) from Eq. (8). The dominant contributions come from momentum transfers \({\bf q}\) near \({\bf Q}\), so that \({\bf k}-{\bf q}\) is situated near the antipodal hot spot \(-{\bf k}_{H}\). Using relative momentum variables as above, the dispersion relation in the fermion propagator can be expanded as \(\xi_{{\bf k}-{\bf q}}=-v_{F}(k_{r}-q_{r})+b(k_{t}-q_{t})^{4}\).
To evaluate the self-energy, it is convenient to first consider its imaginary part, and then compute the real part from a Kramers-Kronig relation. The imaginary part of Eq. (15) reads
\[{\rm Im}\Sigma({\bf k},\omega)=-\frac{M}{\pi}\int d\nu\int_{\bf q}\left[b(\nu )+f(\nu+\omega)\right]\,{\rm Im}D({\bf q},\nu)\,{\rm Im}G_{0}({\bf k}-{\bf q},\omega+\nu)\,. \tag{16}\]
Note that \({\rm Im}D({\bf q},\nu-i0^{+})=-{\rm Im}D({\bf q},\nu+i0^{+})\). Using the Dirac identity \({\rm Im}G_{0}({\bf k},\omega)=-\pi\delta(\omega-\xi_{\bf k})\), the frequency integral in Eq. (16) can be easily carried out, yielding
\[{\rm Im}\Sigma({\bf k},\omega)=M\int_{\bf q}\left[b(\xi_{{\bf k}-{\bf q}}- \omega)+f(\xi_{{\bf k}-{\bf q}})\right]\,{\rm Im}D({\bf q},\xi_{{\bf k}-{\bf q }}-\omega)\,. \tag{17}\]
At zero temperature, the sum of Bose and Fermi functions in Eq. (17) is given by
\[b(\xi_{{\bf k}-{\bf q}}-\omega)+f(\xi_{{\bf k}-{\bf q}})=\left\{\begin{array}{rl}-1&{\rm for}\;0<\xi_{{\bf k}-{\bf q}}<\omega\,,\\ 1&{\rm for}\;\omega<\xi_{{\bf k}-{\bf q}}<0\,,\\ 0&{\rm else}\,,\end{array}\right. \tag{18}\]
restricting thus the contributing momentum region. The integral in Eq. (17) is convergent even if the momentum integration over \(q_{r}\) and \(q_{t}\) is extended to infinity.
The real part of the self-energy can be obtained from the Kramers-Kronig-type relation
\[\Sigma({\bf k},\omega)=-\frac{1}{\pi}\int_{-\infty}^{\infty}d\omega^{\prime}\, \frac{{\rm Im}\Sigma({\bf k},\omega^{\prime})}{\omega-\omega^{\prime}+i0^{+}}+{ \rm const}\,. \tag{19}\]
The last term in this relation is a real constant.
### Frequency dependence at hot spot
The frequency dependence at the hot spot (for \({\bf k}={\bf k}_{H}\)) can be derived by a simple rescaling of the integration variables in Eq. (17). Substituting \(q_{r}=|\omega/v_{F}|\tilde{q}_{r}\) and \(q_{t}=|\omega/b|^{1/4}\tilde{q}_{t}\), one obtains
\[{\rm Im}\Sigma({\bf k}_{H},\omega)=-\frac{M}{N}\,A_{s(\omega)}|\omega|\,, \tag{20}\]
where \(A_{+}\) and \(A_{-}\) are two positive dimensionless numbers depending on the sign of \(\omega\). These numbers are determined by the integral
\[A_{s(\omega)}=-\int_{\tilde{\bf q}}^{\prime}{\rm Im}\frac{4s(\omega)}{|\tilde {q}_{t}|\left[I\left(\frac{\tilde{q}_{t}^{4}-s(\omega)}{\tilde{q}_{t}^{4}} \right)+I^{*}\left(\frac{-2\tilde{q}_{r}-\tilde{q}_{t}^{4}+s(\omega)}{\tilde{ q}_{t}^{4}}\right)\right]}\,, \tag{21}\]
where the prime at the integral sign indicates a restriction of the integration region to \(0<\tilde{q}_{r}+\tilde{q}_{t}^{4}<1\) for \(\omega>0\), and to \(-1<\tilde{q}_{r}+\tilde{q}_{t}^{4}<0\) for \(\omega<0\). Note that the frequency dependence of the self-energy at the hot spot depends neither on \(v_{F}\), nor on \(b\). A numerical evaluation of the integral in Eq. (21) yields \(A_{+}\approx 0.049\) and \(A_{-}\approx 0.072\). Hence, \({\rm Im}\Sigma({\bf k}_{H},\omega)\) is slightly asymmetric in \(\omega\). For the generic situation with a finite Fermi surface curvature, a linear frequency dependence of \({\rm Im}\Sigma({\bf k}_{H},\omega)\) with asymmetric coefficients has also been found in a one-loop calculation of the self-energy at the onset of charge or spin-density wave order with _two_ pairs of nested hot spots connected by the same wave vector \({\bf Q}\)[11; 13]. In that case, however, the coefficients are not universal - they depend on a dimensionless combination of model parameters.
The real part of the self-energy can be obtained from the Kramers-Kronig relation Eq. (19). With \({\rm Im}\Sigma({\bf k}_{H},\omega)\) as in Eq. (20), the integral in Eq. (19) is logarithmically divergent at large frequencies \(\omega^{\prime}\). This is due to the fact that the linear frequency dependence has been obtained from an expansion that captures only the asymptotic low frequency behavior, which cannot be extended to all frequencies. The imaginary part of the exact self-energy of any physical system has to vanish in the high-frequency limit. To compute the low-frequency
behavior of \({\rm Re}\Sigma\), we mimic the high-frequency decay of \({\rm Im}\Sigma\) by imposing an ultraviolet frequency cutoff \(\Lambda_{\omega}\), so that the frequency integration in Eq. (19) is restricted to \(|\omega^{\prime}|<\Lambda_{\omega}\). Defining \(\delta\Sigma({\bf k},\omega)=\Sigma({\bf k},\omega)-\Sigma({\bf k}_{H},0)\), we then obtain
\[{\rm Re}\,\delta\Sigma({\bf k}_{H},\omega)=-\frac{M}{N}\,\frac{A_{+}+A_{-}}{ \pi}\,\omega\ln\frac{\Lambda_{\omega}}{|\omega|}\,, \tag{22}\]
for \(|\omega|\ll\Lambda_{\omega}\).
The logarithm in Eq. (22) implies a logarithmic divergence of the inverse quasiparticle weight [18], \(Z^{-1}=1-\partial\Sigma({\bf k}_{H},\omega)/\partial\omega\sim\ln(\Lambda_{ \omega}/|\omega|)\). Hence, Landau quasiparticles do not exist at the hot spots, and Fermi liquid theory breaks down.
Logarithmic divergences are frequently a perturbative manifestation of power-law behavior, especially in (quantum) critical systems. Assuming that the one-loop result in Eq. (22) reflects the leading order of an expansion of a power-law, one obtains
\[\omega-\delta\Sigma({\bf k}_{H},\omega)\propto\left(|\omega|/\Lambda_{\omega} \right)^{-\eta_{\omega}}\omega \tag{23}\]
at low frequencies, with the anomalous dimension
\[\eta_{\omega}=\frac{M}{N}\,\frac{A_{+}+A_{-}}{\pi}\approx\frac{M}{N}\,0.039\,. \tag{24}\]
Hence, the quasiparticle weight \(Z\) vanishes as \(|\omega|^{\eta_{\omega}}\) in the low-energy limit. Note that Kramers-Kronig consistency requires that the real and imaginary parts of the self-energy obey the same power-law if \(\eta_{\omega}>0\). We emphasize that the power-law in Eq. (23) is only an educated guess. The actual behavior might be more complicated. To clarify the role of higher order contributions, a renormalization group analysis might be a useful next step.
### Frequency and momentum dependencies near hot spot
We now analyze the momentum and frequency dependence of the self-energy in the vicinity of a hot spot. We consider radial and tangential momentum dependencies separately.
For \(k_{t}=0\), we can express \({\rm Im}\Sigma({\bf k},\omega)\) from Eq. (17) in the scaling form
\[{\rm Im}\Sigma({\bf k},\omega)=-\frac{M}{N}\,A^{(r)}_{s(\omega)}(\tilde{k}_{r })\,|\omega|\,, \tag{25}\]
with the dimensionless scaling functions
\[A^{(r)}_{s(\omega)}(\tilde{k}_{r})=-\int_{\tilde{\bf q}}^{\prime}{\rm Im} \frac{4s(\omega)}{|\tilde{q}_{t}|\left[I\left(\frac{-\tilde{k}_{r}+\tilde{q} _{t}^{4}-s(\omega)}{\tilde{q}_{t}^{4}}\right)+I^{*}\left(\frac{\tilde{k}_{r}-2 \tilde{q}_{r}-\tilde{q}_{t}^{4}+s(\omega)}{\tilde{q}_{t}^{4}}\right)\right]}\,, \tag{26}\]
where the integration region is restricted to \(0<-\tilde{k}_{r}+\tilde{q}_{r}+\tilde{q}_{t}^{4}<1\) for \(\omega>0\), and to \(-1<-\tilde{k}_{r}+\tilde{q}_{r}+\tilde{q}_{t}^{4}<0\) for \(\omega<0\). The rescaled variables are defined by \(q_{r}=|\omega/v_{F}|\tilde{q}_{r}\), \(q_{t}=|\omega/b|^{1/4}\tilde{q}_{t}\), and \(k_{r}=|\omega/v_{F}|\tilde{k}_{r}\). The scaling functions \(A_{\pm}^{(r)}\) are shown graphically in Fig. 4. For \(\tilde{k}_{r}=0\) we recover Eq. (20), since \(A_{\pm}^{(r)}(0)=A_{\pm}\) from Eq. (21). For small finite \(\tilde{k}_{r}\), the leading \(\tilde{k}_{r}\) dependence of \(A_{s(\omega)}^{(r)}(\tilde{k}_{r})\) is linear,
\[A_{\pm}^{(r)}(\tilde{k}_{r})=A_{\pm}+B_{\pm}\tilde{k}_{r}+{\cal O}(\tilde{k}_{ r}^{2})\,, \tag{27}\]
with \(B_{+}\approx-0.050\) and \(B_{-}\approx 0.027\).
In Fig. 5 we show the frequency dependence of \(\text{Im}\Sigma({\bf k},\omega)\) for various choices of \(k_{r}\). For small \(|\omega|\), the leading frequency dependence is quadratic. For \(|\omega|\gg v_{F}|k_{r}|\), the curves
Figure 5: Imaginary part of the one-loop self-energy as a function of frequency for various choices of the radial momentum variable \(k_{r}\). Here \(M=1\) and \(N=2\).
rapidly approach the asymptotic behavior
\[{\rm Im}\Sigma({\bf k},\omega)\sim-\frac{M}{N}\left[A_{s(\omega)}|\omega|+B_{s( \omega)}v_{F}k_{r}\right]\,, \tag{28}\]
which follows from Eq. (27). Inserting this asymptotic dependence in the Kramers-Kronig relation Eq. (19), one obtains the leading \(k_{r}\) dependence of the real part of the self-energy at zero frequency as
\[\delta\Sigma({\bf k},0)\sim-\,\frac{M}{N}\,\frac{B_{+}-B_{-}}{\pi}\,v_{F}k_{r} \,\ln\frac{\Lambda_{\omega}}{v_{F}|k_{r}|}\,. \tag{29}\]
Assuming, as before, that the logarithm reflects the leading contribution from a power-law, we expect a momentum dependence of the form
\[v_{F}k_{r}+\delta\Sigma({\bf k},0)\propto\left(v_{F}|k_{r}|/\Lambda_{\omega} \right)^{-\eta_{r}}v_{F}k_{r} \tag{30}\]
for small \(k_{r}\), with the anomalous dimension
\[\eta_{r}=\frac{M}{N}\,\frac{B_{-}-B_{+}}{\pi}\approx\frac{M}{N}\,0.025\,. \tag{31}\]
The renormalized Fermi velocity [18] given by \(\bar{v}_{F}(k_{r})=Z\left[1+\partial\Sigma/\partial k_{r}\right]v_{F}\) is proportional to \(|\omega|^{\eta_{\omega}}|k_{r}|^{-\eta_{r}}\) with \(\omega=v_{F}k_{r}\), and thus \(\bar{v}_{F}(k_{r})\propto|k_{r}|^{\eta_{\omega}-\eta_{r}}\). This quantity vanishes for \(k_{r}\to 0\), albeit very slowly, since \(\eta_{\omega}>\eta_{r}\).
We now discuss the tangential momentum dependence of the self-energy. For \(k_{r}=0\), we can express \({\rm Im}\Sigma({\bf k},\omega)\) from Eq. (17) in the scaling form
\[{\rm Im}\Sigma({\bf k},\omega)=-\frac{M}{N}\,A^{(t)}_{s(\omega)}(\tilde{k}_{t })\,|\omega|\,, \tag{32}\]
with the dimensionless scaling functions (see also Fig. 6)
\[A_{s(\omega)}^{(t)}(\tilde{k}_{t})=-\int_{\tilde{\bf q}}^{\prime}{\rm Im}\frac{4s (\omega)}{|\tilde{q}_{t}|\left[I\left(\frac{(\tilde{k}_{t}-\tilde{q}_{t})^{4}-s (\omega)}{\tilde{q}_{t}^{4}}\right)+I^{*}\left(\frac{-2\tilde{q}_{r}-(\tilde{k }_{t}-\tilde{q}_{t})^{4}+s(\omega)}{\tilde{q}_{t}^{4}}\right)\right]}\,, \tag{33}\]
where the integration region is restricted to \(0<\tilde{q}_{r}+(\tilde{k}_{t}-\tilde{q}_{t})^{4}<1\) for \(\omega>0\), and to \(-1<\tilde{q}_{r}+(\tilde{k}_{t}-\tilde{q}_{t})^{4}<0\) for \(\omega<0\). The rescaled variables are defined by \(q_{r}=|\omega/v_{F}|\tilde{q}_{r}\), \(q_{t}=|\omega/b|^{1/4}\tilde{q}_{t}\), and \(k_{t}=|\omega/b|^{1/4}\tilde{k}_{t}\). The scaling functions \(A_{\pm}^{(t)}(\tilde{k}_{t})\) are symmetric under \(\tilde{k}_{t}\mapsto-\tilde{k}_{t}\). Hence, their Taylor expansion for small \(\tilde{k}_{t}\) contains only even powers of \(\tilde{k}_{t}\),
\[A_{\pm}^{(t)}(\tilde{k}_{t})=A_{\pm}+C_{\pm}\tilde{k}_{t}^{2}+D_{\pm}\tilde{k}_ {t}^{4}+{\cal O}(\tilde{k}_{t}^{6})\,. \tag{34}\]
A numerical evaluation of Eq. (33) yields \(C_{+}=0\), \(C_{-}\approx-0.023\), and \(D_{+}\approx-0.008\). It is difficult to extract the quartic coefficient \(D_{-}\) numerically, because the quartic contribution is superseded by the much larger (for small \(\tilde{k}_{t}\)) quadratic term.
In Fig. 7 we show the frequency dependence of \({\rm Im}\Sigma({\bf k},\omega)\) for various choices of \(k_{t}\). For small \(|\omega|\), the leading frequency dependence is quadratic. For \(|\omega|\gg b|k_{t}|^{4}\), the curves approach the asymptotic behavior
\[{\rm Im}\Sigma({\bf k},\omega)\sim-\frac{M}{N}\left[A_{s(\omega)}|\omega|+C_{ s(\omega)}\sqrt{b|\omega|}k_{t}^{2}+D_{s(\omega)}bk_{t}^{4}\right]\,. \tag{35}\]
Inserting this expansion into the Kramers-Kronig relation Eq. (19), we obtain
\[\delta\Sigma({\bf k},0)=-\frac{M}{N}\,\frac{2}{\pi}(C_{+}-C_{-})\sqrt{b\Lambda_{\omega}}\,k_{t}^{2}-\frac{M}{N}\,\frac{D_{+}-D_{-}}{\pi}\,bk_{t}^{4}\ln\frac{\Lambda_{\omega}}{bk_{t}^{4}}+{\cal O}(k_{t}^{4})\,, \tag{36}\]
Figure 7: Imaginary part of the one-loop self-energy as a function of frequency for various choices of the tangential momentum variable \(k_{t}\). Here \(M=1\) and \(N=2\).
for small \(k_{t}\). The leading term is quadratic in \(k_{t}\) and depends strongly on the ultraviolet cutoff \(\Lambda_{\omega}\). This term, along with generic regular many-body contributions of the same order, leads to a renormalized dispersion relation \(\bar{\xi}_{\bf k}\) with a quadratic tangential momentum dependence, in conflict with our original assumption. However, the case of a dispersion with a vanishing quadratic dependence on \(k_{t}\) can be restored by slightly shifting the bare parameters of our system such that the self-energy corrections cancel the quadratic \(k_{t}\) dependence of the bare dispersion relation. Between parameter regimes with a locally convex and a locally concave Fermi surface (at the hot spots), there must be a transition point where the Fermi surface is flat, in the interacting system as well as in the non-interacting reference model. Hence, we are left with the second term in Eq. (36) only, which can be promoted to a power-law of the form
\[bk_{t}^{4}+\delta\Sigma({\bf k},0)\propto\left(bk_{t}^{4}/\Lambda_{\omega} \right)^{-\eta_{t}}bk_{t}^{4} \tag{37}\]
for small \(k_{t}\), with the anomalous dimension
\[\eta_{t}=\frac{M}{N}\,\frac{D_{-}-D_{+}}{\pi}\,. \tag{38}\]
We have no numerical estimate for \(D_{-}\), but from the numerical data for \(A_{-}^{(t)}(\tilde{k}_{t})\) we can see that it must be very small, such that the anomalous dimension \(\eta_{t}\) is also small, in line with the other anomalous dimensions \(\eta_{\omega}\) and \(\eta_{r}\).
Combining the results on the frequency dependence of the quasiparticle weight \(Z\) with the results on the radial and tangential momentum dependence of the self-energy, we obtain a renormalized dispersion relation of the form
\[\bar{\xi}_{\bf k}=a_{r}\,{\rm sgn}(k_{r})|k_{r}|^{\alpha_{r}}+a_{t}|k_{t}|^{ \alpha_{t}}\,, \tag{39}\]
where \(\alpha_{r}=1+\eta_{\omega}-\eta_{r}\) and \(\alpha_{t}=4(1+\eta_{\omega}-\eta_{t})\), while \(a_{r}\) and \(a_{t}\) are non-universal coefficients. The anomalous dimensions are relatively small, so that \(\alpha_{r}\) and \(\alpha_{t}\) remain close to the bare values one and four, respectively. A renormalized dispersion of the form Eq. (39) has been obtained earlier in an \(\epsilon\)-expansion [14] for the standard case of a bare dispersion with a quadratic tangential momentum dependence. In that case, the anomalous dimensions turned out to be quite large in the physical dimension two (corresponding to \(\epsilon=\frac{1}{2}\)), leading to a strongly flattened Fermi surface with an almost quartic shape \(k_{r}\propto|k_{t}|^{3.85}\).
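Collecting the one-loop numbers quoted above, the exponents of Eq. (39) can be tabulated explicitly. The short sketch below sets \(\eta_{t}\) to zero, since \(D_{-}\) could not be extracted numerically:

```python
import numpy as np

A_plus, A_minus = 0.049, 0.072      # Eq. (21)
B_plus, B_minus = -0.050, 0.027     # Eq. (27)
N = 2                               # spin multiplicity

for M, label in [(1, "charge density wave"), (3, "spin density wave")]:
    eta_w = (M/N)*(A_plus + A_minus)/np.pi      # Eq. (24)
    eta_r = (M/N)*(B_minus - B_plus)/np.pi      # Eq. (31)
    eta_t = 0.0                                 # unknown but small, cf. Eq. (38)
    alpha_r = 1 + eta_w - eta_r
    alpha_t = 4*(1 + eta_w - eta_t)
    print(f"{label}: eta_omega = {eta_w:.4f}, eta_r = {eta_r:.4f}, "
          f"alpha_r = {alpha_r:.3f}, alpha_t = {alpha_t:.3f}")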
## IV Conclusion
We have analyzed quantum fluctuation effects at the onset of charge or spin density wave order with an incommensurate nesting (\(2k_{F}\)) wave vector \(\mathbf{Q}\) in two-dimensional metals - for the special case where \(\mathbf{Q}\) connects a pair of hot spots situated at flat high symmetry points of the Fermi surface with a vanishing Fermi surface curvature. The leading tangential momentum dependence of the bare dispersion is quartic at these points. The charge or spin susceptibilities form a pronounced peak at \(\mathbf{Q}\).
We have computed the fermion self-energy \(\Sigma(\mathbf{k},\omega)\) at the QCP as a function of (real) frequency and momentum near the hot spots in RPA, that is, in one-loop approximation. At the hot spots, the frequency dependence of \(\mathrm{Im}\Sigma\) is linear and slightly asymmetric, while the quasiparticle weight and the momentum dependence of the self-energy exhibit logarithmic divergences with universal prefactors. Hence, there are no Landau quasiparticles at the hot spots, giving rise to non-Fermi liquid behavior.
A tentative resummation of the logarithms leads to power-laws with small universal anomalous dimensions. The quasiparticle weight vanishes with a small power of frequency. The renormalized dispersion relation has the form \(\bar{\xi}_{\mathbf{k}}=a_{r}\,\mathrm{sgn}(k_{r})|k_{r}|^{\alpha_{r}}+a_{t}|k_ {t}|^{\alpha_{t}}\) near the hot spots, where \(k_{r}\) and \(k_{t}\) are radial and tangential relative momentum variables, respectively. The exponents \(\alpha_{r}\) and \(\alpha_{t}\) are close to the corresponding exponents of the bare dispersion relation one and four, respectively. Since the renormalized dispersion relation has almost the same form as the bare one, and the quasiparticle weight vanishes only slowly, the self-energy corrections do not destroy the peak at the nesting vector in the susceptibility. The \(2k_{F}\) QCP is thus stable. In particular, there are no indications that it might be replaced by a first order transition, in contrast to the more delicate situation for a quadratic dispersion [7].
The QCP with flat hot spots could be realized experimentally in suitable layered compounds or cold atom systems by fine tuning two parameters, for example, density and interaction strength. Moreover, our model is an instructive prototype of a larger class of systems with a _bare_ dispersion of the form \(\xi_{\mathbf{k}}=a_{r}\,\mathrm{sgn}(k_{r})|k_{r}|^{\alpha_{r}^{(0)}}+a_{t}|k _{t}|^{\alpha_{t}^{(0)}}\). In our case, where \(\alpha_{r}^{(0)}=1\) and \(\alpha_{t}^{(0)}=4\), the analysis is comparatively simple, since several integrations can be performed analytically, and the renormalized dispersion remains close to the bare one. From a theoretical point of view it would be interesting to extend the analysis to more general bare exponents \(\alpha_{r}^{(0)}\) and \(\alpha_{t}^{(0)}\), and to see whether the renormalized dispersion has
universal exponents. In particular, it remains to be clarified whether in the conventional but delicate case \(\alpha_{r}^{(0)}=1\) and \(\alpha_{t}^{(0)}=2\) the mean-field QCP survives in the presence of fluctuations, with a renormalized dispersion of the above form and a flattened Fermi surface, as suggested by the one-loop \(\epsilon\)-expansion by Halbinger et al. [14].
###### Acknowledgements.
We are grateful to Walter Hofstetter, Thomas Schäfer, and Jáchym Sýkora for valuable discussions, and to Pietro Bonetti for providing the data for the bare susceptibility shown in Fig. 2.
## Appendix A Evaluation of particle-hole bubble
The \(k_{r}\) integral in Eq. (4) can be easily carried out by using the residue theorem. Shifting the remaining integration variables as \(k_{t}\to k_{t}+q_{t}/2\) and \(k_{0}\to k_{0}+q_{0}/2\) to symmetrize the integrand, one obtains
\[\Pi_{0}(\mathbf{q},iq_{0})=\frac{i}{v_{F}}\int_{-\infty}^{\infty}\frac{dk_{0}} {2\pi}\int_{-\infty}^{\infty}\frac{dk_{t}}{2\pi}\,\frac{\Theta(-k_{0}-q_{0}/2 )-\Theta(k_{0}-q_{0}/2)}{2ik_{0}-v_{F}q_{r}-b(k_{t}+q_{t}/2)^{4}-b(k_{t}-q_{t}/ 2)^{4}} \tag{10}\]
The \(k_{t}\) integration can also be performed via residues. The denominator in Eq. (10) has four poles in the complex \(k_{t}\) plane, namely
\[k_{t}^{ss^{\prime}}=s\sqrt{-\frac{3}{4}q_{t}^{2}+\frac{s^{\prime}}{\sqrt{2}} \sqrt{q_{t}^{4}+(2ik_{0}-v_{F}q_{r})/b}}\,, \tag{11}\]
where \(s,s^{\prime}\in\{+,-\}\). Closing the integration contour in the upper complex half-plane, only the poles in the upper half-plane contribute. For \(k_{0}>0\), these are the poles \(k_{t}^{++}\) and \(k_{t}^{--}\), and the corresponding residues are
\[R^{++} = \frac{1}{(k_{t}^{++}-k_{t}^{+-})(k_{t}^{++}-k_{t}^{-+})(k_{t}^{++ }-k_{t}^{--})}=\frac{1}{2\sqrt{2}k_{t}^{++}\sqrt{q_{t}^{4}+(2ik_{0}-v_{F}q_{r} )/b}}\,, \tag{12a}\] \[R^{--} = \frac{1}{(k_{t}^{--}-k_{t}^{++})(k_{t}^{--}-k_{t}^{+-})(k_{t}^{-- }-k_{t}^{-+})}=\frac{-1}{2\sqrt{2}k_{t}^{--}\sqrt{q_{t}^{4}+(2ik_{0}-v_{F}q_{r} )/b}}\,. \tag{12b}\]
For \(k_{0}<0\), the poles \(k_{t}^{+-}\) and \(k_{t}^{-+}\) are situated in the upper half-plane, and the corresponding residues are
\[R^{+-} =\frac{1}{(k_{t}^{+-}-k_{t}^{++})(k_{t}^{+-}-k_{t}^{-+})(k_{t}^{+-} -k_{t}^{--})}=-R^{--}\,, \tag{10a}\] \[R^{-+} =\frac{1}{(k_{t}^{-+}-k_{t}^{++})(k_{t}^{-+}-k_{t}^{+-})(k_{t}^{- +}-k_{t}^{--})}=-R^{++}\,, \tag{10b}\]
where \(R^{++}\) and \(R^{--}\) are defined by the expressions on the right hand sides of Eq. (10), but now for \(k_{0}<0\). The numerator in Eq. (11) partitions the \(k_{0}\) axis in three regions,
\[\Theta(-k_{0}-q_{0}/2)-\Theta(k_{0}-q_{0}/2)=\left\{\begin{array}{rl}&1\, \,\,\mbox{for}\,\,\,k_{0}<-|q_{0}|/2\\ &0\,\,\,\mbox{for}\,\,\,-|q_{0}|/2<k_{0}<|q_{0}|/2\\ &-1\,\,\,\mbox{for}\,\,\,k_{0}>|q_{0}|/2\end{array}\right. \tag{11}\]
Performing the \(k_{t}\) integral in Eq. (11) by using the residue theorem, one thus obtains
\[\Pi_{0}({\bf q},iq_{0})=-\frac{1}{2bv_{F}}\int_{-\infty}^{\infty}\frac{dk_{0} }{2\pi}\,\Theta(|k_{0}|-|q_{0}|/2)\left(R^{++}+R^{--}\right)\,. \tag{12}\]
The integral diverges in the ultraviolet, but the integral for \(\delta\Pi_{0}({\bf q},iq_{0})=\Pi_{0}({\bf q},iq_{0})-\Pi_{0}({\bf Q},0)\) is finite.
In the special case \(q_{t}=0\), the \(k_{0}\) integral in Eq. (12) is elementary. Setting \(q_{t}=0\), one obtains
\[k_{t}^{++}(q_{t}=0) =\left(\frac{2ik_{0}-v_{F}q_{r}}{2b}\right)^{1/4}\,, \tag{13a}\] \[k_{t}^{--}(q_{t}=0) =\left(\frac{2ik_{0}-v_{F}q_{r}}{2b}\right)^{1/4}is(k_{0})\,, \tag{13b}\]
where \(s(k_{0})\) is the sign of \(k_{0}\). The residues simplify to
\[R^{++}(q_{t}=0) =\frac{1}{4}\left(\frac{2ik_{0}-v_{F}q_{r}}{2b}\right)^{-3/4}\,, \tag{14a}\] \[R^{--}(q_{t}=0) =\frac{i}{4}\left(\frac{2ik_{0}-v_{F}q_{r}}{2b}\right)^{-3/4}s(k_{ 0})\,. \tag{14b}\]
Inserting this into Eq. (12) and introducing an ultraviolet cutoff \(\Lambda\), we obtain
\[\Pi_{0}(q_{r},0,iq_{0})=-\frac{1}{8bv_{F}}\int_{-\Lambda}^{\Lambda}\frac{dk_{0 }}{2\pi}\,\Theta(|k_{0}|-|q_{0}|/2)\left(\frac{2ik_{0}-v_{F}q_{r}}{2b}\right)^ {-3/4}[1+s(k_{0})]\,. \tag{15}\]
The frequency integration is obviously elementary. Subtracting \(\Pi_{0}(0,0,0)\), we can take the limit \(\Lambda\to\infty\), yielding
\[\delta\Pi_{0}(q_{r},0,iq_{0})=\frac{1}{4\pi v_{F}(2b)^{1/4}}\left[(1-i)\sqrt[4 ]{i|q_{0}|-v_{F}q_{r}}+(1+i)\sqrt[4]{-i|q_{0}|-v_{F}q_{r}}\,\right]\,. \tag{16}\]
Analytic continuation of this expression in the upper complex frequency half-plane to real frequencies yields Eq. (6).
For \(q_{t}\neq 0\) we write \(\Pi_{0}\) as a sum of two terms, \(\Pi_{0}=\Pi_{0}^{+}+\Pi_{0}^{-}\), where \(\Pi_{0}^{+}\) and \(\Pi_{0}^{-}\) are obtained from the contributions with \(k_{0}>0\) and \(k_{0}<0\) to the integral in Eq. (10), respectively. Shifting the integration variable by \(\pm|q_{0}|/2\), one obtains
\[\Pi_{0}^{-}({\bf q},iq_{0}) =-\frac{1}{4bv_{F}}\int_{-\infty}^{0}\frac{dk_{0}}{2\pi}\,\frac{1 }{\sqrt{2}\sqrt{q_{t}^{4}+\frac{2ik_{0}-i|q_{0}|-v_{F}q_{r}}{b}}}\sum_{s=\pm 1 }\frac{1}{\sqrt{-\frac{3}{4}q_{t}^{2}+\frac{s}{\sqrt{2}}\sqrt{q_{t}^{4}+\frac{2 ik_{0}-i|q_{0}|-v_{F}q_{r}}{b}}}}\,,\] \[\Pi_{0}^{+}({\bf q},iq_{0}) =-\frac{1}{4bv_{F}}\int_{0}^{\infty}\frac{dk_{0}}{2\pi}\,\frac{1 }{\sqrt{2}\sqrt{q_{t}^{4}+\frac{2ik_{0}+i|q_{0}|-v_{F}q_{r}}{b}}}\sum_{s=\pm 1 }\frac{1}{\sqrt{-\frac{3}{4}q_{t}^{2}+\frac{s}{\sqrt{2}}\sqrt{q_{t}^{4}+\frac{2 ik_{0}+i|q_{0}|-v_{F}q_{r}}{b}}}}\,. \tag{11}\]
The integrals in Eq. (11) are UV divergent. Subtracting \(\Pi_{0}^{\pm}({\bf Q},0)\), one obtains finite expressions for \(\delta\Pi_{0}^{\pm}({\bf q},iq_{0})=\Pi_{0}^{\pm}({\bf q},iq_{0})-\Pi_{0}^{\pm }({\bf Q},0)\).
Eq. (11) can be continued analytically to the entire upper complex frequency half-plane by simply extending \(i|q_{0}|\to z\) with \({\rm Im}z>0\) (note that \(|q_{0}|=q_{0}\) in the upper frequency plane). One can easily check that the integrands encounter no poles or branch cuts for \({\rm Im}z>0\), for any \(k_{0}\). Hence, the continuation to real frequencies is obtained by substituting \(i|q_{0}|\to\omega+i0^{+}\). Moreover, since \(k_{0}\) and \(q_{0}\) enter via the linear combinations \(2ik_{0}-i|q_{0}|\) for \(k_{0}<0\) and \(2ik_{0}+i|q_{0}|\) for \(k_{0}>0\), the infinitesimal imaginary part \(i0^{+}\) is redundant and can be dropped. At this point it is clear that \(\delta\Pi_{0}^{-}\) and \(\delta\Pi_{0}^{+}\) depend on \(q_{r}\) and \(\omega\) only via the linear combinations \(-\omega-v_{F}q_{r}\) and \(\omega-v_{F}q_{r}\), respectively. Substituting \(k_{0}\) by \(-k_{0}\) in the integral for \(\Pi_{0}^{-}\), the analytic continuation of Eq. (11) to real frequencies can be written as
\[\Pi_{0}^{\pm}({\bf q},\omega)=-\frac{1}{4bv_{F}}\int_{0}^{\infty}\frac{dk_{0}} {2\pi}\,\frac{1}{\sqrt{2}\sqrt{q_{t}^{4}+\frac{\pm 2ik_{0}\pm\omega-v_{F}q_{r}}{b}}} \sum_{s=\pm 1}\frac{1}{\sqrt{-\frac{3}{4}q_{t}^{2}+\frac{s}{\sqrt{2}}\sqrt{q_{t }^{4}+\frac{\pm 2ik_{0}\pm\omega-v_{F}q_{r}}{b}}}}\,. \tag{12}\]
To obtain the scaling form Eq. (8), we introduce a dimensionless integration variable \(\tilde{k}_{0}\) defined by \(k_{0}=b|q_{t}|^{4}\tilde{k}_{0}\), yielding
\[\Pi_{0}^{\pm}({\bf q},\omega)=-\frac{|q_{t}|}{4v_{F}}\int_{0}^{\infty}\frac{d\tilde{k}_{0}}{2\pi}\,\frac{1}{\sqrt{2}\sqrt{1\pm 2i\tilde{k}_{0}+\frac{\pm\omega-v_{F}q_{r}}{bq_{t}^{4}}}}\sum_{s=\pm 1}\frac{1}{\sqrt{-\frac{3}{4}+\frac{s}{\sqrt{2}}\sqrt{1\pm 2i\tilde{k}_{0}+\frac{\pm\omega-v_{F}q_{r}}{bq_{t}^{4}}}}}\,. \tag{13}\]
Subtracting \(\Pi_{0}^{\pm}({\bf Q},0)\) with the same substitution we obtain
\[\delta\Pi_{0}^{\pm}({\bf q},\omega)=\frac{|q_{t}|}{4v_{F}}\,I^{\pm}\Big{(}\frac{ \pm\omega-v_{F}q_{r}}{bq_{t}^{4}}\Big{)}\,, \tag{100}\]
with the dimensionless scaling functions
\[I^{\pm}(x)=\int_{0}^{\infty}\frac{d\tilde{k}_{0}}{2\pi}\left[\frac{1\pm i}{2( \pm i\tilde{k}_{0})^{3/4}}-\frac{\frac{1}{\sqrt{-\frac{3}{4}+\sqrt{\frac{1}{2 }(1+x\pm 2i\tilde{k}_{0})}}}+\frac{1}{\sqrt{-\frac{3}{4}-\sqrt{\frac{1}{2}(1+ x\pm 2i\tilde{k}_{0})}}}}{\sqrt{2(1+x\pm 2i\tilde{k}_{0})}}\right]\,. \tag{101}\]
Obviously \(I^{+}(x)\) and \(I^{-}(x)\) are related by complex conjugation, that is, \(I^{+}(x)=[I^{-}(x)]^{*}\). Denoting \(I^{+}(x)\) as \(I(x)\), we obtain Eqs. (8) and (9).
|
2308.11242 | Faster Optimization in S-Graphs Exploiting Hierarchy | 3D scene graphs hierarchically represent the environment appropriately
organizing different environmental entities in various layers. Our previous
work on situational graphs extends the concept of 3D scene graph to SLAM by
tightly coupling the robot poses with the scene graph entities, achieving
state-of-the-art results. Though, one of the limitations of S-Graphs is
scalability in really large environments due to the increased graph size over
time, increasing the computational complexity.
To overcome this limitation in this work we present an initial research of an
improved version of S-Graphs exploiting the hierarchy to reduce the graph size
by marginalizing redundant robot poses and their connections to the
observations of the same structural entities. Firstly, we propose the
generation and optimization of room-local graphs encompassing all graph
entities within a room-like structure. These room-local graphs are used to
compress the S-Graphs marginalizing the redundant robot keyframes within the
given room. We then perform windowed local optimization of the compressed graph
at regular time-distance intervals. A global optimization of the compressed
graph is performed every time a loop closure is detected. We show similar
accuracy compared to the baseline while showing a 39.81% reduction in the
computation time with respect to the baseline. | Hriday Bavle, Jose Luis Sanchez-Lopez, Javier Civera, Holger Voos | 2023-08-22T07:35:15Z | http://arxiv.org/abs/2308.11242v1 | # Faster Optimization in S-Graphs Exploiting Hierarchy
###### Abstract
3D scene graphs hierarchically represent the environment, appropriately organizing different environmental entities in various layers. Our previous work on situational graphs [1, 2] extends the concept of 3D scene graphs to SLAM by tightly coupling the robot poses with the scene graph entities, achieving state-of-the-art results. However, one limitation of _S-Graphs_ is scalability in very large environments, since the graph size grows over time and with it the computational complexity.
To overcome this limitation, in this work we present initial research on an improved version of _S-Graphs_ that exploits the hierarchy to reduce the graph size by marginalizing redundant robot poses and their connections to the observations of the same structural entities. Firstly, we propose the generation and optimization of room-local graphs encompassing all graph entities within a room-like structure. These room-local graphs are used to compress the _S-Graphs_, marginalizing the redundant robot keyframes within the given room. We then perform windowed local optimization of the compressed graph at regular time-distance intervals. A global optimization of the compressed graph is performed every time a loop closure is detected. We show similar accuracy compared to the baseline while achieving a 39.81% reduction in computation time with respect to the baseline.
## I Introduction
_S-Graphs_ [1, 2] provide mobile robots with a deep understanding of their situation through the real-time generation of a 3D scene graph tightly coupled to the robot poses in the form of a four-layered optimizable graph. _S-Graphs_ consist of a keyframes layer comprising the robot poses, a walls layer encompassing the wall-plane entities, a rooms layer constraining the wall-planes of appropriate rooms, and a floors layer constraining all the rooms within a given floor level. As the robot explores large-scale environments, this optimizable graph grows in the number of keyframes and semantic elements, thus increasing the computational complexity and optimization time.
To overcome the mentioned limitation, in this preliminary work, we present an approach that exploits the inherent hierarchy extracted by the baseline _S-Graphs_ to reduce the computational complexity while maintaining the accuracy of the robot poses and the map. Every time a room is detected, all the enclosed robot keyframes, along with their attached wall-planes and the room node, form a room-local graph which is locally optimized and then utilized to marginalize the redundant keyframe nodes. In the current work, only one robot pose is kept per room; the rest are marked for marginalization. This marginalization is propagated to the entire _S-Graph_ to generate a compressed version of the graph. Furthermore, we perform optimization of the compressed graph in two steps, local and global optimization. The local optimization is performed over a window of keyframes as the robot
Fig. 1: Generation of _S-Graph_ utilizing the improved optimization technique. The redundant keyframes within a given room are marginalized (red spheres) after performing room-local optimization. The remaining keyframes (black spheres) along with their connected walls-rooms-floors, undergo local and global optimization steps. The orange boxes over the keyframes highlight the _S-Graph_ undergoing the global optimization step where all the keyframes are utilized except the marginalized ones.
explores the environment, while the global optimization over the complete compressed graph is performed only on loop closures. Initial experimentation performed over simulated and real data shows reduced computational complexity of our proposed optimization framework while maintaining the robot pose and map accuracy. Fig. 1 highlights the presented approach, where the underlying _S-Graph_ undergoes a global optimization step that excludes the redundant keyframes (colored in red) within a given room structure.
## II Related Work
Several methods exist in the literature for performing factor graph compression, maintaining the map accuracy while bounding the computational complexity. [3] presents an incremental smoothing and mapping approach that considers the information gain of each measurement with respect to the state estimates, thus solving only the parts of the graph affected by the new measurements instead of the entire graph. [4] presents a hierarchical optimization technique grouping nodes into different subgraphs at a lower level based on simple distance-based criteria. These subgraphs form a reduced higher-level graph which is then optimized quickly and efficiently. The authors of [5] present an information-theoretic approach for factor graph compression where laser scan measurements and their corresponding robot poses are removed such that the expected loss of information with respect to the current map is minimized.
In recent years, 3D scene graphs have emerged as an efficient way to represent the environment [2, 6, 7]. Although these works efficiently represent the environment at different hierarchical levels, redundant information stored at different levels is neither removed nor compressed, which can lead to increased computational complexity while exploring very large-scale environments. Inspired by factor graph compression techniques, in this work we present ongoing research on marginalizing information from 3D scene graphs by exploiting their underlying hierarchy.
## III Proposed Approach
Fig. 2 presents an overview of the proposed work. The robot odometry and 3D LiDAR information are utilized for generating the four-layered _S-Graph_. The point cloud information mapped at each robot keyframe is utilized to extract wall planes, creating the walls layer. The extracted walls are utilized by the room detector to extract rooms, and by the floor detector to extract floors. During the creation of the _S-Graph_, the underlying optimization can be divided into three different stages, namely (1) Room-Local Optimization, (2) Local Optimization, and (3) Global Optimization, all explained in the following sections.
### _Room-Local Optimization_
Whenever a room is detected, room-local optimization generates a local _S-Graph_ lying within the given room structure. To generate a room-local _S-Graph_, we first look for a subset of keyframes \(K_{s}\subset K\) that are bounded by the walls of a four-wall room \(R_{j}\), i.e., that satisfy the following condition:
\[\forall\mathbf{n}_{i}\in R_{j},\quad\mathbf{n}_{i}\cdot(\mathbf{k}_{i}-\mathbf{{}^{M}}\mathbf{\Pi} _{i})<0 \tag{1}\]
where \(\mathbf{{}^{M}}\mathbf{\Pi}_{i}=\mathbf{n}_{i}\cdot d_{i}\), and \(\mathbf{n}_{i}\), \(d_{i}\) are the normal and the distance of wall-plane \(i\). \(\mathbf{k}_{i}\) is the translation of the keyframe \(K_{i}\). Furthermore, we add to the room-local _S-Graph_ all the wall-planes \(\mathbf{\pi}_{i}\) connected to the keyframe subset \(K_{s}\). Lastly, we incorporate the room vertex \(R_{j}\) and the floor \(F_{l}\). Inspired by [8], all the keyframes outside the room but connected to the wall-planes \(\mathbf{\pi}_{i}\) are incorporated in the room-local graph but kept constant during optimization.
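A minimal sketch of the containment test in Eq. (1) is given below (in Python). It assumes that each wall-plane is stored as a pair \((\mathbf{n}_{i},d_{i})\) and that the normals are oriented as required by Eq. (1), i.e., a keyframe inside the room yields a negative signed distance to every wall; the room geometry in the example is purely illustrative:

```python
import numpy as np

def inside_room(keyframe_t, walls):
    """Containment test of Eq. (1).

    keyframe_t : translation k_i of a keyframe (3-vector).
    walls      : list of (n, d) pairs for the wall-planes of room R_j, with unit
                 normal n and plane distance d, so that the plane point is Pi = n*d.
                 Normals are assumed oriented such that points inside the room give
                 a negative signed distance to every wall, as in Eq. (1).
    """
    k = np.asarray(keyframe_t, dtype=float)
    for n, d in walls:
        n = np.asarray(n, dtype=float)
        if np.dot(n, k - n*d) >= 0.0:     # n_i . (k_i - ^M Pi_i) must be < 0
            return False
    return True

# Illustrative axis-aligned 4 m x 6 m room centred at the origin.
walls = [((+1, 0, 0), 2.0), ((-1, 0, 0), 2.0), ((0, +1, 0), 3.0), ((0, -1, 0), 3.0)]
print(inside_room((0.5, -1.0, 0.0), walls))   # True  -> keyframe joins K_s
print(inside_room((5.0,  0.0, 0.0), walls))   # False -> keyframe stays outside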
The room-local _S-Graph_ is then optimized to perform a room-local optimization step which optimizes only a subset of the entire graph. Since all the robot poses within a given
Fig. 2: System architecture of the presented approach, divided into front-end and back-end. The front-end comprises of modules for generating the four-layered _S-Graph_. In the back-end, the generated _S-Graph_ undergoes different optimization steps. In this example keyframe \(X_{t+1}\) belonging to the room is marginalized out to compress the graph.
room observe the same structure, only the first robot pose within the room is maintained. The other robot poses can be safely marginalized from the graph along with their observations. This optimized and marginalized information is used to create a compressed version of the _S-Graph_.
### _Local Optimization_
The marginalized information from the room-local optimization step is used to generate a compressed _S-Graph_ excluding the marginalized keyframes and their edges. Marginalization leads to a disconnected graph, where the disconnections are between the marginalized keyframes and their non-marginalized neighbors. To obtain a connected graph, we check the edges \(e_{k}\) of the \(i^{\text{th}}\) marginalized keyframe \(K_{i}\) to its non-marginalized neighbors and connect the two closest non-marginalized neighbors \(K_{1}\) and \(K_{2}\) with a new edge \(e_{f}\). The information matrix of the new edge is the summation of the information matrices of the eliminated edges \(e_{k}\).
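The sketch below illustrates this reconnection step on a simple edge-list representation. It is a simplified stand-in for proper marginalization (a full implementation would typically use a Schur complement); node positions, edge tuples, and the choice to sum the information matrices of the eliminated edges follow the description above:

```python
import numpy as np

def compress_graph(positions, edges, marginalized):
    """Drop marginalized keyframes and reconnect their two closest neighbours.

    positions    : dict node_id -> np.array position (used only for the distance test)
    edges        : list of (i, j, info) tuples, info being an information matrix
    marginalized : set of keyframe ids marked redundant by the room-local step
    """
    kept = [e for e in edges if e[0] not in marginalized and e[1] not in marginalized]
    for m in marginalized:
        incident = [e for e in edges if m in (e[0], e[1])]
        nbrs = {(j if i == m else i) for i, j, _ in incident
                if (j if i == m else i) not in marginalized}
        if len(nbrs) < 2:
            continue                               # nothing to reconnect
        nbrs = sorted(nbrs, key=lambda n: np.linalg.norm(positions[n] - positions[m]))
        a, b = nbrs[0], nbrs[1]                    # the two closest non-marginalized neighbours
        info_new = sum(info for _, _, info in incident)   # summed information of eliminated edges
        kept.append((a, b, info_new))              # new edge e_f
    return kept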
After the generation of the compressed _S-Graph_, the local optimization performs a windowed optimization over the last \(N\) subset of the keyframes \(K_{N}\) and their connected walls, rooms, and floors within the compressed graph. The additional keyframes not in \(K_{N}\) that observe the walls and rooms are included in the optimization but kept constant.
### _Global Optimization_
Global optimization considers the optimization of the entire compressed _S-Graph_ without the marginalized keyframes and their edges, if a loop closure is detected between the robot keyframes. In global optimization, only the first keyframe is fixed for optimization stability.
## IV Experimental Validation
We validate our approach on different simulated and real datasets captured using a velodyne 3D LiDAR and a legged robotic platform. Currently, we compare our approach with the baseline _S-Graphs+_[2]. In both the simulated and real experiments, the robot performs several random and repeated trajectories in indoor environments with different room configurations. For the simulated experiments, we report the ATE of the robot pose along with the computation time required during the optimization steps. For real experiments, we provide qualitative results in terms of map quality generated by our approach and the baseline while comparing its computation time.
Table I shows the results obtained on two simulated datasets when comparing against the baseline _S-Graphs+_ [2]. As can be seen from the table, our proposed method provides similar or better accuracy in terms of the ATE compared with the baseline, while improving the computation time of the global optimization with respect to the baseline by \(33\%\).
Fig. 3 shows qualitative results of the obtained map during the execution of the real experiment. As can be appreciated from the figure, our proposed approach provides the same map quality as the baseline when compared with the ground truth map. In this experiment as well, the average computation time of _S-Graphs+_ is \(129\) ms, while our proposed approach has an average computation time of \(60\) ms.
## V Conclusion
In this paper, we present an initial research strategy for compression and faster optimization for the real-time generation of situational graphs. We exploit the inherent hierarchical representations generated by _S-Graphs_ to generate room-local optimizable graphs, which are used to generate a compressed version of the underlying _S-Graph_. We also propose to break down the optimization of the compressed graph into two steps, namely local (windowed) optimization and global optimization. We perform preliminary experiments to compare our approach to the baseline _S-Graphs+_, achieving similar results in terms of accuracy but improved computation time. As future work, we
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c|}{**ATE** (cm) \(\downarrow\)} & \multicolumn{2}{c}{**Computation Time** (mean) [ms] \(\downarrow\)} \\ \cline{2-5} & \multicolumn{4}{c}{**Dataset**} \\ \hline **Method** & _C1F1_ & _C1F2_ & _C1F1_ & _C1F2_ \\ \hline _S-Graphs+_ [2] & 4.20 & **1.59** & 71.0 & 69.0 \\ Proposed (_ours_) & **4.09** & 1.73 & **53.0** & **41.0** \\ \hline \hline \end{tabular}
\end{table} TABLE I: ATE (cm) and the computation time [ms] of _S-Graphs+_ and our proposed method on simulated datasets. Best results are boldfaced.
Fig. 3: Qualitative results of a real-world experiment showing the map accuracy by our proposed work and baseline _S-Graphs+_ with respect to the ground truth map. The black spheres are the keyframes while the red spheres in our proposed work are the marginalized keyframes belonging to the rooms.
would further validate our presented approach over large-scale indoor environments with multiple floors incorporating the floor-local optimization step in the graph in addition to the room-local optimization step.
|
2310.04363 | Amortizing intractable inference in large language models | Autoregressive large language models (LLMs) compress knowledge from their
training data through next-token conditional distributions. This limits
tractable querying of this knowledge to start-to-end autoregressive sampling.
However, many tasks of interest -- including sequence continuation, infilling,
and other forms of constrained generation -- involve sampling from intractable
posterior distributions. We address this limitation by using amortized Bayesian
inference to sample from these intractable posteriors. Such amortization is
algorithmically achieved by fine-tuning LLMs via diversity-seeking
reinforcement learning algorithms: generative flow networks (GFlowNets). We
empirically demonstrate that this distribution-matching paradigm of LLM
fine-tuning can serve as an effective alternative to maximum-likelihood
training and reward-maximizing policy optimization. As an important
application, we interpret chain-of-thought reasoning as a latent variable
modeling problem and demonstrate that our approach enables data-efficient
adaptation of LLMs to tasks that require multi-step rationalization and tool
use. | Edward J. Hu, Moksh Jain, Eric Elmoznino, Younesse Kaddar, Guillaume Lajoie, Yoshua Bengio, Nikolay Malkin | 2023-10-06T16:36:08Z | http://arxiv.org/abs/2310.04363v2 | # Amortizing intractable inference
###### Abstract
Autoregressive large language models (LLMs) compress knowledge from their training data through next-token conditional distributions. This limits tractable querying of this knowledge to start-to-end autoregressive sampling. However, many tasks of interest--including sequence continuation, infilling, and other forms of constrained generation--involve sampling from intractable posterior distributions. We address this limitation by using amortized Bayesian inference to sample from these intractable posteriors. Such amortization is algorithmically achieved by fine-tuning LLMs via diversity-seeking reinforcement learning algorithms: generative flow networks (GFlowNets). We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training and reward-maximizing policy optimization. As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem and demonstrate that our approach enables data-efficient adaptation of LLMs to tasks that require multi-step rationalization and tool use.
Code: [https://github.com/GFNOrg/gfn-lm-tuning](https://github.com/GFNOrg/gfn-lm-tuning).
## 1 Introduction
Autoregressive large language models (LLMs) trained on general-domain data are vast stores of world knowledge (Petroni et al., 2019). They are typically optimized by predicting a token given its preceding context; therefore, tractable inference over this knowledge is limited to sampling conditioned on a prefix. Many useful tasks, such as infilling (Zhu et al., 2019; Liu et al., 2019), generating text conditioned on length or lexical constraints (Hokamp and Liu, 2017; Hu et al., 2019), and finding the most likely sequence continuation, involve intractable inference in LLMs.
Such tasks are related to the problem of reasoning, which has been framed as one of probabilistic inference (Gershman and Goodman, 2014). Correspondingly, the linguistic expression of reasoning can be seen as inference over language. For example, we can interpret chain-of-thought reasoning (Wei et al., 2022; Kojima et al., 2022), a paradigm of reasoning in language models, as a problem of intractable posterior inference. Given a question-answer pair \((X,Y)\), we are interested in finding latent chains of thought - token sequences \(Z\) that contribute the most to the conditional likelihood
\[p(Y\mid X)=\sum_{Z}p_{\text{LM}}(ZY\mid X)=\sum_{Z}p_{\text{LM}}(Y\mid XZ)p_{ \text{LM}}(Z\mid X), \tag{1}\]
where \(p_{\text{LM}}\) denotes the likelihood assigned to a sequence by a language model and apposition of variables (_e.g._, \(XZY\)) denotes the concatenation of the token sequences.
While past work has relied on prompting and in-context learning to produce \(Z\)'s that lead to the correct \(Y\), treating \(Z\) as a hidden variable in a latent variable model (LVM) renders chain-of-thought reasoning a Bayesian inference problem (Fig. 1). For this LVM, the distribution we must sample from is the posterior \(p_{\text{LM}}(Z\mid X,Y)=\frac{p_{\text{LM}}(XZY)}{\sum_{Z^{\prime}}p_{\text{LM }}(XZY^{\prime})}\). Such sampling is intractable: while it is easy
to evaluate \(p_{\text{LM}}(XZY)\), the conditional distributions needed to sample \(Z\) from \(p_{\text{LM}}(Z\mid X,Y)\) one token at a time are not easy to compute.
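Concretely, the unnormalized posterior weight of a candidate chain of thought \(Z\) is just the joint likelihood \(p_{\text{LM}}(XZY)\), which a single forward pass of the language model suffices to evaluate. A minimal sketch with the Hugging Face transformers library is given below; the model name and the example strings are only illustrative stand-ins, and any causal LM can be scored this way:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                 # illustrative choice of model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def log_p_lm(text: str) -> float:
    """Sum of log p(w_i | w_<i) for i >= 2 (no BOS handling, for brevity)."""
    ids = tok(text, return_tensors="pt").input_ids          # (1, T)
    logits = model(ids).logits                               # (1, T, vocab)
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)     # position t predicts token t+1
    token_lp = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

X = "Q: What is 2 + 3?"
Z = " Reasoning: 2 plus 3 equals 5."
Y = " A: 5"
print(log_p_lm(X + Z + Y))   # log of the unnormalized posterior weight of Z given (X, Y)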
A standard method to sample approximately from intractable posterior distributions is Markov chain Monte Carlo (MCMC), but it is difficult to craft good proposal distributions that would mix between the modes quickly for language data (Miao et al., 2019; Zhang et al., 2020; Lew et al., 2023), and inference on a new input may be prohibitively slow. Alternatively, one can turn to reinforcement learning (RL) approaches such as proximal policy optimization (PPO; Schulman et al., 2017), where the language model is treated as a policy to be fine-tuned. However, these do not aim to model the full diversity of the distribution; instead, learned policies settle around a small number of modes. In both cases, issues with this mode collapse are exacerbated when the target distribution is misspecified, leading to the undesirable behavior of overoptimized samplers (Gao et al., 2022).
Amortized probabilistic inference - that is, training a model to approximate a distribution of interest - provides a principled, efficient, and potentially scalable way to draw samples from the distribution (Beal, 2003). One way to implement amortized inference for high-dimensional discrete data such as text is using generative flow networks (GFlowNets; Bengio et al., 2021), which are diversity-seeking reinforcement learning algorithms that train policies to sample objects (such as a token sequence \(Z\)) with probability proportional to a given reward function, such as the joint \(p_{\text{LM}}(XZY)\).
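As a point of reference, the simplest GFlowNet objective for a sampler that generates \(Z\) left to right is trajectory balance; the sketch below is illustrative only (the objective actually used for GFlowNet fine-tuning in this work may be a different variant), with \(\log R(Z)\) standing for the log-joint \(\log p_{\text{LM}}(XZY)\) computed as above:

```python
import torch

def trajectory_balance_loss(log_pf_tokens, log_reward, log_Z):
    """Trajectory balance: (log Z + sum_t log P_F(z_t | z_<t) - log R(Z))^2.

    log_pf_tokens : (T,) per-token log-probabilities of the sampled sequence Z
                    under the policy being fine-tuned.
    log_reward    : scalar log R(Z), e.g. log p_LM(X Z Y) from the frozen reward model.
    log_Z         : learned scalar estimating the log partition function.
    """
    return (log_Z + log_pf_tokens.sum() - log_reward) ** 2

# Toy usage with made-up numbers.
log_pf = torch.tensor([-2.1, -0.7, -1.3], requires_grad=True)   # stands in for policy outputs
log_Z = torch.zeros((), requires_grad=True)
loss = trajectory_balance_loss(log_pf, torch.tensor(-5.0), log_Z)
loss.backward()    # gradients flow into the policy parameters and into log_Z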
In this work, we present a method that initializes the GFlowNet policy with a pretrained LLM and continues to train it with a reward objective that can be evaluated with the same LLM. The result is a different type of fine-tuning (FT) procedure for text generation that has a number of advantages, including improved sample diversity, data efficiency, and out-of-distribution generalization. GFlowNet fine-tuning makes the language model sample from the target distribution, enabling amortized inference in a number of applications (Fig. 1).
Leveraging this approach, we empirically demonstrate the possibilities and benefits of learning to sample from intractable distributions over text continuations, latent reasoning chains, and tool use sequences using GFlowNet fine-tuning. Notably, the diversity of samples from the models trained with GFlowNet fine-tuning is beneficial in Bayesian model averaging settings, such as when aggregating answers to questions obtained via multiple reasoning chains. For example, using a pretrained language model with 6B parameters, our method shows an absolute improvement of 10.9% over supervised fine-tuning on subjectivity classification with only 10 labeled examples (SS4.3) and outperforms supervised fine-tuning and PPO by 63% on integer arithmetic with 50 demonstrations, with notable improvements in out-of-distribution generalization (SS4.4). Moreover, the benefits of amortized inference allow us to efficiently sample from the fine-tuned model at scale. Our contributions include:
1. A general algorithm for amortized sampling from intractable LLM posteriors.
2. A probabilistic approach to fine-tuning LLMs to perform chain-of-thought reasoning.
3. Empirical results on sequence continuation, natural language reasoning, integer arithmetic with tool use, and story infilling.
## 2 Motivating example: Generating random numbers with LLMs
We consider a simple task that highlights the limitations of reward-maximizing reinforcement learning (RL) methods in fine-tuning LLMs: generating random numbers from a given distribution when
Figure 1: **Left:** Three problems of reasoning in language – sentence infilling, chain-of-thought reasoning, and problem-solving with external tool use – can all be seen as instances of the latent variable model at the top left, where an input (\(X\)) generates the output (\(Y\)) via a latent variable (\(Z\)). **Right:** We fine-tune an LLM to sample from the Bayesian posterior over \(Z\), conditioning on \(X\) and optionally on \(Y\). If conditioned on \(Y\), the trained policy can be used to sample diverse latent sequences (_e.g._, for infilling, §4.2). If not conditioned on \(Y\), the policy can sample \(Z\), and thus predict \(Y\), for inputs \(X\) not seen during training (_e.g._, for classification and multi-step reasoning, §4.3, 4.4). As shown in §4.4, modeling the full diversity of the posterior aids generalization.
prompted, _e.g._, with _'The following is a random integer drawn uniformly between 0 and 100:'_. For applications such as data simulation and probabilistic programming, we require the LLM to sample numbers from the given distribution faithfully. While this is a simple task with straightforward solutions - since the target distribution is tractable - it is useful for illustrating the behaviors of different fine-tuning methods.
Renda et al. (2023) found that pretrained LLMs perform quite poorly on this task: the distribution of numbers generated with the above prompt will be far from uniform (Fig. 1(a) shows an example using an instruction fine-tuned GPT-J 6B (Wang & Komatsuzaki, 2021)1). There may be many reasons for this, among them the effects of instruction fine-tuning and the LLM's possible bias towards numbers that are more frequent in the training data (_e.g._, numbers starting with '1' are more frequent due to the properties of many natural data-generating processes (Benford, 1938)).
Footnote 1: We use the Instruct-GPT-J model available at @l.co/nlpcloud/instruct-gpt-j-fp16.
While reward-maximizing RL can teach the model to generate valid numbers (by penalizing outputs that are not numbers from 1 to 100), it would not resolve the distribution skew introduced during pretraining. Indeed, rewarding all valid integers equally leads to an expected gradient of zero for policy gradient methods. Fig. 1(b) shows that while most samples are valid numbers after PPO training, the distribution remains highly skewed.
Instead, we can take a principled approach by training the LLM to match the target distribution with a GFlowNet learning objective. Such an objective directly optimizes the likelihood of the model generating a number to be proportional to the reward for that number, which is the number's (potentially unnormalized) probability under the target distribution. When the policy is initialized as the pretrained LLM, the resulting distribution after GFlowNet fine-tuning is shown in Fig. 1(c). Quantitatively, the KL divergence from the sampling distribution to the target (uniform) distribution decreases from 3.37 for the original LLM (on the support \([0,100]\)) to \(9.75\cdot 10^{-5}\) for the GFlowNet-fine-tuned model.
This example illustrates a general point: GFlowNet objectives provide a principled and flexible approach to fine-tuning LLMs to _match_ a target distribution where reward-maximizing RL fails to. On this simple task, this distribution matching could also be achieved through supervised fine-tuning; however, this would require access to samples from the target distribution, which are unavailable in general (though not in this simple example). In the following sections, we further illustrate this point in non-trivial problems involving intractable inference, reasoning with latent variables, and tool use.
## 3 Fine-tuning LLMs to sample from intractable distributions
We first describe how intractable inference emerges from interesting applications of LLMs, one of which is chain-of-thought reasoning seen through the lens of latent variable models, where the posterior distribution over the latent variable is intractable. We then discuss how GFlowNet objectives can be used to train amortized samplers to perform such intractable inference.
Figure 2: Empirical distributions of 512,000 integers from 1 to 100 generated by GPT-J fine-tuned with PPO (reward-maximizing; b) and GFlowNet fine-tuning (distribution-matching; c). Note the logarithmic \(y\)-scale.
### Problem: Intractable inference in large language models
Autoregressive language models decompose the distribution over sequences of tokens as a product of ordered conditionals: \(p(w_{1:N})=p(w_{1})p(w_{2}\mid w_{1})\cdots p(w_{N}\mid w_{1:N-1})\). While this decomposition makes left-to-right sampling from the distribution tractable, sampling from other conditional distributions is intractable. Various problems of language modeling can be viewed as sampling from such intractable conditionals in the distribution over sequences of an LLM; we give two such examples and related terminologies in Table 1. Some tasks we study in §4 are instances of these examples.
Tempered and contrastive sampling. In many applications (_e.g._, translation, summarization, dialogue systems), one wishes to sample from a low-temperature distribution over sequences \(Z\) conditioned on a prefix \(X\), _i.e._, \(q(Z\mid X)\propto p_{\text{LM}}(XZ)^{1/T}\) for some temperature \(T<1\), as higher-likelihood samples are more likely to be fluent or accurate continuations of \(X\) (Tillmann and Ney, 2003). The limit of \(T\to 0\) gives a distribution that is peaky on the most likely continuation. However, sampling from \(q\), or finding its mode, is intractable, and it is common to resort to approximations, such as tempering the _tokenwise_ conditional distributions or using beam search to search for a mode. A related problem is sampling a continuation with a correction for its unconditional likelihood, _e.g._, \(q(Z\mid X)\propto p_{\text{LM}}(XZ)^{\alpha}p_{\text{LM}}(Z)^{\beta}\) with \(\beta<0\) and \(\alpha>0\), where applications again resort to approximating the next-token conditionals of \(q\) by tempering (Malkin et al., 2022b; Li et al., 2023).
Infilling and reverse generation.Infilling is the task of sampling a sequence of tokens conditioned on both its prior and subsequent context, which can be understood as sampling from the distribution \(q(Z\mid X,Y)\propto p_{\text{LM}}(XZY)\), where \(X\) and \(Y\) are fixed. Reverse generation is a special case, where \(X\) is an empty sequence. Besides being a meaningful task in its own right (Liu et al., 2019; Zhu et al., 2019), infilling and reverse generation are key components of newly emerging methods of LLM prompting, such as when LLMs are tasked with optimizing their own instruction sequences or reasoning steps (Zhou et al., 2023; Sordoni et al., 2023; Xu et al., 2023). Current applications achieve this by resorting to hand-engineered instructions and inverted prompts.
Constrained generation.Sampling of text with constraints and penalties - for example, those on the presence or the absence of certain words or on the score of an auxiliary classifier evaluated on the text - can be understood as sampling from a distribution \(q(Z)\propto p_{\text{LM}}(Z)c(Z)\), where \(c\) is an externally specified constraint. Current approaches to the problem use tokenwise approximations (Liu et al., 2021) or various problem-specific beam search and local search techniques (_e.g._, Schmaltz et al., 2016; Hokamp and Liu, 2017; Hu et al., 2019; Sha, 2020; Lu et al., 2022).
### Reasoning through latent variables
Chain-of-thought reasoning (Wei et al., 2022; Kojima et al., 2022) helps LLMs solve complex problems by producing a reasoning chain before giving the final answer. LLMs pretrained on general domain data can learn to produce useful chains of thoughts given demonstrations, which are usually handcrafted or generated by prompting the LM. Interestingly, although the capacity for chain-of-thought reasoning only emerges in large language models, knowledge can also be extracted from smaller language models when they are carefully fine-tuned (Schick and Schutze, 2021).
Motivated by this, we connect chain-of-thought reasoning to the general problem of inference in latent variable models illustrated in Fig. 1. Here, reasoning can be seen as posterior inference: sampling from the posterior distribution over a string of tokens \(Z\) conditioned on a prefix \(X\) and a suffix \(Y\)
\begin{table}
\begin{tabular}{l l l l} \hline \hline Object & Meaning & Example 1 (infilling) & Example 2 (subjectivity classification) \\ \hline \(X\) & cause / condition / question & _The cat was hungry._ & _A deeply moving storyline._ \\ \(Z\) & mechanism / reasoning chain & _She ate a mouse._ & _This review expresses personal feelings._ \\ \(Y\) & effect / answer & _Now the cat is sleepy, not hungry._ & _Answer: Subjective_ \\ \(p(Z\mid X)\) & conditional prior & \multicolumn{2}{l}{\(p_{\text{LM}}(Z\mid X)\)} \\ \(p(Y\mid X,Z)\) & likelihood of effect given cause and mechanism & \multicolumn{2}{l}{\(p_{\text{LM}}(Y\mid XZ)\)} \\ \(p(Z,Y\mid X)\) & conditional joint, reward for \(Z\) & \multicolumn{2}{l}{\(p_{\text{LM}}(ZY\mid X)\)} \\ \hline \(p(Z\mid X,Y)\) & posterior (**intractable!**) & \multicolumn{2}{l}{approximated and amortized by GFlowNet \(q_{\text{GFN}}(Z\mid X,Y)\)} \\ \(q(Y\mid X)\) & posterior predictive / Bayesian model average & \multicolumn{2}{l}{approximated as \(\sum_{Z}q_{\text{GFN}}(Z\mid X)p_{\text{LM}}(Y\mid XZ)\), sampled as \(Z\sim q_{\text{GFN}}(Z\mid X),\;Y\sim p_{\text{LM}}(Y\mid XZ)\)} \\ \hline \hline \end{tabular}
\end{table}
Table 1: Objects in language posterior inference. Given a pretrained ‘teacher’ LM \(p_{\text{LM}}\), we train a GFlowNet \(q_{\text{GFN}}\) to sample the posterior \(p(Z\mid X,Y)\). Amortization and generalization are achieved by making \(X\), and optionally \(Y\), an input to \(q_{\text{GFN}}\).
given an autoregressive language model \(p_{\text{LM}}\). The posterior is defined as
\[p_{\text{LM}}(Z\mid X,Y)=\frac{p_{\text{LM}}(XZY)}{\sum_{Z^{\prime}}p_{\text{LM} }(XZ^{\prime}Y)}\propto p_{\text{LM}}(XZY). \tag{2}\]
Our goal is to train models to sample \(Z\) from this posterior distribution. Intuitively, this allows us to sample likely reasoning chains that lead to the desired outcome \(Y\). Although we take \(Z\) to be a string of tokens, the same formalism and the GFlowNet objectives apply to other structured latent objects, such as trees or sets of natural language statements, as long as one has access to a likelihood model \(p(Y\mid XZ)\). While not investigated in this work, these generalizations could be important for formal reasoning and multi-step chains of inference. See, _e.g._, Yao et al. (2023); Hao et al. (2023); Besta et al. (2023) for approaches to reasoning in language using tree- or list-structured state spaces.
A latent variable model of this form is useful when the marginal distribution \(p_{\text{LM}}(Y\mid X)\) is harder to model than \(p_{\text{LM}}(Z\mid X)\) and \(p_{\text{LM}}(Y\mid XZ)\), _i.e._, a difficult inference is broken down into a chain of easier ones. By training a model to match the Bayesian posterior \(p_{\text{LM}}(Z\mid X,Y)\), we can learn to sample latent reasoning chains that increase the likelihood of producing \(Y\) from \(X\) via the sampled \(Z\).
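Concretely, the (unnormalized) reward assigned to a latent \(Z\) is \(R(Z)=p_{\text{LM}}(XZY)\propto p_{\text{LM}}(Z\mid X,Y)\). The sketch below shows one way to score it with a Hugging Face causal LM; the model name is a placeholder and this is an illustrative sketch rather than the training code used in our experiments.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")              # placeholder base model
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def log_reward(x: str, z: str, y: str) -> float:
    """log R(Z) = log p_LM(XZY), the unnormalized log-posterior of Eq. 2."""
    ids = tok(x + z + y, return_tensors="pt").input_ids
    logits = lm(input_ids=ids).logits[:, :-1]            # position t predicts token t+1
    logp = torch.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_logp.sum().item()                        # sum of token log-probabilities
```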
However, we can also fine-tune the language model \(p_{\text{LM}}\) itself to maximize the likelihood of data pairs \((X,Y)\) under the LVM. While it is generally intractable to directly maximize the data likelihood \(p_{\text{LM}}(X,Y)=\sum_{Z}p_{\text{LM}}(XZY)\) because of the summation over \(Z\), the (variational) expectation-maximization (EM) algorithm (Dempster et al., 1977; Beal, 2003; Koller and Friedman, 2009) can be used for this purpose. In the expectation step (E-step), we draw samples from the posterior over the latent variable \(p_{\text{LM}}(Z\mid X,Y)\), which could come from an amortized sampler of \(Z\). In the maximization step (M-step), we maximize the expected joint log-likelihood of the sampled latent variables and the data, \(\mathbb{E}_{Z\sim p_{\text{LM}}(Z\mid X,Y)}\log p_{\text{LM}}(XZY)\), with respect to the parameters of the language model \(p_{\text{LM}}\). This combination of amortized inference (learning to sample the chain of thought) and supervised fine-tuning (optimizing the language model with the 'supervision' involving \(Z\) sampled from the amortized posterior) will be illustrated in one of our experiments (§4.3, Table 3).
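A minimal sketch of the resulting M-step is given below; the E-step sampler `sample_rationale` is supplied by the caller (e.g., a thin wrapper around the amortized GFlowNet policy of §3.3), and all function names here are illustrative choices, not the implementation used in our experiments.

```python
import torch

def em_m_step(p_lm, tokenizer, optimizer, dataset, sample_rationale, k_samples=4):
    """One M-step of variational EM for the latent variable model.

    `sample_rationale(x, y)` is a caller-supplied function returning a latent
    string Z drawn from the amortized posterior q_GFN(Z | X, Y) (the E-step).
    Here we maximize log p_LM(XZY) on those completed sequences.
    """
    p_lm.train()
    for x, y in dataset:
        for _ in range(k_samples):
            z = sample_rationale(x, y)                          # E-step sample
            ids = tokenizer(x + z + y, return_tensors="pt").input_ids
            loss = p_lm(input_ids=ids, labels=ids).loss          # causal-LM NLL of X Z Y
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```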
### Amortized inference with GFlowNet objectives
For inference in the latent variable model, we leverage the probabilistic framework of generative flow networks (GFlowNets; Bengio et al., 2021, 2023). Using notation from Malkin et al. (2022), we briefly introduce relevant GFlowNet concepts pertaining to autoregressive sequence generation. Here, GFlowNets learn policies to sample sequences \(Z=z_{1}z_{2}\dots z_{n}\top\in\mathcal{Z}\) (where \(\top\) denotes a stop symbol) from a distribution over the space of sequences \(\mathcal{Z}\), given an unnormalized density (reward) \(R:\mathcal{Z}\rightarrow\mathbb{R}_{>0}\). The generative process is the same as in autoregressive language models: generation begins with an empty string, and at the \(i\)-th step a token \(z_{i}\) is sampled from a policy \(q_{\text{GFN}}(z_{i}\mid z_{1:i-1})\), which is then appended to the sequence. This process continues until a stop symbol \(\top\) is generated.
The marginal likelihood \(q_{\text{GFN}}^{\top}(Z)\) of sampling a terminal state \(Z=z_{1:n}\top\) is given by \(\left(\prod_{i=1}^{n}q_{\text{GFN}}(z_{i}\mid z_{1:i-1})\right)q_{\text{GFN}}(\top\mid z_{1:n})\), where \(z_{1:0}\) is understood to be the empty string. The goal of GFlowNet training is to fit a parametric policy \(q_{\text{GFN}}(\cdot\mid\cdot;\theta)\) such that \(q_{\text{GFN}}^{\top}(Z)\propto R(Z)\), _i.e._, the likelihood of generating a complete sequence is proportional to its reward.
Learning objective. We use a modified version of the subtrajectory balance (SubTB; Madan et al., 2023) objective to account for trajectories being terminable at all states (Deleu et al., 2022). The objective for a sequence \(Z=z_{1:n}\top\) is
\[\mathcal{L}(Z;\theta)=\sum_{0\leq i<j\leq n}\left(\log\frac{R(z_{1:i}\top) \prod_{k=i+1}^{j}q_{\text{GFN}}(z_{k}\mid z_{1:k-1})q_{\text{GFN}}(\top\mid z_ {1:j})}{R(z_{1:j}\top)q_{\text{GFN}}(\top\mid z_{1:i})}\right)^{2}, \tag{3}\]
For sequence generation tasks, the SubTB objective is equivalent to the path consistency objective (Nachum et al., 2017) in max-entropy RL (Haarnoja et al., 2017), which has been previously used in the context of text generation (Guo et al., 2021).
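For concreteness, a direct (unbatched) PyTorch transcription of Eq. 3 for a single sampled sequence could look as follows; the tensor layout is an assumption of this sketch rather than a description of the implementation used in our experiments.

```python
import torch

def subtb_loss(log_r, log_pf, log_pt):
    """Subtrajectory-balance loss of Eq. 3 for one sampled sequence z_{1:n}.

    log_r:  shape (n+1,), log_r[i]  = log R(z_{1:i} followed by the stop symbol)
    log_pf: shape (n,),   log_pf[k] = log q_GFN(z_{k+1} | z_{1:k})
    log_pt: shape (n+1,), log_pt[i] = log q_GFN(stop | z_{1:i})
    """
    n = log_pf.shape[0]
    # cum[j] = sum of the first j forward log-probabilities, with cum[0] = 0
    cum = torch.cat([log_pf.new_zeros(1), torch.cumsum(log_pf, dim=0)])
    loss = log_pf.new_zeros(())
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            delta = (log_r[i] + (cum[j] - cum[i]) + log_pt[j]
                     - log_r[j] - log_pt[i])
            loss = loss + delta ** 2
    return loss
```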
Training policy.As the objective in Eq. 3 can be minimized to 0 for all trajectories \(\tau\) simultaneously given enough model capacity, we can use trajectories sampled from _any_ full-support distribution (training policy) to perform gradient descent on \(\mathcal{L}(\tau;\theta)\) with respect to \(\theta\). As the space we are sampling from is combinatorially large, it is important to have a training policy that can efficiently explore \(\mathcal{Z}\). To this end, we compose the mini-batch during training using trajectories from three sources: (1) the policy \(q_{\text{GFN}}\), (2) a tempered version of the current policy \(q_{\text{GFN}}\) and (3) a replay
buffer storing past trajectories. Replay buffers have been shown to be quite effective in improving GFlowNet training (Jain et al., 2022; Deleu et al., 2022; Shen et al., 2023).
**Parametrization, amortization, and generalization.** To sample the latent sequence \(Z\) from the posterior defined in Eq. 2, we parametrize the GFlowNet policy as an autoregressive language model that samples the latent \(Z\) one token at a time from left to right. By setting the reward \(R(Z)=p_{\text{LM}}(XZY)\propto p_{\text{LM}}(Z\mid X,Y)\), we learn a sampler for the posterior at convergence.
As illustrated in Fig. 1, depending on the task, we can condition the GFlowNet policy on either \(X\) or \(X,Y\). In cases such as reasoning (§3.2), where there is only a single correct \(Y\) for each \(X\) and we are interested in predicting \(Y\) for unseen \(X\) at test time, we can simply condition on \(X\). In this case, the GFlowNet policy is simply a language model that generates \(Z\) as a continuation of \(X\). To be precise, we initialize \(q_{\text{GFN}}\) as a copy of \(p_{\text{LM}}\) that is conditioned on the prefix \(X\), and then fine-tune2 it with a GFlowNet objective. With this view, sampling \(Z\) is an inverse problem: we need to infer \(Z\) given a (conditional) prior \(p_{\text{LM}}(Z\mid X)\) and an observation \(Y\) under likelihood model \(p_{\text{LM}}(Y\mid XZ)\).
Footnote 2: We use LoRA (Hu et al., 2022) instead of full fine-tuning for hardware efficiency in all experiments.
Allowing the GFlowNet policy to explicitly take \(X\) as input amortizes the sampling procedure and allows generalization to unseen \(X\). In this sense, the GFlowNet is a Bayesian model (akin to a LM cascade (Dohan et al., 2022) or deep language network (Sordoni et al., 2023)), in which \(Z\) are conditionally sampled 'parameters' that transform \(X\) into \(Y\). To predict the \(Y\) for an unseen \(X\), one performs Bayesian model averaging by drawing samples of \(Z\) from \(q_{\text{GFN}}(Z\mid X)\) followed by sampling from \(p_{\text{LM}}(Y\mid XZ)\).
In tasks such as infilling (§4.2), however, the mapping from \(X\) to \(Y\) is one-to-many and \(Y\) is available at test-time. Here, we are interested in \(Z\) itself, rather than using it as an intermediate variable en route to generating \(Y\). The GFlowNet policy thus has to be conditioned on both \(X\) and \(Y\). To achieve this, the policy is conditioned on a prompt that contains both \(X\) and \(Y\) (for example, see Appendix B).
## 4 Empirical results
We first validate GFlowNet fine-tuning on text generation, where we seek to find likely sentence continuations given a prompt (§4.1) or fill in a missing sentence in a story (§4.2). Then, we study reasoning tasks that benefit from chain-of-thought reasoning (§4.3) and external tool use (§4.4).
### Sentence continuation
Task description.A natural application for autoregressive language models is that of sequence continuation: given a prompt, the model should generate a high-likelihood completion. In applications such as creative writing, we would like the continuations to be semantically diverse while still having a high likelihood under the language model. To demonstrate the benefits of GFlowNet fine-tuning, we consider the task of sampling the next sentence following a prompt.
Sampling autoregressively from the LM until a "." token is reached is unlikely to produce samples that have a high likelihood because the distribution over sentences has a fat tail. Existing approaches to generate sequence continuations include beam search and its variations (Vijayakumar et al., 2018; Shao et al., 2017), top-\(k\) sampling (Fan et al., 2018), nucleus sampling (Holtzman et al., 2019), and tempered autoregressive sampling, among others. While useful, these methods are ultimately hand-crafted heuristics that leave room for improvement. Furthermore, some of these methods (_e.g._, beam search) involve a computationally expensive search procedure, compared to a single pass of a learned inference model that amortizes over prompts. Our GFlowNet policy autoregressively samples the sequence until a period is sampled, indicating the end of the sentence.
Figure 3: Maximum log-likelihood and diversity of continuations sampled for fixed prompts. GFlowNet fine-tuning (\(\star\)) samples higher log-likelihood sentences while maintaining more sample diversity than the baselines (\(\bullet\) and \(\dashdot\)), even when they are given 5\(\times\) the compute.
Given prompts \(X\), the LM is fine-tuned to generate the continuations \(Z\) from the tempered posterior by being trained with reward \(R(Z)=p_{\text{LM}}(Z|X)^{\frac{1}{T}}\). When \(T=1\), the GFlowNet will trivially sample proportional to \(p_{\text{LM}}\) without any fine-tuning, so we consider \(0<T<1\) to focus on the likely continuations.
We consider a dataset of prompts from OpenWebText (Gokaslan et al., 2019) with a 1.5B param GPT-2 XL (Radford et al., 2019) as the base model. We draw 8 samples from the fine-tuned model conditioned on a fixed prompt, consider the maximum-likelihood sample under the LM, and report the average over the dataset of prompts. To measure the semantic diversity of the samples, we compute the mean pairwise cosine distance between the embeddings (from a pretrained encoder (Reimers and Gurevych, 2019)) of the generated samples and average it over the dataset. We compare to baselines that are commonly used for producing continuations from LMs at inference time (beam search, diverse beam search, nucleus sampling, autoregressive sampling, tempered autoregressive sampling, and greedy generation).
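The diversity metric can be computed as in the sketch below; the particular pretrained encoder name is a placeholder standing in for the sentence encoder of Reimers and Gurevych (2019), and this is an illustration rather than the exact evaluation script.

```python
import numpy as np
from itertools import combinations
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder pretrained encoder

def mean_pairwise_cosine_distance(sentences):
    """Semantic diversity of a set of continuations: the average (1 - cosine
    similarity) over all pairs of sentence embeddings."""
    emb = encoder.encode(sentences)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize
    dists = [1.0 - float(a @ b) for a, b in combinations(emb, 2)]
    return float(np.mean(dists))
```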
Results.Quantitative results are reported in Fig. 3 and empirical samples are shown in Appendix A. At lower temperatures, our method excels in generating high-likelihood sentences, outperforming the leading baseline, diverse beam search. If we increase the number of beams (and therefore compute) to 5\(\times\) the number of samples produced by the GFlowNet, our performance remains comparable. Nevertheless, even in this scenario, the GFlowNet's generated samples exhibit notably higher diversity compared to diverse beam search and are on par with the best diversity-scoring benchmarks.
### Infilling stories
Task description. Next, we consider the story infilling task, a special case of the general infilling problem (§3.1), where given the beginning \(X\) and end \(Y\) of a story, the goal is to generate the middle of the story \(Z\) (Zhu et al., 2019). This is challenging for a language model sampled left to right since continuations \(Z\) conditioned only on \(X\) might be incompatible with the ending \(Y\). We use the ROCStories corpus (Mostafazadeh et al., 2016), a dataset of short stories containing exactly 5 sentences each. Given the first 3 sentences and the last sentence, the goal is to generate the fourth sentence, which often involves a turning point in the story and is thus challenging to fill in.
As we expect the base model to contain the required knowledge, for this task we use a GPT-2 Large model (Radford et al., 2019) fine-tuned on the entire ROCStories training set as the base model. For evaluating the approach, we consider 900 samples from the dataset as training data to learn \(q_{\text{GFN}}(Z|X,Y)\) and evaluate the similarity of the generated infills on a dataset of 100 unseen stories. Along with the GFlowNet-fine-tuned model, we also consider two baselines: prompting the model to infill the story and supervised fine-tuning on the same data. Further details are in Appendix B.
Results.To measure the similarity of the generated infills with the reference infills available in the dataset, we compute BERTScore (Zhang et al., 2020), with DeBERTa (He et al., 2021) - which is correlated with human judgments - along with BLEU-4 (Papineni et al., 2002) and GLEU-4 (better suited for sentences; Wu et al., 2016) metrics. From our results summarized in Table 2, we observe that the infills generated by the model with GFlowNet fine-tuning are closer to the reference infills in the dataset than the baselines. By sampling from \(p_{\text{LM}}(Z|X,Y)\), the GFlowNet is able to account for the ending while generating the infill, resulting in infills that link the beginning and the end of the story coherently. For further analysis and details see Appendix B.
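As an illustration, the metrics for a single generated infill might be computed as follows; the scorer configuration (e.g., the DeBERTa variant passed to BERTScore) is an assumption of this sketch and may differ from the exact settings behind Table 2.

```python
from bert_score import score as bert_score
from nltk.translate.bleu_score import sentence_bleu
from nltk.translate.gleu_score import sentence_gleu

def infill_metrics(candidate: str, reference: str):
    """Score one generated infill against the reference middle sentence."""
    P, R, F1 = bert_score([candidate], [reference], lang="en",
                          model_type="microsoft/deberta-xlarge-mnli")  # assumed variant
    hyp, ref = candidate.split(), reference.split()
    return {
        "bertscore_f1": float(F1[0]),
        "bleu4": sentence_bleu([ref], hyp),   # 4-gram BLEU by default
        "gleu4": sentence_gleu([ref], hyp),   # sentence-level GLEU
    }
```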
### Subjectivity classification
Task description. SUBJ (Pang and Lee, 2004) is a binary classification dataset for natural language understanding. It is a collection of movie reviews in which each review is labeled as _objective_, meaning that it references facts about the movie, or _subjective_, meaning that it expresses an opinion of the reviewer (see Table C.1 for examples). Given an unlabeled review, the model must predict whether it is objective or subjective. While supervised fine-tuning on the full dataset can achieve high test accuracy, we are interested in the low-data regime where we only have tens of labeled examples. We use the same instruction-tuned GPT-J 6B variant as in §2 for this experiment. Without
\begin{table}
\begin{tabular}{l l l l} \hline \hline Method & BERTScore & BLEU-4 & GLEU-4 \\ \hline Prompting & \(0.081\pm 0.009\) & \(1.3\pm 0.5\) & \(3.2\pm 0.1\) \\ Supervised fine-tuning & \(0.094\pm 0.007\) & \(1.6\pm 0.8\) & \(3.7\pm 0.4\) \\ GFlowNet fine-tuning & \(\mathbf{0.184\pm 0.004}\) & \(\mathbf{2.1\pm 0.2}\) & \(\mathbf{4.2\pm 0.7}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Similarities of sentences generated by different trained models to the ground truth on the story infilling task (mean \(\pm\) std over 5 runs).
any demonstrations, the model struggles with this task using the prompt in Table C.2 and achieves only 51.7% zero-shot accuracy.
This task is likely hard because it requires a latent reasoning step. A review could be considered objective because it analyzes the plot or facts about the movie, or it could be subjective because it expresses a personal opinion or makes a judgment. We denote the review \(X\), the predicted subjectivity \(Y\), and the latent reason \(Z\). Then, we GFlowNet-fine-tune the LLM \(q_{\text{GFN}}(Z\mid X)\), initialized with the base model \(p_{\text{LM}}\), to match the Bayesian posterior over rationales in Eq. 2. At test time, \(q_{\text{GFN}}(Z\mid X)\) generates 10 latent rationales (\(Z\)'s) for an unseen \(X\). The LLM \(p_{\text{LM}}\) then autoregressively samples from \(p_{\text{LM}}(Y\mid XZ)\) to produce 10 answers, the majority vote of which becomes the final prediction.
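This test-time procedure amounts to Bayesian model averaging by majority vote, as in the following sketch; `sample_rationale` and `answer_from` are caller-supplied stand-ins (e.g., thin wrappers around the fine-tuned policy and the base LM) rather than functions of any particular library.

```python
from collections import Counter

def predict_label(x, sample_rationale, answer_from, n_samples=10):
    """Bayesian-model-averaged prediction for an unseen review X.

    `sample_rationale(x)` draws Z ~ q_GFN(Z | X) and `answer_from(x, z)` samples
    Y ~ p_LM(Y | X Z). The majority vote over the sampled answers is returned.
    """
    votes = Counter(answer_from(x, sample_rationale(x)) for _ in range(n_samples))
    return votes.most_common(1)[0][0]
```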
This posterior inference corresponds to the E-step in the EM algorithm, where the posterior \(p_{\text{LM}}(Z\mid X,Y)\) is defined in Eq. 2. Further, as described in §3.2, we can take an M-step by updating \(p_{\text{LM}}\) to maximize \(\log p_{\text{LM}}(XZY)\) over a collection of \(Z\)'s sampled from the amortized posterior \(q_{\text{GFN}}\). This is equivalent to applying supervised fine-tuning after GFlowNet fine-tuning.
Results.We present few-shot prompting and supervised fine-tuning with LoRA as baselines. In few-shot prompting, we prepend 0, 10, 20, or 50 training examples to each test example using the prompt shown in Table C.2. We randomly shuffle few-shot demonstrations and report the mean and variance in Table 3. In supervised fine-tuning, we directly maximize \(\log p_{\text{LM}}(Y\mid X)\) over the same 10, 20, or 50 (\(X,Y\)) pairs. The variance is over model initialization and batch order. All entries except zero-shot prompting are aggregated over 3 random seeds. See Appendix C for experiment details.
GFlowNet fine-tuning consistently outperforms supervised fine-tuning in the low-data regime, as shown in Table 3. In some cases, performing supervised fine-tuning on top, which corresponds to running one step of the EM algorithm, further improves the performance.
### Solving arithmetic problems step by step
Task description. Arithmetic reasoning is a fitting benchmark to evaluate reasoning abilities of large language models as it requires multi-step reasoning and correctness is easy to evaluate (Cobbe et al., 2021). While the distribution of pretraining and fine-tuning data (Magister et al., 2023; Lee et al., 2023; Luo et al., 2023) and prompting choices (Imani et al., 2023) play a critical role in their arithmetic abilities, LLMs are susceptible to poor generalization by learning 'shortcuts' to reasoning (Dziri et al., 2023). We consider a simple integer arithmetic task (Fig. 1), with a general pretrained base model, rather than one pretrained on mathematical tasks (Jelassi et al., 2023). To avoid the pitfalls of symbolic calculations with language models, we adopt the tool use setting (Schick et al., 2023), where the model is equipped with a calculator that can perform parts of the computation, implemented as in Cobbe et al. (2021): when the model outputs '=', the expression preceding it is evaluated and appended to the sequence. To prevent the model from evaluating the entire expression in the question using the calculator, we limit the calculator to evaluating only two-term expressions. Consequently, reasoning here involves learning to plan using a tool with limited capabilities (Hao et al., 2023).
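A simplified sketch of the limited calculator is shown below; it assumes the trigger token is '=' (following the calculator-annotation convention of Cobbe et al., 2021) and only handles two-term addition and subtraction, as described above. It is an illustration, not the exact tool implementation used in our experiments.

```python
import re

# Matches a trailing two-term expression such as "12+7=" or "-3-10=".
TWO_TERM = re.compile(r"(-?\d+)\s*([+-])\s*(-?\d+)\s*=$")

def maybe_call_calculator(text: str) -> str:
    """If the partially generated text ends with a two-term expression followed
    by '=', evaluate it and append the result to the sequence; otherwise return
    the text unchanged (the model must plan around this limited capability)."""
    m = TWO_TERM.search(text)
    if m is None:
        return text
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    result = a + b if op == "+" else a - b
    return text + str(result)

# Example: "7-2=5" is produced from the model output "7-2=".
print(maybe_call_calculator("7-2="))
```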
For training, we use a synthetic dataset of arithmetic expressions, limited to addition and subtraction. Following Zelikman et al. (2022), we use a small set of 50 demonstrations \((X,Z,Y)\) to seed the replay buffer in addition to 1000 examples \((X,Y)\). We use the same instruction-tuned GPT-J as in §4.3 as the base model. Further details are in Appendix D. We report the accuracy on two types of examples: (1) unseen in-distribution expressions (3 or 4 operands) and (2) longer out-of-distribution expressions
\begin{table}
\begin{tabular}{l l c c c} \hline \hline & & \multicolumn{3}{c}{Number of Operands} \\ \cline{3-5} & & \multicolumn{2}{c}{In-distribution} & OOD \\ \cline{3-5} Method & & 3 & 4 & 5 \\ \hline \(k\)-shot CoT & \(k=0\) & 10.2 & 6.4 & 3.2 \\ & \(k=3\) & 15.8 \(\pm\) 3.1 & 11 \(\pm\) 1.7 & 5.4 \(\pm\) 0.2 \\ & \(k=5\) & 20.4 \(\pm\) 10.4 & 17.6 \(\pm\) 0.6 & 6.6 \(\pm\) 1.1 \\ & \(k=10\) & 26.5 \(\pm\) 1.4 & 15.2 \(\pm\) 1.7 & 8.9 \(\pm\) 1.9 \\ & \(k=20\) & 35.5 \(\pm\) 1.9 & 21 \(\pm\) 1.4 & 10.5 \(\pm\) 0.9 \\ \hline Supervised fine-tuning & & 72.1 \(\pm\) 1.3 & 19.6 \(\pm\) 2.2 & 12.8 \(\pm\) 5.7 \\ \hline PPO & & 30.6 \(\pm\) 4.1 & 13.7 \(\pm\) 4.1 & 5.6 \(\pm\) 3.1 \\ \hline GFlowNet fine-tuning & & **95.2 \(\pm\) 1.3** & **75.4 \(\pm\) 2.9** & **40.7 \(\pm\) 9.1** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test accuracy (%) on an integer arithmetic task with addition and subtraction using a GPT-J 6B model.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & \multicolumn{3}{c}{Test accuracy (\%) \(\uparrow\)} \\ \hline Zero-shot prompting & \multicolumn{3}{c}{51.7} \\ \hline & \multicolumn{3}{c}{Training samples} \\ \cline{2-4} & 10 & 20 & 50 \\ \hline Few-shot prompting & 61.3 \(\pm\) 6.2 & 61.8 \(\pm\) 5.4 & 65.8 \(\pm\) 10.5 \\ Supervised fine-tuning & 64.3 \(\pm\) 2.8 & 69.1 \(\pm\) 0.8 & **89.7 \(\pm\) 0.4** \\ \hline GFlowNet fine-tuning & 71.4 \(\pm\) 1.8 & **81.1 \(\pm\) 0.4** & 87.7 \(\pm\) 2.2 \\ + Supervised fine-tuning & **75.2 \(\pm\) 1.8** & 78.7 \(\pm\) 1.6 & **89.9 \(\pm\) 0.2** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test accuracy (%) on SUBJ using an instruction-fine-tuned GPT-J 6B.
(5 operands). As baselines, we consider zero-shot chain-of-thought prompting, \(k\)-shot prompting, supervised fine-tuning on the tool use sequences, and fine-tuning with PPO (Schulman et al., 2017). For all methods, we enable tool use and limit the model to generate only numbers and operators.
Results. From the results summarized in Table 4, the base model performs poorly even with chain-of-thought prompts. Including examples in context improves the performance considerably, with monotonic improvements as the number of examples increases. Supervised fine-tuning improves the performance significantly on the in-distribution examples, but the model still struggles to generalize on the out-of-distribution examples. Fine-tuning with PPO also yields poor performance, caused in part by the poor calibration of the base reward model, _i.e._ it cannot distinguish good rationales from bad ones. Even though the sequences generated with PPO (illustrated in Appendix D) have high rewards, they are spurious and do not even define valid calls to the tool.
Such overoptimization to a misspecified reward is a widely noted issue in LLMs trained with RL (Gao et al., 2022). On the other hand, by matching the entire distribution, GFlowNet fine-tuning avoids collapsing to a single mode of the reward, thereby being robust to the misspecification of the reward (Eysenbach and Levine, 2022) and achieving significantly better performance on in and out-of-distribution examples. See Appendix D for additional results and analysis.
## 5 Further related work
Sampling from intractable marginals. Beyond the approximations mentioned in §3.1, sampling from intractable posterior distributions given by pretrained models for tasks such as infilling and constrained generation has been an object of study. Miao et al. (2019); Zhang et al. (2020) use MCMC for these problems, Malkin et al. (2021) used a variable-neighborhood ascent for finding modes, and a sequential Monte Carlo approach was recently proposed by Lew et al. (2023). Others have studied the problem with _masked_ language models, using them to perform variants of Gibbs sampling (Wang and Cho, 2019; Goyal et al., 2022; Yamakoshi et al., 2022) and recovering marginal distributions over small sets of tokens (Torroba Hennigen and Kim, 2023).
GFlowNets.GFlowNets (Bengio et al., 2021) were originally proposed to learn policies for sampling discrete compositional objects from an unnormalized reward distribution, motivated by the need to sample diverse high-reward objects in scientific discovery (Jain et al., 2023), in particular, for biological sequence generation (Jain et al., 2022). The interpretation of GFlowNets as variational inference algorithms (Malkin et al., 2023; Zimmermann et al., 2023) makes them appropriate for sampling Bayesian posterior distributions over structured objects (_e.g._, Deleu et al., 2022; Deleu et al., 2023; van Krieken et al., 2022; Hu et al., 2023).
Chain-of-thought reasoning in LLMs.In recent work on classification and completion with language models, the latent reasoning chain \(Z\), in the notation of SS3.1, is called a 'chain of thought' (Wei et al., 2022). The chain of thought is typically generated by conditioning the language model on \(X\) with the use of specialized demonstrations or prompts (Kojima et al., 2022), with no guarantee of sampling the posterior accurately. Related to our Bayesian formulation, Wang et al. (2023) noted that appropriately aggregating the conclusions \(Y\) from several latent chains \(Z\) improves predictive performance. In Xu et al. (2023); Zhou et al. (2022), a posterior over latent token sequences is sampled using MCMC, while Zelikman et al. (2022) propose _fine-tuning_ on successful (high-reward, in our language) chains of thought, which achieves reward maximization but gives no guarantee of diversity. We expect these methods to generalize poorly to difficult exploration problems, while diversity-seeking approaches, such as GFlowNets, allow fine-tuning to take advantage of generalizable structure in the posterior and have a goal of sampling the full posterior over latent reasoning chains.
## 6 Conclusion
The knowledge compressed in LLMs is crucial for tasks such as infilling and constrained generation, but querying this knowledge involves sampling from intractable posterior distributions. We propose to use GFlowNet objectives to train LLMs to sample from such posterior distributions. Empirical results show that GFlowNet fine-tuning finds a better fidelity-diversity trade-off for text generation and also improves sample efficiency and generalization on downstream tasks compared to maximum-likelihood training or reward-maximizing policy optimization. As an amortized inference algorithm, our method converts computation into better test-time performance without additional data.
Future work should investigate transfer and generalization across tasks, in particular, building a 'universal reasoner' as a model \(q(Z\mid X)\) shared between \(X\) from different tasks, as was recently considered by Wang et al. (2023). One should investigate the benefit of using a better knowledge model, _e.g._, a more capable base LLM, as a starting point for GFlowNet fine-tuning. The ability to draw multiple samples from a GFlowNet can also be used to quantify epistemic uncertainty. Finally, we adopt the GFlowNet formalisms with the perspective of generalizing to latent variables \(Z\) with a richer generative process than left-to-right sampling. We hope that the GFlowNet paradigm will enable more flexible reasoning with LLMs in the future: extending probabilistic programming with language variables (Beurer-Kellner et al., 2023), using structured chains of thought (Yao et al., 2023; Besta et al., 2023), and extending to program synthesis and planning with world models.
Limitations.Due to resource constraints, our experiments are with models up to 6B parameters. Nonetheless, we expect the conclusions to hold for larger models. In fact, our method can potentially benefit larger models more because it is harder to optimize a larger model with maximizing objectives on a small amount of data. Furthermore, our method improves inference but does not directly improve the knowledge in the LM. Many issues with LLMs, such as hallucination or miscalibration, are closely related to the knowledge representation and thus not addressed.
## Reproducibility
We discuss the details of the proposed algorithms in §3.3 and provide all the implementation details and hyperparameters for the experiments in the main paper and appendix. Code for our experiments is available at [https://github.com/GFNOrg/gfn-lm-tuning](https://github.com/GFNOrg/gfn-lm-tuning).
## Acknowledgments
The authors are grateful to Bonaventure Dossou and Salem Lahlou for their help in the early stages of this project. We also thank Robert Hawkins, Arian Hosseini, Zhen Wang, and Anirudh Goyal for valuable discussions and suggestions of related work.
GL acknowledges funding from CIFAR, Samsung, and a Canada Research Chair in Neural Computation and Interfacing.
YB acknowledges funding from CIFAR, NSERC, IBM, Intel, Genentech, and Samsung.
The research was enabled in part by computational resources provided by the Digital Research Alliance of Canada ([https://alliancecan.ca](https://alliancecan.ca)), Mila ([https://mila.quebec](https://mila.quebec)), and NVIDIA.
2308.05237 | Financial Fraud Detection: A Comparative Study of Quantum Machine
Learning Models | In this research, a comparative study of four Quantum Machine Learning (QML)
models was conducted for fraud detection in finance. We proved that the Quantum
Support Vector Classifier model achieved the highest performance, with F1
scores of 0.98 for fraud and non-fraud classes. Other models like the
Variational Quantum Classifier, Estimator Quantum Neural Network (QNN), and
Sampler QNN demonstrate promising results, propelling the potential of QML
classification for financial applications. While they exhibit certain
limitations, the insights attained pave the way for future enhancements and
optimisation strategies. However, challenges exist, including the need for more
efficient Quantum algorithms and larger and more complex datasets. The article
provides solutions to overcome current limitations and contributes new insights
to the field of Quantum Machine Learning in fraud detection, with important
implications for its future development. | Nouhaila Innan, Muhammad Al-Zafar Khan, Mohamed Bennai | 2023-08-09T21:47:50Z | http://arxiv.org/abs/2308.05237v1 | # Financial Fraud Detection: A Comparative Study of Quantum Machine Learning Models
###### Abstract
In this research, a comparative study of four Quantum Machine Learning (QML) models was conducted for fraud detection in finance. We proved that the Quantum Support Vector Classifier model achieved the highest performance, with F1 scores of 0.98 for fraud and non-fraud classes. Other models like the Variational Quantum Classifier, Estimator Quantum Neural Network (QNN), and Sampler QNN demonstrate promising results, propelling the potential of QML classification for financial applications. While they exhibit certain limitations, the insights attained pave the way for future enhancements and optimisation strategies. However, challenges exist, including the need for more efficient quantum algorithms and larger and more complex datasets. The article provides solutions to overcome current limitations and contributes new insights to the field of Quantum Machine Learning in fraud detection, with important implications for its future development.
Quantum Machine Learning, Quantum Neural Networks, Quantum Feature Maps, Fraud Detection.
## I Introduction
_Fraud_ is the act of deceiving and misleading a person, or group of people, with the intention of obtaining some kind of gain (oftentimes financial). It involves the provisioning of misrepresented information or data to the victim, which seems "too good to be true", or the request of the victim's private data. Frequently, the targets of these attacks are elderly folk or those individuals who are not technologically inclined. Fraudsters play on the emotions of their victims, usually by creating a sense of urgency around performing a certain task, such as disclosing confidential information like identity/social security numbers, pin codes, One-Time Pins (OTPs), or other information that can render the victim susceptible. Over the years, fraud schemes have become even more sophisticated, and with the advent of Generative Artificial Intelligence (GenAI) becoming more ubiquitous, more suave and ultra-modern schemes have emerged, such as elaborate phishing scams and the use of Natural Language Processing (NLP) to mimic the voices of the victim's family members or friends in order to gain their trust and credence.
Broadly speaking, fraud can be categorised into the following flavours:
1. **Purloinment of Identity:** Also known as "identity theft", this occurs when the perpetrator steals the personal information from the victim and "assumes their identity" in the sense of using their details with nefarious intent: Using the victim's personal identification number, applying for any licenses, using the victim's debit/credit card details for purchasing goods or paying for services.
2. **Insurance Claims Fraud:** This occurs when the perpetrator intentionally files fallacious insurance claims or overinflates the value of losses that occurred.
3. **Financial Fraud:** This falls under the broader category of white collar crimes and constitutes:
    1. **Accounting Fraud:** Also known as "crooking the books". This involves the deliberate manipulation and misrepresentation of figures in financial statements to mislead investors and interested parties regarding the company's financial health.
    2. **Ponzi and Pyramidal Schemes:** These constitute schemes whereby victims outlay some capital with the promise of receiving enormously high returns in short periods of time. In these schemes, funds are taken from the late investor "Tom" and given to the earlier investors "Dick" and "Harry". At the end of these schemes, the late investors are not paid out the promised return, or any return whatsoever, and the so-called "expert investment manager" disappears.
    3. **Embezzlement:** This type of fraud occurs when an entrusted party in a company holds fiduciary responsibilities and abuses their power by stealing or misappropriating funds, or assets, to suit their own objectives.
    4. **Insider Trading:** This occurs when a party has access to non-public, privileged information about the company and they hedge against the company's stock price rising or plummeting. This ties into corporate espionage, where spies are deployed into companies to steal trade secrets and report them to parties of interest, who use this information to take advantage of the company.
4. **Wire Fraud:** Using electronic media such as emails, phone calls, text messages, or personalised social media messages to hoodwink victims. Typically, scammers will act under false pretences to impersonate an agent at a bank or institution, ask the victim to transfer funds from their accounts or disclose sensitive data. In addition, these scammers play on the victim's personal troubles like romance (the famous "Nigerian Prince scam"), the victim's financial woes with lottery prize scams or inheritance scams, the victim's philanthropic nature with charity scams, the victim's need to secure employment with job offer scams, or tech support scams.
5. **Credit Fraud:** This involves the unauthorised usage of the victim's debit or credit cards for purchasing goods and paying for services. Typically, this would involve the scammer getting a hold of the victim's 16-digit card number, then phishing for the card's expiry date and the 3-digit Card Verification Value (CVV).
6. **Internet Fraud:** This is the collective term for online scams and phishing attacks whereby the scammer uses emails, pop-up messages, websites, and social media to get the victim to make a payment or disclose their confidential information.
The focus of this paper is on credit fraud. According to a 2022 study by UK Finance, fraud resulted in losses of £1.2 bil. (sterling), and 80% of app fraud originates from online solicitations. In a 2023 study published by the news agency CNBC, it is estimated
that in 2022, fraud cost consumers in the US $8.8 bil. Such high consumer costs directly correlate to economic downturns for countries and, thus, translate to worldwide economic collapse. Thus, an accurate and quick fraud detection system is needed to tame this type of fraud.
The idea of fraud detection using (Classical) Machine Learning (CML) models is not novel and oftentimes forms a standard textbook exercise/capstone project in this regard, and many big corporates across the financial, telecommunications, and consulting industries have fraud detection models deployed into production. For example, several of these CML models that utilise: Multivariate Logistic Regression (see Alenzi & Aljehane, 2020), Support Vector Machines (SVMs) - see Kumar _et al_, 2022; Gyamfi & Abdulai, 2018, Random Forest Classifiers (see Liu _et al_, 2015; Xuan _et al_, 2018; Xuan _et al_, 2018), Gradient Boosting Machines (see Taha & Malebury, 2020), comparative studies across methods (see Kumar _et al_, 2020; Han _et al_, 2020; Afriyie _et al_, 2023), or combining models in ensembles (see Nandi _et al_, 2022) show high fidelity, robustness, and ease of implementation.
Additionally, researchers have also applied various Deep Learning (DL) approaches: Autoencoders and Restricted Boltzmann Machines (RBMs) - see Pumsirirat & Yan, 2018, Graph Neural Networks (GNNs) - see Ma _et al_, 2021. The only time-consuming aspect of the model lifecycle is data cleaning and feature engineering.
_Quantum Machine Learning_ (QML) is a newly developing field in which researchers began to express interest back in the early 2000s by combining the then emerging field of Quantum Computing (QC), an idea accredited to Feynman, 1982, and CML. The goal is to leverage properties of the fundamental units of QC, qubits, and QML algorithms to obtain a computational advantage over analogous classical approaches.
However, the crystallisation and commercialisation of these ideas began to flourish in the early 2010s, and one of the most pioneering books and one of the most pioneering papers are credited to Wittek, 2014 and Biamonte _et al_, 2017 respectively, who set the stage for a formalised research track - Of course, if one looks deep enough, one may find many earlier papers, but it is beyond our scope to survey research works in chronological order; rather, we mention those with the highest impact. Potentially, QML can radically transform the paradigm and approach to CML by facilitating the discovery of novel algorithms that are more efficient than their classical counterparts. Since this is a rapidly developing field and we are in the Noisy Intermediate-Scale Quantum (NISQ) era of QC - see Preskill, 2018, there is no single approach. We
discuss these approaches in Tab. 1. below.
It is important to note that while QML has immense potential, it is still in the early stages of its development. Breakthroughs in hardware design, computing power, Quantum cloud technologies, and new approaches to QC will result in the more widespread adoption of QML to solve daily tasks, much like how CML is a tool that all major companies are trying to integrate and embed into their organisational processes.
The question arises: "If these CML models are so successful and doing such a fantastic
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt}}
**Approach** & **Description** \\ Quantum Approach to CML & This entails the development of novel Quantum algorithms to solve computationally-expensive CML tasks. For example, the Quantum Support Vector Classifier has been shown to train on large datasets faster than the classical Support Vector Machine. \\ Quantum-supplemented Approach to CML & This involves using Quantum principles to enhance existing CML algorithms. For example, the Quantum Neural Network (QNN) offers several advantages over the classical Neural Network (CNN). \\ Composite Classical-Quantum Machine Learning & This approach offers a hybrid procedure that combines elements from classical computing and QC to solve CML tasks. For example, a Quantum Computer may be used to preprocess the data, and a CML algorithm may be used to optimise the model's weights, biases, and additional parameters. \\ Applications of QML to Other Domains & This approach involves developing and modifying existing QML algorithms for applications in areas beyond CML. As an example, QML is used extensively in the field of Computational Chemistry. One such use case is by Innan _et al_, 2023 in which a Variational Quantum Eigensolver (VQE) was modified to perform electronic structure calculations, and a novel algorithm was presented. \\ \end{tabular}
\end{table}
Table 1: Approaches to Quantum Machine Learning
job in flagging fraudulent use cases, what is the need for QML fraud detection models?" We advocate for adopting a Quantum approach because we believe it provides the following advantages over the classical approaches in the post-NISQ era:
1. **Analysis of Real-time Data:** Quantum Computers provide the opportunity to analyse vast swathes of real-time data in a methodical and structured manner with the potential to be exponentially faster. This is particularly important in fraud detection applications, where real-time detection is mandatory to mitigate the risk of large losses.
2. **Decrease in the Amount of Inessential Data:** Fraud detection involves the analyses of large swathes of data, and although fraud accounts for such large losses, it is rare to detect while it is in progress (usually detected after it occurs), and the training data has to be specifically fabricated from real-time data; thus, a lot of redundancies occur. Since Quantum Computers offer the opportunity to analyse data in a reduced amount of time, the amount of redundant data is thereby minimised.
3. **Scalability through Parallelisation:** QML offers the opportunity to work with larger datasets because of its ability to parallelise algorithms in a streamlined manner as compared to CML.
4. **Reduction in Algorithm Computational Complexity:** By utilising the Quantum Mechanical properties of Superposition and Entanglement, QML algorithms are less expensive than CML algorithms.
In this paper, we apply the Quantum Support Vector Classifier, the Variational Quantum Classifier, the Estimator Quantum Neural Network, and the Sampler Quantum Neural Network to the BankSim dataset. This paper is divided into the following sections:
In Sec. II., we provide a comprehensive precis of the relevant literature papers pertaining to anomaly detection and fraud prediction.
In Sec. III., we provide an overview of the theoretical constructs of the methods used. Namely, the data encoding and the QML methods respectively.
In Sec. IV., we discuss the dataset used and present the results of applying the QML models. Thereafter, we discuss the results by alluding to the various model heuristic metrics.
In Sec. V., we provide closing remarks on the findings of this paper.
## II Literature Review
Since the launch of IBM's Qiskit package and Xanadu's PennyLane, it has become more common to apply QML methods for fraud detection. However, we note that this is a fairly new application for QML, with many papers not being very old. In this regard, we note the following literature pieces:
Although strictly not a paper that applies the methods to fraud detection in financial data, anomaly detection forms an integral component of fraud detection. Thus, it is noteworthy to mention the work of Liu and Rebentrost, 2018, who discuss the potential applications of anomaly detection to Quantum data and propose a Quantum anomaly detection algorithm based on autoencoders. This is particularly useful when real-world data is converted to Quantum states via some feature map embedding. The research highlights the usage of the Quantum methods (Quantum Principal Component Analysis, Quantum Density Estimation, Quantum Support Vector Machines, and Quantum \(k\)-Nearest Neighbours) and compares them to their classical counterparts. Lastly, it gives advantages for the superiority of the Quantum methods over classical methods for anomaly detection, such as faster processing time of the data and enhanced accuracy.
Liang et al, 2019 propose two Quantum anomaly detection algorithms that find applications in fraud detection. The basis for these algorithms comprises density estimation and multivariate Gaussian distributions. The goal is to find the probability density function for the training data. The advantage of this approach over classical approaches is that these algorithms scale logarithmically with respect to the number of datapoints in the training data and the dimensionality of the Quantum states. Thus, making the algorithm superior in efficiency for handling high-dimensional data. In addition, the authors propose a method for calculating the determinant of any Hermitian operator, which is particularly useful for anomalous data with a higher-dimensional normal distribution. The advantages of these algorithms are demonstrated experimentally by illustrating comparable accuracy and precision in a shorter time.
Kottmann et al, 2021 introduced the unsupervised QML algorithm known as _Variational Quantum Anomaly Detection_ (VQAD) that takes simulation data and extracts the phase diagram, _a priori_, without knowledge of the system. Importantly, the authors have demonstrated that the algorithm works in realistic scenarios for both real-noise simulations and
on a real Quantum computer. Further, it was shown to improve the anomaly detection scheme by employing measurement error mitigation and adopting the circuits according to the physical device. Although more oriented towards Physics, the findings of this paper have potentially important implications for fraud detection.
Kyriienko & Magnusson, 2022 develop a Quantum protocol for anomaly detection and apply their technique for detecting credit card fraud. By establishing classical benchmarks, a comparative study against different types of Quantum kernels (products of data-dependent rotations with variational circuits, and evolution circuits under the spin-glass or Heisenberg Hamiltonians) is conducted, and it is shown that Quantum fraud detection is superior to classical methods. Specifically, for supervised fraud detection, Quantum kernels offer higher expressivity and generalisability by outperforming RBF kernels, \(K(\mathbf{x},\mathbf{x}^{\prime})=\exp\left(-\frac{\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}^{2}}{2\sigma^{2}}\right)\), for the free parameter \(\sigma\), by over 10% on the average precision heuristic. For unsupervised fraud detection, Quantum kernels offer a 15% increase in average precision, an advantage that grows as the system size grows. Lastly, the authors discuss future improvements in near- and mid-term Quantum hardware.
Grossi _et al_, 2022 use the Qiskit software stack (IBM Safer Payments and IBM Quantum Computers) to present an end-to-end application of Quantum Support Vector Machines for classification in financial services and a comparative study of the state-of-the-art QML methods collated against the classical methods. The paper shows that the hybrid method outperforms the classical method with respect to accuracy and the false positive rate (FPR) measures. Feature selection plays a pivotal role in optimising the fraud detection system. The paper proposes a Quantum Feature Importance Selection Algorithm (QFISA) that selects the most important features from a dataset to reduce the dimensionality of the dataset for running the experiment on a real Quantum device. Lastly, the drawbacks and limitations of the Factorial Analysis of Mixed Data (FAMD) method are highlighted (overlap between components, and not showing any discrimination power between the reduced variables), and it is shown how the method proposed is superior in this regard.
Wang _et al_, 2022 propose a framework using QML for analysing online transaction data that is time series-based, highly imbalanced, and high-dimensional in order to detect fraudulent records. Using an enhanced-Support Vector Machine with Quantum annealing solvers, they benchmark this method against CML models. This research highlights the challenges encountered when dealing with real-time transactional data and how a Quantum approach
potentially provides a better approach and can be more broadly applied to other critical business applications. While providing a roadmap for further research, the authors caution that several factors must be accounted for when implementing a fraud detection model on such data; namely:
* **Accuracy:** How close to the actual values does one want the predicted values to be?
* **Speed:** How urgently do you need the model to detect anomalies?
* **Cost of Computing:** Whether one, or the company that one works for, has the financial resources to purchase hardware, and access extra qubits, to perform such calculations.
Guo _et al_, 2022 propose an Anomaly Detection based on the Density Estimation (ADDE) algorithm, which hinges on the estimation of the amplitude of a Quantum state, and they show that it has an exponential speed-up in the number of training datapoints and dimensions over classical algorithms. Further, the authors show how the proposed algorithm can be used for anomaly detection based on Kernel Principal Component Analysis (KPCA). Lastly, it is indicated that the findings in this paper are not limited to fraud detection but can also be applied to other domains, namely: Military surveillance, intrusion detection, and healthcare.
Other references are contained in the aforementioned literature pieces. One may expect that there exists a plethora of application-based QML papers for fraud detection; unexpectedly, there are not so many.
## III Theory
We present the theory of the data encoding methods used in the paper, namely the ZZFeatureMap, PauliFeatureMap, and ZFeatureMap, and of the QML models used below: QSVC, VQC, EQNN, and SQNN. We do so because the theory is not widely known; presenting it helps to establish the context, justifies the choice of methods used, guides the analyses and interpretation, and enhances the overall credibility of this research.
### Data Encoding Methods
#### iii.1.1 ZZFeatureMap
The ZZFeatureMap class is a Quantum circuit representing a second-order Pauli-\(Z\) evolution. It takes as input a feature dimension, which is the number of qubits in the circuit, and the number of repetitions, which specifies how many times the rotation and entanglement blocks are repeated. The circuit is constructed by applying Hadamard gates to all qubits, followed by rotation and entanglement blocks as shown in Fig 1.
The rotation blocks apply single-qubit rotations based on the classical data, parameterised by angles determined by a classical non-linear function \(\phi\), which by default is \(\phi(x)=x\) for a single feature and \(\phi(x,y)=(\pi-x)(\pi-y)\) for two features, and in our case with four features: \(\phi(x,y,z,w)=(\pi-x)(\pi-y)(\pi-z)(\pi-w)\). The entanglement blocks entangle the qubits based on the specified entanglement structure using controlled-\(X\) (CNOT) gates.
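For concreteness, this feature map can be instantiated directly from Qiskit's circuit library; the repetition count and entanglement pattern below are illustrative choices rather than the final settings used in our experiments.

```python
from qiskit.circuit.library import ZZFeatureMap

# Four classical features -> four qubits; two repetitions of the
# (rotation + entanglement) block, as described above.
feature_map = ZZFeatureMap(feature_dimension=4, reps=2, entanglement="linear")
print(feature_map.decompose().draw())

# Encoding one (already scaled) sample binds its values to the circuit parameters.
sample = [0.1, 0.7, 1.2, 0.4]
encoded_circuit = feature_map.assign_parameters(sample)
```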
#### iii.1.2 PauliFeatureMap
The PauliFeatureMap class represents a Quantum circuit that enables a Pauli expansion of a given data set. The Pauli expansion is a method for representing the data set as a product of Pauli operators, where each Pauli operator corresponds to a distinct feature within the data. The expression for the Pauli operator combination is given as:
\[U_{\varphi(\mathbf{x})}=\exp\left(\imath\sum_{S\in\mathcal{I}}\phi_{S}( \mathbf{x})\prod_{i\in S}P_{i}\right),\]
Figure 1: Circuit diagram for the ZZFeatureMap.
where \(\mathcal{I}\) is the set of qubit indices describing the connections in the feature map, and \(\phi_{S}(\mathbf{x})\) is the data mapping function. The data mapping function \(\phi_{S}(\mathbf{x})\) maps classical input data \(\mathbf{x}\) into the Quantum circuit, enhancing the circuit's representation capabilities. It is defined as follows:
\[\phi_{S}(\mathbf{x})=\begin{cases}x_{i}&\text{if }S=\{i\},\\ \prod_{j\in S}\left(\pi-x_{j}\right)&\text{if }|S|>1.\end{cases}\]
The PauliFeatureMap circuit, as shown in Fig 2, is constructed by initially applying Hadamard gates to all qubits. Subsequently, a series of rotation gates are applied to the qubits, with the rotation angle for each qubit determined by the data function, \(\phi\). Finally, entangling gates are applied to the qubits, similar to the procedure used in the previous feature map. The PauliFeatureMap circuit can be repeated multiple times to enhance the accuracy of the approximation, similar to other feature maps.
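As a hedged illustration, Qiskit's PauliFeatureMap exposes the Pauli strings of the expansion directly; the settings below are assumptions for illustration rather than the exact configuration used in our experiments.

```python
from qiskit.circuit.library import PauliFeatureMap

# Second-order expansion over Z and ZZ terms: paulis=["Z", "ZZ"] reproduces the ZZFeatureMap,
# while paulis=["Z"] alone reduces to the ZFeatureMap of the next subsection.
feature_map = PauliFeatureMap(feature_dimension=4, reps=2, paulis=["Z", "ZZ"])
```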
#### iii.1.3 ZFeatureMap
The ZFeatureMap class represents a first-order Pauli \(Z\)-evolution circuit. As a sub-class of PauliFeatureMap, it operates with fixed Pauli strings "\(Z\)", resulting in the absence of entangling gates in its first-order expansion. This unique characteristic makes the ZFeatureMap
Figure 2: Circuit diagram for the PauliFeatureMap.
particularly well-suited for specific applications where a shallow Quantum circuit without entanglement is desired.
Similar to the ZZFeatureMap, the ZFeatureMap is tailored for a designated number of qubits, known as the _feature dimension_, and the user can specify the number of repetitions to replicate the rotation blocks. The circuit is constructed by applying Hadamard gates to all qubits, followed by rotation blocks as shown in Fig 3. The rotation blocks are structured following the same principles employed in the ZZFeatureMap.
The ZFeatureMap class also offers essential attributes for inspecting the circuit, including the feature dimension, the number of repetitions, and the entanglement strategy. In the case of the ZFeatureMap, the entanglement strategy is null since no entangling gates are present.
The ZFeatureMap class complements the ZZFeatureMap by providing an alternative Quantum feature map that aligns with specific use cases where entangling gates are to be avoided. Its customisable nature, and absence of entanglement, allow for efficient Quantum data encoding and processing.
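A minimal sketch, again assuming Qiskit's circuit library, makes the absence of entangling gates easy to verify:

```python
from qiskit.circuit.library import ZFeatureMap

feature_map = ZFeatureMap(feature_dimension=4, reps=2)
# Only Hadamards and single-qubit phase rotations appear; there are no CNOT gates.
print(feature_map.decompose().draw())
```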
### Quantum Support Vector Classifiers
The _Quantum Support Vector Classifier_ (QSVC) is the Quantum Mechanical analogue of the classical Support Vector Machine (SVM), as depicted in Fig 4. The SVM model aims to find the optimal _planum separans_ (separating hyperplane) that categorises the datapoints. This is achieved by _maximal margin classification_: Maximising the margin, i.e. the distance between
Figure 3: Circuit diagram for the ZFeatureMap.
the separating hyperplane and the closest datapoints from each class; see the excellent texts of Bishop, 2006 and Goodfellow _et al_, 2016 for a full mathematical elucidation.
The output of a QSVC is given by
\[f(\mathbf{x})=\sum_{j=1}^{n}\alpha_{j}K(\mathbf{x},\mathbf{x}_{j})+b,\]
where \(\alpha_{j}\) are the coefficients of the classifier, \(b\) is the bias term, and \(K\) is the kernel, which gives a measure of similarity between the datapoint \(\mathbf{x}\) and the \(j^{\text{th}}\) datapoint \(\mathbf{x}_{j}\). Schuld & Petruccione, 2021 provide an excellent discussion of the various kernel types. We argue that the kernel is the most important component of a QSVC and significantly affects its performance. Thus, in the style of "hyper-parameter tuning", one should experiment with various kernels to see which gives the best model performance.
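To make the role of the kernel concrete, the sketch below shows how a QSVC with a fidelity-based Quantum kernel is typically assembled using qiskit-machine-learning. It is a hedged illustration: X_train, y_train, and X_test are placeholder arrays, and the ZFeatureMap is chosen here only because it turns out to perform best in our experiments.

```python
from qiskit.circuit.library import ZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

# K(x, x_j) is estimated as the fidelity between the two encoded Quantum states
kernel = FidelityQuantumKernel(feature_map=ZFeatureMap(feature_dimension=4, reps=2))

qsvc = QSVC(quantum_kernel=kernel)
qsvc.fit(X_train, y_train)        # X_train: (n_samples, 4) feature matrix, y_train: 0/1 labels
y_pred = qsvc.predict(X_test)
```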
### Variational Quantum Classifiers
The _Variational Quantum Classifier_ (VQC) is a type of Quantum circuit parameterised by learnable weights. The weights are optimised using a classical method to minimise the loss function. As indicated in Fig 5, the VQC operates as follows:
1. **Quantum State Preparation:** Let \(\mathbf{\theta}=(\theta_{1},\theta_{2},\ldots,\theta_{n})\), where \(n\) is the number of registers in the circuit, be the set of learnable weights, with each \(\theta_{i}\) initialised randomly in \([0,1]\). The initial state is represented as \(|\Psi_{0}(\mathbf{\theta})\rangle\), and is oftentimes a simply-prepared Quantum state such as a series of \(|0\rangle\) states.
Figure 4: Architecture of the Quantum Support Vector Classifier.
2. **Application of a Unitary Transformation:** In this part of the circuit, a series of Quantum gates is applied to the initial state. Let \(G_{i}\in\{I,X,Y,Z,H,S,T,R_{X},R_{Y},R_{Z},\text{CNOT},\text{SWAP},\ldots\}\) be Quantum gates, for \(1\leq i\leq m\). No matter which combination of these Quantum gates is chosen, they form a unitary operator, i.e. \(U=\bigotimes_{i=1}^{m}G_{i}\). Mathematically, this part of the circuit is given by \(U\left|\Psi_{0}(\mathbf{\theta})\right\rangle\equiv\left|\Psi(\mathbf{\theta})\right\rangle\).
3. **Measurement:** Measurement is performed on the resulting state \(\left|\Psi(\mathbf{\theta})\right\rangle\) in order to extract information from the Quantum states.
Steps III.C.2. and III.C.3. are repeated in order to minimise the loss function, \(J(\mathbf{\theta})\), and the process is stopped once an acceptance criterion is met.
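A hedged end-to-end sketch of this loop with qiskit-machine-learning's VQC class, using the COBYLA(maxiter=200) setting adopted later in this paper; the RealAmplitudes ansatz is an assumption for illustration (in older Qiskit releases COBYLA lives under qiskit.algorithms.optimizers).

```python
from qiskit.circuit.library import ZFeatureMap, RealAmplitudes
from qiskit_algorithms.optimizers import COBYLA
from qiskit_machine_learning.algorithms import VQC

vqc = VQC(
    feature_map=ZFeatureMap(feature_dimension=4, reps=2),  # encodes the classical features
    ansatz=RealAmplitudes(num_qubits=4, reps=2),           # the trainable unitary U(theta)
    optimizer=COBYLA(maxiter=200),                         # classical update of theta
)
vqc.fit(X_train, y_train)   # X_train, y_train as in the QSVC sketch above
```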
### Estimator Quantum Neural Networks
The _Estimator Quantum Neural Network_ (EQNN) is a hybrid Classical-Quantum neural Network architecture whereby the Quantum component is known as the _feature map_ and converts the classical data into Quantum states. As shown in Fig 6, the EQNN operates as follows:
Figure 5: Architecture of the Variational Quantum Classifier.
**III.D.1 State Preparation via Quantum Feature Map:** Given classical data \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), the Quantum feature map, \(\Phi:\mathbf{x}\longrightarrow\left|\Psi_{0}(\mathbf{\theta})\right\rangle\), encodes the classical data into parameterised Quantum states, \(\left|\Psi_{0}(\mathbf{\theta})\right\rangle\), using the VQC. As is the case with the VQC, the states \(\left|\Psi_{0}(\mathbf{\theta})\right\rangle\) are oftentimes just a series of \(\left|0\right\rangle\) states.
**III.D.2 Performing Measurement:** Measurement is performed on (some of) the qubits in the computational \((\{\left|0\right\rangle,\left|1\right\rangle\})\) basis to obtain classical features.
**III.D.3 Processing in a Classical Neural Network:** The classical features that are extracted here are fed to fully-connected classical neural network architecture in order to produce the predicted values, \(\hat{\mathbf{y}}\).
**III.D.4 Model Optimisation and Optimal Parameter Search:** In this step, the architecture is optimised to discover the optimal parameters \(\mathbf{\theta}^{*}\) of the VQC, as well as the weights, \(\mathbf{W}^{*}\), and biases, \(\mathbf{b}^{*}\), of the classical neural network, such that the loss function is minimised; i.e. \((\mathbf{\theta}^{*};\mathbf{W}^{*};\mathbf{b}^{*})=\underset{\mathbf{\theta},\mathbf{ W},\mathbf{b}}{\arg\min}\ J(\mathbf{y};\hat{\mathbf{y}})\). Importantly, this optimal search is carried out in parallel.
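A hedged sketch of this pipeline using qiskit-machine-learning's EstimatorQNN wrapped in a classifier; the ansatz, optimizer, and wrapper choices are illustrative assumptions rather than the exact architecture used in our experiments.

```python
from qiskit.circuit.library import ZFeatureMap, RealAmplitudes
from qiskit_machine_learning.neural_networks import EstimatorQNN
from qiskit_machine_learning.algorithms import NeuralNetworkClassifier
from qiskit_algorithms.optimizers import COBYLA

feature_map = ZFeatureMap(feature_dimension=4)     # III.D.1: encode the classical features
ansatz = RealAmplitudes(num_qubits=4, reps=1)      # trainable Quantum layer with parameters theta
circuit = feature_map.compose(ansatz)

qnn = EstimatorQNN(
    circuit=circuit,
    input_params=feature_map.parameters,
    weight_params=ansatz.parameters,               # III.D.2: an expectation value is measured
)
classifier = NeuralNetworkClassifier(neural_network=qnn, optimizer=COBYLA(maxiter=200))
classifier.fit(X_train, y_train)                   # III.D.3-4: classical post-processing and optimisation
```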
### Sampler Quantum Neural Networks
Analogous to the EQNN, the _Sampler Quantum Neural Network_ (SQNN) also contains a hybrid Classical-Quantum architecture. However, the SQNN is equipped with a _Quantum
Figure 6: Architecture of the Estimator QNN.
_Sampler_, which extracts example Quantum states from the complex probability distributions associated with the Quantum states. As illustrated in Fig 7, the SQNN operates as follows:
1. **State Preparation via Quantum Feature Map:** Exactly the same as the case of the EQNN; see III.D.1.
2. **Application of the Quantum Sampler:** Oftentimes this is taken to be the Quantum Approximate Optimisation Algorithm (QAOA); see Farhi _et al_, 2014. The purpose is to efficiently extract example Quantum states from the complex probability distribution corresponding to problem solutions under specific variable configurations.
3. **Sample Extraction:** Samples are chosen from the examples generated by the Quantum sampler.
4. **Utilising Classical Methods to Extract the Best Solutions:** From the samples, the best solutions to the given task are chosen using some kind of classical scheme.
5. **Optimal Parameter Search:** The optimal parameters are found using a classical optimisation method in order to minimise the cost function, i.e. \(\boldsymbol{\theta}^{*}=\underset{\boldsymbol{\theta}}{\arg\min}\ J\). The values of \(\boldsymbol{\theta}\) are fed back to the VQC, and the process begins once again. The process is repeated until the optimal values of the parameters are found.
Figure 7: Architecture of the Sampler QNN.
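A hedged construction sketch of a SamplerQNN-based classifier; the parity read-out that maps sampled bitstrings onto the two classes, as well as the ansatz and optimizer, are assumptions for illustration.

```python
from qiskit.circuit.library import ZFeatureMap, RealAmplitudes
from qiskit_machine_learning.neural_networks import SamplerQNN
from qiskit_machine_learning.algorithms import NeuralNetworkClassifier
from qiskit_algorithms.optimizers import COBYLA

feature_map = ZFeatureMap(feature_dimension=4)
ansatz = RealAmplitudes(num_qubits=4, reps=1)
circuit = feature_map.compose(ansatz)

def parity(bitstring: int) -> int:
    # interpret each sampled basis state as class 0 or 1 via the parity of its bits
    return bin(bitstring).count("1") % 2

qnn = SamplerQNN(
    circuit=circuit,
    input_params=feature_map.parameters,
    weight_params=ansatz.parameters,
    interpret=parity,
    output_shape=2,        # a two-class probability vector estimated from the samples
)
classifier = NeuralNetworkClassifier(neural_network=qnn, optimizer=COBYLA(maxiter=200))
```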
## IV Results and Discussion
### Dataset and Feature Selection
The dataset used in this research study is derived from BankSim, an agent-based simulator of bank payments based on aggregated transactional data provided by a prominent bank in Spain. The primary objective of BankSim is to generate synthetic data tailored explicitly for fraud detection research. To achieve this goal, statistical analysis and Social Network Analysis (SNA) were deployed to study the relationships between merchants and customers, developing a calibrated model - see Lopez-Rojas & Axelsson, 2014.
The BankSim dataset encompasses 594 643 records obtained over 180 steps, simulating approximately six months of temporal activity. From these records, 587 443 are regular payments, while 7 200 are classified as fraudulent transactions. It is important to note that the simulated fraud occurrences were introduced by incorporating thieves aiming to steal an average of three cards per step and performing around two fraudulent transactions per day.
The dataset comprises nine feature columns and one target column, each offering essential insights to discern underlying patterns and characteristics. The features encompassed are as follows:
* **Step:** Representing the temporal aspect, this feature denotes the simulation day, effectively encompassing 180 steps, emulating six months.
* **Customer:** Denoting customer identification, this feature distinguishes individual customers engaging in transactions.
* **ZipCodeOrigin:** An indicator of each transaction's zip code of origin or source, offering the potential for geographic analysis.
* **Merchant:** Capturing the merchant's identification, this feature differentiates between various merchants involved in the transactions.
* **ZipMerchant:** This feature denotes the zip code associated with each merchant, providing further potential for geographic insights.
* **Age:** Representing the customer's age, this feature is categorized into discrete age groups, including "0": \((\leq 18)\), "1": \((19-25)\), "2": \((26-35)\), "3": \((36-45)\), "4": \((46-55)\), "5": \((56-65)\), "6": \((>65)\), and "U": (Unknown).
* **Gender:** Categorizing the gender of each customer, this feature includes values such as "E" (Enterprise), "F" (Female), "M" (Male), and "U" (Unknown).
* **Category:** Capturing the category of each purchase transaction, this feature imparts valuable insights into the nature and type of transactions.
* **Amount:** Representing the monetary value of each purchase, this feature offers critical information on transaction volumes.
* **Fraud:** This binary target variable classifies each transaction as fraudulent (denoted by "1") or benign (denoted by "0"). This classification forms the basis for the subsequent fraud detection analysis.
Graphical analysis played a crucial role in deepening our understanding of the dataset. We generated several visualisations, including histograms, bar plots, and a heatmap, to gain valuable insights into the data distribution and uncover potential patterns.
Figure 8: Histogram of fraudulent and non-fraudulent payments.
Fig. 8 displays a histogram comparing payment amounts for fraudulent and non-fraudulent transactions. Our analysis reveals that fraudulent transactions involve higher payment amounts on average (mean = 567.23, std = 128.47) compared to legitimate transactions (mean = 145.68, std = 50.32). This insight highlights the significance of payment amount as a distinguishing factor between the two transaction categories.
Fig. 9 presents a bar plot depicting fraudulent payments categorized by age and gender. The visualisation indicates that individuals aged 26 to 35 (45%) and females (56%) account for a larger share of fraudulent transactions, whereas males (34%) and individuals aged 36 to 45 years (32%) show a lower incidence of involvement in fraudulent activities. These demographic trends offer valuable guidance for developing targeted fraud detection strategies.
Fig. 10 illustrates the distribution of fraudulent payments across different merchant categories. Specific merchant categories, such as "sports & toys" and "health", exhibit a disproportionately higher occurrence of fraudulent transactions, representing 20% and 15% of all fraud cases, respectively. This finding emphasizes the importance of considering merchant categories as a relevant feature in our fraud detection models.
Figure 9: Count of fraudulent payments by age and gender.
To identify the most informative features that significantly contribute to our fraud detection models, we employed Principal Component Analysis (PCA) to reduce the dimensionality of the dataset while preserving the most valuable information. As shown in Fig. 11, the results of the PCA analysis indicated the order of importance of the features based on their corresponding principal components. Notably, the feature "amount" emerged as the most influential, followed by "merchant," "category," "customer," "step," "age," "gender," "zipMerchant," and "zipcodeOri." This valuable ranking guided our further feature selection process. Subsequently, we conducted a logical analysis to investigate the relationships between these selected features and their potential impact on fraud detection. The logical analysis confirmed that the features "age," "gender," "category," and "amount" exhibited distinct patterns in fraudulent and non-fraudulent transactions, making them promising candidates for our fraud detection models.
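One common way to derive such a ranking, sketched below under the assumption that the encoded feature matrix and feature names are available as X_numeric and feature_names, is to inspect the magnitudes of the principal-component loadings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X_numeric)   # X_numeric: numerically encoded feature matrix
pca = PCA().fit(X_scaled)

# Rank features by the absolute size of their loading on the first principal component
loadings = np.abs(pca.components_[0])
ranking = [feature_names[i] for i in np.argsort(loadings)[::-1]]
```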
We further examined the correlation heatmap to gain deeper insights into the relationships among the selected features (Fig. 12). The heatmap matrix displayed the pairwise correlations among "age," "gender," "category," "amount," and "fraud." The correlation heatmap
Figure 10: Distribution of fraudulent payments by merchant category.
showcased the strength and nature of the relationships. Notably, the feature "amount" exhibited a weak but informative correlation with "fraud," consistent with the association between higher transaction amounts and fraudulent transactions observed in Fig. 8. Based on the insights gained from the logical analysis and confirmed by the correlation heatmap, we concluded that the features "age," "gender," "category," and "amount" were the most informative variables for our fraud detection models. Incorporating these features into our fraud detection framework allows us to deliver robust and efficient financial security and risk management practices, advancing the field.
### Data Analysis and Experimental Setup
Before conducting the fraud detection analysis, a rigorous data preprocessing and cleaning process was undertaken to ensure the dataset's quality and suitability for reliable model training and evaluation. The original dataset was loaded, and specific subsets were extracted to create a balanced dataset containing 200 records: 100 fraudulent and 100 non-fraudulent transactions. A data transformation step addressed inconsistencies in the "age" column, which contained non-numeric characters, by extracting numerical values from the age categories using regular expressions. Consequently, each age was converted to an integer for accurate representation in the subsequent analysis.
Figure 11: Feature importance in fraud detection.
To prepare the dataset for model training, certain categorical features, such as "category" and "gender," were transformed into numerical representations using scikit-learn's LabelEncoder. This encoding process allowed the model to process these categorical variables effectively during training. Subsequently, the dataset was further prepared by removing unused features and converting the remaining features into numerical values to ensure homogeneity across the data. The dataset was split into training and testing sets using the train_test_split function from scikit-learn to facilitate the model training process. The training set denoted as \(X_{\text{train}}\) and \(y_{\text{train}}\) contained a portion of the data used for training the model. The testing set, represented as \(X_{\text{test}}\) and \(y_{\text{test}}\), was kept separate and served as unseen data to evaluate the model's performance.
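A hedged sketch of these preprocessing steps is given below; the file name and the exact way the balanced subset is drawn are assumptions for illustration.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

df = pd.read_csv("banksim.csv")   # hypothetical path to the BankSim export
# Balanced subset: 100 fraudulent and 100 non-fraudulent records
df = pd.concat([df[df["fraud"] == 1].head(100), df[df["fraud"] == 0].head(100)])

# Extract the numeric part of the age category (e.g. "'3'" -> 3); unknowns become 0
df["age"] = df["age"].str.extract(r"(\d+)", expand=False).fillna(0).astype(int)
for col in ["category", "gender"]:
    df[col] = LabelEncoder().fit_transform(df[col])

X = df[["age", "gender", "category", "amount"]].to_numpy()
y = df["fraud"].to_numpy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
```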
The feature matrix \(X\) encompassed all pertinent features, excluding the "fraud" column,
Figure 12: Correlation heatmap of features.
which served as the target variable. The target variable, denoted as \(y\), distinguished between fraudulent transactions (encoded as 1) and non-fraudulent transactions (encoded as 0). This distinction was essential for the model to learn patterns and accurately classify new data. Following these preprocessing steps and dividing the dataset into training and testing sets, the data was ready for the subsequent model training and evaluation processes.
We employed our four Quantum Machine Learning models for the training process, each tailored to specific configurations. To optimize these models effectively, we harnessed the power of the Qiskit optimizer, implementing the COBYLA algorithm with a maximum iteration limit of 200. This prudent choice of optimizer facilitated efficient convergence towards the optimal solution, ensuring the training process was effective and resource-efficient.
To provide an ideal environment for training, we utilized the Aer backend with the QasmSimulator. This choice allowed us to simulate Quantum circuits effectively and train the models seamlessly. Following the training process, we evaluated the performance of each model using several key metrics, which together give a comprehensive picture of each model's predictive capabilities and effectiveness.
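A hedged sketch of how the simulator backend and the evaluation metrics fit together is shown below; recent qiskit-machine-learning releases accept an Aer Sampler primitive in place of the legacy QuantumInstance wrapping the qasm_simulator, and the VQC here stands in for any of the four models.

```python
from qiskit_aer.primitives import Sampler as AerSampler
from qiskit.circuit.library import ZFeatureMap, RealAmplitudes
from qiskit_algorithms.optimizers import COBYLA
from qiskit_machine_learning.algorithms import VQC
from sklearn.metrics import classification_report

model = VQC(
    sampler=AerSampler(),                                   # shot-based Aer simulation
    feature_map=ZFeatureMap(feature_dimension=4, reps=2),
    ansatz=RealAmplitudes(num_qubits=4, reps=2),
    optimizer=COBYLA(maxiter=200),
)
model.fit(X_train, y_train)

# Per-class precision, recall, and F1-score, as reported in Table 2
print(classification_report(y_test, model.predict(X_test), digits=2))
```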
### Results and Interpretation
In this section, we present a comprehensive evaluation of our Quantum Machine Learning models, QSVC, VQC, EQNN, and SQNN, on our dataset using three distinct feature maps: ZZFeatureMap, PauliFeatureMap, and ZFeatureMap. The primary evaluation metrics were precision, recall, and F1-score for the fraud (Class 1) and non-fraud (Class 0) classes.
The results demonstrated that the QSVC model utilizing the ZFeatureMap achieved the highest performance, with F1-scores of 0.98 for both the fraud and non-fraud classes (Table 2). The QSVC model thus identified both fraudulent and non-fraudulent transactions accurately.
Notably, the QSVC model, based on the Qiskit library's QSVC class, does not involve a loss function as in classical machine learning algorithms. Instead, it leverages the Quantum kernel to measure the similarity between Quantum states, enabling it to classify data points effectively.
The VQC model, also employing the ZFeatureMap, performed well with an F1 score of
\begin{table}
\begin{tabular}{l|ccc|ccc|ccc}
\hline
 & \multicolumn{3}{c|}{\textbf{ZZFeatureMap}} & \multicolumn{3}{c|}{\textbf{PauliFeatureMap}} & \multicolumn{3}{c}{\textbf{ZFeatureMap}} \\
\textbf{QML Model} & Precision & Recall & F1-score & Precision & Recall & F1-score & Precision & Recall & F1-score \\
\hline
\textbf{QSVC} & & & & & & & & & \\
Class 0 & 0.62 & 0.68 & 0.65 & 0.58 & 0.61 & 0.59 & 1.00 & 0.97 & 0.98 \\
Class 1 & 0.68 & 0.62 & 0.65 & 0.56 & 0.52 & 0.54 & 0.97 & 1.00 & 0.98 \\
Accuracy & \multicolumn{3}{c|}{0.65} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{} \\
Macro avg & 0.65 & 0.65 & 0.65 & 0.57 & 0.57 & 0.56 & 0.98 & 0.98 & 0.98 \\
Weighted avg & 0.65 & 0.65 & 0.65 & 0.57 & 0.57 & 0.57 & 0.98 & 0.98 & 0.98 \\
\hline
\textbf{VQC} & & & & & & & & & \\
Class 0 & 0.55 & 0.52 & 0.53 & 0.54 & 0.45 & 0.49 & 0.86 & 0.95 & 0.90 \\
Class 1 & 0.52 & 0.55 & 0.53 & 0.50 & 0.59 & 0.54 & 0.93 & 0.84 & 0.88 \\
Accuracy & \multicolumn{3}{c|}{0.53} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{} \\
Macro avg & 0.53 & 0.53 & 0.53 & 0.52 & 0.52 & 0.52 & 0.89 & 0.90 & 0.90 \\
Weighted avg & 0.53 & 0.53 & 0.53 & 0.52 & 0.52 & 0.51 & 0.89 & 0.90 & 0.90 \\
\hline
\textbf{EstimatorQNN} & & & & & & & & & \\
Class 0 & 0.53 & 0.52 & 0.52 & 0.52 & 0.45 & 0.48 & 0.70 & 1.00 & 0.83 \\
Class 1 & 0.50 & 0.52 & 0.51 & 0.48 & 0.55 & 0.52 & 1.00 & 0.55 & 0.71 \\
Accuracy & \multicolumn{3}{c|}{0.52} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{} \\
Macro avg & 0.52 & 0.52 & 0.52 & 0.50 & 0.50 & 0.50 & 0.85 & 0.78 & 0.77 \\
Weighted avg & 0.52 & 0.52 & 0.52 & 0.50 & 0.50 & 0.50 & 0.85 & 0.78 & 0.77 \\
\hline
\textbf{SamplerQNN} & & & & & & & & & \\
Class 0 & 0.57 & 0.68 & 0.62 & 0.52 & 0.45 & 0.48 & 0.58 & 0.71 & 0.64 \\
Class 1 & 0.57 & 0.45 & 0.50 & 0.48 & 0.55 & 0.52 & 0.59 & 0.45 & 0.51 \\
Accuracy & \multicolumn{3}{c|}{0.57} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{} \\
Macro avg & 0.57 & 0.56 & 0.56 & 0.50 & 0.50 & 0.50 & 0.58 & 0.58 & 0.57 \\
Weighted avg & 0.57 & 0.57 & 0.56 & 0.50 & 0.50 & 0.50 & 0.58 & 0.58 & 0.58 \\
\hline
\end{tabular}
\end{table}
Table 2: Performance Comparison of Quantum Machine Learning Models
0.90. However, in contrast to the QSVC, the VQC model is trained by minimising a loss function. In Fig. 13, we observe that the VQC model achieved a loss of 0.5 when using the ZFeatureMap, while losses of 0.95 were observed for the PauliFeatureMap and ZZFeatureMap. The lower loss with the ZFeatureMap indicates that this data encoding strategy leads to better convergence during the optimisation process, contributing to the higher accuracy achieved by the VQC model with this feature map.
On the other hand, the EQNN model, using the ZFeatureMap, showed a relatively lower F1 score of 0.78. Fig. 14 illustrates the corresponding loss values, with the ZFeatureMap achieving a loss of 0.5, the PauliFeatureMap a loss of 0.96, and the ZZFeatureMap a loss of 0.97. The higher losses with the latter two feature maps suggest that the optimisation process encountered difficulties reaching an optimal solution, reducing accuracy for the EQNN model.
The limited accuracy of the EQNN model might result from the inherent limitations of the Quantum circuits used for data encoding. The simplicity of the Quantum circuit utilized by the EQNN model might not adequately capture the complex patterns present in the dataset. Exploring more expressive Quantum circuits or advanced Quantum architectures could offer potential improvements. Similarly, the SQNN model demonstrated lower accuracy than the other models, which was expected.
Fig. 15 shows the corresponding loss values, with the ZFeatureMap achieving a loss of 0.458, the PauliFeatureMap a loss of 0.454, and the ZZFeatureMap a loss of 0.455. The higher losses indicate that the SQNN model struggled to find an optimal solution, resulting in lower accuracy.
As noted above, the lower accuracy of the SQNN aligns with our expectations because SQNNs are better suited to combinatorial optimisation and general constraint-satisfaction problems, such as scheduling, map colouring, and number-placement logic puzzles like Sudoku. The inherent
Figure 14: Loss function of Estimator QNN model.
Figure 15: Loss function of Sampler QNN model.
limitations of SQNNs in handling continuous and high-dimensional data, as encountered in our dataset, could explain the observed lower accuracy in the context of fraud detection.
## V Conclusion
In conclusion, our research presents a rigorous comparative study of four cutting-edge Quantum Machine Learning models: QSVC, VQC, EQNN, and SQNN. We have gained a comprehensive understanding of their capabilities and limitations by evaluating their performance on a carefully curated dataset using three distinct feature maps: ZZFeatureMap, PauliFeatureMap, and ZFeatureMap.
Among the models evaluated, QSVC stood out as the top performer, showcasing unparalleled excellence with F1 scores of 0.98 for both fraud and non-fraud classes. Its utilisation of the Quantum kernel for state similarity measurement proves to be a potent strategy, circumventing the need for conventional loss functions and yielding extraordinary results.
VQC also demonstrated remarkable performance, boasting an impressive F1 score of 0.90. However, we observed a potential area for refinement during its training process, suggesting avenues for future exploration to harness its power. In contrast, EQNN and SQNN exhibited comparatively lower F1 scores, hinting at the influence of the Quantum circuits used for data encoding on their accuracy. Addressing these limitations might be the key to unlocking their potential in this field.
Our findings reinforce the promise of Quantum computing in revolutionizing machine learning paradigms. The exceptional performance of QSVC and VQC attests to the vast potential of Quantum algorithms for solving complex classification problems with unprecedented precision.
## Declarations
### Conflicts of interest
The authors have no competing interests or other interests that might be perceived to influence the results and/or discussion reported in this paper.
|
2301.09106 | The state of quantum computing applications in health and medicine | Medicine, including fields in healthcare and life sciences, has seen a flurry
of quantum-related activities and experiments in the last few years (although
biology and quantum theory have arguably been entangled ever since
Schr\"odinger's cat). The initial focus was on biochemical and computational
biology problems; recently, however, clinical and medical quantum solutions
have drawn increasing interest. The rapid emergence of quantum computing in
health and medicine necessitates a mapping of the landscape. In this review,
clinical and medical proof-of-concept quantum computing applications are
outlined and put into perspective. These consist of over 40 experimental and
theoretical studies. The use case areas span genomics, clinical research and
discovery, diagnostics, and treatments and interventions. Quantum machine
learning (QML) in particular has rapidly evolved and shown to be competitive
with classical benchmarks in recent medical research. Near-term QML algorithms
have been trained with diverse clinical and real-world data sets. This includes
studies in generating new molecular entities as drug candidates, diagnosing
based on medical image classification, predicting patient persistence,
forecasting treatment effectiveness, and tailoring radiotherapy. The use cases
and algorithms are summarized and an outlook on medicine in the quantum era,
including technical and ethical challenges, is provided. | Frederik F. Flöther | 2023-01-22T11:53:40Z | http://arxiv.org/abs/2301.09106v2 | # The state of quantum computing applications in health and medicine
###### Abstract
Quantum computing hardware and software have made enormous strides over the last years[1]. Questions around quantum computing's impact on research and society have changed from "if" to "when/how". The 2020s have been described as the "quantum decade", and the first production solutions that drive scientific and business value are expected to become available over the next years. Medicine, including fields in healthcare and life sciences, has seen a flurry of quantum-related activities and experiments in the last few years (although medicine and quantum theory have arguably been entangled ever since Schrodinger's cat[2]). The initial focus was on biochemical and computational biology problems[3, 4, 5, 6, 7]; recently, however, clinical and medical quantum solutions have drawn increasing interest. The rapid emergence of quantum computing in health and medicine necessitates a mapping of the landscape.
In this review, clinical and medical proof-of-concept quantum computing applications are outlined and put into perspective. These consist of over 40 experimental and theoretical studies from the last few years. The use case areas span genomics, clinical research and discovery, diagnostics, and treatments and interventions. Quantum machine learning (QML) in particular has rapidly evolved and shown to be competitive with classical benchmarks in recent medical research. Near-term QML algorithms, for instance, quantum support vector classifiers and quantum neural networks, have been trained with diverse clinical and real-world data sets. This includes studies in generating new molecular entities as drug candidates, diagnosing based on medical image classification, predicting patient persistence, forecasting treatment effectiveness, and tailoring radiotherapy. The use cases and the applied algorithms are summarized.
In addition, this review provides an outlook on medicine in the quantum era. There has been much discussion about healthcare's journey towards precision medicine and the quadruple aim (better health, lower costs, enhanced patient experiences, and improved healthcare practitioner work lives)[8]. While a range of technical and ethical challenges remain, quantum computing is poised to become a key enabler for advancing towards the holy grail: keeping people healthy through proactive medical care and guidance at the level of an individual.
## Introduction
As quantum computing's progress has rapidly accelerated in the early 2020s, a cross-industry race has begun to secure quantum talent, build quantum skills, map real-world problems to quantum algorithms, capture quantum application intellectual property (IP), and prepare for quantum advantages. Certain types of applications gathered research interest right from the start; for instance, simulating nature through enhanced chemistry and physics calculations[9]
and solving finance problems[10]. Recently, the possibilities of quantum computing have increasingly sparked research interest in other fields as well. This is evidenced by clinical and medical proof-of-concept studies, which have seen a remarkable growth over the last years in conjunction with the exploration of healthcare[11] and life sciences[12] use cases.
Defined by the characteristics of the algorithms and the types of problems for which the algorithms are used, three primary quantum algorithm application categories can generally be distinguished:
1. Simulating nature - chemistry, physics, materials science,...
2. Processing data with complex structure - artificial intelligence / machine learning (AI/ML), factoring, ranking,...
3. Search and optimization - pricing, risk analysis, sampling,...
Note that a given quantum algorithm may be part of more than one category. For example, the variational quantum eigensolver (VQE) algorithm[13] has been applied to strongly correlated systems in chemistry ("Simulating nature") as well as finding the optimal configuration of non-quantum systems ("Search and optimization").
## Results
The studies are grouped into three main use case areas:
1. Genomics and clinical research
2. Diagnostics
3. Treatments and interventions
The connection strengths between the use case areas and the algorithm application categories are illustrated in Figure 1; these were assigned based on the number of proof-of-concept studies associated with each category as well as the applicability of each category to problems typical for a given use case area. It is evident that the category "Processing data with complex structure" is particularly relevant to health and medicine; most of the proof-of-concept studies in this review are based on quantum AI/ML methods.
In the context of quantum AI/ML, variational quantum circuits (VQCs) are sometimes considered to be building blocks of quantum neural networks (QNNs)[14], that is, neural networks where parameterized quantum circuits are introduced in the hidden layers. In other instances, a VQC is treated as a synonym for a QNN (along with a parameterized quantum circuit and quantum circuit learning)[15]. In this review, no hard distinction is made.
An overview of the explored use cases is given in Figure 2 and a list of the studies and their approaches is provided in Table 1. For many of the proof-of-concept use cases outlined, the quantum approaches are already competitive with the classical benchmarks; while many studies have considered downsized versions of the problems, there is generally no reason to suppose that these benefits will not carry over to more realistic problem variants. Moreover, although an entire "quantum algorithm zoo" exists[16], the algorithms are all based on a limited number of core primitives. Therefore, notwithstanding the particular characteristics for a given problem, such as the data structure, success of applying one algorithm/primitive in a certain field likely bodes well for uses of that algorithm/primitive in other fields. In the following, each study will now be discussed.
Figure 1: Three key quantum computing use case areas in health and medicine linked to quantum algorithm application categories. The wider the connecting line, the more applicable the category.
Figure 2: Health and medicine quantum computing use cases that have been investigated in proof-of-concept studies.
\begin{tabular}{|c|c|c|} \hline \multicolumn{1}{|c|}{Orthogonal QNN} & \multicolumn{1}{|c|}{Classification of retinal color fundus} & \multicolumn{1}{|c|}{53} \\ \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{and chest X-ray images} & \\ \hline QFT & Image reconstruction & 54 \\ \hline QSVC, quantum kernel & Classification of breast cancer & 55, 56 \\ Gaussian process, & & \\ transfer learning-based & & \\ QNN & & \\ \hline QSVC trained via & Classification of rheumatoid arthritis & 57 \\ quantum kernel & with thermal hand images & \\ alignment & & \\ \hline QNN, Quantum distance & Classification of Alzheimer's disease & 58, 62 \\ classifier (QDC), QSVC & & \\ \hline QNN, VQC & Classification of COVID-19 & 59, 60, 61, 63 \\ \hline VQC & Classification of diabetes & 64 \\ \hline Quantum random & Classification of heart failure & 65 \\ forests, quantum k- & & \\ nearest neighbors, & & \\ quantum decision trees, & & \\ quantum Gaussian Naive & & \\ Bayes & & \\ \hline QDC, QSVC & Classification of bone marrow & 66 \\ & transplant survival, breast cancer, & \\ & heart failure & \\ \hline VQC, QNN & Classification of states of mind with & 67 \\ & electroencephalogram (EEG) signals & \\ \hline Quantum k-means & Classification of heart disease & 68 \\ \hline QSVC & Classification of medication persistence & 70 \\ & for individuals with rheumatoid & \\ & arthritis & \\ \hline QNN & Treatment effectiveness of knee & 71 \\ & arthroplasty & \\ \hline QNN & COVID-19 outbreak prediction & 72 \\ \hline Quantum deep & Adaptive radiotherapy & 75 \\ reinforcement learning & & \\ \hline \multicolumn{1}{|c|}{**Search and optimization**} & \\ \hline Grover's & DNA sequence alignment & 19 \\ \hline QAOA (quantum & De novo DNA sequence reconstruction & 22 \\ approximate & & \\ optimization algorithm) & & \\ \hline Grover's, QPE & Estimation of algorithmic information & 23 \\ & from DNA sequences & \\ \hline QAOA, VQE, VQC, & Protein structure for lattice model- & 31, 32, 33, 34 \\ Grover's & based systems & \\ \hline \end{tabular}
#### Genomics and clinical research
How can we truly understand an individual at the most granular level? Clearly, genomics is crucial. Over the last decades, we have seen milestones such as the sequencing of the human genome as well as genome-wide association studies (GWAS). It has now become clear, however, that the function and workings of the human genome are much more complex than imagined. The correlations between genomes and outcomes are convoluted and there are, for instance, generally not one-to-one links between genes and diseases. Moreover, pattern problems in the study of haplotypes (groups of genes that are inherited together) and single nucleotide polymorphisms (genomic variations at single base positions between people) quickly become very complicated, reaching non-deterministic polynomial-time (NP) hardness[17].
As a result, there is great interest to adapt the quantum techniques that have already been developed for problems such as string search and matching, for instance, based on Grover's algorithm[18], to genomic problems. Many experiments have focused on better understanding genomic patterns, leveraging algorithms from the "Processing data with complex structure" and "Search and optimization" categories. For example, DNA sequence alignment was explored with Grover's algorithm[19] and the quantum Fourier transform (QFT) was applied to pairwise sequence alignment[20, 21]. De novo DNA sequence reconstruction was carried out through a framework involving the quantum approximate optimization algorithm (QAOA)[22]. Once (genomic) sequences have been obtained, it is then of great interest to analyze the algorithmic information in them; this was explored using Grover's algorithm and phase estimation[23]. Note that many of these early advances in better understanding genomic strings and sequences may of course also be applied to related problems in omics in due course, for instance, involving RNA sequences. Likewise, the pattern and information encoding perspective can also be complemented with deeper insights at the molecular level through the application of quantum algorithms from the "Simulating nature" and "Search and optimization" categories.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Quantum walk and & Protein structure for non-lattice & 36, 37 \\ quantum Markov chain & model-based systems & \\ Monte Carlo & & \\ \hline VQE & Protein-ligand interactions involving & 38 \\ & lysine-specific demethylylase 5 (KDM5A) & \\ \hline VQE & Binding energy differences for \(\beta\)- & 39 \\ & secretase (BACE1) inhibitors & \\ \hline Gaussian boson & Ligand binding to the tumor necrosis & 40 \\ sampling* & factor-\(\alpha\) converting enzyme & \\ \hline VQE & Force fields & 43 \\ \hline QPE & Electronic structure of cytochrome & 45 \\ & P450 (CYP) enzyme active sites & \\ \hline \end{tabular}
\end{table}
Table 1: Overview of quantum algorithms applied in clinical and medical proof-of-concept studies grouped by algorithm application category.
In addition to genomics, there are diverse fields of clinical research in biology and biochemistry that seem poised to benefit from future quantum advantages. The discovery, and ultimately development, of new molecular entities and drugs is of central importance here. While millions of compounds have already been considered in the literature, the total number of possible carbon-based compounds whose molecular masses are similar to those of living systems is around 10\({}^{60}\). Given the large fraction of this gargantuan chemical space that has not yet been explored, the significant potential for future breakthroughs is clear [24]. Multiple overviews about quantum opportunities in the drug discovery space have been published [25, 26, 27, 28]. "Simulating nature" algorithm applications play a prominent role, but the other two categories have also been investigated in this context. A general theme is to reduce the need for lengthy and expensive experiments through better simulations of biology, thus creating in silico laboratories.
Protein folding and design have gained much attention in recent years through both classical [29, 30] and quantum advances. For instance, lattice model-based systems were explored using variational quantum algorithms, including QAOA [31], VQE [32], and other variational techniques [33], as well as Grover's algorithm [34]. For the VQE adaptation [32], it was even shown that the number of physical qubits required scales only as the square of the number of amino acids (but without a convergence guarantee), putting structures with 100+ amino acids within reach as quantum hardware develops over the next years [35]. Generalization to non-lattice models was investigated through quantum walk and quantum Markov chain Monte Carlo methods [36, 37]. Given the classical advances, it is likely that the quantum methods will be particularly advantageous for those problem variations that are most challenging for classical methods, for example, trying to predict the structures of proteins with unnatural amino acids (where classical machine learning struggles due to a lack of training data) or trying to understand conformations and behavior in dynamic settings, such as when proteins interact with water molecules, ligands, and other proteins. For protein-ligand interactions, symmetry-adapted perturbation theory (SAPT) was combined with VQE in benchmarks for systems containing the human cancer-relevant protein lysine-specific demethylase 5 (KDM5A) [38] and VQE was also extended through density matrix embedding theory (DMET) in order to calculate the binding energy differences for \(\beta\)-secretase (BACE1) inhibitors [39]. In addition, docking was investigated through Gaussian boson sampling [40] (a restricted form of quantum computing [41]) by predicting ligand binding to the tumor necrosis factor-\(\alpha\) converting enzyme, which is connected with immune system diseases and cancer.
A variety of further applications to help accelerate the drug discovery process has been explored. These include estimating force fields, accurate calculations of which are crucial for scaling molecular dynamics techniques, through QNNs [42] and VQE [43]. In addition, QPE and VQE methods were studied for the active space, the limited number of orbitals that are of primary interest and treated fully quantum mechanically, of (strongly correlated) chemical systems; F2, [Fe] hydrogenase, and the photosensitizer temporofin were considered [44]. Other studies focused on estimating the quantum (and classical) resources to compute the electronic structure of cytochrome P450 enzymes (CYPs) via QPE [45] and applying quantum generative adversarial networks (QGANs) to create new drug candidates [46]. Quantum machine learning, specifically quantum support vector classifiers (QSVCs) that enhance calculation of the kernel, also yielded promising results compared with classical state-of-the-art methods for virtual screening in drug discovery [47]. In another investigation, cheminformatic molecular descriptor
data sets for COVID-19, as well as whole-cell screening sets for plague and Mycobacterium tuberculosis, were compressed and then classified using QSVCs and QNN-like methods[48]. Finally, absorption, distribution, metabolism, excretion, and toxicity (ADMET) studies may be enhanced, as was demonstrated in a toxicity screening experiment where a quantum graph machine learning algorithm (quantum evolution kernel) was applied to a biochemistry data set with information about 286 molecules and their effects on mice[49].
### Diagnostics
Only when it is possible to accurately assess an individual's health status and potential future development in fine detail can tailored treatments and interventions be properly assigned. As such, quantum computing may enable the move away from (late) diagnoses focused on single diseases towards a regime where a continually updated health status can be determined for each individual; this will only be possible by building on new insights from the previous use case area, "Genomics and clinical research".
Quantum AI/ML algorithms are particularly relevant for diagnostic applications. Not medical care but health-related behaviors, socioeconomic factors, and environmental aspects are now believed to contribute up to 90 percent to health outcomes[50]. Hence, it is imperative to understand the quickly growing and increasingly heterogeneous health-relevant data that are becoming available, particularly real-world data (RWD) such as information from electronic health records (EHRs), claims, disease registries, and fitness trackers[51]. The many potentially pertinent variables lead to high-dimensional feature spaces and interactions between the variables result in complex interdependencies, correlations, and patterns; quantum AI/ML algorithms can penetrate such data structures in ways that are beyond the means of purely classical methods.
Analyzing and getting actionable insights from medical images is a field that has significantly grown in importance over the last years and decades. As such, a broad array of quantum applications is being explored in this space, including the enhancement of processing steps such as image edge detection, segmentation, and classification[52]. In classifying retinal color fundus and chest X-ray images, orthogonal QNNs were investigated, and quantum circuits were also used to accelerate the training of classical neural networks[53]. Based on computed tomography (CT) and positron emission tomography (PET) data, QFT-based algorithms were developed for enhancing image reconstruction[54]. In the context of classifying breast cancer, QSVCs and quantum kernel Gaussian process methods were applied[55] as well as transfer learning-based QNNs[56]. Rheumatoid arthritis was detected by classifying thermal hand images with QSVCs trained via quantum kernel alignment[57]. Alzheimer's disease was classified with MRI images using QNNs[58] and COVID-19 was classified with QNNs using chest X-ray[59] as well as CT lung images[60, 61].
Next to images, diseases and disease risks have also been classified and predicted in early studies of supervised quantum AI/ML. Copy number variations (CNVs), differences in the number of repetitions of a genomic section between individuals, in neuronal single-cell samples from healthy individuals and those with Alzheimer's disease were used as features; building on the efficiency with which quantum computers can evaluate inner products, this allowed quantum distance classifiers (QDCs) to predict whether a given sample is from a
healthy or a sick individual [62]. COVID-19 was diagnosed through VQCs based on features such as temperature (fever), fatigue, muscle pain, and coughing [63]. VQCs were also employed to predict diabetes [64]. A diversity of methods - quantum random forests, quantum k-nearest neighbors, quantum decision trees, and quantum Gaussian Naive Bayes - was studied for the purpose of classifying heart failure [65]. Conversely, QDCs and QSVCs were applied to assess multiple conditions in the same study, namely bone marrow transplant survival, breast cancer, and heart failure [66]. Moreover, VQCs as well as QNNs were even used to predict states of mind based on electroencephalogram (EEG) signals from individuals who responded towards a product with a like/dislike in a neurotransmitter experiment [67]. Finally, in the same way that unsupervised learning is a younger discipline than supervised learning for classical ML, the field of unsupervised quantum AI/ML is younger than supervised quantum AI/ML. Still, even here there is already early medical work underway - the quantum k-means algorithm was used for clustering individuals based on their demographic and laboratory measurement data and predicting heart disease [68].
### Treatments and interventions
The applications outlined in the previous two use case areas, "Genomics and clinical research" as well as "Diagnostics", form the foundation for tailored treatments and interventions. As for diagnostics, quantum AI/ML algorithms lend themselves particularly well to treatment and intervention use cases.
Next to knowing an individual's health status and disease risks, it is essential to understand likely adherence, engagement, and behavior in order to achieve optimal outcomes [69]. RWD again plays a central role here. Based on millions of EHRs, for instance, the medication persistence of individuals with rheumatoid arthritis was predicted with QSVCs and a general framework to help assess empirical quantum advantage potential was introduced [70]. Another explored research topic is the treatment effectiveness of knee arthroplasty [71]. QNNs were applied to clinico-demographic data from 170 individuals that were treated over two years. The results were encouraging, but the study also noted that further validation using unstructured RWD is needed. Optimal measures at the population level require better models too, for example, regarding outbreak prediction and disease spread dynamics. Using a COVID-19 time series data set with confirmed cases, number of deaths, and number of recovered individuals, different types of QNNs (including continuous-variable ones) were applied for this purpose [72].
As quantum techniques continue to mature and proliferate, there is hope that they can accelerate the discovery process itself as well as enable progress for some of the thorniest medical treatment and intervention problems. Precision oncology is a case in point. Currently, only a third of individuals respond to drug-based cancer therapies [73]. One key challenge is the need to make sense of terabytes and terabytes of relevant data for an individual with cancer. Work has already begun on leveraging quantum algorithms for the purpose of getting actionable insights from such data and ultimately tailoring cancer treatments to the level of the individual [74]. One of the early applications showing promise is adaptive radiotherapy, as was demonstrated by modelling the clinical decisions as quantum states and applying quantum deep reinforcement learning to an institutional data set based on 67 stage III non-small cell lung cancer patients [75].
## Discussion
Ever since its beginnings thousands of years ago, medicine has continually incorporated new ideas, knowledge, and methods to become more effective. The holy grail has not changed: keeping people healthy through proactive individual care. Quantum computing is very young but, as the only known computational model that has exponential speedups compared with traditional approaches[76], poised to become one of the mightiest tools in healthcare and medicine with the power to make previously intractable problems solvable.
For quantum computing to become this enabler for health and medicine, however, a wide range of technical and ethical challenges must still be overcome (Figure 3). First, quantum hardware and software need to continue improving, including increased qubit numbers, decreased error rates, and more efficient algorithms. Second, there are various challenges around making quantum computing practical for medicine which are similar to those in digital health efforts. These include data accessibility (without which even quantum computing loses its power), model explainability (essential for obtaining the support of clinicians, medical practitioners, and individuals), and patient privacy (critical for developing the long-term trust of individuals in the technology). Third, new challenges specific to quantum computing have appeared. Examples are data security, replicability, and skill development, which will now be discussed in turn.
Some quantum algorithms, specifically Shor's and Grover's algorithm, are able to solve the mathematically hard problems at the heart of current cryptography significantly faster than classical methods. All data that are not encrypted with quantum-safe protocols are thus
Figure 3: Examples of technical and ethical challenges that must be addressed for quantum computing to become transformative in health and medicine.
already at risk due to the possibility of "harvest now, decrypt later" attacks[77]; given the sensitivity and long security time value of medical data, this problem is exacerbated. As a result, cross-industry quantum-safe standards are already being developed[78] and will soon be implemented[79]. Furthermore, replicability, required in order to achieve clinical approvals and individual acceptance, is a challenge for quantum computers. Quantum computers, by their very nature, are designed to go beyond traditional means and address classically intractable problems; for many problems, however, new (quantum) solutions cannot be efficiently verified. Replicability is further complicated by the probabilistic nature of quantum computing, the multifarious architectures, the presence of noise, and the (still) limited access to quantum hardware. Hence, methodologies and frameworks to secure regulatory approvals and general support will need to be developed, as has been done for classical Al/ML[80]. Finally, there is fierce competition for quantum talent, particularly practitioners who combine quantum skills with medical expertise. As a result, talent development needs to be extended, including the introduction of new roles such as "quantum translators"[81].
The development of medicine-focused quantum computing collaborations and consortia is critical with regard to addressing many of these challenges. Such ecosystems are beginning to emerge[82, 83, 84, 85, 86] to help practitioners tackle problems with a quantum state of mind. All of this will take time and effort, but the significant rewards along the road toward quantum-enhanced health and medicine make it a highly worthwhile journey to start sooner rather than later.
## Methods
The literature search was conducted primarily through the Google Scholar and PubMed platforms. The logical operators OR and AND were combined with search terms such as the following: Al, algorithm, application, artificial intelligence, biology, chemistry, clinical, clinical research, diagnosis, diagnostics, drug, genomics, health, intervention, machine learning, medicine, ML, nature, optimization, QML, quantum, quantum artificial intelligence, quantum computing, quantum machine learning, search, simulation, treatment.
Studies were only included if the work explored quantum computing algorithms for applications within, or closely related to, health and medicine. Furthermore, the focus of this review is on the quantum circuit model and gate-based quantum computers. Gaussian boson sampling is briefly touched on, but other non-universal approaches, such as quantum annealing, were excluded.
## Acknowledgements
The author would like to thank Travis L. Scholten for helpful discussions. |
2302.08748 | Utilization of domain knowledge to improve POMDP belief estimation | The partially observable Markov decision process (POMDP) framework is a
common approach for decision making under uncertainty. Recently, multiple
studies have shown that by integrating relevant domain knowledge into POMDP
belief estimation, we can improve the learned policy's performance. In this
study, we propose a novel method for integrating the domain knowledge into
probabilistic belief update in POMDP framework using Jeffrey's rule and
normalization. We show that the domain knowledge can be utilized to reduce the
data requirement and improve performance for POMDP policy learning with RL. | Tung Nguyen, Johane Takeuchi | 2023-02-17T08:16:52Z | http://arxiv.org/abs/2302.08748v1 | # Utilization of domain knowledge to improve POMDP belief estimation
###### Abstract
The partially observable Markov decision process (POMDP) framework is a common approach for decision making under uncertainty. Recently, multiple studies have shown that by integrating relevant domain knowledge into POMDP belief estimation, we can improve the learned policy's performance. In this study, we propose a novel method for integrating the domain knowledge into probabilistic belief update in POMDP framework using Jeffrey's rule and normalization. We show that the domain knowledge can be utilized to reduce the data requirement and improve performance for POMDP policy learning with RL.
## I Introduction
Partially observable Markov decision process (POMDP) is a probabilistic sequential decision-making framework that is commonly used for robot planning [1, 2, 3, 4]. After formulating a decision making problem as a POMDP, we can use a reinforcement learning (RL) algorithm to learn a policy that solves this problem. POMDP has many successful applications when applied to various real-world tasks such as navigation or medical assistant robot [5, 6].
Many studies have shown that using information about the domain can improve the performance of the robot's policy and achieve a higher task completion rate [7, 8, 9]. A big disadvantage of existing works is that the domain knowledge they use either consists of deterministic rules, represented by Answer Set Programming (ASP), or needs to be manually crafted and is only applicable to a very specific type of domain. This drawback puts a strict limitation on the applicability of the previous studies.
In this work, we propose a novel method that utilizes additional domain information when estimating the POMDP belief. Our method works with a generic representation of domain information that can be applied to a large number of decision making tasks. Our main contributions in this paper are as follows:
* The proposed method employs Jeffrey's rule [10] and normalization, which combines the advantages of previous studies.
* We demonstrate that, when using the proposed method, policy learning requires fewer training episodes to converge than previous works in a simulated object fetching task. Furthermore, the policy learned with our proposed method achieves better performance than the policies learned by the previous methods.
## II Related works
Multiple studies have attempted to integrate additional knowledge about the domain into POMDP belief estimation in order to improve the belief estimation step of a POMDP.
Zhang et al. [7] is one of the first studies in this line of research. In this work, the authors used domain knowledge represented as ASP rules to revise the POMDP belief state in an object localization task. The knowledge is used to determine a _bias belief state_ using Fechner's law. This bias belief is combined with the standard belief distribution using linear and logarithmic normalization (r-norm). [7] showed that the proposed method helps the robot locate the target object more accurately. However, the method requires the knowledge to be deterministic. In addition, Fechner's law is only applicable to a few specific domains.
[11] proposed a method that uses _probabilistic logic (P-log)_, which is an extension of ASP that allows logical reasoning with probabilistic rules. Similar to ASP in the method from [7], P-log is used to determine a _prior belief state_ from the probabilistic rules. [11] uses the prior belief as the initial belief state at the beginning of the planning process. The method proposed in [11] still requires the domain information to be manually crafted and carefully designed beforehand, which is time-consuming and not always possible.
Taking a different direction, Chitnis et al. [9] utilized Jeffrey's rule to revise the distributions in the belief state with "rules" created from domain knowledge. In addition, the author of this work proposed to use _factored belief_ to reduce the computational complexity of belief state estimation. Experiment results showed that the proposed method using Jeffrey's rule and factored belief helps improve the task completion rate.
## III Problem setting
In this section, we describe our problem setting under a formal POMDP framework and the representation of the domain knowledge. In this work, we use a factorized belief representation for the belief state, which is similar to [12, 13].
### _POMDP formulation_
Our POMDP formulation is defined as a tuple \((S,A,O,B,T,Z,R)\), whose components are defined as follows:
* \(S\) - the state space. In our work, we consider domains where a state consists of multiple attributes, \(s=(f_{1},f_{2},...f_{N})\ \forall s\in S\). Each attribute \(f_{i}\) is a discrete random variable that takes values from
\(\{F_{i1},F_{i2},...F_{iM_{i}}\}\). Here, we denote the size of each attribute's value space \(\mathcal{F}_{i}\) as \(|\mathcal{F}_{i}|=M_{i}\).
* \(A\) - the action space, which is the set of all available actions that can be performed in the domain.
* \(O\) - the observation spaces for each of the \(N\) attributes in a state. Each \(O_{i}\) is the set of all observations for attribute \(f_{i}\). In our problem setting, the observation space is identical to the state space; thus, \(O_{i}\equiv\mathcal{F}_{i}\) and \(O\equiv\mathcal{S}\).
* \(B\) - the belief space. In a standard setting, the belief \(b\) is a distribution over the state space \(S\), which means \(b=(P(s_{1}),P(s_{2}),...,P(s_{i}),...),s_{i}\in S\). However, the size of \(S\) is \(\prod_{i=1}^{N}M_{i}\), which can be extremely large. Therefore, we employ a factorization strategy, in which the belief \(b\) consists of distributions over the attributes. In other words, \(b=(b_{1},b_{2},...b_{N})\), with \(b_{i}=(P(f_{i}=F_{i1}),P(f_{i}=F_{i2}),...P(f_{i}=F_{iM_{i}})),\forall i\in\{1,2..N\}\). Each \(b_{i}\) can be seen as a distribution for the random variable \(f_{i}\), and the factorized belief uses the marginalized distribution of each attribute instead of the joint distribution.
* \(T\) - the transition function, which defines the probability of "moving" to a new state from the current state. Denote the current time step as \(t\) and the current state as \(s^{t}=(f^{t}_{1},f^{t}_{2},...f^{t}_{N})\). The transition function \(T\) is defined as \(T(f^{t}_{i},a^{t-1},f^{t-1}_{i})=P(f^{t}_{i}|f^{t-1}_{i},a^{t-1})\), where \(a^{t-1}\) is the action chosen in the previous time step.
* \(Z\) - the observation function, which defines the probability of receiving an observation. This probability is defined as \(Z(o^{t}_{i},a^{t-1},f^{t}_{i})=P(o^{t}_{i}|f^{t}_{i},a^{t-1})\).
* \(R\) - the reward function, which defines the reward received when performing an action \(a\) given state \(s\), \(R(s,a)=\sum_{r\in\mathcal{R}}rP(r|s,a)\). The solution of the decision making problem is a policy that maximizes the total reward received in an episode.
Let us denote the observation at time step \(t-1\) as \(o^{t-1}=(o^{t-1}_{1},o^{t-1}_{2},...o^{t-1}_{N})\) and the current state as \(s^{t-1}=(f^{t-1}_{1},f^{t-1}_{2},...f^{t-1}_{N})\). The accumulation of observations and actions from the beginning of an episode to the current time step \(t-1\) is called a _history_, denoted by \(h^{t-1}=(o^{1},a^{1},o^{2},a^{2},...o^{t-1},a^{t-1})\). The belief consists of distributions conditioned on the history; thus, we have \(b^{t}_{i}=(P(f^{t}_{i}=F_{i1}|h^{t-1}),P(f^{t}_{i}=F_{i2}|h^{t-1}),...P(f^{t}_ {i}=F_{iM_{i}}|h^{t-1})),\forall i\in\{1,2..N\}\). At time step \(t-1\), we perform an action \(a^{t-1}\), move to the next time step \(t\), and observe a new observation \(o^{t}\). Next, we update our belief state \(b^{t}\) using the following belief estimation formula,
\[b^{t}_{i}(f^{t}_{i})=\frac{Z(o^{t}_{i},a^{t-1},f^{t}_{i})\sum_{f^{t-1}_{i}\in\mathcal{F}_{i}}T(f^{t}_{i},a^{t-1},f^{t-1}_{i})\,b^{t-1}_{i}(f^{t-1}_{i})}{X} \tag{1}\]
where \(X\) is the normalization factor that ensures the probabilities sum to 1, and \(b^{t}_{i}\) is the belief for attribute \(i\).
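For concreteness, a minimal NumPy sketch of this per-attribute update might look as follows; the transition and observation models are assumed to be given as matrices, and all names and numbers are illustrative rather than taken from the paper.

```python
import numpy as np

def update_attribute_belief(b_prev, T, Z, obs):
    """Factored belief update (Eq. 1) for a single attribute f_i.

    b_prev : (M,) belief over the attribute values at time t-1
    T      : (M, M) transition matrix, T[new, old] = P(f_i^t = new | f_i^{t-1} = old, a^{t-1})
    Z      : (M, M) observation matrix, Z[o, f]   = P(o_i^t = o  | f_i^t = f, a^{t-1})
    obs    : index of the received observation o_i^t
    """
    predicted = T @ b_prev                    # sum over previous attribute values
    unnormalized = Z[obs, :] * predicted      # weight by the observation likelihood
    return unnormalized / unnormalized.sum()  # divide by the normalization factor X

# Toy example: a 3-valued attribute with an uninformative transition and a
# noisy observation of value 2 (correct with probability 0.8).
b = np.array([0.3, 0.3, 0.4])
T = np.full((3, 3), 1.0 / 3.0)
Z = 0.1 * np.ones((3, 3)) + 0.7 * np.eye(3)
print(update_attribute_belief(b, T, Z, obs=2))
```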
### _Domain knowledge_
Domain knowledge or domain information contains knowledge about the domain that we can use to revise the belief state. As mentioned above, a big disadvantage of previous works is that they all require a very specific type of domain knowledge which does not generalize well to different domains. In our work, the domain knowledge is the set of conditional probabilities \(P(f_{i}|f_{j})\), which represent the relation between two attributes \(f_{i},f_{j}\). This type of representation is domain-agnostic and applicable to any task whose state consists of discrete attributes. Finally, let us denote the set of domain knowledge as \(\mathcal{P}\).
### _Jeffrey's rule of conditioning_
Jeffrey's rule [10] of conditioning provides a way to calculate probabilities given new evidence. Let us assume that we have a partition \(\{E_{1},E_{2},...,E_{n}\}\) of an event \(E\), with all the elements in this partition being mutually exclusive and exhaustive. Assume that the new probabilities \(\{P^{*}(E_{1}),P^{*}(E_{2}),...,P^{*}(E_{n})\}\) are given as evidence; for any event \(A\), the new probability of \(A\) can be calculated by,
\[P^{*}(A)=\sum^{n}_{i=1}P(A|E_{i})P^{*}(E_{i}) \tag{2}\]
Jeffrey's rule is equivalent to the judgment that the "J-condition"
\[P^{*}(A|E_{i})=P(A|E_{i}) \tag{3}\]
holds for all \(A\) and \(E_{i}\). The following example demonstrates a way of applying Jeffrey's rule. Consider three events \(A\), \(B\), and \(C\) with \(P(A)=0.5\), \(P(B)=0.2\), and \(P(C)=0.3\). Let us assume that we have new evidence that \(P^{*}(A)=0.4\). Thus, we have \(P^{*}(B)+P^{*}(C)=1-P^{*}(A)=0.6\). Since the J-condition keeps \(P(B|\neg A)\) and \(P(C|\neg A)\) unchanged, \(P^{*}(B)=(0.2/0.5)\times 0.6=0.24\) and \(P^{*}(C)=(0.3/0.5)\times 0.6=0.36\).
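The arithmetic of this example can be checked in a few lines; the sketch below simply encodes the J-condition, under which \(B\) and \(C\) share the remaining probability mass in their original proportions.

```python
# Jeffrey's rule (Eq. 2) applied to the three-event example above.
P = {"A": 0.5, "B": 0.2, "C": 0.3}   # prior probabilities
P_star_A = 0.4                        # new evidence on A

# The J-condition keeps P(B | not A) and P(C | not A) fixed, so B and C split
# the remaining mass 1 - P*(A) in their original proportions.
rest = 1.0 - P_star_A
P_star = {
    "A": P_star_A,
    "B": P["B"] / (P["B"] + P["C"]) * rest,
    "C": P["C"] / (P["B"] + P["C"]) * rest,
}
print(P_star)  # {'A': 0.4, 'B': 0.24, 'C': 0.36}
```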
## IV Method
In this section, we describe our belief update method that uses domain knowledge for belief estimation. In principle, the proposed method utilizes normalization and Jeffrey's rule to revise the belief state.
### _Belief normalization with bias_
Our normalization method follows the same principle as in [7], which calculates a bias belief from the domain information and performs normalization on the belief using the bias. However, we calculate the bias belief with the chain rule, given the conditional probabilities that form our domain knowledge.
Let us consider two attributes \(x\) and \(y\) with support \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. At time step \(t\), we calculate the components \(b^{t}_{x},b^{t}_{y}\) of the belief state using the standard formula in Equation 1. Denote \(P^{yx}\) as the knowledge matrix that contains information of \(P(y|x)\), which satisfies,
\[P^{yx}[i][j]=P(y=j|x=i) \tag{4}\]
Similarly, we define a knowledge matrix \(P^{xy}\) for \(P(x|y)\). Let us recall that \(b_{x}\) and \(b_{y}\) can be viewed as column vectors. From the chain rule of probabilities, we have,
\[b^{T}_{x} =P^{xy}\times b^{T}_{y} \tag{5}\] \[=P^{xy}\times(P^{yx}\times b^{T}_{x})\] \[=(P^{xy}\times P^{yx})\times b^{T}_{x}\]
With \(P^{xy}\) and \(P^{yx}\) given, we can solve for \(b_{x}\) that satisfies Equation 5 and derive \(b_{y}\) from it. Note that \(b_{x}\) and \(b_{y}\) can be seen as beliefs when no observation has been made; thus we can use them as the initial belief at the beginning of each episode [11]. After obtaining the bias belief \(b_{x}^{*}\), we integrate it into the standard belief using the following formula:
\[\hat{b}_{x}^{t}=\left((1-\beta)\times(b_{x}^{t})^{r}+\beta\times(b_{x}^{*})^{r }\right)^{\frac{1}{r}} \tag{6}\]
where \(r\in\mathbb{R}\) and \(\beta\in(0,1)\) are hyperparameters. Intuitively, this normalization method provides a way to "regularize" the belief to avoid being over-confident in our estimation of the belief state, which happens in domains with noisy observations.
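A minimal sketch of this step is given below: the bias belief is obtained as a fixed point of Equation 5 by simple power iteration, and Equation 6 is then applied. The knowledge matrices are hypothetical toy values, a transpose may be needed depending on the indexing convention, and the final renormalization is our own assumption to keep the result a proper distribution.

```python
import numpy as np

def bias_belief(P_xy, P_yx, iters=200):
    """Solve Eq. 5 by fixed-point (power) iteration on M = P_xy @ P_yx."""
    M = P_xy @ P_yx
    b = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(iters):
        b = M @ b
        b = b / b.sum()
    return b

def combine(b_std, b_bias, beta=0.5, r=1.0):
    """Normalization with bias (Eq. 6), renormalized so the belief sums to one."""
    mixed = ((1.0 - beta) * b_std**r + beta * b_bias**r) ** (1.0 / r)
    return mixed / mixed.sum()

# Hypothetical 2x2 knowledge matrices P(x|y) and P(y|x) (columns sum to one).
P_xy = np.array([[0.9, 0.2], [0.1, 0.8]])
P_yx = np.array([[0.7, 0.3], [0.3, 0.7]])
b_star = bias_belief(P_xy, P_yx)
print(combine(np.array([0.5, 0.5]), b_star))
```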
### _Belief revision with Jeffrey's rule_
Chitnis et al. [9] proposed using Jeffrey's rule to update the belief state with domain knowledge. Our work also utilizes Jeffrey's rule, but without the assumption of independence between the attributes. Let us consider \(x\) and \(y\) as two attributes of the state in our domain, with the domain knowledge \(P(x|y)\) and \(P(y|x)\). Our belief revision algorithm is as follows:
```
1: procedure JeffreyRevision(\(b_{x}^{*},b_{x}^{t}\))
2:   \(i\leftarrow argmax(|b_{x}^{*}-b_{x}^{t}|)\)
3:   if \(max(|b_{x}^{*}-b_{x}^{t}|)>threshold\) then
4:     \(\hat{b}_{x}^{t}[i]=((1-\beta)\times(b_{x}^{t}[i])^{r}+\beta\times(b_{x}^{*}[i])^{r})^{\frac{1}{r}}\)
5:     re-scale the belief \(b_{x}^{t}\) with the new \(\hat{b}_{x}^{t}[i]\), obtaining \(\hat{b}_{x}^{t}\)
6:   else
7:     \(\hat{b}_{x}^{t}=b_{x}^{t}\)
8:   end if
9:   return \(\hat{b}_{x}^{t}\)
10: end procedure
```
**Algorithm 1** Belief revision using Jeffrey's rule
Note that \(threshold\) is a hyperparameter that depends on the domain.
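A direct NumPy transcription of Algorithm 1 might look as follows; the default values of `threshold`, `beta` and `r` are placeholders, and the re-scaling of the remaining entries follows the Jeffrey-style proportional redistribution described above.

```python
import numpy as np

def jeffrey_revision(b_bias, b_t, threshold=0.2, beta=0.5, r=1.0):
    """Belief revision with Jeffrey's rule (Algorithm 1) for one attribute."""
    diff = np.abs(b_bias - b_t)
    i = int(np.argmax(diff))
    if diff[i] <= threshold:
        return b_t.copy()
    b_new = b_t.copy()
    # Revise only the most divergent entry with the bias (line 4).
    b_new[i] = ((1.0 - beta) * b_t[i] ** r + beta * b_bias[i] ** r) ** (1.0 / r)
    # Re-scale the remaining entries so the belief sums to one (line 5),
    # keeping their relative proportions unchanged.
    rest = np.delete(np.arange(len(b_t)), i)
    if b_t[rest].sum() > 0:
        b_new[rest] = b_t[rest] * (1.0 - b_new[i]) / b_t[rest].sum()
    return b_new

print(jeffrey_revision(np.array([0.7, 0.2, 0.1]), np.array([0.2, 0.4, 0.4])))
```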
### _Belief revision with Jeffrey's rule and normalization_
Our proposed method of integrating domain knowledge into the belief state can be viewed as a combination of Jeffrey's rule and normalization. We apply this algorithm at each turn of the interaction process. The full interaction process is described in Algorithm 2.
```
1: procedure InteractionProcess(\(\mathcal{P}\)) \(\triangleright\) \(\mathcal{P}\) is the set of domain knowledge
2:   calculate bias \(b_{x}^{*}\) for all attributes \(x\) using Equation 5
3:   \(t\gets 0\)
4:   repeat
5:     for all attributes \(x\) in state \(s\) do
6:       calculate belief \(b_{x}^{t}\) using Equation 1
7:       \(\hat{b}_{x}^{t}\leftarrow\) JeffreyRevision(\(b_{x}^{*},b_{x}^{t}\))
8:       calculate the new belief \(\hat{b}_{x}^{t}\) using Equation 6
9:     end for
10:    select a new action \(a^{t}\) using policy \(\pi\)
11:    \(t\gets t+1\)
12:  until \(s\) is terminal
13: end procedure
```
**Algorithm 2** Interaction process with belief revision
## V Evaluation
This section describes the details of the experiments used to evaluate the proposed method of belief state estimation with domain information. We conduct the experiments in a simulated object fetching task.
**Domain and task description.** The task is object fetching in a grid-like domain with a size of \(13\times 13\), as shown in Figure 1. The objective of the robot in this task is to navigate through the domain to find the object (blue circle) and bring it to the target location (red). The domain is divided into four areas: room1 (pink), corridor (blue), room2 (yellow), and the hall (green). The state is represented by \(s=(x,y,l,d,h)\), where \(x\) and \(y\) are attributes for the row and column position of the robot in the grid world, \(x,y\in[0,12]\). \(l\) refers to the area where the robot is located at the current time step; \(l\) takes values from the set \(\{0-room1,1-corridor,2-room2,3-hall\}\). \(d\) refers to the direction the robot is facing, \(d\in\{North,East,South,West\}\). \(h\) is the attribute that represents whether the robot is holding the object or not; it takes the value \(True\) or \(False\). In this domain, we set \(d\) and \(h\) to be fully observable.
There are four actions: \(TurnLeft,TurnRight,Move,\) and \(Grab\). \(TurnLeft\) (\(TurnRight\)) means the robot turns to its left (right) hand side while staying in the same position. Taking the \(Move\) action moves the robot forward to the grid in front of it. When the robot performs \(Grab\), it tries to grab the object in the front grid. In this domain, the robot observes \(o_{x},o_{y}\), and \(o_{l}\) from the three attributes \(x,y,l\) that are partially observable. Let us denote the observation of \(x,y,l\) as \(o_{x},o_{y},o_{l}\), respectively. The observation \(o_{x}\) is within the range \([x-2,x+2]\). The observation function is defined by,
\[O(o_{x},a,x)=\begin{cases}p,&\text{if }o_{x}=x\\ (1-p)/(K-1),&\text{otherwise}\end{cases} \tag{7}\]
with \(p=0.3\), and \(K\) is the number of elements in the
Fig. 1: The object fetching task in grid-like domain with different areas.
set \([x-2,x+2]\cap[0,12]\). For the observation of area \(l\), \(K\) is the number of elements in \([l-2,l+2]\cap[0,3]\). The reward function for the object fetching task is defined as follows:
\[R=\begin{cases}-10,&\text{if hit obstacles/boundaries}\\ 100&\text{if reach target while holding object}\\ 20&\text{if successfully find and grab the object}\\ -1,&\text{otherwise}\end{cases} \tag{8}\]
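The observation model of Equation 7 and the reward function of Equation 8 can be encoded directly; the sketch below restricts the observation support to the window \([x-2,x+2]\) clipped to the grid, as described in the text, and treats the reward cases as mutually exclusive.

```python
def obs_prob(o_x, x, p=0.3, lo=0, hi=12):
    """Observation probability of Eq. 7 for the row attribute x."""
    support = [v for v in range(x - 2, x + 3) if lo <= v <= hi]
    K = len(support)
    if o_x not in support:
        return 0.0
    return p if o_x == x else (1.0 - p) / (K - 1)

def reward(hit_obstacle=False, reached_target_with_object=False, grabbed_object=False):
    """Reward function of Eq. 8 (cases assumed mutually exclusive per step)."""
    if hit_obstacle:
        return -10
    if reached_target_with_object:
        return 100
    if grabbed_object:
        return 20
    return -1

print(obs_prob(o_x=1, x=0), reward(grabbed_object=True))  # 0.35, 20
```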
We use the same type of domain knowledge as in the navigation task, which represents the relationship between two attributes \(x\) and \(y\). In addition, the knowledge of \(P(x|l),P(l|x),P(y|l)\), and \(P(l|y)\) are also used.
**Experiment setup.** In this experiment, we use Q-learning, a popular reinforcement learning algorithm, to train the policy for robot planning [14]. In order to apply Q-learning, the probabilities in the belief state are quantized to the nearest 0.1 value. The discount rate \(\lambda\) is set to 1, and the learning rate \(\alpha\) is 0.1. We set \(\alpha\) to be multiplied by a factor of 0.9 every 500 episodes of training. \(\beta\) in Equation 6 is set to 0.5, and \(r\) is set to 1. The maximum number of steps the robot can take during one episode is 200. Due to differences in problem setting and formulation, it is not possible to directly use the methods from previous studies. However, we implemented baselines based on these studies for comparison with our proposed method. The following models are used in this experiment:
* _normal:_ the method that does not use domain knowledge and only use the standard formula for belief estimation.
* _bias_combine:_ this method is from [7], which uses Equation 6 for belief revision.
* _bias_init:_ this method is inspired by the work in [8], which uses the bias belief for initialization at the beginning of each episode.
* _jeffrey:_ this method is based on the idea from [9], which uses Jeffrey's rule.
* _proposed:_ the proposed method
**Experiment results.** From Figure 2, we can see that using domain knowledge for belief revision significantly improves the convergence of the training process, and the model converges the fastest when using our proposed method.
Table I shows the performance of the policies trained by different methods. Similar to the results above, we can see that the robot learns to solve the task with a higher success rate when using the domain knowledge for belief revision. In addition, the policy trained by the proposed method achieves the best performance and significantly outperforms the previous methods in both metrics.
## VI Conclusions
In this study, we propose a novel method that uses additional domain knowledge in the POMDP belief estimation process. Our proposed method uses normalization and Jeffrey's rule of conditioning to revise the belief state. We demonstrated that the proposed method helps to improve the policy learning of the robot in simulation and achieves significantly better results in comparison to previous methods.
In the future, we would like to conduct experiments in a physical environment to further confirm the effectiveness of the proposed method.
Fig. 2: Experiment results in object fetching. |
2307.04974 | Determination of matter radius and neutron-skin thickness of
$^{60,62,64}$Ni from reaction cross section of proton scattering on
$^{60,62,64}$Ni targets | In our previous work, we determined matter radii $r_{\rm m}({\rm exp})$ and
neutron-skin thickness $r_{\rm skin}({\rm exp})$ from reaction cross sections
$\sigma_{\rm R}({\rm exp})$ of proton scattering on $^{208}$Pb, $^{58}$Ni,
$^{40,48}$Ca, $^{12}$C targets, using the chiral (Kyushu) $g$-matrix folding
model with the densities calculated with Gogny-D1S-HFB (D1S-GHFB) with angular
momentum projection (AMP). The resultant $r_{\rm skin}({\rm exp})$ agree with
the PREX2 and CREX values. As for $^{58}$Ni, our value is consistent with one
determined from the differential cross section for $^{58}$Ni+$^{4}$He
scattering. As for p+$^{60,62,64}$Ni scattering, $\sigma_{\rm R}({\rm exp})$ are
available as a function of incident energies $E_{\rm in}$, where $E_{\rm
in}=22.8 \sim 65.5$~MeV for $^{60}$Ni, $E_{\rm in}=40,60.8$~MeV for $^{62}$Ni,
$E_{\rm in}=40, 60.8$~MeV for $^{64}$Ni. Our aim is to determine matter radii
$r_{\rm m}({\rm exp})$ for $^{60,62,64}$Ni from the $\sigma_{\rm R}({\rm
exp})$. Our method is the Kyushu $g$-matrix folding model with the densities
scaled from D1S-GHFB+AMP densities. Our skin values are $r_{\rm skin}({\rm
exp})=0.076 \pm 0.019,~0.106 \pm 0.192,~0.162 \pm 0.176$~fm, and $r_{\rm
m}({\rm exp})=3.759 \pm 0.011,~3.811 \pm 0.107,~3.864 \pm 0.101$~fm for
$^{60,62,64}$Ni, respectively. | Shingo Tagami, Tomotsugu Wakasa, Masanobu Yahiro | 2023-07-11T02:29:24Z | http://arxiv.org/abs/2307.04974v1 | # Determination of matter radius and neutron-skin thickness of \({}^{60,62,64}\)Ni
###### Abstract
**Background:** In our previous work, we determined matter radii \(r_{\rm m}({\rm exp})\) and neutron-skin thickness \(r_{\rm skin}({\rm exp})\) from reaction cross sections \(\sigma_{\rm R}({\rm exp})\) of proton scattering on \({}^{208}\)Pb, \({}^{58}\)Ni, \({}^{40,48}\)Ca, \({}^{12}\)C targets, using the chiral (Kyushu) \(g\)-matrix folding model with the densities calculated with Gogny-D1S-HFB (D1S-GHFB) with angular momentum projection (AMP). The resultant \(r_{\rm skin}({\rm exp})\) agree with the PREX2 and CREX values. As for \({}^{58}\)Ni, our value is consistent with one determined from the differential cross section for \({}^{58}\)Ni+\({}^{4}\)He scattering. As for p+\({}^{60,62,64}\)Ni scattering, \(\sigma_{\rm R}({\rm exp})\) are available as a function of incident energies \(E_{\rm in}\), where \(E_{\rm in}=22.8\sim 65.5\) MeV for \({}^{60}\)Ni, \(E_{\rm in}=40,60.8\) MeV for \({}^{62}\)Ni, \(E_{\rm in}=40,60.8\) MeV for \({}^{64}\)Ni.
**Purpose:** Our aim is to determine matter radii \(r_{\rm m}({\rm exp})\) for \({}^{60,62,64}\)Ni from the \(\sigma_{\rm R}({\rm exp})\).
**Method:** Our method is the Kyushu \(g\)-matrix folding model with the densities scaled from D1S-GHFB+AMP densities.
**Results:** Our skin values are \(r_{\rm skin}({\rm exp})=0.076\pm 0.019,\ 0.106\pm 0.192,\ 0.162\pm 0.176\) fm, and \(r_{\rm m}({\rm exp})=3.759\pm 0.011,\ 3.811\pm 0.107,\ 3.864\pm 0.101\) fm for \({}^{60,62,64}\)Ni, respectively.
## I Introduction and conclusion
### Background
_Background:_ A novel method for measuring nuclear reactions in inverse kinematics with stored ion beams was successfully used to extract the matter radius \(r_{\rm m}({\rm exp})\) of \({}^{58}\)Ni [1]. The experiment was performed at the experimental heavy-ion storage ring at the GSI facility. Their result determined from the differential cross section for \({}^{58}\)Ni+\({}^{4}\)He scattering is \(r_{m}({\rm GSI})=3.70(7)\) fm.
Reaction cross section \(\sigma_{\rm R}\) is a standard observable to determine \(r_{\rm m}({\rm exp})\) and neutron-skin thickness \(r_{\rm skin}({\rm exp})\); note that \(r_{\rm skin}\) can be evaluated from the \(r_{\rm m}\) by using the \(r_{\rm p}({\rm exp})\) calculated with the isotope shift method based on the electron scattering [2]. In fact, we determined \(r_{\rm m}({\rm exp})\) and \(r_{\rm skin}({\rm exp})\) from \(\sigma_{\rm R}({\rm exp})\) of proton scattering on \({}^{208}\)Pb, \({}^{58}\)Ni, \({}^{40,48}\)Ca, \({}^{12}\)C targets, using the chiral (Kyushu) \(g\)-matrix folding model with the proton and neutron densities scaled with D1S-GHFB+AMP densities [3], where D1S-GHFB+AMP is the abbreviation of Gogny-D1S-HFB with angular momentum projection (AMP). Our skin values \(r_{\rm skin}({\rm exp})\) agree with the PREX2 and CREX values. As for \({}^{58}\)Ni, our matter radius \(r_{m}({\rm exp})=3.711\pm 0.010\) fm is consistent with \(r_{m}({\rm GSI})=3.70(7)\) fm. As for p+\({}^{60,62,64}\)Ni scattering, \(\sigma_{\rm R}({\rm exp})\) are available as a function of incident energy \(E_{\rm in}\)[4; 5; 6]; here \(E_{\rm in}=22.8\sim 65.5\) MeV for \({}^{60}\)Ni, \(E_{\rm in}=40,60.8\) MeV for \({}^{62}\)Ni, \(E_{\rm in}=40,60.8\) MeV for \({}^{64}\)Ni. Now we consider p+\({}^{60,62,64}\)Ni scattering.
## II Folding model
Kohno calculated the \(g\) matrix for the symmetric nuclear matter, using the Brueckner-Hartree-Fock method with chiral N\({}^{3}\)LO 2NFs and NNLO 3NFs [7]. He set \(c_{D}=-2.5\) and \(c_{E}=0.25\) so that the energy per nucleon can become minimum at \(\rho=\rho_{0}\). Toyokawa _et al._ localized the non-local chiral \(g\) matrix [8], using the localization procedure proposed
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Nucleus & \(r_{\rm p}({\rm exp})\) & \(r_{\rm m}({\rm exp})\) & \(r_{\rm n}({\rm exp})\) & \(r_{\rm skin}({\rm exp})\) & Data \\ \hline \({}^{58}\)Ni & \(3.6849\) & \(3.711\pm 0.010\) & \(3.740\pm 0.019\) & \(0.055\pm 0.019\) & [3] \\ \({}^{60}\)Ni & \(3.723\) & \(3.759\pm 0.011\) & \(3.799\pm 0.019\) & \(0.076\pm 0.019\) & [4; 5] \\ \({}^{62}\)Ni & \(3.753\) & \(3.811\pm 0.107\) & \(3.859\pm 0.192\) & \(0.106\pm 0.192\) & [6] \\ \({}^{64}\)Ni & \(3.772\) & \(3.864\pm 0.101\) & \(3.933\pm 0.176\) & \(0.162\pm 0.176\) & [6] \\ \hline \end{tabular}
\end{table}
Table 1: Values of \(r_{\rm m}\), \(r_{\rm n}\), \(r_{\rm skin}\), \(r_{\rm p}\). The \(r_{\rm p}({\rm exp})\) are determined from the charge radii [2]. ’Data’ shows citations on \(\sigma_{\rm R}\). The radii are shown in units of fm.
by the Melbourne group [9; 10]. The resulting local \(g\) matrix is referred to as "local Kyushu \(g\)-matrix".
We use the Kyushu \(g\)-matrix folding model [8] with the densities calculated with D1S-GHFB+AMP [11]. The Kyushu \(g\)-matrix itself [8] is constructed from the chiral nucleon-nucleon (NN) interaction with the cutoff 550 MeV.
In this paper, we consider proton-nucleus scattering. The potential \(U(\mathbf{R})\) consists of the direct and exchange parts, \(U^{\rm DR}(\mathbf{R})\) and \(U^{\rm EX}(\mathbf{R})\)[12; 13]. The validity of the localization is shown in Refs. [12].
### Scaling procedure of proton and neutron densities
For example, the neutron density \(\rho_{n}(r)\) is scaled from the D1S-GHFB+AMP one. We can obtain the scaled density \(\rho_{\rm scaling}(\mathbf{r})\) from the original density \(\rho(\mathbf{r})\) as
\[\rho_{\rm scaling}(\mathbf{r})=\frac{1}{\alpha^{3}}\rho(\mathbf{r}/\alpha) \tag{1}\]
with a scaling factor
\[\alpha=\sqrt{\frac{\langle\mathbf{r}^{2}\rangle_{\rm scaling}}{\langle\mathbf{r}^{2} \rangle}}. \tag{2}\]
We scale the neutron density so that \(f\times\sigma_{\rm R}({\rm D1S})\) reproduces the data (\(\sigma_{\rm R}(\exp)\)) under the condition that \(r_{\rm p}({\rm scaling})\) agrees with the \(r_{\rm p}(\exp)\)[2] of electron scattering, where \(\sigma_{\rm R}({\rm D1S})\) is the result of D1S-GHFB+AMP for each \(E_{\rm in}\), and \(f\) is the average of \(\sigma_{\rm R}(\exp)/\sigma_{\rm R}({\rm D1S})\) over \(E_{\rm in}\). The matter radius \(r_{\rm m}(E_{\rm in})\) thus obtained depends on \(E_{\rm in}\). We then take the average of \(r_{\rm m}(E_{\rm in})\) over \(E_{\rm in}\). The resulting value \(r_{\rm m}(\exp)\) is shown in Table 1. The corresponding \(r_{\rm n}(\exp)\) and \(r_{\rm skin}(\exp)\) obtained from \(r_{\rm m}(\exp)\) and \(r_{\rm p}(\exp)\) are also shown in Table 1.
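A minimal sketch of the scaling in Eqs. (1)-(2) on a radial grid is shown below; the two-parameter Fermi profile only stands in for the D1S-GHFB+AMP neutron density, and the grid, parameters and target radius are illustrative.

```python
import numpy as np

r = np.linspace(0.0, 15.0, 1501)     # radial grid in fm (uniform spacing)
dr = r[1] - r[0]

def rms_radius(rho):
    """Root-mean-square radius of a spherical density on the grid r."""
    norm = np.sum(4 * np.pi * r**2 * rho) * dr
    return np.sqrt(np.sum(4 * np.pi * r**4 * rho) * dr / norm)

def scale_density(rho, target_rms):
    """Eqs. (1)-(2): rho_s(r) = rho(r/alpha)/alpha^3 with alpha = target_rms / rms."""
    alpha = target_rms / rms_radius(rho)
    return np.interp(r / alpha, r, rho, right=0.0) / alpha**3

# Toy two-parameter Fermi density standing in for the D1S-GHFB+AMP neutron density.
rho0, R0, a = 0.08, 4.2, 0.55
rho = rho0 / (1.0 + np.exp((r - R0) / a))
scaled = scale_density(rho, target_rms=3.80)
print(rms_radius(rho), rms_radius(scaled))   # the second value should be close to 3.80
```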
D1M [14; 15] is an improved version of D1S for binding energies of many nuclei. We can use D1M instead of D1S, leading to the change of \(U(\mathbf{R})\). The results of D1M are the same as our results of Table 1.
This scaling procedure is used for proton scattering on Sn in Ref. [16].
## III Results
Figure 1 shows reaction cross sections \(\sigma_{\rm R}\) as a function of \(E_{\rm in}\). Our result \(\sigma_{\rm R}({\rm D1S})\) overshoots somewhat, but \(f\times\sigma_{\rm R}({\rm D1S})\) is close to the central values of experimental data [4; 5], where \(f=0.97488\). The \(f\times\sigma_{\rm R}({\rm D1S})\) are used in order to determine \(r_{\rm m}(\exp)\) and \(r_{\rm skin}(\exp)\).
The same scaling procedure is also applied to \(p\)+\({}^{62,64}\)Ni scattering, where \(f=0.96213,~{}0.98232\) for \({}^{62,64}\)Ni, respectively.
The data on \(E_{\rm B}/A\) (the total binding energy per nucleon) weakly depend on \(A\) for \({}^{58,60,62,64}\)Ni [17]. This is true for \(r_{\rm m}(\exp)/A^{1/3}\) (the matter radius per nucleon). In fact, the \(A\) dependence of \(r_{\rm m}(\exp)/A^{1/3}\times E_{\rm B}/A\) is even smaller; namely, the average \(\beta\) of \(r_{\rm m}(\exp)/A^{1/3}\times E_{\rm B}/A\) over \(A\) is
\[\beta=8.437385~{}{\rm MeV}\cdot{\rm fm} \tag{3}\]
and the error is \(0.024~{}{\rm MeV}\cdot{\rm fm}\). The relative error is 0.2847%. This indicates that \(r_{\rm m}(\exp)/A^{1/3}\) (the matter radius per nucleon) is inversely proportional to the total binding energy per nucleon \(E_{\rm B}/A\); see Table 2 for \(E_{\rm B}/A\) and \(r_{\rm m}(\exp)/A^{1/3}\).
Using \(\beta=8.437385~{\rm MeV}\cdot{\rm fm}\) and \(E_{\rm B}/A\), we can derive the central values of \(r_{\rm m}(\exp)\): 3.711, 3.759, 3.811, 3.864 fm for \({}^{58,60,62,64}\)Ni, respectively. These values agree with those in Table 1, so one can easily evaluate the central value of the matter radius from \(\beta\) and \(E_{\rm B}/A\), which is convenient.
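This evaluation amounts to one line of arithmetic per isotope, as sketched below; the binding energies per nucleon are approximate values quoted here only for illustration (the paper's inputs from Ref. [17] may differ slightly), so the printed radii reproduce the Table 1 central values only to the accuracy of those inputs.

```python
# r_m / A^(1/3) * (E_B / A) ~ beta  =>  r_m ~ beta * A^(1/3) / (E_B / A)
beta = 8.437385                                        # MeV * fm, Eq. (3)
EBA = {58: 8.732, 60: 8.781, 62: 8.795, 64: 8.777}     # approximate E_B/A in MeV
for A, eba in EBA.items():
    print(A, round(beta * A ** (1.0 / 3.0) / eba, 3))
```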
###### Acknowledgements.
We would like to thank Dr. Toyokawa for his contribution.
Figure 1: \(E_{\rm in}\) dependence of reaction cross sections \(\sigma_{\rm R}\) for \(p\)+\({}^{60}\)Ni scattering. Closed circles denote \(\sigma_{\rm R}({\rm D1S})\) of the D1S-GHFB+AMP densities, whereas open circles correspond to \(f\times\sigma_{\rm R}({\rm D1S})\). The data (crosses) are taken from Refs. [4; 5]. |
2303.07752 | The enhanced YSO population in Serpens | The Serpens Molecular Cloud is one of the most active sites of ongoing star
formation at a distance of about 300 pc, and hence is very well-suited for
studies of young low-mass stars and sub-stellar objects. In this paper, for the
Serpens star forming region, we find potential members of the Young Stellar
Objects population from the Gaia DR3 data and study their kinematics and
distribution. We compile a catalog of 656 YSOs from available catalogs ranging
from X-ray to the infrared. We use this as a reference set and cross-match it
to find 87 Gaia DR3 member stars to produce a control sample with revised
parameters. We queried the DR3 catalog with these parameters and found 1196
stars. We then applied three different density-based machine learning
algorithms (DBSCAN, OPTICS and HDBSCAN) to this sample and found potential
YSOs. The three clustering algorithms identified a common set of 822 YSO
members from Gaia DR3 in this region. We also classified these objects using
2MASS and WISE data to study their distribution and the progress of star
formation in Serpens. | Priya Hasan, Mudasir Raja, Md Saifuddin, S N Hasan | 2023-03-14T09:58:00Z | http://arxiv.org/abs/2303.07752v1 | # The enhanced YSO population in Serpens
###### Abstract
The Serpens Molecular Cloud is one of the most active sites of ongoing star formation at a distance of about 300 pc, and hence is very well-suited for studies of young low-mass stars and sub-stellar objects. In this paper, for the Serpens star forming region, we find potential members of the Young Stellar Objects population from the Gaia DR3 data and study their kinematics and distribution. We compile a catalog of 656 YSOs from available catalogs ranging from X-ray to the infrared. We use this as a reference set and cross-match it to find 87 Gaia DR3 member stars to produce a control sample with revised parameters. We queried the DR3 catalog with these parameters and found 1196 stars. We then applied three different density-based machine learning algorithms (DBSCAN, OPTICS and HDBSCAN) to this sample and found potential YSOs. The three clustering algorithms identified a common set of 822 YSO members from Gaia DR3 in this region. We also classified these objects using 2MASS and WISE data to study their distribution and the progress of star formation in Serpens.
star clusters: embedded -- near-infrared photometry -- colour-magnitude diagrams -- pre-main-sequence stars -- machine learning -- Gaia DR3 -- 2MASS -- WISE
## 1 Introduction
Star forming regions (SFRs) house embedded star clusters and are the birthplaces of stars which provide the missing links in understanding the star formation (SF) process (Ascenso, 2017). As these young clusters are embedded in gas and dust, optical techniques (like multi-color optical photometry or spectroscopy) are inefficient in identification of Young Stellar Objects (YSOs). Infrared (IR) data is well suited for observations of embedded clusters. Complementary data ranging from X-ray to millimeter wavelengths, and spectroscopic follow-ups of the newly discovered population of young stars in star forming regions enrich our understanding of SF in these regions. It is difficult to identify the members of any SFR, especially for nearby regions (within 500 pc), because they occupy large areas of the projected sky and would take a substantial amount of observational time. This paper presents an updated sample of young stellar members of Serpens based on Gaia DR3 data using machine learning clustering techniques (Canovas _et al._, 2019).
Serpens is an interesting star-forming region for which unbiased datasets exist (Harvey _et al._, 2007; Djupvik _et al._, 2006; Enoch _et al._, 2009). It was identified as a site of active star formation by (Strom _et al._, 1974), extends several degrees around the young variable star \(VV\,S\,er\) and forms part of the large local dark cloud complex called the Aquila Rift, which has been extensively mapped in several molecular line surveys (Dame & Thaddeus, 1985; Dame _et al._, 1987, 2001). It is well-suited for studies of very young low-mass stars and sub-stellar objects because of its proximity of 260 pc (Harvey _et al._, 2007) and young age of 1-5 Myr (Eiroa _et al._, 2008).
As part of the NOAO survey program 'Towards a Complete Near-Infrared Spectroscopic and Imaging Survey of Giant Molecular Clouds' (PI: E. A. Lada), the Serpens Molecular Cloud was observed with the Florida Multi-Object Imaging Near-Infrared Grism Observational Spectrometer (FLAMINGOS) at the Kitt Peak National Observatory 2.1 m telescope. In an earlier paper, Hasan (2012) used this data to study the YSO population and made important inferences about the SF processes in Serpens. The paper discussed the distribution of young embedded sources using the Nearest Neighbor Method applied to a carefully selected sample of near-infrared excess (NIRX) stars that trace star formation in the complex and identified six clusters, of which three were not earlier reported in the literature.
A median age of 1-2 Myr and a mean distance of 300 pc for the cluster were determined.
The Spitzer Legacy Survey 'Molecular Cores to Planet Forming Disks' Core to Disks (c2d) (Evans _et al._, 2009) in Serpens shows evidence of sequential star formation from SW to NE in the main Serpens Core. The surface density of young stars in this region is much higher, by a factor of 10-100, than that of the other star-forming regions mapped by c2d (Evans _et al._, 2009). It is an ideal region to build a 'template' for the study of disk evolution up to a few Myr within a well defined region by multi-wavelength observations of young stars and sub-stellar objects.
Gorlova _et al._ (2010) made a spectroscopic study of the Serpens core. The Serpens Main Cluster, known since the mid-70s, is made of two compact protoclusters lying in a 0.6 pc long filamentary structure along NW-SE. The two sub-clusters have similar masses within similar-sized regions (\(\approx 30M_{\odot}\) in 0.025 pc\({}^{2}\)) each and an average age of \(10^{5}\) yr, but differ in their velocity structures and molecular emission. The NW cluster, devoid of bright NIR sources, has outflows powered by deeply embedded Class 0 and I protostars. Duarte-Cabral _et al._ (2011) inferred that star formation was probably triggered by the collision of two filament-like clouds. A large-scale extinction map was presented by Cambresy (1999).
Unsupervised machine learning (ML) clustering techniques are used to find patterns or clusters in unlabeled databases. The problem of cluster recognition can be approached in a variety of ways using these methods, including centroid-based algorithms (like the \(k\)-means algorithm), distribution-based clustering (like Gaussian-mixture models), or density-based algorithms. (For an overview of clustering analysis in astronomy see Feigelson & Babu (2012), Chap. 3.3 and references therein).
The density-based algorithms are particularly useful for locating clusters with arbitrary shapes that can be generically characterised as overdensities in a low density environment. They also have the benefit of not requiring any prior knowledge of the dataset being analysed. In other words, these algorithms do not assume any distribution (such as one or many Gaussians) when associating the data points with a cluster, hence the user does not need to be aware of the number of clusters contained in the dataset. One of the most well-known methods in many fields is density-based spatial clustering of applications with noise (DBSCAN; Ester _et al._ 1996), and it is gaining popularity in astronomy (Joncour _et al._, 2018; Cantat-Gaudin _et al._, 2019). The Ordering Points To Identify the Clustering Structure (OPTICS; Ankerst _et al._ 1999) and hierarchical density-based spatial clustering of applications with noise (HDBSCAN; Campello _et al._ 2013) algorithms are improvements on DBSCAN that are gaining popularity due to their proven ability to detect different types of clusters.
Due to the young age of Serpens, we can assume that its members will have similar velocity distributions and will occupy a small area of the Galaxy. In contrast to the star population in the field, the cloud members should, in the multi-dimensional space described by their spatial coordinates and kinematic properties, appear to be grouped. In the five-dimensional space, which is defined by the three spatial coordinates and the two kinematic parameters proper motion in right ascension \(\mu_{\alpha}^{*}\) and declination \(\mu_{\delta}\), we ran the clustering algorithms. The DBSCAN, OPTICS, and HDBSCAN algorithms utilised in this paper are from Pedregosa _et al._ (2011). By comparing their results, we aim to reduce the bias in selection that is inherent in each algorithm and provide a more reliable sample of YSO candidates for Serpens members.
The paper is planned as follows: Section 1 is the introduction and the motivation for this work. Section 2 of our study provides a description of the data and sample construction we used. The three algorithms are applied to our Gaia sample in Section 3 where we also describe our methodology. In Section 4, we go over the characteristics of this sample and present the Two Micron All Sky Survey (2MASS) and Wide-field Infrared Survey Explorer (WISE) photometry and classification of our sample. Section 5 contains the Summary and Conclusions of our work.
## 2 Data and Sample construction
### Initial sample
Gaia provides high-precision astrometric data (positions: right ascension (\(\alpha\)) and declination (\(\delta\)), parallax (\(\varpi\)), and proper motions in right ascension (\(\mu_{\alpha}\)) and in declination (\(\mu_{\delta}\)) which is of great significance to studies of open clusters (Prusti _et al._, 2016; Gaia Collaboration _et al._, 2022).
We began by compiling a list of YSOs in Serpens shown in Fig. 1 and matching it with the 2MASS Skrutskie _et al._ (2006) catalog.
* The Spitzer Legacy c2D Survey "Molecular Cores to Planet Forming Disks" included a 0.89 deg\({}^{2}\) area of Serpens. The High Reliability Catalog included 377,456 total sources with 286 candidate YSOs (Harvey _et al._, 2007).
* Winston _et al._ (2009) included a sample of 137 YSOs obtained from Chandra X-ray data in the Serpens core region.
* The Florida Multi-Object Imaging Near-Infrared Grism Observational Spectrometer (FLAMINGOS) survey described in Hasan (2012) includes a sample of 345 YSOs.
* Oliveira _et al._ (2009) took 78 optical spectra in Serpens and found 58 stars (75%) were confirmed to be young, mostly K- and M-type stars that belong to the cloud.
* Spezzi _et al._ (2010) present a deep optical/near-infrared imaging survey of the Serpens molecular cloud as complementary optical data to the c2d Legacy survey to study the star/disk formation and evolution in this cloud.
* Herczeg _et al._ (2019) used Gaia DR2 parallaxes and proper motions to statistically measure \(\approx\) 1167 kinematic members of Serpens, to evaluate the star formation history of the complex in a very large area of \(\approx 36\times 34\) degrees. We will compare our results with the above ones.
We combined the above catalogs (Fig. 1) to obtain 656 unique sources, matched them first with 2MASS Skrutskie _et al._ (2006) and then with Gaia DR3. This method is preferable to a sky cross-match by coordinates because it does not require transforming the 2MASS coordinates from the J2000 to the J2015.5 epoch. We found that 250 sources matched with Gaia DR3 sources, but only 87 matched with members from Herczeg _et al._ (2019).
For the 87 matched DR3 stars that are reliable members, we found the average astrometric properties of the control sample listed in Table 1 and shown in Fig. 2. Following Bailer-Jones (2015), we computed the individual distances as \(d=1/\varpi\), since the parallax fractional error of this sample is lower than 10%; the average distance is 436.7 pc.
We used these values to query the DR3 data in a 2\({}^{\circ}\) radius from the Serpens core, which is the most active region, with the following constraints: RUWE \(<\) 1.4, RPlx \(>\) 10, 1 \(<\mu_{\alpha}^{*}<\) 5 mas/yr, \(-\)11.6 \(<\mu_{\delta}<-\)6 mas/yr, and 0 \(<\) Plx \(<\) 4.2 mas. This query returned a sample of 1196 stars, with an average \(R_{V}=-5.08\) km/s for the 66 stars that have radial velocity measurements, where \(R_{V}\) is the radial velocity.
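In practice this selection can be issued as a single ADQL query against the Gaia archive; the sketch below uses the standard gaiadr3.gaia_source column names (RPlx corresponds to parallax_over_error), takes the control-sample mean position from Table 1 as a stand-in for the Serpens core centre, and mirrors the cuts listed above.

```python
from astroquery.gaia import Gaia

query = """
SELECT source_id, ra, dec, parallax, pmra, pmdec, radial_velocity
FROM gaiadr3.gaia_source
WHERE CONTAINS(POINT('ICRS', ra, dec), CIRCLE('ICRS', 277.4, 0.75, 2.0)) = 1
  AND ruwe < 1.4
  AND parallax_over_error > 10
  AND pmra BETWEEN 1 AND 5
  AND pmdec BETWEEN -11.6 AND -6
  AND parallax BETWEEN 0 AND 4.2
"""
sample = Gaia.launch_job_async(query).get_results()
print(len(sample))
```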
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline Status & \(\alpha\) & \(\delta\) & \(\varpi\) & \(\mu_{\alpha}^{*}\) & \(\mu_{\delta}\) \\ & (deg) & (deg) & (mas) & (mas/yr) & (mas/yr) \\ \hline Mean & 277.4 & 0.75 & 2.29 & 2.19 & -8.47 \\ Sigma & 0.16 & 0.4 & 0.26 & 1.04 & 0.68 \\ \hline \end{tabular}
\end{table}
Table 1: Mean and standard deviation (1\(\sigma\)) of the control sample.
Figure 1: Data plotted on the WISE image: Spitzer c2d (Harvey _et al._, 2007) (286), red plus signs; FLAMINGOS (Hasan, 2012) (345), orange squares; X-ray (Winston _et al._, 2009) (138), blue rhombi; R, Z (Spezzi _et al._, 2010) (78), yellow circles. Total: 656 unique sources.
Figure 2: WISE image of the Serpens with RGB colours mapped to 22, 4.6, and 3.4 \(\mu\)m. The control sample members are represented as yellow circles.
## 3 Clustering Algorithms
For our study, we considered the three spatial coordinates
\[X=d\ cos\ \delta\ cos\ \alpha\]
\[Y=d\ cos\ \delta\ sin\ \alpha\]
\[Z=d\ sin\ \delta\]
where \(d\) is the distance computed as the inverse of parallax and the two kinematic parameters proper motions in right ascension \(\mu_{a}^{*}\) and in declination \(\mu_{\delta}\). Given the low fraction of objects with radial velocity measurements in our Gaia sample, we restricted the kinematic analysis to only \(\mu_{a}^{*}\) and \(\mu_{\delta}\).
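The conversion from Gaia astrometry to these Cartesian coordinates is a one-liner per axis; a small helper (with the distance taken as the inverse parallax, as above) might look like the following sketch.

```python
import numpy as np

def cartesian(ra_deg, dec_deg, parallax_mas):
    """Heliocentric Cartesian coordinates (pc) from Gaia astrometry, with d = 1/parallax."""
    d = 1000.0 / np.asarray(parallax_mas)      # parallax in mas -> distance in pc
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return d * np.cos(dec) * np.cos(ra), d * np.cos(dec) * np.sin(ra), d * np.sin(dec)

X, Y, Z = cartesian(277.4, 0.75, 2.29)   # control-sample mean values from Table 1
print(X, Y, Z)
```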
We then applied the three clustering algorithms DBSCAN, OPTICS and HDBSCAN to our 5 parameters described above. Clusters are localised and arbitrarily shaped regions of an N-dimensional space with an excess of points per volume unit. The points that do not satisfy this condition are classified as noise. The two parameters \(\epsilon\) and \(mPts\) are used to describe the density threshold. A sphere of radius \(\epsilon\) is drawn around each point. If a minimum of \(mPts\) points are found in the \(\epsilon\) radius of a point, it is called a core point. Points which lie in the \(\epsilon\) radius of a core point but do not have the minimum \(mPts\) are called border points, and points outside the \(\epsilon\) radius which do not have the minimum \(mPts\) are noise points.
### Dbscan
DBSCAN was first introduced by Ester _et al._ (1996). The algorithm strongly depends on the input parameters \(\epsilon\) and \(mPts\) and uses them to identify clusters. We varied the values of \(\epsilon\) and \(mPts\) and obtained the cluster points and noise points described in Table 2. We find that at least 91.9% of the control sample stars are identified using DBSCAN with the parameters used.
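As a rough illustration of this step, the sketch below runs scikit-learn's DBSCAN on a standardized 5-parameter feature matrix; the mock data, the standardization step and the exact hyperparameter values are assumptions for demonstration, not the values used to produce Table 2.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Mock feature matrix standing in for the real sample: columns are
# X, Y, Z (pc) and pmra, pmdec (mas/yr); a compact group plus scattered field stars.
group = rng.normal([53, -434, 6, 2.2, -8.5], [5, 5, 5, 0.8, 0.7], size=(300, 5))
field = rng.normal([53, -434, 6, 0.0, 0.0], [60, 60, 60, 5, 5], size=(900, 5))
features = np.vstack([group, field])

scaled = StandardScaler().fit_transform(features)   # put all five axes on a common scale
labels = DBSCAN(eps=0.5, min_samples=50).fit_predict(scaled)
print((labels != -1).sum(), "cluster stars;", (labels == -1).sum(), "noise")
```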
### Optics
By definition, all clusters discovered by DBSCAN in a given dataset have about the same density. Furthermore, in clusters with significant density gradients, such as a cluster made of a very dense core surrounded by a low density "halo," this algorithm struggles to identify all of the members. The hierarchical clustering algorithm Ordering Points To Identify the Clustering Structure (OPTICS) Ankerst _et al._ (1999) creates clusters with strong density gradients by exploring a range of \(\epsilon\).
OPTICS locates the cluster's densest areas and records this data in two variables called core distance and reachability distance. The former represents the distance between a core point and its nearest neighbour, and the latter is the maximum of the core distance of a core point. For a particular value of \(mPts\), OPTICS organises the points into groups based on how far they can be reached from the densest region of the cluster. The reachability plot displays a string of distinctive troughs connected to individual potential clusters as a function of \(\epsilon\). Figure 3 is the reachability plot for our sample and clearly shows a single valley with an \(\epsilon\) close to 0.5.
### Hdbscan
Finding the ideal \(\epsilon\) and \(mPts\) values is challenging, which is a disadvantage of both DBSCAN and OPTICS. It is difficult to clearly identify the first and last points of the valleys in the reachability-distance plots produced by OPTICS and the step-like slope shift in the k-distance curves utilised by DBSCAN in high-density datasets. Finding suitable hyperparameters is made easier by the hierarchical method HDBSCAN because it only needs one hyperparameter \(mCls\) (the "minimum cluster size"), which is conceptually equivalent to \(mPts\) (Campello _et al._, 2013). Similar to OPTICS, HDBSCAN is sensitive to the density gradients inside a cluster and can recognise clusters of various densities.
When we ran HDBSCAN, we found a cluster where
\begin{table}
\begin{tabular}{|l|l|c|c|} \hline \(\epsilon\) & \(mPts\) & No of stars & Control stars (\%) \\ & & (core points) & identified \\ \hline
0.5 & 50 & 978 & 96.5 \\
1.0 & 50 & 822 & 91.9 \\
1.5 & 50 & 1099 & 91.9 \\ \hline \end{tabular}
\end{table}
Table 2: Explored hyperparameters and number of cluster elements identified by DBSCAN. The last column shows the percentage of control sample stars identified.
Figure 3: OPTICS: Reachability plot which shows the existence of only one cluster with \(\epsilon\approx 0.5\)
the core points had 1103 stars. We matched these to the YSOs obtained by DBSCAN and OPTICS and got 822 common stars. These are stars which have a very high probability of being member YSOs.
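A sketch of the cross-matching of the three algorithms' memberships is given below; it reuses the standardized feature matrix `scaled` from the DBSCAN sketch above, and the hyperparameters are again placeholders rather than the values used in this work.

```python
import hdbscan                       # standalone package (scikit-learn >= 1.3 also ships an HDBSCAN)
from sklearn.cluster import DBSCAN, OPTICS

db_labels = DBSCAN(eps=0.5, min_samples=50).fit_predict(scaled)
optics_labels = OPTICS(min_samples=50).fit_predict(scaled)
hdb_labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(scaled)

# High-confidence members: stars assigned to a cluster (label != -1) by all three runs.
common = (db_labels != -1) & (optics_labels != -1) & (hdb_labels != -1)
print(common.sum(), "common YSO candidates")
```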
## 4 Infrared 2MASS and WISE photometry
We cross-matched our data with 2MASS and WISE, obtaining 814 and 720 matched objects, respectively. Figure 4 shows the classification into bare photospheres, Class II and Class III stars using the method described in Koenig _et al._ (2012).
We then plotted our YSOs on the WISE image to see the distribution of the sources (Fig. 5.)
As found by Evans _et al._ (2009), Serpens shows evidence of sequential star formation from SW to NE in the main Serpens Core; SF has reportedly proceeded along the south-west to north-east direction. In the figure, Class II and III stars appear broadly distributed, but most of the bare photospheric stars lie towards the west. Further studies are required to determine the ages of these YSOs and trace the star formation history.
## 5 Results and Conclusion
This paper shows a unique method to identify young members of a star forming region, in this case, Serpens. YSOs are difficult to observe in the optical and hence other wavelengths ranging from Xray to IR are used in their identification and study. As Serpens is close to us, it occupies a very large region of the sky and Gaia, being an all sky survey with unprecedented accuracy is ideal to use for this purpose.
In this work, we compiled YSO data of Serpens from various sources and wavelengths (656 stars) and matched it to Gaia DR3 data to find most probable YSO members (87 stars). This was used to build a control sample with data that was used to query Gaia DR3 to obtain 1196 stars.
In the 5-parameter space of X, Y, Z and \(\mu_{\alpha}^{*}\) and \(\mu_{\delta}\), we applied three different density-based machine learning algorithms (DBSCAN, OPTICS and HDBSCAN) and found 822 common YSO members in the region. We found that they have similar astrometric parameters (due to our search criteria), but are spatially separated. We classified these objects using 2MASS and WISE data to map the distribution of Class II and Class III objects and trace the progress of star formation. This is a potential method of increasing the YSO sample of star forming regions using machine learning techniques.
## Acknowledgements
The authors would like to thank the referee for valuable comments that helped improve the paper.
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
Figure 4: YSO Classification using 2MASS and WISE: The Class II stars are in green, Class III in blue and Photospheres in red. The stars from the control sample are the black squares.
Figure 5: YSO distribution using 2MASS and WISE colors, where Class II objects are green ovals, Class III objects are blue squares and photospheres are red rhombi.
2301.11749 | A Multi-task Multi-stage Transitional Training Framework for Neural Chat
Translation | Neural chat translation (NCT) aims to translate a cross-lingual chat between
speakers of different languages. Existing context-aware NMT models cannot
achieve satisfactory performances due to the following inherent problems: 1)
limited resources of annotated bilingual dialogues; 2) the neglect of modelling
conversational properties; 3) training discrepancy between different stages. To
address these issues, in this paper, we propose a multi-task multi-stage
transitional (MMT) training framework, where an NCT model is trained using the
bilingual chat translation dataset and additional monolingual dialogues. We
elaborately design two auxiliary tasks, namely utterance discrimination and
speaker discrimination, to introduce the modelling of dialogue coherence and
speaker characteristic into the NCT model. The training process consists of
three stages: 1) sentence-level pre-training on large-scale parallel corpus; 2)
intermediate training with auxiliary tasks using additional monolingual
dialogues; 3) context-aware fine-tuning with gradual transition. Particularly,
the second stage serves as an intermediate phase that alleviates the training
discrepancy between the pre-training and fine-tuning stages. Moreover, to make
the stage transition smoother, we train the NCT model using a gradual
transition strategy, i.e., gradually transiting from using monolingual to
bilingual dialogues. Extensive experiments on two language pairs demonstrate
the effectiveness and superiority of our proposed training framework. | Chulun Zhou, Yunlong Liang, Fandong Meng, Jie Zhou, Jinan Xu, Hongji Wang, Min Zhang, Jinsong Su | 2023-01-27T14:41:16Z | http://arxiv.org/abs/2301.11749v1 | # A Multi-task Multi-stage Transitional Training Framework for Neural Chat Translation
###### Abstract
Neural chat translation (NCT) aims to translate a cross-lingual chat between speakers of different languages. Existing context-aware NMT models cannot achieve satisfactory performances due to the following inherent problems: 1) limited resources of annotated bilingual dialogues; 2) the neglect of modelling conversational properties; 3) training discrepancy between different stages. To address these issues, in this paper, we propose a multi-task multi-stage transitional (MMT) training framework, where an NCT model is trained using the bilingual chat translation dataset and additional monolingual dialogues. We elaborately design two auxiliary tasks, namely utterance discrimination and speaker discrimination, to introduce the modelling of dialogue coherence and speaker characteristic into the NCT model. The training process consists of three stages: 1) sentence-level pre-training on large-scale parallel corpus; 2) intermediate training with auxiliary tasks using additional monolingual dialogues; 3) context-aware fine-tuning with gradual transition. Particularly, the second stage serves as an intermediate phase that alleviates the training discrepancy between the pre-training and fine-tuning stages. Moreover, to make the stage transition smoother, we train the NCT model using a gradual transition strategy, _i.e._, gradually transiting from using monolingual to bilingual dialogues. Extensive experiments on two language pairs demonstrate the effectiveness and superiority of our proposed training framework.
Neural Chat Translation, Monolingual Dialogue, Dialogue Coherence, Speaker Characteristic, Gradual Transition.
## 1 Introduction
Neural Chat Translation (NCT) is to translate a cross-lingual chat between speakers of different languages into utterances of their individual mother tongue. Fig. 1 depicts an example of cross-lingual chat where one speaks in English and another in Chinese with their corresponding translations. With more international communication and cooperation all around the world, the chat translation task becomes more important and has broader applications in daily life.
In this task, sentence-level Neural Machine Translation (NMT) models [1, 2, 3] can be directly used to translate dialogue utterances sentence by sentence. In spite of its practicability, sentence-level NMT models often generate unsatisfactory translations due to ignoring the contextual information in dialogue history. To address this problem, many researches [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] adapt context-aware NMT models to make chat translation through their capability of incorporating dialogue history context. Generally, these methods adopt a pretrain-finetune paradigm, which first pre-train a sentence-level NMT model on a large-scale parallel corpus and then fine-tune it on the chat translation dataset in a context-aware way. However, they still can not obtain satisfactory results in the scenario of chat translation, mainly due to the following aspects of limitations: 1) The resource of bilingual chat
Fig. 1: An example of cross-lingual chat (En\(\Leftrightarrow\)Zh). The speaker s1-specific utterance \(\mathbf{x_{u}}\) is being translated from English to Chinese with the corresponding dialogue history context.
translation corpus is usually limited, thus making an NCT model insufficiently trained to fully exploit dialogue context. 2) Conventional ways of incorporating dialogue context neglect to explicitly model its conversational properties such as dialogue coherence and speaker characteristic, resulting in incoherent and speaker-inconsistent translations. 3) The abrupt transition from sentence-level pre-training to context-aware fine-tuning breaks the consistency of model training, which hurts the potential performance of the final NCT model. Therefore, it is of great significance to train a better NCT models by resolving the above three aspects of limitations.
In this paper, we propose a **m**ulti-task **m**ulti-stage transitional (MMT) training framework where an NCT model is trained using the bilingual chat translation dataset and additional monolingual dialogues. Specifically, our proposed framework consists of three training stages, also following the pretrain-finetune paradigm. The first stage is still to pre-train the NCT model through sentence-level translation on the large-scale parallel corpus, resulting in the model \(M_{1}\). At the second stage, using \(M_{1}\) for model initialization, we continue to train the model through the previous sentence-level translation task along with two auxiliary dialogue-related tasks using additional monolingual dialogues, obtaining the model \(M_{2}\). The auxiliary tasks are related to dialogue coherence and speaker characteristic, which are two important conversational properties of dialogue context. For the dialogue coherence, we design the task of _Ultterance Discrimination_ (UD). The UD task is to judge whether an utterance and a given section of contextual utterances are within the same dialogue. For the speaker characteristic, we design the _Speaker Discrimination_ (SD) task. The SD task is to discriminate whether a given utterance and a piece of speaker-specific dialogue history contexts are spoken by the same speaker. Finally, at the last stage, initialized by \(M_{2}\), the model is fine-tuned using a gradual transition strategy and eventually becomes a context-aware NCT model \(M_{3}\). Concretely, the NCT model is trained through the objective comprised of chat translation, UD and SD tasks. During this process, we initially construct training samples for the two auxiliary tasks from additional monolingual dialogues and gradually transit to using bilingual dialogues.
The MMT training framework enhances the NCT model from the following aspects. Firstly, the relatively abundant monolingual dialogues function as a supplement to the scarce annotated bilingual dialogues, making the model more sufficiently trained to exploit dialogue context. Secondly, the UD and SD tasks are directly related to dialogue coherence and speaker characteristic, thus introducing the modelling of these two conversational properties into the NCT model. Thirdly, the second training stage serves as an intermediate phase that alleviates the discrepancy between sentence-level pre-training and context-aware fine-tuning. Particularly, it endows the model with the preliminary capability to capture dialogue context for the subsequent NCT training. It is notable that the two dialogue-related auxiliary tasks exist at both the second and third stages with different training data, which maintains the training consistency to some extent. Therefore, at the third stage, the NCT model can be more effectively fine-tuned to leverage dialogue context using the chat translation dataset with only a small number of annotated bilingual dialogues.
In essence, the major contributions of our paper are as follows:
* In NCT, our work is the first attempt to use additional, relatively abundant monolingual dialogues for training, which helps the model be trained more sufficiently to capture dialogue context for chat translation.
* We elaborately design two dialogue-related auxiliary tasks, namely utterance discrimination and speaker discrimination. This makes the model more capable of modelling dialogue coherence and speaker characteristic, which are two important conversational properties of dialogue context.
* We propose to alleviate the training discrepancy between pre-training and fine-tuning by introducing an intermediate stage (Stage 2) and adopting a gradual transition strategy for the context-aware fine-tuning (Stage 3). At the second stage, the model is simultaneously optimized with the two auxiliary tasks on the additional monolingual dialogues. Moreover, at the third stage, we train the NCT model by gradually transiting from using monolingual to bilingual dialogues, making the stage transition smoother. Thus, the NCT model can be more effectively fine-tuned on the small-scale bilingual chat translation dataset.
* We will release the code of this work on Github [https://github.com/DeepLearnXMU](https://github.com/DeepLearnXMU).
The remainder of this paper is organized as follows. Section 2 gives the NCT problem formalization, introduces the basic architecture of our NCT model and describes the conventional two-stage training including sentence-level pre-training and context-aware fine-tuning. Section 3 elaborates our proposed MMT training framework. In Section 4, we report the experimental results and make in-depth analysis. Section 5 summarizes the related work, mainly involving several existing studies on NCT and context-aware NMT models. Finally, in Section 6, we draw the conclusions of this paper.
## 2 Background
In this section, we first give the NCT problem formalization (Section 2.1). Then, we describe the Flat-NCT model, which is the model architecture used in this work (Section 2.2). Finally, we introduce the dominant approach of training an NCT model, which consists of sentence-level pre-training (Section 2.3.1) and context-aware fine-tuning (Section 2.3.2).
### _Problem Formalization_
In the scenario of this work, we denote the two speakers involved in a dialogue as \(s1\) and \(s2\). For a cross-lingual chat, as shown in the example in Fig. 1, the two speakers speak in the source and target language, respectively. We assume
they have alternately given utterances in their own languages for \(u\) turns, resulting in the source-language utterance sequence \(X{=}{\bf x}_{1},{\bf x}_{2},{\bf x}_{3},{\bf x}_{4},...,{\bf x}_{u-1},{\bf x}_{u}\) and the target-language utterance sequence \(Y{=}{\bf y}_{1},{\bf y}_{2},{\bf y}_{3},{\bf y}_{4},...,{\bf y}_{u-1},{\bf y}_{u}\). Notably, \(X\) and \(Y\) contain both the utterances originally spoken by one speaker and the translated utterances from the other speaker. Specifically, among these utterances, \({\bf x}_{1},{\bf x}_{3},...,{\bf x}_{u}\) are originally spoken by the source-language speaker \(s1\) and \({\bf y}_{1},{\bf y}_{3},...,{\bf y}_{u}\) are the corresponding translations in the target language. Analogously, \({\bf y}_{2},{\bf y}_{4},...,{\bf y}_{u-1}\) are originally spoken by the target-language speaker \(s2\) and \({\bf x}_{2},{\bf x}_{4},...,{\bf x}_{u-1}\) are the translated utterances in the source language.
Besides the bilingual dialogues, our proposed training framework uses additional monolingual dialogues \(D_{\overline{X}}\) of the source language and \(D_{\overline{Y}}\) of the target language. Slightly different from the bilingual dialogue, the two speakers (\(s1\) and \(s2\)) in a monolingual dialogue speak in the same language. We also assume a source-language monolingual dialogue \(\overline{X}{\in}D_{\overline{X}}\) and a target-language monolingual dialogue \(\overline{Y}{\in}D_{\overline{Y}}\) proceed to the \(u\)-th turn, resulting in \(\overline{\bf x}_{1},\overline{\bf x}_{2},\overline{\bf x}_{3},\overline{\bf x}_{4},...,\overline{\bf x}_{u-1},\overline{\bf x}_{u}\) and \(\overline{\bf y}_{1},\overline{\bf y}_{2},\overline{\bf y}_{3},\overline{\bf y}_{4},...,\overline{\bf y}_{u-1},\overline{\bf y}_{u}\), respectively.
Then, we give the necessary definitions in the remainder of this paper. For clarity, we list all definitions [1] in Table I. For a bilingual dialogue, we define the dialogue history context of \({\bf x}_{u}\) on the source side as \(\mathcal{C}_{\mathbf{x}_{u}}{=}{\bf x}_{1},{\bf x}_{2},{\bf x}_{3},...,{\bf x} _{u-1}\) and that of \({\bf y}_{u}\) on the target side as \(\mathcal{C}_{\mathbf{y}_{u}}{=}{\bf y}_{1},{\bf y}_{2},{\bf y}_{3},...,{\bf y} _{u-1}\). According to original speakers, on the source side, we define the speaker \(s1\)-specific dialogue history context of \({\bf x}_{u}\) as the partial sequence of its preceding utterances \({\bf c}_{\mathbf{x}_{u}}^{s1}{=}{\bf x}_{1},{\bf x}_{3},...,{\bf x}_{u-2}\) and the speaker \(s2\)-specific dialogue history context of \({\bf x}_{u}\) as \(\mathcal{C}_{\mathbf{x}_{u}}^{s2}{=}{\bf x}_{2},{\bf x}_{4},...,{\bf x}_{u-1}\). On the target side, \(\mathcal{C}_{\mathbf{y}_{u}}^{s1}{=}{\bf y}_{1},{\bf y}_{3},...,{\bf y}_{u-2}\) and \(\mathcal{C}_{\mathbf{y}_{u}}^{s2}{=}{\bf y}_{2},{\bf y}_{4},...,{\bf y}_{u-1}\) denote the speaker \(s1\)-specific and \(s2\)-specific dialogue history contexts of \({\bf y}_{u}\), respectively. When it comes to a monolingual dialogue, we also formalize different types of dialogue history contexts \(\{C_{\overline{\mathbf{y}}_{u}},C_{\overline{\mathbf{y}}_{u}},C_{\overline{ \mathbf{x}}_{u}}^{s1},C_{\overline{\mathbf{x}}_{u}}^{s2},C_{\overline{\mathbf{ y}}_{u}}^{s1},C_{\overline{\mathbf{y}}_{u}}^{s2}\}\) in a similar way.
### _The NCT model_
We use the Flat-Transformer introduced in [14] as our basic NCT model, which we denote as Flat-NCT. Figure 2 shows the architecture of the Flat-NCT, mainly including _input representation layer_, _encoder_ and _decoder_.
#### 2.2.1 Input Representation Layer
For each utterance \({\bf x}_{u}{=}x_{1},x_{2}{,}{\cdots},x_{|{\bf x}_{u}|}\) to be translated, \([\mathcal{C}_{\mathbf{x}_{u}};{\bf x}_{u}]\) is fed into the NCT model as input, where \([;]\)
Footnote 1: For each item of \(\{C_{\mathbf{x}_{u}},C_{\mathbf{y}_{u}},C_{\mathbf{x}_{u}}^{s1},C_{\mathbf{x}_{u}}^{s2},C_{\mathbf{y}_{u}}^{s1},C_{\mathbf{y}_{u}}^{s2},C_{\overline{\mathbf{x}}_{u}},C_{\overline{\mathbf{y}}_{u}},C_{\overline{\mathbf{x}}_{u}}^{s1},C_{\overline{\mathbf{x}}_{u}}^{s2},C_{\overline{\mathbf{y}}_{u}}^{s1},C_{\overline{\mathbf{y}}_{u}}^{s2}\}\), taking \(C_{\mathbf{x}_{u}}\) for instance, we prepend a special token '[cls]' to it.
denotes the concatenation. Different from the conventional embedding layer that only includes word embedding **WE** and position embedding **PE**, we additionally add a speaker embedding **SE** and a turn embedding **TE**. The final embedding \(\textbf{B}(x_{i})\) of each input word \(x_{i}\) can be written as
\[\textbf{B}(x_{i})=\textbf{WE}(x_{i})+\textbf{PE}(x_{i})+\textbf{SE}(x_{i})+ \textbf{TE}(x_{i}), \tag{1}\]
where \(\textbf{WE}\in\mathbb{R}^{|V|\times d}\), \(\textbf{SE}\in\mathbb{R}^{2\times d}\) and \(\textbf{TE}\in\mathbb{R}^{|U|\times d}\). Here, \(|V|\), \(|U|\) and \(d\) denote the size of shared vocabulary, maximum dialogue turns, and the hidden size, respectively.
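To make Eq. 1 concrete, the following PyTorch sketch sums the four embeddings for a batch of input tokens. The class name `NCTInputEmbedding`, the learned positional embedding, and the argument names are our own illustrative choices rather than details of the released implementation.

```python
import torch
import torch.nn as nn

class NCTInputEmbedding(nn.Module):
    """Sums word, position, speaker, and turn embeddings, as in Eq. 1."""

    def __init__(self, vocab_size, max_turns, max_len, d_model):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)   # WE
        self.pos_emb = nn.Embedding(max_len, d_model)        # PE (learned variant)
        self.speaker_emb = nn.Embedding(2, d_model)          # SE: speaker s1 or s2
        self.turn_emb = nn.Embedding(max_turns, d_model)     # TE

    def forward(self, token_ids, speaker_ids, turn_ids):
        # token_ids, speaker_ids, turn_ids: [batch, seq_len] integer tensors
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        positions = positions.unsqueeze(0).expand_as(token_ids)
        return (self.word_emb(token_ids) + self.pos_emb(positions)
                + self.speaker_emb(speaker_ids) + self.turn_emb(turn_ids))
```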
#### 2.2.2 Encoder
The encoder of our NCT model has \(L\) identical layers, each of which is composed of a self-attention (\(\mathrm{SelfAtt}\)) sub-layer and a feed-forward network (FFN) sub-layer.2 Let \(\textbf{h}_{e}^{(l)}\) denote the hidden states of the \(l\)-th encoder layer, it is calculated using the following equations:
Footnote 2: The layer normalization is omitted for simplicity.
\[\begin{split}\textbf{z}_{e}^{(l)}&=\mathrm{SelfAtt }(\textbf{h}_{e}^{(l-1)})+\textbf{h}_{e}^{(l-1)},\\ \textbf{h}_{e}^{(l)}&=\mathrm{FFN}(\textbf{z}_{e}^{ (l)})+\textbf{z}_{e}^{(l)},\end{split} \tag{2}\]
where \(\textbf{h}_{e}^{(0)}\) is initialized as the embedding of input words. Particularly, words in \(\mathcal{C}_{\textbf{x}_{u}}\) can only be attended to by those in \(\textbf{x}_{u}\) at the first encoder layer while \(\mathcal{C}_{\textbf{x}_{u}}\) is masked at the other layers, as implemented in [14].
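A minimal sketch of one such encoder layer, written with PyTorch's built-in multi-head attention, is given below; layer normalization and the exact first-layer context-masking scheme of [14] are left out or reduced to a generic attention mask, so this is an approximation of the described architecture rather than the actual code.

```python
import torch.nn as nn

class NCTEncoderLayer(nn.Module):
    """One encoder layer: self-attention and FFN sub-layers with residual
    connections (Eq. 2); layer normalization is omitted, as in the paper's notation."""

    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads,
                                               dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, h, attn_mask=None):
        # h: [batch, seq_len, d_model]; attn_mask can be used to restrict
        # which positions may attend to the dialogue-context part of the input.
        z, _ = self.self_attn(h, h, h, attn_mask=attn_mask)
        z = z + h                    # residual around self-attention
        return self.ffn(z) + z       # residual around the feed-forward sub-layer
```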
#### 2.2.3 Decoder
The decoder also consists of \(L\) identical layers, each of which additionally has a cross-attention (\(\mathrm{CrossAtt}\)) sub-layer compared to the encoder. Let \(\textbf{h}_{d}^{(l)}\) denote the hidden states of the \(l\)-th decoder layer, it is computed as
\[\begin{split}\textbf{z}_{d}^{(l)}&=\mathrm{SelfAtt }(\textbf{h}_{d}^{(l-1)})+\textbf{h}_{d}^{(l-1)},\\ \textbf{c}_{d}^{(l)}&=\mathrm{CrossAtt}(\textbf{z}_ {d}^{(l)},\textbf{h}_{e}^{(L)})+\textbf{z}_{d}^{(l)},\\ \textbf{h}_{d}^{(l)}&=\mathrm{FFN}(\textbf{c}_{d}^{ (l)})+\textbf{c}_{d}^{(l)},\end{split} \tag{3}\]
where \(\textbf{h}_{e}^{(L)}\) corresponds to the top-layer encoder hidden states.
At each decoding time step \(t\), the \(t\)-th decoder hidden state \(\textbf{h}_{d,t}^{(L)}\) is fed into a linear transformation layer and a softmax layer to predict the probability distribution of the next target token:
\[p(y_{t}|y_{<t},\textbf{x}_{u},\mathcal{C}_{\textbf{x}_{u}})=\mathrm{Softmax} (\textbf{W}_{o}\textbf{h}_{d,t}^{(L)}+\textbf{b}_{o}), \tag{4}\]
where \(\textbf{W}_{o}\in\mathbb{R}^{|V|\times d}\) and \(\textbf{b}_{o}\in\mathbb{R}^{|V|}\) are trainable parameters.
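For illustration, the output side of Eq. 4 can be sketched as a single linear projection followed by a softmax; `OutputProjection` is a hypothetical name, and the bias \(\textbf{b}_{o}\) is kept inside `nn.Linear`.

```python
import torch
import torch.nn as nn

class OutputProjection(nn.Module):
    """Maps the top decoder hidden state at step t to a distribution over the
    target vocabulary (Eq. 4)."""

    def __init__(self, d_model, vocab_size):
        super().__init__()
        self.proj = nn.Linear(d_model, vocab_size)   # W_o and b_o

    def forward(self, h_dec_t):
        # h_dec_t: [batch, d_model], the decoder hidden state at time step t
        return torch.softmax(self.proj(h_dec_t), dim=-1)
```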
### _Two-stage Training_
#### 2.3.1 Sentence-level Pre-training
At this stage, the NCT model is pre-trained on a large-scale parallel corpus \(D_{sent}\) in the way of a vanilla sentence-level translation. For each parallel sentence pair \((\textbf{x},\textbf{y})\in D_{sent}\), taking \(\textbf{x}\) as input, the model is optimized through the following objective:
\[\mathcal{L}_{sent}(\theta_{nct})=-\sum_{t=1}^{|\textbf{y}|}\log(p(y_{t}|\textbf{x},y_{<t})), \tag{5}\]
where \(\theta_{nct}\) is the parameters of the NCT model, \(\textbf{y}\)=\(y_{1},y_{2},\cdots,y_{|\textbf{y}|}\) is the target translation, \(y_{t}\) is the \(t\)-th word of \(\textbf{y}\) and \(y_{<t}\) denotes the partial sequence \(y_{1},\cdots,y_{t-1}\) of target words preceding \(y_{t}\).
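Assuming the decoder has already produced per-position logits, the sentence-level objective of Eq. 5 reduces to a token-level cross-entropy, as in the sketch below; the padding handling and the helper name are our own additions.

```python
import torch.nn.functional as F

def sentence_level_loss(logits, target_ids, pad_id):
    """Negative log-likelihood of the reference translation (Eq. 5),
    averaged over non-padding target tokens."""
    # logits: [batch, tgt_len, vocab]; target_ids: [batch, tgt_len]
    return F.cross_entropy(
        logits.transpose(1, 2),   # cross_entropy expects [batch, vocab, tgt_len]
        target_ids,
        ignore_index=pad_id,
    )
```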
#### 2.3.2 Context-aware Fine-tuning
After the sentence-level pre-training, the model is then fine-tuned using the bilingual chat translation dataset \(D_{bct}\) in a context-aware way. Concretely, given a piece of \(U\)-turn parallel bilingual dialogue utterances \((X,Y)\in D_{bct}\), where \(X\)=\(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{U}\) is in the source language while \(Y\)=\(\textbf{y}_{1},\textbf{y}_{2},\cdots,\textbf{y}_{U}\) is in the target language,3 the training objective at this stage can be formalized as
Footnote 3: Note that \(X\) contains both the utterances originally spoken by the source-language speaker and the translations of those originally spoken by the other speaker of the target language, which is the same for \(Y\).
\[\mathcal{L}_{nct}(\theta_{nct})=-\sum_{u=1}^{U}\log(p(\textbf{y}_{u}|\textbf{x }_{u},\textbf{x}_{<u},\textbf{y}_{<u})), \tag{6}\]
where \(\textbf{x}_{<u}\) and \(\textbf{y}_{<u}\) are the preceding utterance sequences of the \(u\)-th source-language utterance \(\textbf{x}_{u}\) and the \(u\)-th target-language utterance \(\textbf{y}_{u}\), respectively. More specifically, \(p(\textbf{y}_{u}|\textbf{x}_{u},\textbf{x}_{<u},\textbf{y}_{<u})\) is calculated as
\[p(\textbf{y}_{u}|\textbf{x}_{u},\textbf{x}_{<u},\textbf{y}_{<u})=\prod_{t=1}^{| \textbf{y}_{u}|}p(y_{t}|y_{<t},\textbf{x}_{u},\textbf{x}_{<u},\textbf{y}_{<u}), \tag{7}\]
where \(y_{t}\) is the \(t\)-th target word in \(\textbf{y}_{u}\) and \(y_{<t}\) denotes the preceding tokens \(y_{1},y_{2},\cdots,y_{t-1}\) before the \(t\)-th time step.
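A schematic view of the fine-tuning objective in Eq. 6 is to sum the per-utterance translation losses over all turns of a dialogue; the sketch below assumes a hypothetical `model.loss(...)` interface and `dialogue` container, not the toolkit's actual API.

```python
def nct_finetune_loss(model, dialogue):
    """Accumulates the chat-translation loss over all turns of one bilingual
    dialogue (Eq. 6); `model.loss` is assumed to return the negative
    log-likelihood of translating x_u given the dialogue history."""
    total = 0.0
    for u, (x_u, y_u) in enumerate(zip(dialogue.src_utterances,
                                       dialogue.tgt_utterances)):
        src_ctx = dialogue.src_utterances[:u]   # x_{<u}
        tgt_ctx = dialogue.tgt_utterances[:u]   # y_{<u}
        total = total + model.loss(x_u, y_u, src_ctx, tgt_ctx)
    return total
```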
## 3 Multi-task Multi-stage Transitional Training Framework
In this section, we give a detailed description of our proposed multi-task multi-stage transitional (MMT) training framework for NCT, which aims to improve the NCT model with dialogue-related auxiliary tasks using additional monolingual dialogues. In the following subsections, we first introduce the two proposed dialogue-related auxiliary tasks (Section 3.1) in detail. Then, we elaborate the procedures of our proposed training framework (Section 3.2).
### _Auxiliary Tasks_
In our proposed training framework, we elaborately design two auxiliary tasks that are related to two important conversational properties of dialogue context, namely dialogue coherence and speaker characteristic. The first task for dialogue coherence is utterance discrimination (UD) and the second for speaker characteristic is speaker discrimination (SD). Together with the main chat translation task, the NCT model can be enhanced to generate more coherent and speaker-consistent translations through multi-task learning.
In the following subsections, in order to clearly describe the two auxiliary tasks, we just take a source-language dialogue \(X\!=\!\!\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4},...,\mathbf{ x}_{u-1},\mathbf{x}_{u}\) for instance, which can be generalized to other types of dialogues (\(Y\), \(\overline{X}\) and \(\overline{Y}\)).
#### 3.1.1 Utterance Discrimination (UD)
A series of previous studies [16, 17, 18, 19, 20] have indicated that the modelling of global contextual coherence can lead to more coherent generated text. From this perspective, we design the task of UD to introduce the modelling of dialogue coherence into the NCT model.
As shown in Fig. 3(a), our UD task aims to distinguish whether an utterance and a given section of contextual utterances are within the same dialogue. To this end, we construct positive and negative training samples from the monolingual and bilingual dialogues, where a training sample (\(\mathcal{C}_{\mathbf{x}_{u}}\), \(\widetilde{\mathbf{x}}\)) contains a section of dialogue history context \(\mathcal{C}_{\mathbf{x}_{u}}\) and a selected utterance \(\widetilde{\mathbf{x}}\) with the label \(\ell_{ud}^{X}\). For a positive sample with label \(\ell_{ud}^{X}=1\), \(\widetilde{\mathbf{x}}\) is exactly \(\mathbf{x}_{u}\), while for a negative sample with label \(\ell_{ud}^{X}=0\), \(\widetilde{\mathbf{x}}\) is a randomly selected utterance from any other irrelevant dialogue. Formally, the training objective of UD is defined as follows:
\[\mathcal{L}_{ud}^{X}(\theta_{nct},\theta_{ud})=-\mathrm{log}(p(\hat{\ell}_{ud }^{X}=\ell_{ud}^{X}|\mathcal{C}_{\mathbf{x}_{u}},\widetilde{\mathbf{x}})), \tag{8}\]
where \(\theta_{nct}\) and \(\theta_{ud}\) are the trainable parameters of the NCT model and UD classifier, respectively.
To estimate the probability in Eq. 8, we first obtain the representations \(\mathbf{H}_{\widetilde{\mathbf{x}}}\) of the utterance \(\widetilde{\mathbf{x}}\) and \(\mathbf{H}_{\mathcal{C}_{\mathbf{x}_{u}}}\) of the dialogue history context \(\mathcal{C}_{\mathbf{x}_{u}}\) using the NCT encoder. Specifically, \(\mathbf{H}_{\widetilde{\mathbf{x}}}\) is calculated as \(\frac{1}{|\widetilde{\mathbf{x}}|}\sum_{i=1}^{|\widetilde{\mathbf{x}}|}\mathbf{h}_{e,i}^{(L)}\) while \(\mathbf{H}_{\mathcal{C}_{\mathbf{x}_{u}}}\) is defined as the encoder hidden state \(\mathbf{h}_{e,0}^{(L)}\) of the prepended special token '[cls]' in \(\mathcal{C}_{\mathbf{x}_{u}}\). Then, the concatenation of \(\mathbf{H}_{\widetilde{\mathbf{x}}}\) and \(\mathbf{H}_{\mathcal{C}_{\mathbf{x}_{u}}}\) is fed into a binary UD classifier, which is an extra fully-connected layer on top of the NCT encoder:
\[\begin{split} p(\hat{\ell}_{ud}^{X}=1|\mathcal{C}_{\mathbf{x}_{u} },\widetilde{\mathbf{x}})&=\mathrm{sigmoid}(\mathbf{W}_{ud}[ \mathbf{H}_{\widetilde{\mathbf{x}}};\mathbf{H}_{\mathcal{C}_{\mathbf{x}_{u}}} ]),\\ p(\hat{\ell}_{ud}^{X}=0|\mathcal{C}_{\mathbf{x}_{u}}, \widetilde{\mathbf{x}})&=1-p(\hat{\ell}_{ud}^{X}=1| \mathcal{C}_{\mathbf{x}_{u}},\widetilde{\mathbf{x}}),\end{split} \tag{9}\]
where \(\mathbf{W}_{ud}\) is the trainable parameter matrix of the UD classifier and the bias term is omitted for simplicity.
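The UD head of Eq. 9 can be sketched as a single fully-connected layer over the concatenated representations; in the snippet below, `UDClassifier` is a hypothetical name and, unlike Eq. 9, the bias term of `nn.Linear` is kept for convenience.

```python
import torch
import torch.nn as nn

class UDClassifier(nn.Module):
    """Binary utterance-discrimination head on top of the NCT encoder (Eq. 9)."""

    def __init__(self, d_model):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, 1)   # plays the role of W_ud

    def forward(self, enc_states_utt, enc_states_ctx):
        # enc_states_utt: [batch, len_utt, d], encoder states of the candidate utterance
        # enc_states_ctx: [batch, len_ctx, d], encoder states of the dialogue context
        h_utt = enc_states_utt.mean(dim=1)   # mean-pooled utterance representation
        h_ctx = enc_states_ctx[:, 0]         # state of the prepended '[cls]' token
        logit = self.proj(torch.cat([h_utt, h_ctx], dim=-1)).squeeze(-1)
        return torch.sigmoid(logit)          # p(l_ud = 1 | context, utterance)
```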
#### 3.1.2 Speaker Discrimination (SD)
Generally, a dialogue may involve speakers with different characteristics, which is a salient conversational property. Therefore, we design the SD task to incorporate the modelling of speaking style into the NCT model, making the translated utterance more speaker-consistent.
As shown in Fig. 3(b), the SD task is to discriminate whether a given utterance and a piece of speaker-specific dialogue history contexts are spoken by the same speaker. Similarly, we construct positive and negative training samples from the monolingual and bilingual dialogues. Specifically, an SD training sample (\(\mathcal{C}_{\mathbf{x}_{u}}^{s}\), \(\mathbf{x}_{u}\)) is comprised of the speaker \(s\)-specific dialogue history context (\(s\in\{s1,s2\}\)) and the utterance \(\mathbf{x}_{u}\) with the corresponding label \(\ell_{sd}^{X}\). For a positive sample with label \(\ell_{sd}^{X}=1\), the dialogue history context is specific to the speaker \(s1\) (\(\mathbf{x}_{u}\) is spoken by \(s1\)), while for a negative sample with label \(\ell_{sd}^{X}=0\), it is specific to the other speaker \(s2\). Formally, the training objective of SD is defined as follows:
\[\mathcal{L}_{sd}^{X}(\theta_{nct},\theta_{sd})=-\mathrm{log}(p(\hat{\ell}_{sd }^{X}=\ell_{sd}^{X}|\mathcal{C}_{\mathbf{x}_{u}}^{s},\mathbf{x}_{u})), \tag{10}\]
where \(\theta_{nct}\) and \(\theta_{sd}\) are the trainable parameters of the NCT model and SD classifier, respectively.
Analogously, we use the NCT encoder to obtain the representations \(\mathbf{H}_{\mathbf{x}_{u}}\) of \(\mathbf{x}_{u}\) and \(\mathbf{H}_{\mathcal{C}_{\mathbf{x}_{u}}^{s}}\) of \(\mathcal{C}_{\mathbf{x}_{u}}^{s}\), where \(\mathbf{H}_{\mathbf{x}_{u}}\!=\!\frac{1}{|\mathbf{x}_{u}|}\sum_{i=1}^{|\mathbf{x}_{u}|}\mathbf{h}_{e,i}^{(L)}\) and the \(\mathbf{h}_{e,0}^{(L)}\) of \(\mathcal{C}_{\mathbf{x}_{u}}^{s}\) is used as \(\mathbf{H}_{\mathcal{C}_{\mathbf{x}_{u}}^{s}}\). Then, to estimate the probability in Eq. 10, the concatenation
Fig. 3: Overview of the auxiliary tasks and the MMT training framework. To show the two auxiliary tasks, we just take the source-language dialogue \(X\!=\!\!\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4},..., \mathbf{x}_{u-1},\mathbf{x}_{u}\) for instance, which can be analogously generalized to other types of dialogues (\(Y\), \(\overline{X}\) and \(\overline{Y}\)). (a): The utterance discrimination (UD) task. (b): The speaker discrimination (SD) task. (c): The three training stages of our proposed framework. Note that the NCT encoder is shared across the chat translation and the two auxiliary tasks.
of \(\mathbf{H}_{\mathbf{x}_{u}}\) and \(\mathbf{H}_{\mathcal{C}^{s}_{u}}\) is fed into a binary SD classifier, which is another fully-connected layer on top of the NCT encoder:
\[\begin{split} p(\hat{\ell}_{sd}^{X}=1|\mathcal{C}^{s}_{\mathbf{x}_ {u}},\mathbf{x}_{u})&=\mathrm{sigmoid}(\mathbf{W}_{sd}[\mathbf{H} _{\mathbf{x}_{u}};\mathbf{H}_{\mathcal{C}^{s}_{u}}]),\\ p(\hat{\ell}_{sd}^{X}=0|\mathcal{C}^{s}_{\mathbf{x}_{u}}, \mathbf{x}_{u})&=1-p(\hat{\ell}_{sd}^{X}=1|\mathcal{C}^{s}_{ \mathbf{x}_{u}},\mathbf{x}_{u}),\end{split} \tag{11}\]
where \(\mathbf{W}_{sd}\) is the trainable parameter matrix of the SD classifier and the bias term is omitted for simplicity.
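As a rough illustration of how SD samples could be built from one dialogue with alternating speakers, the sketch below pairs the current utterance with the \(s1\)-specific history (positive) and the \(s2\)-specific history (negative); the `dialogue.utterances` attribute and the assumption that the last utterance is spoken by \(s1\) are our own simplifications.

```python
import random

def make_sd_samples(dialogue):
    """Builds one positive and one negative speaker-discrimination sample from a
    dialogue with alternating speakers, assuming the last utterance x_u is
    spoken by s1: (s1-specific history, x_u) -> 1, (s2-specific history, x_u) -> 0."""
    utterances = dialogue.utterances          # [x_1, ..., x_u]
    x_u = utterances[-1]                      # current utterance, spoken by s1
    s1_history = utterances[:-1][::2]         # x_1, x_3, ..., x_{u-2}
    s2_history = utterances[:-1][1::2]        # x_2, x_4, ..., x_{u-1}
    samples = [(s1_history, x_u, 1), (s2_history, x_u, 0)]
    random.shuffle(samples)
    return samples
```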
### _Three-stage Training_
Then, we elaborate the procedures of our proposed MMT training framework. The training totally consists of three stages: 1) sentence-level pre-training on large-scale parallel corpus; 2) intermediate training with auxiliary tasks using additional monolingual dialogues; 3) context-aware fine-tuning with gradual transition. During inference, the auxiliary tasks (UD and SD) are not involved and only the NCT model (\(\theta_{nct}\)) is used to conduct chat translation.
#### 3.2.1 Stage 1: Sentence-level Pre-training on Large-scale Parallel Corpus
As described in Section 2.3.1, the first stage is to grant the NCT model the basic capability of translating sentences. Given the large-scale parallel corpus \(D_{sent}\), we pre-train the model \(M_{1}\) using the same training objective as Eq. 5, _i.e._, \(\mathcal{L}_{1}\)=\(\mathcal{L}_{sent}(\theta_{nct})\).
#### 3.2.2 Stage 2: Intermediate Training with Auxiliary Tasks using Additional Monolingual Dialogues
Under our proposed training framework, the second stage serves as an intermediate phase that involves additional monolingual dialogues, endowing the original context-agnostic model with the preliminary capability of capturing dialogue context. Using the pre-trained \(M_{1}\) for model initialization, we continue to train the model through the previous sentence-level translation along with the two designed auxiliary tasks (UD and SD) using additional monolingual dialogues, obtaining the model \(M_{2}\).
Concretely, for UD and SD tasks, we construct training instances from \(\overline{X}\in D_{\overline{X}}\) and \(\overline{Y}\in D_{\overline{Y}}\) in the way described in Section 3.1.1 and Section 3.1.2. Together with the sentence-level translation, the training objective at this stage can be written as
\[\mathcal{L}_{2}=\mathcal{L}_{sent}+\alpha_{1}\mathcal{L}_{\overline{ud}}+\beta_{1}\mathcal{L}_{\overline{sd}}, \tag{12}\] \[\text{where}\quad\mathcal{L}_{\overline{ud}}=\mathcal{L}_{ud}^{\overline{X}}(\theta_{nct},\theta_{ud})+\mathcal{L}_{ud}^{\overline{Y}}(\theta_{nct},\theta_{ud}),\] \[\mathcal{L}_{\overline{sd}}=\mathcal{L}_{sd}^{\overline{X}}(\theta_{nct},\theta_{sd})+\mathcal{L}_{sd}^{\overline{Y}}(\theta_{nct},\theta_{sd}),\]
and \(\alpha_{1}\) and \(\beta_{1}\) are balancing hyper-parameters for the trade-off between \(\mathcal{L}_{sent}\) and the other auxiliary objectives. Here, as similarly defined in Eq. 8 and Eq. 10, \(\mathcal{L}_{ud}^{\overline{X}}(\theta_{nct},\theta_{ud})\) and \(\mathcal{L}_{ud}^{\overline{Y}}(\theta_{nct},\theta_{ud})\) represent the training objectives of the UD task on the source-language monolingual dialogue \(\overline{X}\) and the target-language monolingual dialogue \(\overline{Y}\), respectively, which is analogous to \(\mathcal{L}_{sd}^{\overline{X}}(\theta_{nct},\theta_{sd})\) and \(\mathcal{L}_{sd}^{\overline{Y}}(\theta_{nct},\theta_{sd})\) of the SD task.
In this way, the tasks of UD and SD introduce the modelling of dialogue coherence and speaker characteristic into the sentence-level translation model. Meanwhile, we still use the objective \(\mathcal{L}_{sent}\) so as to avoid undermining the pre-trained translation capability of the model, providing a better starting point for the subsequent NCT fine-tuning.
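Putting Eq. 12 into code form, one training step of the second stage could combine the three losses as below; the batch fields and loss callables are hypothetical placeholders.

```python
def stage2_loss(batch, sent_loss_fn, ud_loss_fn, sd_loss_fn, alpha1, beta1):
    """Stage-2 objective (Eq. 12): sentence-level translation loss plus the UD
    and SD losses computed on source- and target-language monolingual dialogues."""
    l_sent = sent_loss_fn(batch.parallel_pairs)
    l_ud = ud_loss_fn(batch.mono_src_dialogues) + ud_loss_fn(batch.mono_tgt_dialogues)
    l_sd = sd_loss_fn(batch.mono_src_dialogues) + sd_loss_fn(batch.mono_tgt_dialogues)
    return l_sent + alpha1 * l_ud + beta1 * l_sd
```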
#### 3.2.3 Stage 3: Context-aware Fine-tuning with Gradual Transition
Using the bilingual chat translation dataset \(D_{bct}\), the third stage is to obtain the final NCT model \(M_{3}\) through context-aware fine-tuning, where the two auxiliary tasks (UD and SD) are still involved. Particularly, different from the second stage, we construct the training instances of UD and SD tasks from \(X\) and \(Y\).
Given a bilingual dialogue pair \((X,Y)\in D_{bct}\), we optimize the model (initialized by \(M_{2}\)) through the following objective:
\[\mathcal{L}_{3}=\mathcal{L}_{nct}+\alpha_{2}\mathcal{L}_{ud}+\beta_{2} \mathcal{L}_{sd}, \tag{13}\] \[\text{where}\quad\mathcal{L}_{ud}=\mathcal{L}_{ud}^{X}(\theta_{nct},\theta_{ud})+\mathcal{L}_{ud}^{Y}(\theta_{nct},\theta_{ud}),\] \[\mathcal{L}_{sd}=\mathcal{L}_{sd}^{X}(\theta_{nct},\theta_{sd})+ \mathcal{L}_{sd}^{Y}(\theta_{nct},\theta_{sd}),\]
and \(\alpha_{2}\) and \(\beta_{2}\) are also the hyper-parameters controlling the balance between \(\mathcal{L}_{nct}\) and the other auxiliary objectives analogously defined as in Eq. 8 or Eq. 10. Notably, under our proposed training framework, UD and SD tasks exist both at the second and the third stages, which can benefit the NCT model in the following two aspects. On the one hand, the two auxiliary tasks maintain the training consistency, making the transition from sentence-level pre-training to context-aware fine-tuning smoother. On the other hand, because the model has acquired the preliminary capability of capturing dialogue context obtained at the second stage, it can be more effectively fine-tuned on \(D_{bct}\) with only a small number of annotated bilingual dialogues.
However, although the above strategy maintains the training consistency to some extent, the transition of training stage is still abrupt because the NCT model is trained with the two auxiliary tasks using totally different data at the second and third stages. To further alleviate the training discrepancy, we propose to train the NCT model by gradually transiting from using monolingual to bilingual dialogues. Specifically, we keep on using the additional monolingual dialogues (\(\overline{X}\) and \(\overline{Y}\)) to accomplish a smoother transition of training stages. Therefore, the training objective of this stage can be formalized as
\[\begin{split}\mathcal{L^{\prime}}_{3}=\mathcal{L}_{nct}& +\lambda(\alpha_{2}\mathcal{L}_{ud}+\beta_{2}\mathcal{L}_{sd})\\ &+(1-\lambda)(\alpha_{1}\mathcal{L}_{\overline{ud}}+\beta_{1} \mathcal{L}_{\overline{sd}}),\end{split} \tag{14}\]
where \(\lambda\)=\(n/N\) denotes the coefficient controlling the balance between monolingual and bilingual dialogues with \(n\) being the current training step at the third stage and \(N\) being the maximum steps of this stage. Note that \(\alpha_{1}\) and \(\beta_{1}\) are kept fixed as the values in Eq. 12. Considering that the additional monolingual dialogues are much more than the available annotated bilingual dialogues, they can function as a supplement to the scarce annotated bilingual dialogues, helping the model learn to better exploit dialogue context.
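The gradual transition of Eq. 14 amounts to a simple linear schedule on the auxiliary losses, sketched below under the assumption that all loss components for the current step have already been computed; the `losses` container is a hypothetical structure.

```python
def stage3_step_loss(step, max_steps, losses, alpha1, beta1, alpha2, beta2):
    """Stage-3 objective with the gradual transition strategy (Eq. 14).
    `losses` is assumed to hold the already-computed components: nct, plus
    ud_bi / sd_bi on bilingual dialogues and ud_mono / sd_mono on monolingual ones."""
    lam = step / max_steps   # lambda = n / N grows from 0 to 1 during this stage
    return (losses.nct
            + lam * (alpha2 * losses.ud_bi + beta2 * losses.sd_bi)
            + (1.0 - lam) * (alpha1 * losses.ud_mono + beta1 * losses.sd_mono))
```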
## 4 Experiments
To investigate the effectiveness of our proposed training framework, we conducted experiments on English\(\Leftrightarrow\)German (En\(\Leftrightarrow\)De) and English\(\Leftrightarrow\)Chinese (En\(\Leftrightarrow\)Zh) chat translation datasets.
### _Datasets_
As described in Section 3.2, our proposed training framework consists of three stages, involving the large-scale sentence-level parallel corpus (WMT20), the additional monolingual dialogues (Taskmaster-1) and the annotated bilingual dialogues (BConTrasT and BMELD). Table II lists the statistics of the involved datasets corresponding to different usages and translation directions.
**WMT20.4** This large-scale sentence-level parallel corpus is used at the first and second stages under our framework. For English\(\Leftrightarrow\)German, we combine six corpora: Europarl, ParaCrawl, CommonCrawl, TildeRapid, NewsCommentary, and WikiMatrix. For En\(\Leftrightarrow\)Zh, the corpora we use contain News Commentary v15, Wiki Titles v2, UN Parallel Corpus V1.0, CCMT Corpus, and WikiMatrix. We first filter out duplicate sentence pairs and remove those whose length exceeds 80. Then, we employ a series of open-source/in-house scripts, including full-/half-width conversion, unicode conversion, punctuation normalization, and tokenization [21] to pre-process the raw data. Finally, we apply byte-pair-encoding (BPE) [22] with 32K merge operations to tokenize the sentences into subwords. By doing so, we obtain 45,541,367 sentence pairs for En\(\Leftrightarrow\)De and 22,244,006 sentence pairs for En\(\Leftrightarrow\)Zh, respectively.
Footnote 4: [http://www.statmt.org/wmt20/translation-task.html](http://www.statmt.org/wmt20/translation-task.html)
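As a minimal sketch of the first filtering step described above (duplicate removal and the length-80 cut-off), the raw pairs could be run through a generator like the following; tokenization, normalization and BPE are applied by separate tools and are not shown here.

```python
def filter_parallel_corpus(pairs, max_len=80):
    """Drops duplicate sentence pairs and pairs longer than `max_len` tokens,
    mirroring the first filtering step applied to the WMT20 corpora."""
    seen = set()
    for src, tgt in pairs:
        if len(src.split()) > max_len or len(tgt.split()) > max_len:
            continue
        if (src, tgt) in seen:
            continue
        seen.add((src, tgt))
        yield src, tgt
```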
**Taskmaster-1 [23].5** The dataset [23] consists of English dialogues created via two distinct procedures, either the "Wizard of Oz" (WOz) approach in which trained agents and crowd-sourced workers interact with each other or the "self-dialog" where crowd-sourced workers write the entire dialog themselves. Given these monolingual dialogues in English, we first pre-process them using the same procedures as in WMT20. Then, because we do not have the needed German/Chinese monolingual dialogues in our En\(\Leftrightarrow\)De/En\(\Leftrightarrow\)Zh experiments, we use in-house En\(\Rightarrow\)De and En\(\Rightarrow\)Zh translation models to obtain the German/Chinese translations of those original English monolingual dialogues.
Footnote 5: [https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019)
**BConTrasT [24].6** This dataset is based on the monolingual Taskmaster-1 corpus [23] and is provided by WMT20 Shared Task on Chat Translation [24], containing chats for the English-German language pair. A subset of dialogues in Taskmaster-1 are first automatically translated from English into German and then manually post-edited by native German speakers on Unbabel.7 The conversations in BConTrasT involve two speakers of different languages, where one (customer) speaks in German and the other (agent) responds in English.
Footnote 6: [https://github.com/Unbabel/BConTrasT](https://github.com/Unbabel/BConTrasT)
**BMELD.** It is a recently released English-Chinese bilingual chat translation dataset. Based on the original English dialogues in MELD8 (Multimodal EmotionLines Dataset) [25], the dataset authors first crawl the corresponding Chinese translations from a movie subtitle website9 and then have these crawled translations manually post-edited by native post-graduate Chinese students majoring in English. Finally, following [24], they assume 50% of the utterances are originally spoken by the Chinese speakers to keep data balance for Zh\(\Rightarrow\)En translations and build the bilingual MELD (BMELD). For the Chinese utterances, we follow the authors in segmenting the sentences using the Stanford CoreNLP toolkit.10
Footnote 7: www.unbabel.com
Footnote 8: The MELD is created by enhancing and extending EmotionLines dataset. It contains the same available dialogue instances in EmotionLines while encompassing audio and visual modality along with text.
### _Contrast Models_
We compare the Flat-NCT model trained under our proposed MMT training framework with baseline sentence-level NMT models and several existing context-aware NMT models.
**Sentence-level NMT Models.**
* **Transformer**[3]: The vanilla Transformer model trained on the sentence-level NMT corpus.
* **Transformer+FT**[3]: The vanilla Transformer model that is first pre-trained on the sentence-level NMT corpus and then directly fine-tuned on the bilingual chat translation dataset.
**Context-Aware NMT Models.**
* **Dia-Transformer+FT**[26]: The original model is an RNN-based document-level NMT model with an additional encoder to incorporate the mixed-language dialogue history. We re-implement it based on Transformer, where an additional encoder layer is used
| **Dataset/Split** | Train | Valid | Test |
| --- | --- | --- | --- |
| WMT20 (En\(\Leftrightarrow\)De) | 45,541,367 | - | - |
| WMT20 (En\(\Leftrightarrow\)Zh) | 22,244,006 | - | - |
| Taskmaster-1 (En) | 153,774 | - | - |
| BConTrasT (En\(\Rightarrow\)De) | 7,629 | 1,040 | 1,133 |
| BConTrasT (De\(\Rightarrow\)En) | 6,216 | 862 | 967 |
| BMELD (En\(\Rightarrow\)Zh) | 5,560 | 567 | 1,466 |
| BMELD (Zh\(\Rightarrow\)En) | 4,427 | 517 | 1,135 |

TABLE II: Dataset Statistics
| **Methods** | **En\(\Rightarrow\)De** | **De\(\Rightarrow\)En** | **En\(\Rightarrow\)Zh** | **Zh\(\Rightarrow\)En** |
| --- | --- | --- | --- | --- |
| Transformer (Base) | 39.88 | 40.72 | 32.55 | 24.42 |
| Transformer (Big) | 41.35 | 41.56 | 33.85 | 24.86 |

TABLE III: Model Performance after Sentence-level Pre-training
to incorporate the dialogue history into the NMT model.
* **Gate-Transformer+FT**[27]: A document-aware Transformer model that uses a gate to incorporate the context information.
* **Flat-NCT+FT**: The Flat-NCT model trained through sentence-level pre-training (Section 2.3.1) and context-aware fine-tuning (Section 2.3.2). Note that it is our most closely related baseline.
**Our Model.**
* **Flat-NCT+MMT**: It is the Flat-NCT model trained under our proposed MMT training framework with Eq. 14 used at the third stage, _i.e._, gradually transiting from monolingual to bilingual dialogues.
### _Implementation Details_
We develop our NCT model based on the open-source toolkit THUMT11 [28]. In experiments, we adopt the settings of _Transformer-Base_ and _Transformer-Big_ as in [3]. In _Transformer-Base_, we use 512 as hidden size (_i.e._, \(d\)), 2,048 as filter size and 8 heads in multi-head attention. In _Transformer-Big_, we use 1,024 as hidden size, 4,096 as filter size, and 16 heads in multi-head attention. Both _Transformer-Base_ and _Transformer-Big_ contain \(L{=}6\) encoder layers and the identical number of decoder layers. As for the number of training steps for each stage, following the implementation in [29], we set the training steps of the first and second stages to 200,000 and 5,000, respectively. For the third stage, we conduct trial experiments on the En\(\Rightarrow\)De validation set, where the performance is no longer improved after about 5,000 steps. Therefore, we set the total training steps of the third stage to 5,000 (_i.e._, \(N\)=5,000 in Eq. 14).
Footnote 11: [https://github.com/THUNLP-MT/THUMT](https://github.com/THUNLP-MT/THUMT)
During training, we allocate 4,096 tokens to each NVIDIA Tesla V100 GPU. At the first stage, we use 8 GPUs to pre-train the model in parallel, resulting in 8*4,096 tokens per update. To test the performance of the pre-trained model, we measure its BLEU scores on _newstest2019_. The results are shown in Table III. At the second and third stages, we only use 4 GPUs, resulting in about 4*4,096 tokens per update for all experiments at these two stages. All models are optimized using Adam [30] with the learning rate being 1.0 and label smoothing set to 0.1. The dropout rates for _Transformer-Base_ and _Transformer-Big_ are set to 0.1 and 0.3, respectively. The results are reported with the statistical significance test [31].
### _Effects of Hyper-parameters_
For the Flat-NCT model under our proposed training framework, the context length for \(\mathcal{C}_{\mathbf{x}_{u}}\) and the balancing factors (\(\alpha_{1}\), \(\beta_{1}\), \(\alpha_{2}\) and \(\beta_{2}\), see Eq. 12 and Eq. 14) of the auxiliary tasks are the hyper-parameters we need to manually tune.
#### 4.4.1 Context Length
In practice, for each \(\mathbf{x}_{u}\), the NCT model only takes a fixed number of preceding utterances as its dialogue history context \(\mathcal{C}_{\mathbf{x}_{u}}\). We investigate the effect of context length using the Flat-NCT+FT model with the _Transformer-Base_ setting. Fig. 4 shows that the model achieves the best performance on the En\(\Rightarrow\)De validation set when the number of preceding source utterances used as dialogue history context is set to 3. However, taking in more preceding utterances not only increases computational costs but also adversely affects the performance. The underlying reason is that distant dialogue utterances usually have a low correlation with the current utterance and are likely to bring harmful noise. Therefore, we set the context length to 3 in all subsequent experiments.
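In code, fixing the context length simply means truncating the dialogue history before building the model input \([\mathcal{C}_{\mathbf{x}_{u}};\mathbf{x}_{u}]\); the sketch below uses a hypothetical list-of-strings representation of the source-side utterances.

```python
def build_nct_input(src_utterances, context_length=3):
    """Keeps only the last `context_length` preceding utterances as the dialogue
    history before concatenating it with the current utterance, i.e. [C_{x_u}; x_u]."""
    *history, current = src_utterances    # history = x_1 ... x_{u-1}, current = x_u
    context = history[-context_length:]    # at most 3 preceding utterances
    return context + [current]
```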
#### 4.4.2 Balancing Factors of Auxiliary Tasks
To determine the best balancing factors (\(\alpha_{1}\),\(\beta_{1}\),\(\alpha_{2}\),\(\beta_{2}\)) of auxiliary tasks, we evaluate the model performance on corresponding validation sets using the grid-search strategy. First, at the second training stage, we vary \(\alpha_{1}\) and \(\beta_{1}\) from 0 to 1.0 with the interval 0.1. Then, at the third training stage, given the selected \(\alpha_{1}\) and \(\beta_{1}\), we also search \(\alpha_{2}\) and \(\beta_{2}\) by drawing values from 0 to 1.0 with the interval 0.1. Finally, we obtain the sets of determined balancing factors for different translation directions (En\(\Rightarrow\)De, De\(\Rightarrow\)En, En\(\Rightarrow\)Zh and Zh\(\Rightarrow\)En), as listed in Table IV.
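The grid search itself is straightforward; the sketch below enumerates one pair of factors with interval 0.1 and would be run once per stage (first for \(\alpha_{1},\beta_{1}\), then for \(\alpha_{2},\beta_{2}\) with the stage-2 values fixed), assuming an `evaluate` callable that returns the validation BLEU score.

```python
import itertools

def search_balancing_factors(evaluate):
    """Grid search over one pair of balancing factors with interval 0.1;
    `evaluate(alpha, beta)` is assumed to return the validation BLEU score."""
    values = [round(0.1 * i, 1) for i in range(11)]   # 0.0, 0.1, ..., 1.0
    return max(itertools.product(values, values), key=lambda ab: evaluate(*ab))
```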
### _Overall Performance_
In Table V, we report the experimental results on En\(\Leftrightarrow\)De and En\(\Leftrightarrow\)Zh using _Transformer-Base_ and _Transformer-Big_ settings.
#### 4.5.1 Sentence-level Models v.s. Context-aware Models
From Table V, in terms of both BLEU and TER, we can observe that the sentence-level model "Transformer+FT" achieves comparable or even better results compared with those existing context-aware models
| | \(\alpha_{1}\) | \(\beta_{1}\) | \(\alpha_{2}\) | \(\beta_{2}\) |
| --- | --- | --- | --- | --- |
| **En\(\Rightarrow\)De** | 1.0 | 0.2 | 0.2 | 0.6 |
| **De\(\Rightarrow\)En** | 0.8 | 0.1 | 0.7 | 0.7 |
| **En\(\Rightarrow\)Zh** | 0.5 | 0.1 | 0.5 | 0.1 |
| **Zh\(\Rightarrow\)En** | 0.5 | 0.3 | 0.8 | 0.3 |

TABLE IV: Balancing Factor Determination
Fig. 4: The effect of the context length for \(\mathcal{C}_{\mathbf{x}_{u}}\). The BLEU scores of the Flat-NCT+FT model on the En\(\Rightarrow\)De validation set (under the _Transformer-Base_ setting).
('Dia-Transformer+FT", "Gate-Transformer+FT" and "Flat-NCT+FT") which are originally proposed for document-level translation. This suggests that if conventional approaches of exploiting context are not well adapted to the chat scenario, the NCT model would be negatively affected. This may be because when the size of training data for chat translation is extremely small, the NCT model is insufficiently trained and its poor use of dialogue history context adversely brings harmful noise.
#### 4.5.2 Results on En\(\Leftrightarrow\)De and En\(\Leftrightarrow\)Zh
#### 4.6.1 Effects of Monolingual Dialogues
In our proposed training framework, we use both source- and target-language additional monolingual dialogues (\(\overline{X}\) and \(\overline{Y}\)) at the second and third stages.
First, we investigate the effect of monolingual dialogues on the En\(\Leftrightarrow\)De validation set by partially removing different groups of them. From Table VI, according to training stages, we can observe that the removal of monolingual dialogues at either the second or the third stage results in performance drops (Rows 1 and 2). This indicates that the additional monolingual dialogues benefit the NCT model at both training stages. Next, according to languages, when we totally remove either the source-language or the target-language monolingual dialogues at the two stages, the model performance also declines (Rows 3 and 4). These two results show that both the source- and target-language monolingual dialogues have positive effects during training. Lastly, if no monolingual data is used during the whole training process, the performance degrades more drastically (Row 5), echoing the aforementioned findings.
Then, we investigate how the amount of additional monolingual dialogues affects the NCT model. Fig. 5 illustrates the model performance with different proportions (100%, 50%, 10% and 0%) of used monolingual dialogues. The results show that the performance of the NCT model consistently declines with fewer monolingual dialogues used in our proposed training framework. All these results demonstrate the effectiveness and necessity of using relatively abundant monolingual dialogues in our framework.
#### 4.6.2 Effects of Auxiliary Tasks
The two auxiliary tasks (UD and SD) play an important role in our proposed training framework. Therefore, we investigate their effects by ablating them with different settings. Table VII lists the results on the validation set of BConTrasT (En\(\Leftrightarrow\)De) with ablations of UD/SD tasks.
First, we successively exclude the objectives of UD/SD task on monolingual dialogues from the MMT training of our NCT model. When only one of \(\mathcal{L}_{ud}^{\overline{X}}\), \(\mathcal{L}_{ud}^{\overline{Y}}\), \(\mathcal{L}_{sd}^{\overline{X}}\) and \(\mathcal{L}_{sd}^{\overline{Y}}\) is excluded, the performance drops (Rows 1 and 2) compared to "Flat-NCT+MMT" (Row 0). Moreover, if we exclude the UD or SD task on both source- and target-language monolingual dialogues at a time, the NCT model mostly performs worse than the above results (_i.e._, Row 3 v.s. Rows 0,1,2). It is also notable that the ablations of UD/SD tasks have a greater influence on En\(\Rightarrow\)De direction than on De\(\Rightarrow\)En. We conjecture that German monolingual dialogues are manually translated from English by in-house sentence-level NMT models, losing their original conversational properties to some extent. Thus, the two dialogue-related auxiliary tasks bring smaller improvements in the process of MMT training. These results show both UD and SD tasks on source- and target-language monolingual dialogues bring improvements, indicating that the preliminary capability of
| # | **Models (Base)** | **En\(\Rightarrow\)De** BLEU\(\uparrow\) | TER\(\downarrow\) | **De\(\Rightarrow\)En** BLEU\(\uparrow\) | TER\(\downarrow\) | **Models (Base)** | **En\(\Rightarrow\)De** BLEU\(\uparrow\) | TER\(\downarrow\) | **De\(\Rightarrow\)En** BLEU\(\uparrow\) | TER\(\downarrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | Flat-NCT+MMT | **60.86** | **24.6** | **60.94** | **25.3** | Flat-NCT+MMT | **60.86** | **24.6** | **60.94** | **25.3** |
| 1 | w/o. \(\mathcal{L}_{ud}^{\overline{X}}\) | 60.80 | 24.7 | 60.72 | 25.7 | w/o. \(\mathcal{L}_{sd}^{\overline{X}}\) | 60.51 | 25.0 | 60.43 | 26.1 |
| 2 | w/o. \(\mathcal{L}_{ud}^{\overline{Y}}\) | 60.47 | 24.9 | 60.43 | 26.1 | w/o. \(\mathcal{L}_{sd}^{\overline{Y}}\) | 60.29 | 24.7 | 60.83 | 25.6 |
| 3 | w/o. \(\mathcal{L}_{ud}^{\overline{X}}\), \(\mathcal{L}_{ud}^{\overline{Y}}\) | 59.96 | 25.3 | 60.41 | 25.9 | w/o. \(\mathcal{L}_{sd}^{\overline{X}}\), \(\mathcal{L}_{sd}^{\overline{Y}}\) | 60.13 | 25.0 | 60.66 | 25.6 |
| 4 | w/o. \(\mathcal{L}_{ud}^{X}\) | 60.43 | 24.9 | 60.20 | 26.1 | w/o. \(\mathcal{L}_{sd}^{X}\) | 60.36 | 25.2 | 60.76 | 26.0 |
| 5 | w/o. \(\mathcal{L}_{ud}^{Y}\) | 60.25 | 24.8 | 60.56 | 25.5 | w/o. \(\mathcal{L}_{sd}^{Y}\) | 60.22 | 25.0 | 60.47 | 26.0 |
| 6 | w/o. \(\mathcal{L}_{ud}^{X}\), \(\mathcal{L}_{ud}^{Y}\) | 59.89 | 25.1 | 60.25 | 25.7 | w/o. \(\mathcal{L}_{sd}^{X}\), \(\mathcal{L}_{sd}^{Y}\) | 60.27 | 25.3 | 60.56 | 25.3 |
| 7 | w/o. \(\mathcal{L}_{ud}^{\overline{X}}\), \(\mathcal{L}_{ud}^{\overline{Y}}\), \(\mathcal{L}_{ud}^{X}\), \(\mathcal{L}_{ud}^{Y}\) | 59.86 | 25.3 | 60.04 | 26.0 | w/o. \(\mathcal{L}_{sd}^{\overline{X}}\), \(\mathcal{L}_{sd}^{\overline{Y}}\), \(\mathcal{L}_{sd}^{X}\), \(\mathcal{L}_{sd}^{Y}\) | 59.97 | 25.5 | 60.39 | 25.9 |
| 8 | w/o. any UD/SD task | 59.79 | 25.5 | 59.97 | 26.5 | w/o. any UD/SD task | 59.79 | 25.5 | 59.97 | 26.5 |

Results (BLEU\(\uparrow\)/TER\(\downarrow\)) on the validation set of BConTrasT (En\(\Leftrightarrow\)De) with ablations of UD/SD tasks. The left half lists ablation results of the UD task while the right half lists those of the SD task. "w/o.": the specified training objectives are ablated in our proposed training framework. For instance, "w/o. \(\mathcal{L}_{ud}^{\overline{X}}\)" means the objective of the UD task \(\mathcal{L}_{ud}^{\overline{X}}\) on source-language monolingual dialogues \(\overline{X}\) is ablated in Eq. 12 and Eq. 14 at the second and third training stages. The last row (Row 8) corresponds to the setting in which all training objectives of the auxiliary tasks are ablated, _i.e._, w/o. \(\mathcal{L}_{ud}^{\overline{X}}\), \(\mathcal{L}_{ud}^{\overline{Y}}\), \(\mathcal{L}_{ud}^{X}\), \(\mathcal{L}_{ud}^{Y}\), \(\mathcal{L}_{sd}^{\overline{X}}\), \(\mathcal{L}_{sd}^{\overline{Y}}\), \(\mathcal{L}_{sd}^{X}\), \(\mathcal{L}_{sd}^{Y}\).

TABLE VII: Performance with Ablations of UD/SD Tasks
Fig. 5: Results (Left: BLEU\(\uparrow\) / Right: TER\(\downarrow\)) on the validation set of BConTrasT (En\(\Leftrightarrow\)De) using different proportions of used monolingual dialogues (under the _Transformer-Base_ setting).
capturing dialogue context acquired from additional monolingual dialogues actually enhances the NCT model.
Then, we turn to successively exclude the objectives of the UD/SD tasks on bilingual dialogues. We can draw the similar conclusion that excluding \(\mathcal{L}^{X}_{ud}\), \(\mathcal{L}^{Y}_{ud}\), \(\mathcal{L}^{X}_{sd}\) and \(\mathcal{L}^{Y}_{sd}\) leads to performance declines (_i.e._, Row 0 v.s. Rows 4,5,6). Similarly, the two auxiliary tasks on source- and target-language bilingual dialogues have greater effects in most cases on the En\(\Rightarrow\)De direction than on De\(\Rightarrow\)En, supporting the above-mentioned conjecture again.
Lastly, we completely ablate either the UD or SD task from the MMT training. We can observe that the performance drops more severely (Row 7). Moreover, if we totally remove all auxiliary objectives of the UD and SD tasks, the training of our NCT model degenerates into the conventional two-stage training, thus obtaining the worst performance (Row 8). These ablation results with different settings strongly confirm that the two auxiliary tasks have considerable effects during the MMT training by incorporating the modelling of conversational properties into our NCT model.
#### 4.6.3 Effects of Pseudo/Authentic Monolingual Dialogues
In our previous experiments, since most German and Chinese dialogue datasets do not contain annotated speaker labels, they are not suitable for our Flat-NCT model to accomplish SD task. Therefore, we use in-house NMT models to obtain pseudo German/Chinese monolingual dialogues from authentic English Taskmaster-1 dataset that has available speaker labels. To investigate how the authenticity of monolingual dialogues would affect our proposed training framework, we turn to use totally authentic monolingual dialogues.
Specifically, besides the authentic English Taskmaster-1 dataset, we introduce the authentic Chinese dialogues from the recently-released MSCTD dataset [32].12 When using the MSCTD dataset, as it still has no speaker labels for the SD task, we only include the UD task, _i.e._, excluding \(\mathcal{L}_{sd}^{\overline{X}}\), \(\mathcal{L}_{sd}^{\overline{Y}}\), \(\mathcal{L}_{sd}^{X}\) and \(\mathcal{L}_{sd}^{Y}\) from MMT training, which is denoted as "Flat-NCT+MMT(Authentic) w/o. SD". Table VIII gives its comparison with the model using pseudo Chinese monolingual dialogues, _i.e._, "Flat-NCT+MMT(Pseudo) w/o. SD". From the table, we can see that "Flat-NCT+MMT(Authentic) w/o. SD" outperforms "Flat-NCT+MMT(Pseudo) w/o. SD" under both the _Transformer-Base_ and _Transformer-Big_ settings. This shows authentic monolingual dialogues are indeed more beneficial to the NCT model, indicating that our MMT training framework has the potential to further boost model performance if there are suitable monolingual dialogue datasets with speaker labels in both the source and target languages.
Footnote 12: MSCTD dataset has a total of 132,741 Chinese utterances.
#### 4.6.4 Effects of BT-augmented Chat Translation Corpus
Instead of just being used for the auxiliary tasks, the additional monolingual dialogues can be alternatively used to augment the bilingual chat translation dataset \(D_{bct}\) for the context-aware fine-tuning of all contrast models and ours. To further validate the effectiveness of our proposed training framework, we make comparisons between MMT training and conventional two-stage pretrain-finetune paradigm using BT-augmented bilingual chat translation dataset.
Concretely, as a common technique, we employ back-translation to augment the original dataset \(D_{bct}\) to \(D^{\prime}_{bct}\). For En\(\Rightarrow\)Zh, the target-side additional Chinese dialogues from the MSCTD dataset are translated into English. Conversely, for Zh\(\Rightarrow\)En, the target-side additional English dialogues from the Taskmaster-1 dataset are translated into Chinese. Due to the lack of speaker labels in the MSCTD dataset, we also exclude all SD objectives in MMT training and compare "Flat-NCT+MMT(\(D^{\prime}_{bct}\)) w/o. SD" with the sentence-level "Transformer+FT(\(D^{\prime}_{bct}\))" and "Gate-Transformer+FT(\(D^{\prime}_{bct}\))".13 From Table IX, we can observe that "Flat-NCT+MMT(\(D^{\prime}_{bct}\)) w/o. SD" outperforms "Transformer+FT(\(D^{\prime}_{bct}\))" and "Gate-Transformer+FT(\(D^{\prime}_{bct}\))" under both the _Transformer-Base_ and _Transformer-Big_ settings, indicating that our MMT training framework can still take notable effects when the bilingual
| **Models (Base)** | **En\(\Rightarrow\)Zh** BLEU\(\uparrow\) | TER\(\downarrow\) | **Zh\(\Rightarrow\)En** BLEU\(\uparrow\) | TER\(\downarrow\) |
| --- | --- | --- | --- | --- |
| Flat-NCT+MMT(Pseudo) w/o. SD | 27.35 | 60.6 | 22.12 | 56.4 |
| Flat-NCT+MMT(Authentic) w/o. SD | 27.80 | 59.7 | 22.82 | 55.8 |
| **Models (Big)** | | | | |
| Flat-NCT+MMT(Pseudo) w/o. SD | 28.31 | 59.7 | 22.87 | 55.3 |
| Flat-NCT+MMT(Authentic) w/o. SD | 28.55 | 59.0 | 23.36 | 54.0 |

Results on the test set of BMELD (En\(\Leftrightarrow\)Zh) in terms of BLEU (\(\%\)) and TER (\(\%\)). "Flat-NCT+MMT(Pseudo) w/o. SD" and "Flat-NCT+MMT(Authentic) w/o. SD" represent training the Flat-NCT model through the MMT framework without any SD objective, using pseudo and authentic Chinese monolingual dialogues, respectively.

TABLE VIII: Performance with Pseudo/Authentic Monolingual Dialogues

| **Models (Base)** | **En\(\Rightarrow\)Zh** BLEU\(\uparrow\) | TER\(\downarrow\) | **Zh\(\Rightarrow\)En** BLEU\(\uparrow\) | TER\(\downarrow\) |
| --- | --- | --- | --- | --- |
| Transformer + FT(\(D^{\prime}_{bct}\)) | 26.04 | 61.7 | 21.77 | 56.2 |
| Gate-Transformer + FT(\(D^{\prime}_{bct}\)) | 26.36 | 61.2 | 21.61 | 55.8 |
| Flat-NCT+MMT(\(D^{\prime}_{bct}\)) w/o. SD | 28.15 | 59.6 | 22.44 | 55.6 |
| **Models (Big)** | | | | |
| Transformer + FT(\(D^{\prime}_{bct}\)) | 27.29 | 60.3 | 22.38 | 55.9 |
| Gate-Transformer + FT(\(D^{\prime}_{bct}\)) | 27.65 | 59.9 | 22.45 | 55.6 |
| Flat-NCT+MMT(\(D^{\prime}_{bct}\)) w/o. SD | 28.81 | 58.7 | 23.17 | 55.1 |

Results on the test set of BMELD (En\(\Leftrightarrow\)Zh) in terms of BLEU (\(\%\)) and TER (\(\%\)). "Transformer+FT(\(D^{\prime}_{bct}\))" and "Gate-Transformer+FT(\(D^{\prime}_{bct}\))" represent using the BT-augmented dataset \(D^{\prime}_{bct}\) to fine-tune the Transformer and Gate-Transformer models; "Flat-NCT+MMT(\(D^{\prime}_{bct}\)) w/o. SD" represents using \(D^{\prime}_{bct}\) to train the Flat-NCT model through the MMT training framework without any SD objective.

TABLE IX: Performance with BT-augmented Chat Translation Corpus
chat translation corpus for context-aware fine-tuning is adequately augmented.
#### 4.6.5 Effects of Gradual Transition Strategy
At the third stage of our proposed framework, the Flat-NCT model is trained through Eq. 14, _i.e._, gradually transiting from using monolingual to bilingual dialogues. This strategy makes the transition from the second to the third stage smoother, which further alleviates the training discrepancy described in Section 3.2.3.
To investigate its effectiveness, we also train the NCT model through Eq. 13, _i.e._, without the strategy of gradual transition. As shown in Table X, under the _Transformer-Big_ setting, the performance of "Flat-NCT+MMT w/o. GT" is significantly worse than that of "Flat-NCT+MMT" across all translation directions. These results indicate that the gradual transition strategy makes better use of the additional monolingual dialogues, benefiting the training of our NCT model.
### _Evaluation of Translation Quality_
To further verify the benefits of our proposed training framework, we assess the quality of translations generated by different NCT models using automatic and human evaluations.
#### 4.7.1 Automatic Evaluation of Dialogue Coherence
Following [18, 33], we use the cosine similarity between each translated utterance \(\mathbf{x}_{u}\) and its corresponding dialogue context \(\mathcal{C}_{\mathbf{x}_{u}}\) to automatically measure dialogue coherence, which is defined as
\[sim(\mathbf{x}_{u},\mathcal{C}_{\mathbf{x}_{u}})=\cos\_\text{sim}(f(\mathbf{x }_{u}),f(\mathcal{C}_{\mathbf{x}_{u}})),\]
where \(f(\cdot)\) denotes the sequence representation obtained by averaging the word vectors of its included tokens. We use Word2Vec [34] trained on Taskmaster-1 to obtain the distributed word vectors, whose dimension is set to 100.
Footnote 14: https://code.google.com/archive/p/word2vec/
Footnote 15: The English utterances in BConTrasT come from Taskmaster-1.
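For illustration, this coherence metric can be reproduced with a few lines of Python once word vectors are available. The sketch below is not the original implementation; it assumes `word_vectors` is a token-to-vector mapping (e.g., a gensim `KeyedVectors` model trained on Taskmaster-1 with 100-dimensional vectors) and that \(f(\cdot)\) is the plain average of the in-vocabulary token vectors, as described above.

```python
import numpy as np

def sequence_vector(tokens, word_vectors, dim=100):
    """f(.): average the word vectors of the tokens in a sequence."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def coherence(utterance_tokens, context_tokens, word_vectors):
    """Cosine similarity between an utterance and its dialogue context representation."""
    u = sequence_vector(utterance_tokens, word_vectors)
    c = sequence_vector(context_tokens, word_vectors)
    denom = np.linalg.norm(u) * np.linalg.norm(c)
    return float(u @ c / denom) if denom > 0 else 0.0
```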
Table XI shows the measured coherence of translated utterances with their corresponding dialogue context on the De\(\Rightarrow\)En test set of BConTrasT. It shows that our "Flat-NCT+MMT" produces more coherent translations compared to other contrast models (significance test, \(p<0.01\)).
#### 4.7.2 Human Evaluation
Table XII lists the results of human evaluation on the test set of BMELD (Zh\(\Rightarrow\)En). Following [24, 35], we conduct evaluations using three criteria: 1) **Dialogue Coherence (DC.)** measures whether the translation is semantically coherent with the dialogue history context in a chat; 2) **Speaker Consistency (SC.)** evaluates whether the translation preserves the characteristic of its original speaker; 3) **Fluency (Flu.)** measures whether the translation is fluent and grammatically correct.
First, we randomly sample 200 dialogues from the test set of BMELD in Zh\(\Rightarrow\)En direction. Then, we use each of the models in Table XII to generate the translations of these sampled dialogues. Finally, we assign these translated utterances and their corresponding dialogues in the target language to three postgraduate evaluators who are native Chinese speakers majoring in English with qualified certificates, and ask them to assess the translations according to the above three criteria.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{**Models (Big)**} & \multicolumn{2}{c|}{**En\(\Rightarrow\)De**} & \multicolumn{2}{c}{**De\(\Rightarrow\)En**} \\ & BLEU\(\uparrow\) & TER\(\downarrow\) & BLEU\(\uparrow\) & TER\(\downarrow\) \\ \hline Flat-NCT+MMT & 60.11 & 25.8 & 61.04 & 25.0 \\ Flat-NCT+MMT w/o. GT & 59.62 & 26.2 & 60.76 & 25.2 \\ \hline \multirow{2}{*}{**Models (Big)**} & \multicolumn{2}{c|}{**En\(\Rightarrow\)Zh**} & \multicolumn{2}{c}{**Zh\(\Rightarrow\)En**} \\ & BLEU\(\uparrow\) & TER\(\downarrow\) & BLEU\(\uparrow\) & TER\(\downarrow\) \\ \hline Flat-NCT+MMT & 28.62 & 59.6 & 23.08 & 54.9 \\ Flat-NCT+MMT w/o. GT & 28.18 & 59.8 & 22.50 & 55.9 \\ \hline \hline \end{tabular} Results on the test sets of BConTrasT (En\(\Rightarrow\)De) and BMELD (En\(\Rightarrow\)Zh) in terms of BLEU (\(\%\)) and TER (\(\%\)). “Flat-NCT+MMT”: the Flat-NCT model trained using the gradual transition strategy from monolingual to bilingual dialogues (Eq. 14). “Flat-NCT+MMT w/o. GT”: the Flat-NCT model trained without using the gradual transition strategy (Eq. 13).
\end{table} TABLE X: Performance with/without Gradual Transition Strategy
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Models (Base)** & **DC.** & **SC.** & **Flu.** \\ \hline Transformer & 0.540 & 0.485 & 0.590 \\ Transformer+FT & 0.590 & 0.530 & 0.635 \\ Dia-Transformer+FT & 0.580 & 0.525 & 0.625 \\ Gate-Transformer+FT & 0.605 & 0.540 & 0.635 \\ Flat-NCT+FT & 0.595 & 0.525 & 0.630 \\ Flat-NCT+MMT & **0.640** & **0.570** & **0.665** \\ \hline \hline \end{tabular} Results on the test set of BMELD (Zh\(\Rightarrow\)En) under the _Transformer-Base_ setting. “**DC.**”: Dialogue Coherence. “**SC.**”: Speaker Consistency. “**Flu.**”: Fluency. The values for these three criteria range from 0 to 1.
\end{table} TABLE XII: Human Evaluation
The results in Table XII show that the translations generated by our model (“Flat-NCT+MMT”) are more coherent with the corresponding dialogue context, better preserve the characteristics of the original speakers, and are more fluent, indicating the superiority of our model. The inter-annotator agreements calculated by Fleiss' kappa [36] are 0.535, 0.507, and 0.548 for DC., SC., and Flu., respectively.
#### 4.7.3 Case Study
In Fig. 6, we present illustrative case examples from the test set of BMELD (En\(\Rightarrow\)Zh) to compare the translations generated by different models.
**Dialogue Coherence.** In the first example of Fig. 6, all contrast models translate the word "_game_" into its surface meaning "_you xi_" in Chinese. However, considering that the word "_antique_" in the dialogue history generally refers to physical assets rather than virtual objects, what the speaker \(s1\) really means is "_you xi ji_" ("_arcade game machine_") as in the reference, which is correctly translated by our "Flat-NCT+MMT" model. From the second example, we find that the translations generated by all contrast models neglect the crucial item "_boat_" ("_chuan_") in the dialogue. On the contrary, our model "Flat-NCT+MMT" successfully generates the translation of "_boat_", which only exists in the dialogue history context but not in the current utterance, making the whole translated utterance more coherent with the whole dialogue.
For the above two examples, the underlying reason for our model to generate more coherent translations is that the UD task in our proposed training framework introduces the modelling of dialogue coherence into the NCT model.
**Speaker Characteristic.** We also observe that the translation generated by our model "Flat-NCT+MMT" better preserves the characteristic of its original speaker. Specifically, in the second example of Fig. 6, the speaker \(s1\) is highly excited and obviously speaking in a tone of showing off. Consequently, our model converts the translation of the second "_What?_" from its Chinese surface meaning "_shen me?_" into a more speaker-consistent Chinese expression "_bu xin?_" (which actually means "_don't you believe?_"), which makes the translated utterance more vivid and closer to the reference. This may be credited to the SD task that introduces the modelling of speaker characteristic into the NCT model during training.
The above case examples indicate that our proposed training framework makes the NCT model more capable of capturing important conversational properties of dialogue context, showing its superiority over other contrast models.
## 5 Related Work
The most related work to ours include the studies of neural chat translation and context-aware NMT, which will be described in the following subsections.
### _Neural Chat Translation_
Due to the lack of publicly available annotated bilingual dialogues, there are only a few relevant studies on this task. To address the data scarcity issue, some studies [37, 26, 38] design methods to automatically construct subtitle corpora that may contain low-quality bilingual dialogue utterances. Recently, Farajian et al. [24] organized the WMT20 shared task on chat translation and first provided a chat corpus post-edited by human annotators. In the competition, the submitted NCT systems [39, 21, 40] were trained with typical engineering techniques such as ensembling for higher performance. All these systems adhere to the conventional two-stage pretrain-finetune paradigm, mainly fine-tuning existing models or using large pre-trained language models such as BERT [15]. During pre-training on the large-scale parallel corpus, they either use all the available data or adopt data selection methods to select more in-domain data for training. More recently, Wang et al. [41] proposed to utilize context to translate dialogue utterances while jointly identifying omissions and typos in the process of translating. Different from these works, our proposed framework focuses on utilizing additional monolingual dialogues and introducing an intermediate stage to alleviate the training discrepancy.
### _Context-aware NMT_
In a sense, NCT can be viewed as a special case of context-aware NMT, which has recently attracted much attention [4, 14, 27, 42, 43, 44, 45, 46, 47]. Typically, dominant approaches extend conventional NMT models by incorporating cross-sentence global context, and they can be roughly classified into two common categories: 1) concatenating the context and the current sentence to construct context-aware inputs [4, 14, 44]; 2) using additional modules or modifying model architectures to encode context sentences [9, 27, 42, 43, 45]. Besides, Kang et al. [46] considered the relevance of context sentences to the source sentence in document-level NMT and proposed to dynamically select relevant contextual sentences for each source sentence via reinforcement learning. Although these context-aware NMT models can be directly applied to the scenario of chat translation, they cannot overcome the previously-mentioned limitations of NCT models.
Apart from improving context-aware NMT models, some studies [10, 47] investigated the effect of context in the process of translation. Voita et al. [10] were concerned with the issue that the plausible translations of isolated sentences produced by context-agnostic NMT systems often end up being inconsistent with each other in a document. They investigated various linguistic phenomena and identified deixis, ellipsis, and lexical cohesion as three main sources of inconsistency. Li et al. [47] looked into how contexts bring improvements to conventional document-level multi-encoder NMT models. They found that the context encoder behaves as a noise generator and improves NMT models through robust training, especially when the training data is small.
Not only are these findings suitable for context-aware NMT models in document translation, but they also inspire follow-up research on NCT to explore better ways of utilizing dialogue contexts, such as explicitly modelling the conversational properties of utterances.
## 6 Conclusion
In this paper, we have proposed a multi-task multi-stage transitional training framework for neural chat translation, where an NCT model is trained using the bilingual chat translation dataset and additional monolingual dialogues. Particularly, we design UD and SD tasks to incorporate the modelling of dialogue coherence and speaker characteristic into the NCT model, respectively. Moreover, our proposed training framework consists of three stages: 1) sentence-level pre-training on a large-scale parallel corpus; 2) intermediate training with auxiliary tasks using additional monolingual dialogues; 3) context-aware fine-tuning with gradual transition. Experimental results and in-depth analysis demonstrate the effectiveness of our proposed training framework.
## Acknowledgments
The project was supported by National Natural Science Foundation of China (No. 62036004, No. 61672440), Natural Science Foundation of Fujian Province of China (No. 2020J06001), and Youth Innovation Fund of Xiamen (No. 3502Z20206059). We also thank the reviewers for their insightful comments. Work done while Chulun Zhou was an intern at Pattern Recognition Center, WeChat AI, Tencent Inc., Beijing, China.
|
2304.03593 | Deep Reinforcement Learning-Based Mapless Crowd Navigation with
Perceived Risk of the Moving Crowd for Mobile Robots | Current state-of-the-art crowd navigation approaches are mainly deep
reinforcement learning (DRL)-based. However, DRL-based methods suffer from the
issues of generalization and scalability. To overcome these challenges, we
propose a method that includes a Collision Probability (CP) in the observation
space to give the robot a sense of the level of danger of the moving crowd to
help the robot navigate safely through crowds with unseen behaviors. We studied
the effects of changing the number of moving obstacles to pay attention during
navigation. During training, we generated local waypoints to increase the
reward density and improve the learning efficiency of the system. Our approach
was developed using deep reinforcement learning (DRL) and trained using the
Gazebo simulator in a non-cooperative crowd environment with obstacles moving
at randomized speeds and directions. We then evaluated our model on four
different crowd-behavior scenarios. The results show that our method achieved a
100% success rate in all test settings. We compared our approach with a current
state-of-the-art DRL-based approach, and our approach has performed
significantly better, especially in terms of social safety. Importantly, our
method can navigate in different crowd behaviors and requires no fine-tuning
after being trained once. We further demonstrated the crowd navigation
capability of our model in real-world tests. | Hafiq Anas, Ong Wee Hong, Owais Ahmed Malik | 2023-04-07T11:29:59Z | http://arxiv.org/abs/2304.03593v2 | Deep Reinforcement Learning-Based Mapless Crowd Navigation with Perceived Risk of the Moving Crowd for Mobile Robots
###### Abstract
Classical map-based navigation methods are commonly used for robot navigation, but they often struggle in crowded environments due to the Frozen Robot Problem (FRP). Deep reinforcement learning-based methods address the FRP problem; however, they suffer from the issues of generalization and scalability. To overcome these challenges, we propose a method that uses Collision Probability (CP) to help the robot navigate safely through crowds. The inclusion of CP in the observation space gives the robot a sense of the level of danger of the moving crowd. The robot will navigate through the crowd when it appears safe but will take a detour when the crowd is moving aggressively. By focusing on the most dangerous obstacle, the robot will not be confused when the crowd density is high, ensuring scalability of the model. Our approach was developed using deep reinforcement learning (DRL) and trained using the Gazebo simulator in a non-cooperative crowd environment with obstacles moving at randomized speeds and directions. We then evaluated our model on four different crowd-behavior scenarios with varying densities of crowds. The results show that our method achieved a 100% success rate in all test settings. We compared our approach with a current state-of-the-art DRL-based approach, and our approach has performed significantly better. Importantly, our method is highly generalizable and requires no fine-tuning after being trained once. We further demonstrated the crowd navigation capability of our model in real-world tests.
## I Introduction
Autonomous mobile robots are increasingly being deployed in human living spaces to provide services such as serving, delivering, and guiding. In these situations, robots must navigate crowded environments with moving humans at varying speeds. This is known as crowd navigation, or social navigation in some cases, where the objective is to navigate between two arbitrary locations in a dynamic environment with many people [1].
The classical navigation approach of using global and local planners has been proven effective for complex environments with both static and dynamic obstacles. A global planner determines the slow and long-term path trajectory to the goal location that the robot should follow, while a local planner determines the fast and short-term path trajectory to avoid obstacles. Popular local planner approaches such as Artificial Potential Fields [2] and Dynamic-Window Approaches [3] made use of an internal map that stores rich environmental information including the position of static obstacles such as walls and dynamic obstacles such as humans. However, these approaches rely heavily on map information which often causes the Frozen Robot Problem (FRP) [4] in crowded environments, where the robot becomes stuck and unable to move in the presence of dynamic obstacles. To address the limitations of classical planning approaches in crowded environments, recent research works have focused on deep reinforcement learning (DRL) methods. In DRL-based methods, the navigation control strategy is learned by optimizing the parameters that map sensor inputs to velocity commands when the robot is interacting with the environment without the need of a map. Many recent DRL-based works are mapless and have empirical evidence that demonstrates the capability of DRL-based navigation with 2D laser scans for crowd navigation. By removing the reliance on maps, DRL-based methods can help mitigate the FRP problem. DRL-based approaches are however prone to the problem of performing poorly in new unseen scenarios, i.e. the generalization issue. Autonomous mobile robots currently deployed in real-world applications remain dependent on cooperation from the crowd (humans) during navigation.
In this paper, we propose a DRL-based approach using 2D laser scans to remove map reliance to solve the FRP problem in crowd navigation as well as improve the generalization and scalability of the learned model. Our approach differs from the classical approach in that it solves both global and local navigation problems collectively by designing the observation space to incorporate relative goal information and risk perception of the moving obstacles in the crowd. We evaluated our approach in different crowd behavior settings with varying numbers of obstacles at different speeds to evaluate its generalization capability, and compared our results with the results of a recent state-of-the-art DRL-based approach [5] under the same set of test conditions. The main contributions of this work are the ideas of including the risk perception in the observation space to ensure generalization, and focusing on the most dangerous obstacle to ensure scalability of the model. We have also verified the ideas through successful implementations in both simulation and real-world settings.
## II Related Works
### _Frozen Robot Problem_
The Frozen Robot Problem [4] is a prevalent issue in crowd navigation using map-based approaches from classical methods [2][3]. These methods rely heavily on the accuracy of the map to compute a collision-free trajectory based on the relative position of detected obstacles from the robot's velocity and position. However, the introduction of moving obstacles in crowded environments constantly alters the perceived free space on the map, requiring the
planner to frequently replan new navigation routes. As crowd density increases, the available free space to generate a safe navigation path decreases, leading to the robot getting stuck in an endless loop of replanning and causing the Frozen Robot Problem.
### _DRL-based Mapless Navigation with 2D Laser Scans_
Deep reinforcement learning (DRL) based mapless approaches in navigation have shown potential in mitigating the Frozen Robot Problem. Looking at the past approaches [5][6][7][8] which are mapless, we have observed that they have used 2D laser scans with different training methods to address different purposes. 2D laser scans have been shown to provide sufficient spatial information to learn useful policies in DRL-based approaches. For example, Tai et al. [6] trained a robot using 10 sparse laser scans and the relative target goal position in a simulated static environment. The trained model was then tested in the real world and demonstrated robustness in crowd-less indoor navigation. They compared their mapless approach with a map-based method called Move Base and showed that their approach was collision-free. Similarly, Zhelo et al. [7] used 72 laser scans and the relative distance and orientation of the target goal position to train their robot in a static environment with similar observation states. They used the Intrinsic Curiosity Model (ICM) [9] to model intrinsic rewards from prediction loss and encourage the robot to find new states for better exploration. The addition of ICM significantly improved navigation success rates and generalization performance in different crowd-less environments. While these works demonstrated the potential of DRL-based mapless navigation, they were developed only for crowd-less environments.
To address the crowd navigation problem, the Crowd-Move implementation [8] was proposed. CrowdMove was trained and tested in multiple dynamic environments using commonly used observation states, such as the robot's own velocity and relative target goal position. Unlike the sparse laser scans used in previous studies, CrowdMove used 512 laser scans. The authors concluded that their robot was able to avoid moving obstacles in real-world tests and that their trained model could be generalized to different environment settings unseen during training. We note that their approach relied on providing sufficient variation in the training data of multiple dynamic environments to improve generalization.
Lastly, Jin et al. [5] proposed that a robot moving in a crowded environment should have human-awareness competencies. Therefore, they implemented this through their reward setup by incorporating two conditions: ego-safety and social-safety violations. Using this reward setting, they trained their robot in one crowded environment and tested it in four different crowd behavior environments with varying number of moving obstacles. They achieved significant performance improvement over the then state-of-the-art DRL-based crowd navigation method, CADRL [10]. They achieved success rates between 60% to 100% in various test environments. In their real-world tests, they verified that their robot can safely navigate to a goal position without a collision. To the best of our knowledge, their work is the current state-of-the-art in crowd navigation using 2D laser scans.
### _Our Work_
Our proposed method builds upon the techniques used in the DRL-based methods discussed above, with the addition of perceived risk in the observation states and prioritization of the most dangerous obstacle within a crowd. To determine which obstacle to prioritize, we compute the collision probability of all tracked moving obstacles within the robot's field of view (FOV) and focus on the obstacle with the highest probability of collision. Unlike the method used by Jin et al. [5], we incorporate perceived risk or human-awareness into the observation states, which allows the robot to perceive potential risk during testing or deployment, without the need for extensive training. Jin et al. [5] used ego and social scores in their reward function to model human-awareness, but this approach is limited by the lack of access to such information during deployment. In this sense, their model will require a large amount of training to infer the perceived risk from the typical observation states in different scenarios. In addition, we have tested our robot at a significantly higher speed of the obstacles relative to the speed of the robot than in [5].
## III Approach
### _Problem Formation_
Our approach models the environment as a Partially-Observable Markov Decision Process (POMDP), which is a standard framework for decision-making under uncertainty. Formally, a POMDP is defined by six components: \(S\) is the state space, \(A\) is the action space, \(P\) is the transition probability that specifies the probability of transitioning from one state to another when an action is taken, \(R\) is the reward function, \(\Omega\) is the observation space, and \(O\) is the observation probability that specifies the probability of observing a certain observation when the environment is in a certain state. In reinforcement learning, the objective is to learn a policy \(\pi(a|s)\), which is a mapping from states to actions that maximizes the expected sum of discounted rewards over time.
Fig. 1 shows the overview of our deep reinforcement learning (DRL) system. The components in the DRL system are described in the following subsections.
#### Iii-A1 **Observation space**
The observation space contains input features to learn as well as perform crowd navigation behavior that solves both local and global navigation. To solve the global navigation problem, the information of relative distance to goal (DTG) and orientation (heading to goal, HTG) of the target goal location are used as the goal-related observations \(o_{g}\). Meanwhile, \(o_{l}\) contains distance information from the 2D laser scan sensor that describe the static environment of the robot and is used to solve the local navigation problem. Given that a crowded environment is associated with moving obstacles, we added agent-related observations \(o_{a}\) and critical obstacle observation \(o_{co}\) to the observation space. \(o_{a}\) contains the robot's position (\(R_{xy}\)) and
velocity (\(R_{v}\)) estimated from its encoder and inertia sensor. \(o_{co}\) describes the position (\(obs_{x,y}\)) and velocity (\(obs_{v}\)) of the most dangerous moving obstacle (critical obstacle). We define an observation as \(o\) = [\(o_{l}\), \(o_{g}\), \(o_{a}\), \(o_{co}\)] which describes the partial environment the robot can observe at a given time.
Obstacle tracking was implemented for obstacle velocity estimation and computation of the Collision Probability (CP). Using the 2D laser scans \(o_{l}\), the robot can differentiate between the wall and the moving obstacles, and hence the moving obstacles can be tracked. First, the 2D laser scans' values in \(o_{l}\) are converted to Cartesian coordinates using the robot's position and orientation. We use the Kuhn-Munkres algorithm [11][12] to segment the coordinate values of the scans into N groups that correspond to the possible number of obstacles seen by the robot. Then, for each group, we compute the gradient of each pair of laser scan coordinates to determine the object type (wall or obstacle). A scan group is determined to be a wall object if all computed gradients are close to zero; otherwise, the group is classified as an obstacle. Finally, we separate the scans that belong to different object types and use the center scan of each group as the position of the object for tracking, velocity estimation, and CP computation.
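As a rough illustration of the preprocessing described above, the sketch below converts laser ranges to world-frame Cartesian points and applies the gradient test to label a segmented scan group. The segmentation itself and the Kuhn-Munkres association step are omitted, and the tolerance `grad_tol` is an assumed value that is not specified in the text.

```python
import numpy as np

def scans_to_points(ranges, angles, robot_xy, robot_yaw):
    """Convert 2D laser ranges (robot frame) into Cartesian points in the world frame."""
    ranges, angles = np.asarray(ranges), np.asarray(angles)
    xs = robot_xy[0] + ranges * np.cos(angles + robot_yaw)
    ys = robot_xy[1] + ranges * np.sin(angles + robot_yaw)
    return np.stack([xs, ys], axis=1)

def classify_group(points, grad_tol=0.05):
    """Label a segmented scan group as 'wall' (all gradients close to zero) or 'obstacle',
    and return the centre scan of the group as the object position."""
    diffs = np.diff(points, axis=0)
    dx = np.where(np.abs(diffs[:, 0]) < 1e-6, 1e-6, diffs[:, 0])  # guard against division by zero
    grads = diffs[:, 1] / dx
    label = "wall" if np.all(np.abs(grads) < grad_tol) else "obstacle"
    return label, points[len(points) // 2]
```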
We define the Collision Probability (CP) as the sum of two component probabilities: the probability of collision based on the time to collision (\(P_{c-ttc}\)) and the probability of collision based on the distance to the obstacle (\(P_{c-dto}\)). We argue that the addition of distance to obstacle (\(dto\)) information allows the robot to better perceive the collision probability with a moving obstacle in the crowd. For example, a tracked obstacle moving slowly alongside the robot at a close distance can still pose a reasonable amount of danger while a tracked obstacle moving fast toward the robot from a far distance is less dangerous. Therefore, a balance between the two CP components is made as given in (1).
\[CP=\alpha\cdot P_{c-ttc}+(1-\alpha)\cdot P_{c-dto} \tag{1}\]
where \(\alpha\in\) [0, 1] is the parameter that decides the weight of collision probabilities \(P_{c-dto}\) and \(P_{c-ttc}\). We have set \(\alpha=0.5\) in the experiments reported in this paper.
The calculation of collision probabilities uses the Collision Cone (CC) concept in [13][14]. Fig. 2 shows an illustration of the CC and the related information that are used to estimate the two components of CP. \(P_{c-ttc}\) is computed based on time to collision as defined in (2). \(P_{c-dto}\) is computed based on relative distance to obstacle and defined in (3).
\[P_{c-ttc}=\begin{cases}\min(1,\frac{0.15}{t}),&\text{if }V_{r}^{\prime}\in CC_{ro} \\ 0&\text{otherwise}\end{cases} \tag{2}\]
where \(t\) is the time-to-collision (TTC) when the relative velocity \(V_{r}^{\prime}\) between the robot and the obstacle lies within the Collision Cone \(CC_{ro}\). \(V_{r}^{\prime}=V_{r}-V_{o}\) is the resultant velocity between the robot velocity \(V_{r}\) and obstacle velocity \(V_{o}\). \(t=Dist_{o}/V_{r}^{\prime}\) is the time to collision. \(CC_{ro}\) is the collision cone area between the robot and obstacle. Finally, 0.15 corresponds to the timestep value of the robot in seconds for executing its velocity commands.
\[P_{c-dto}=\begin{cases}\frac{l_{max}-Dist_{o}}{l_{max}-l_{min}},&\text{if } Dist_{o}<l_{max}\\ 0&\text{otherwise}\end{cases} \tag{3}\]
where \(l_{max}\) and \(l_{min}\) are the maximum and minimum range of the laser scan respectively. \(Dist_{o}\) is the distance from the robot to the obstacle of interest.
CP is computed for each obstacle in the list of tracked obstacles and the position (\(obs_{x,y}\)) and velocity (\(obs_{v}=V_{o}\)) of the obstacle with the highest CP is included in the observation space as \(o_{co}\). This obstacle is seen as most probable to be in collision with the robot.
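A minimal sketch of the CP computation in Eqs. (1)-(3) is given below. The membership test for the collision cone \(CC_{ro}\) is implemented here with a half-angle derived from an assumed combined robot-obstacle radius, which is not specified above; the value of \(\alpha\), the 0.15 s timestep, and the laser range limits follow the text.

```python
import numpy as np

def collision_probability(robot_xy, robot_v, obs_xy, obs_v, l_min=0.105, l_max=0.6,
                          combined_radius=0.15, alpha=0.5, timestep=0.15):
    """CP = alpha * P_ttc + (1 - alpha) * P_dto, following Eqs. (1)-(3)."""
    rel_pos = np.asarray(obs_xy, dtype=float) - np.asarray(robot_xy, dtype=float)
    rel_vel = np.asarray(robot_v, dtype=float) - np.asarray(obs_v, dtype=float)  # V_r' = V_r - V_o
    dist_o = np.linalg.norm(rel_pos)
    speed = np.linalg.norm(rel_vel)

    # Eq. (2): the relative velocity must lie inside the collision cone, taken here as the
    # cone of directions within the half-angle subtended by the assumed combined radius.
    p_ttc = 0.0
    if speed > 1e-6 and dist_o > combined_radius:
        cos_angle = rel_pos @ rel_vel / (dist_o * speed)
        if cos_angle > np.cos(np.arcsin(combined_radius / dist_o)):
            ttc = dist_o / speed                 # time to collision
            p_ttc = min(1.0, timestep / ttc)

    # Eq. (3), clipped to 1 for distances below l_min (an added safeguard).
    p_dto = min(1.0, (l_max - dist_o) / (l_max - l_min)) if dist_o < l_max else 0.0

    return alpha * p_ttc + (1.0 - alpha) * p_dto  # Eq. (1)
```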
#### Iii-B2 **Action space**
An action is defined as \(a\) = [\(V_{l}\), \(V_{w}\)] which is sampled from a stochastic policy \(\pi\) given observation \(o\) : \(a\sim\pi(a\mid o)\) where \(V_{l}\) is the linear velocity within the range [0, 0.22] \(ms^{-1}\) and \(V_{w}\) is the angular velocity within the range [-2.0, 2.0] \(rad.s^{-1}\).
#### Iii-B3 **Reward functions**
Our objective is to ensure collision-free navigation in a crowded environment. During training, the robot accumulates rewards and penalties from the Gazebo simulator. Overall, the reward function consists of the following terms:
Fig. 1: Deep Reinforcement Learning system structure.
Fig. 2: The Collision Cone and related information.
\[R=R_{step}+R_{dtg}+R_{htg}+R_{goal}+R_{col} \tag{4}\]
\(R_{step}=-2\) is the negative reward given to the robot for every step and serves to encourage the robot to avoid abusing the \(R_{dtg}\) and \(R_{htg}\) rewards by oscillating around the goal location without reaching it.
\[R_{dtg}=\begin{cases}+1,&\text{if }\operatorname{d}(r,g)_{t}<\operatorname{d}(r,g)_{t-1}\\ 0&\text{otherwise}\end{cases} \tag{5}\]
\(R_{dtg}\) is the positive reward given to the robot whenever the distance from the robot to the target goal location \(\operatorname{d}(r,g)\) has reduced between the current and previous timestep.
\[R_{htg}=\begin{cases}+1,&\text{if }\operatorname{\theta}(r,g)_{t}<\operatorname{ \theta}(r,g)_{t-1}\\ 0&\text{otherwise}\end{cases} \tag{6}\]
Similarly, \(R_{htg}\) is the positive reward given whenever the relative heading \(\operatorname{\theta}(r,g)\) has decreased.
\(R_{goal}=+200\) is the large positive reward given to the robot when it reaches the target goal location. If a collision occurs, a penalty \(R_{col}=-200\) is given instead.
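For clarity, the per-step reward of Eq. (4) can be sketched in Python as follows. This is our own illustrative reconstruction; in particular, treating \(\operatorname{\theta}(r,g)\) as the magnitude of the heading error is an interpretation rather than something stated in the text.

```python
def step_reward(dist_to_goal, prev_dist_to_goal, heading_to_goal, prev_heading_to_goal,
                reached_goal, collided):
    """Per-step reward of Eq. (4): R = R_step + R_dtg + R_htg + R_goal + R_col."""
    reward = -2.0                                          # R_step
    if dist_to_goal < prev_dist_to_goal:                   # R_dtg, Eq. (5)
        reward += 1.0
    if abs(heading_to_goal) < abs(prev_heading_to_goal):   # R_htg, Eq. (6)
        reward += 1.0
    if reached_goal:                                       # R_goal
        reward += 200.0
    if collided:                                           # R_col
        reward -= 200.0
    return reward
```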
### _Deep Reinforcement Learning_
Twin Delayed Deep Policy Gradient (TD3) [15] algorithm was chosen over Deep Deterministic Policy Gradient (DDPG) [16] because it solves the overestimation bias problems of DDPG by having two dueling critic networks instead of one. The default parameters were used for TD3.
#### Iii-B1 **Model training**
The robot was trained in the Gazebo simulator using Robotis TurtleBot3 Burger platform that is equipped with a LDS-01 360-degree 2D laser scanner and XL430-W250 encoder motors. The resolution of the laser scanner is 360 with a minimum and maximum range set to 0.105m and 0.6m respectively. The training process was done once in a 2m x 2m space with walls and 14 moving obstacles moving at a random speed of up to 0.2m/s in random directions. The moving obstacles were non-cooperative so they will ignore the robot's presence and can collide with the robot. The model was trained for 3000 episodes with the stopping criteria of collision with an obstacle or having reached the goal.
#### Iii-B2 **Model testing**
The robot was trained in one simulation setting using TD3 and tested in different crowd behavior settings. The crowd was non-cooperative.
## IV Evaluation
### _Frozen Robot Problem_
We compared the performance of the robot running our model and running the ROS Navigation stack under the same test environments in the Gazebo simulator.
### _Crowd Navigation_
We evaluated our robot in different crowd behavior environments similar to [5]: crossing, towards, ahead and random. For each crowd behavior, the model was tested with three different crowd densities of four, eight and twelve obstacles, except the random crowd behavior was only tested with twelve moving obstacles. This gave a total of ten test settings. For each crowd behavior setting, we computed the average of each metric over 10 separate runs. Fig. 3 shows the four crowd behavior settings with 12 moving obstacles.
To quantify performance, we used the same evaluation metrics from Jin et al.'s [5] work: success rate (%), arriving time (s), ego score (0-100) and social score (0-100). Let \(k\) be the number of ego-safety violation steps, and N be the total steps to reach the goal, then \(Ego\_Score=(1-k/N)*100\). Let \(m\) be the number of social-safety violation steps, then \(Social\_Score=(1-m/N)*100\).
An ego-safety violation is determined when an obstacle comes close to the robot within the ego radius of the robot. We have set the ego radius according to the ratio of the largest width of their Jackal robot to the largest width of our Turtlebot3. Their robot's ego radius was 0.4m for the robot width of 0.508m giving a ratio of 0.787. With the same ratio, we obtained an ego radius of 0.14m for our robot's largest width of 0.178m. In [5], they have determined the social-safety violation when two rectangular spaces computed from the speed of the robot and the speed of an obstacle intersect. The rectangular spaces are similar to the concept of Collision Cone in our case. For the social-safety violation, we have used the CP to determine if our robot is in a collision trajectory course with an obstacle when the CP value is greater than 0.4.
For comparison purposes, we have determined by watching Jin et al.'s demonstration video [5] that their obstacles were moving about 5 times slower than the max speed of their robot (1.5m/s) by observing the time taken for their robot and obstacles to traverse 1 square space (1m)
Fig. 3: The four crowd behavior settings with 12 obstacles in Gazebo. **Crossing** The robot has to navigate through the crowd moving in the crossing directions. **Towards**: The crowd is moving toward the general direction of the robot. **Ahead**: The crowd is moving ahead of the robot. **Random**: The crowd is moving in random directions.
in Gazebo. In our case, we performed two separate tests with slow-moving and fast-moving obstacles. The slow-moving obstacles moved at a speed (0.1m/s) that is about 2 times slower than our robot's max speed (0.22m/s). The fast-moving obstacles moved at a speed (0.2m/s) that is nearly the same as our robot's speed. We believe that in real-world situations, the crowd and the robot would be moving at speeds close to each other.
We deployed our model on a TurtleBot3 and conducted real-world tests. The robot was tested in the four crowd behaviors with four obstacles. We used mobile robots of similar size to the TurtleBot3 as the moving obstacles, which were manually teleoperated by humans. It was difficult to teleoperate the obstacles when there were many of them; hence, we limited the real-world tests to four obstacles.
## V Results and Discussion
### _Frozen Robot Problem_
We observed that our approach did not exhibit FRP during training and testing as the robot could navigate smoothly to the goal positions without getting stuck in the crowd. In comparison, frequent freezing was observed when using the ROS Navigation stack in the crowded environments. This was because the perception of free space on the map was continuously changing due to the existence of dense moving obstacles resulting in frequent replanning in map-based navigation. This performance comparison can be seen in the supplementary video of this paper. We can see that the ROS Navigation Stack froze after encountering many moving obstacles, which had cluttered the free space in the map.
### _Crowd Navigation_
Fig. 4 shows the evaluation results of our method in comparison to the results of Jin et al. [5]. Our robot was able to avoid obstacles with smooth and fast maneuvers, resulting in 100% Success Rates (SR) in all test environments. Jin et al.'s [5] success rates were between 60% and 100%, with only 10 out of 20 tests achieving a 100% success rate.
Our robot took a longer time to reach the goal when the crowd was more dangerous. The arrival times for the test environments with fast moving obstacles (more dangerous) were in general longer than with slow moving obstacles. Likewise, higher crowd density (more dangerous) resulted in longer arrival time. Exceptions were seen in the ahead crowd behavior, where the arrival times with fast moving obstacles ahead were shorter than with slow moving obstacles. The fast moving obstacles were traveling at a speed close to the speed of the robot. In this case, when the obstacles were moving fast ahead of the robot, there was very little chance of the robot being confronted by the obstacles. The ahead environments were quite safe. Consequently, there were very few safety violations as seen from the high ego and social scores in the results of ahead crowd behavior in Fig. 4. While the arrival times of our robot were longer than the results of [5], we note that their robot was traveling at a speed (1.5m/s) about 7 times faster than our robot (0.22m/s). Taking into account the speed difference, our approach has performed relatively faster and with higher success rate than the approach of [5].
In addition, we have observed an interesting policy learned by our model. In cases where the obstacles in the way of the robot were too dense, the robot would take a detour and avoid the crowd cluster to reach the goal. However, in most cases, the robot navigated through the crowd. Fig. 5 illustrates the robot's behaviors during navigation. Ego-safety and social-safety violations do not necessarily result in collisions; they are measures of how riskily the robot was navigating in the crowd. In cases where the ego and social scores were low but the robot was still able to successfully navigate through the crowd, this demonstrates that our robot was daring in taking a higher risk to reach the goal faster.
Finally, in real-world tests, we observed similar crowd navigation capability as in the simulation. The robot could navigate around the other moving robots to reach the goal in all the four test cases. However, the physical robot could not move smoothly at the velocities setting used in the simulation. The supplementary video shows the performance of the robot in real-world tests.
### _Ablation Study_
To investigate the effect of the Collision Probability (CP) in (1) and its two components, we trained the model with two variations: one without the \(P_{c-dto}\) (distance to obstacle CP) component (Model-CP-ttc), and one without CP completely (Model-no-CP). The results are shown in the bottom two rows of Fig. 4.
As anticipated the Model-no-CP achieved a lower success rate and was four times slower in arrival time on average than the model with complete risk perception (full CP). During the tests, the robot was observed to avoid obstacles altogether by trying to detour. Without CP, the model was not able to estimate collision risk, so it learned that the best way to avoid collision was by avoiding the obstacles completely. Surprisingly, the Model-CP-ttc could only achieve 100% success rate in 2 out of 10 test conditions. The success rate was lower than the Model-no-CP. At first sight, it may seem that CP was not helpful. However, by observing the robot, we noticed that the robot was attempting to traverse through the crowd but collided with the obstacles when it was too close to an obstacle. This resulted in a lower success rate, however with a significantly faster arrival time than the Model-no-CP. The \(P_{c-ttc}\) (time to collision CP) alone underestimated the danger level of the moving obstacles and was insufficient to perceive the risk during crowd navigation. This caused the robot to be in a situation where it found itself unable to avoid a collision which caused the lower average success rate, especially in higher-density crowd tests (obstacle-12, ahead-12). The addition of \(P_{c-dto}\) (distance to obstacle CP) has improved the risk estimation as shown in the superior performance of the model with full CP (3rd and 4th rows) with both slow and fast moving obstacles.
## VI Conclusions
We have developed a navigation approach for mobile robots using 2D laser scans to improve their performance in crowded environments. Our experiments have shown that our deep reinforcement learning-based mapless approach is effective in circumventing the Frozen Robot Problem (FRP) in comparison to the ROS Navigation Stack when navigating in a crowded environment. In addition, we have shown that the inclusion of the Collision Probability of the most dangerous moving obstacle to the observation space has achieved outstanding performance in crowd navigation, and outperformed the state-of-the-art approach of Jin et al. [5]. Our model was trained in one crowd environment setting and tested on 10 different crowd environment settings. We achieved 100% success rate in all the 10 environment settings including the settings in which the obstacles were moving as fast as the robot. We observed that our model has learned an interesting crowd navigation policy to use different navigation strategies depending on the perceived risk level. Besides the superior performance in the simulated environment, we have also demonstrated the crowd navigation capability of our model in real-world tests. The robot has shown promising performance although not as dexterous as in the simulation. We plan to expand the real-world tests and improve the real-world performance in our future work. We will also investigate further ways to incorporate perceived risk or human-awareness in our crowd navigation approach.
The source code and video demonstration of this work are made publicly available on GitHub [17].
|
2303.08461 | Simulating prethermalization using near-term quantum computers | Quantum simulation is one of the most promising scientific applications of
quantum computers. Due to decoherence and noise in current devices, it is
however challenging to perform digital quantum simulation in a regime that is
intractable with classical computers. In this work, we propose an experimental
protocol for probing dynamics and equilibrium properties on near-term digital
quantum computers. As a key ingredient of our work, we show that it is possible
to study thermalization even with a relatively coarse Trotter decomposition of
the Hamiltonian evolution of interest. Even though the step size is too large
to permit a rigorous bound on the Trotter error, we observe that the system
prethermalizes in accordance with previous results for Floquet systems. The
dynamics closely resemble the thermalization of the model underlying the
Trotterization up to long times. We extend the reach of our approach by
developing an error mitigation scheme based on measurement and rescaling of
survival probabilities. To demonstrate the effectiveness of the entire
protocol, we apply it to the two-dimensional XY model and numerically verify
its performance with realistic noise parameters for superconducting quantum
devices. Our proposal thus provides a route to achieving quantum advantage for
relevant problems in condensed matter physics. | Yilun Yang, Arthur Christianen, Sandra Coll-Vinent, Vadim Smelyanskiy, Mari Carmen Bañuls, Thomas E. O'Brien, Dominik S. Wild, J. Ignacio Cirac | 2023-03-15T09:04:57Z | http://arxiv.org/abs/2303.08461v1 | # Simulating prethermalization using near-term quantum computers
###### Abstract
Quantum simulation is one of the most promising scientific applications of quantum computers. Due to decoherence and noise in current devices, it is however challenging to perform digital quantum simulation in a regime that is intractable with classical computers. In this work, we propose an experimental protocol for probing dynamics and equilibrium properties on near-term digital quantum computers. As a key ingredient of our work, we show that it is possible to study thermalization even with a relatively coarse Trotter decomposition of the Hamiltonian evolution of interest. Even though the step size is too large to permit a rigorous bound on the Trotter error, we observe that the system prethermalizes in accordance with previous results for Floquet systems. The dynamics closely resemble the thermalization of the model underlying the Trotterization up to long times. We extend the reach of our approach by developing an error mitigation scheme based on measurement and rescaling of survival probabilities. To demonstrate the effectiveness of the entire protocol, we apply it to the two-dimensional XY model and numerically verify its performance with realistic noise parameters for superconducting quantum devices. Our proposal thus provides a route to achieving quantum advantage for relevant problems in condensed matter physics.
## I Introduction
Quantum computers promise to have a great impact on scientific research. A particular example is the study of thermalization of quantum many-body systems. The problem is computationally challenging with classical methods [1, 2, 3] as it requires simulating the long-time dynamics of large systems. A fault-tolerant quantum computer would render this problem tractable by enabling quantum simulation [4, 5, 6, 7].
Despite impressive recent progress, present day quantum devices are still far from the regime of fault tolerance. Any current quantum simulation is therefore affected by noise and imperfections. In circuit-based quantum computers, continuous-time dynamics can be approximated using, for example, Trotterization [8]. With state-of-the-art gate errors [9, 10, 11, 12, 13, 14], it is however only possible to run simulations with a controlled Trotter error up to short times, which are insufficient to explore thermalization in classically intractable systems (50 or so qubits in two or more dimensions).
In this work, we demonstrate that thermalization can already be observed for much larger Trotter steps than needed to guarantee a bounded Trotter error, making it feasible to study this phenomenon on near-term quantum devices. In this regime, the system may be viewed as subject to a periodic Floquet drive [15, 16, 17], where one Trotter step corresponds to one period. The fate of Floquet systems at late times has been a subject of recent interest [18, 19, 20]. Even though the system generally heats up to infinite temperatures [21, 22], the heating time may be very long if the driving frequency is large compared to all local energy scales [23]. The system then _prethermalizes_[24, 25, 26, 27, 28, 29]: Before it heats up, its dynamics mirror the equilibration of a closed system. The prethermal regime is relatively easy to access in practice because the Floquet heating time increases exponentially with the driving frequency or, equivalently, the inverse Trotter step size (see Fig. 1).
With this in mind, we define the _prethermalized expectation value problem_ (PEVP): Given a Floquet unitary and a product initial state, what value does a local observable reach in the prethermal plateau? We find that this problem can be solved even in presence of realistic noise. Following a small circuit adjustment, the PEVP turns out to be amenable to a simple but
Figure 1: Different regimes of the dynamics of local observables depending on the Trotter step \(\tau\) and the evolution time \(T\). The black lines separate three regimes: bounded Trotter error (bottom), Floquet prethermalization (middle), and chaotic dynamics (top). The lower line scales as \(\mathcal{O}\left(\max\left\{\tau^{-p/(d+1)},N^{-1}\tau^{-p}\right\}\right)\), following error bounds for the \(p^{\text{th}}\)-order Trotter decomposition with system size \(N\). The upper line is determined by the Floquet heating time and scales as \(e^{O(1/\tau)}\). The blue shaded area indicates constant maximum circuit depth, relevant for noisy quantum computers. The grey area is excluded due to the constraint that \(T\geq\tau\). The red shading highlights where the total time \(T\) exceeds a system-dependent (pre)thermalization time scale \(T_{\text{th}}\). The prethermalized expectation value problem is experimentally accessible in the purple intersection.
highly effective error-mitigation scheme based on rescaling survival probabilities. Using this strategy, the error-mitigated PEVP reproduces the equilibrium properties of a model that is closely related to the Hamiltonian underlying the Trotterization. More precisely, the prethermal expectation values describe the diagonal ensemble of this model, which is equivalent to the microcanonical ensemble assuming that the eigenstate thermalization hypothesis (ETH) [30; 31; 32; 33] is valid. Besides its application to the study of thermalization, the PEVP may be viewed as a problem of independent computational interest in the context of demonstrating quantum advantage.
The paper is structured as follows. In Sec. II, we discuss thermalization in Floquet systems and present simulation results for the two-dimensional XY model as an example. We introduce our error mitigation strategy based on the rescaling of survival probabilities in Sec. III, where we also provide a thorough numerical analysis of its performance. Equipped with that, we demonstrate the suitability of the PEVP for near-term devices by simulating it with realistic noise parameters of superconducting quantum computers. We conclude in Sec. IV.
## II The prethermalized expectation value problem
### Time evolution on digital quantum computers
The time evolution under a Hamiltonian \(H\) can be reproduced on a digital quantum computer using the Suzuki-Trotter decomposition. In its simplest, first-order form, the decomposition approximates the time-evolution unitary \(U(\tau)=e^{-iH\tau}\) by
\[U_{\mathrm{Trotter}}(\tau)=\prod_{j=1}^{\Gamma}e^{-iH_{j}\tau}. \tag{1}\]
where \(H=\sum_{j=1}^{\Gamma}H_{j}\). Each \(H_{j}\) is a sum of mutually commuting local terms, such that \(e^{-iH_{j}\tau}\) can be efficiently implemented using local gates. The smaller the Trotter step \(\tau\), the more accurate the Trotter decomposition. For the \(p\)-th order Trotter decomposition [34], which generalizes the previous simple formula, the error of \(U_{\mathrm{Trotter}}(\tau)\) with respect to the desired unitary \(U(\tau)\) is bounded from above by \(\mathcal{O}(N\tau^{p+1})\), where \(N\) is the system size [8]. The dependence on \(N\) can be eliminated if all quantities of interest are local observables. According to the Lieb-Robinson bound, only a light cone with a radius proportional to the total evolution time \(T\) is relevant [35]. Therefore, the system size \(N\) can be replaced with the size of the light cone \(\sim T^{d}\) before it reaches the edges of the system, where \(d\) is the spatial dimension. We hence require that the Trotter step \(\tau\) be less than \(\mathcal{O}(\max\left\{T^{-(d+1)/p},\,(NT)^{-1/p}\right\})\) for the Trotterized time evolution of local observables to converge to the continuous evolution under \(H\).
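As a purely classical illustration for small systems, the first-order decomposition of Eq. (1) can be assembled from dense matrices, as in the sketch below; on hardware, each factor \(e^{-iH_{j}\tau}\) would instead be compiled into local gates.

```python
import numpy as np
from scipy.linalg import expm

def trotter_step(h_terms, tau):
    """First-order Trotter unitary of Eq. (1): exp(-i H_1 tau) ... exp(-i H_Gamma tau)."""
    u = np.eye(h_terms[0].shape[0], dtype=complex)
    for h_j in h_terms:
        u = u @ expm(-1j * tau * np.asarray(h_j, dtype=complex))
    return u
```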
We can now define the following computational problem.
**Problem 1** (The Trotter time-average problem).: _Given a unitary \(U_{\mathrm{Trotter}}(\tau)\), a state \(|\psi\rangle\), a local observable \(A\) and a time \(t=m\tau\) for positive integer \(m\), and a small positive constant \(\epsilon\), compute the time-averaged observable_
\[\langle A\rangle_{t}=\frac{1}{m+1}\sum_{n=0}^{m}\langle\psi|U_{ \mathrm{Trotter}}^{\dagger}(\tau)^{n}AU_{\mathrm{Trotter}}(\tau)^{n}|\psi\rangle \tag{2}\]
_within additive error \(\epsilon\|A\|\), where \(\|\cdot\|\) is the operator norm._
Note that the Trotterization is not uniquely defined by the Hamiltonian and \(U_{\mathrm{Trotter}}\) must be specified explicitly. The cost of solving this problem on a classical computer generically scales exponentially with either the number of Trotter steps \(m\) or the system size \(N\) [36], whereas on a fault-tolerant quantum computer, the effort increases at most polynomially with both. The hardness of the problem is further supported by the fact that it becomes BQP-complete at times \(t=\mathrm{poly}(n)\) if the Trotter error is negligible [37]. In section III, we present evidence that the problem is solvable on noisy quantum computers up to a maximum number of Trotter steps, which is independent of system size. We then show in section III.3 that noisy quantum devices may reach a classically intractable regime with realistic noise parameters, even when taking into account the overhead of our error mitigation strategy.
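For small systems, the time average of Eq. (2) can be checked classically by repeatedly applying the Trotter unitary to the state, as in the sketch below, which assumes dense NumPy arrays for \(U_{\mathrm{Trotter}}\), \(A\), and \(|\psi\rangle\).

```python
import numpy as np

def trotter_time_average(u_trotter, a_op, psi0, m):
    """Average of Eq. (2): mean of <psi|U^dag^n A U^n|psi> over n = 0, ..., m."""
    psi = np.asarray(psi0, dtype=complex)
    values = []
    for n in range(m + 1):
        values.append(np.real(np.vdot(psi, a_op @ psi)))
        psi = u_trotter @ psi          # advance by one Trotter step
    return float(np.mean(values))
```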
### Prethermalization
Problem 1 is not only interesting from the perspective of dynamics but it can also yield insight into equilibrium properties. In condensed matter or statistical physics, one would typically describe a system in equilibrium in terms of its temperature, or in case of the microcanonical ensemble, its internal energy. Under ETH, the microcanonical ensemble at the mean energy of the state \(|\psi\rangle\) can be approximated by solving Problem 1.
More precisely, in the limit of continuous time evolution, the long-time average of an observable is described by the diagonal ensemble. For a given initial state \(|\psi\rangle\) and an observable \(A\),
\[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left\langle\psi(t)|A|\psi(t)\right\rangle \mathrm{d}t=\sum_{k}|\left\langle k|\psi\right\rangle|^{2}\left\langle k|A|k \right\rangle, \tag{3}\]
where \(H=\sum_{k}E_{k}\left|k\right\rangle\left\langle k\right|\) is the spectral decomposition of a non-degenerate Hamiltonian [38]. Assuming ETH, the expectation value \(\left\langle k|A|k\right\rangle\) is a smooth function of the energy \(E_{k}\) up to a small, state-dependent correction [30]. The diagonal ensemble is then equivalent to the microcanonical ensemble at energy \(\left\langle\psi|H|\psi\right\rangle\) provided the energy variance of \(|\psi\rangle\) is sufficiently small. For observables that are an average of an extensive number of local terms, e.g., the total magnetization per site, we expect the microcanonical ensemble to vary significantly only on an extensive energy scale. It is thus possible to estimate expectation values in the microcanonical ensemble from the diagonal ensemble of states whose width in energy is subextensive. Product states satisfy this condition as their widths in energy are (under weak assumptions) proportional to \(\sqrt{N}\)[39].
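For reference, the right-hand side of Eq. (3) can be evaluated directly by exact diagonalization for small systems, as sketched below; this is, of course, only feasible classically far below the system sizes of interest.

```python
import numpy as np

def diagonal_ensemble(h, a_op, psi0):
    """Diagonal-ensemble value of Eq. (3): sum_k |<k|psi>|^2 <k|A|k>."""
    _, vecs = np.linalg.eigh(h)                     # columns of `vecs` are the eigenstates |k>
    overlaps = np.abs(vecs.conj().T @ psi0) ** 2    # |<k|psi>|^2
    a_kk = np.real(np.einsum('ik,ij,jk->k', vecs.conj(), a_op, vecs))  # <k|A|k>
    return float(overlaps @ a_kk)
```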
The above discussion shows that it is possible to probe the microcanonical ensemble by solving problem 1 with product initial states at different mean energies. This is, however, challenging with current quantum devices for two reasons.
First, the maximum number of Trotter steps \(T/\tau\) is limited by the maximum circuit depth in the presence of noise, while the total time \(T\) required to reach equilibrium may be large. Therefore, noisy quantum devices are usually unable to reach long enough times with bounded Trotter error. Secondly, the finite calibration precision renders it challenging to get high relative precision in the angle of rotation for gates that are very close to the identity, bounding from below the size of \(\tau\).
We will now argue that it is nevertheless possible to study equilibrium phenomena. Using larger, experimentally feasible Trotter steps can be viewed as applying a periodic Floquet drive. The system can be described by the Floquet Hamiltonian \(H_{F}\), which is implicitly defined by
\[U_{\text{Trotter}}(\tau)=e^{-iH_{F}\,\tau}. \tag{4}\]
The Floquet Hamiltonian is not unique as its eigenvalues are only defined modulo \(\omega=2\pi/\tau\), the effective driving frequency. For large \(\tau\), (small \(\omega\)), i.e., outside the Trotter limit, the Floquet Hamiltonian is highly non-local and will cause a generic initial state to heat up to infinite temperature [21, 22]. Despite this, it is possible to observe (approximate) equilibration if the heating time scale is much greater than the equilibration time scale. This is known as Floquet prethermalization [40, 41, 24]. Fortunately for our purposes, Floquet prethermalization is relatively easy to access because Floquet heating occurs on a time scale \(t_{F}\propto e^{\mathcal{O}(\omega/kJ)}\), where \(k\) is the interaction range and \(J\) is the local energy scale, assuming \(\omega\gtrsim kJ\). We highlight the favorable exponential dependence of \(t_{F}\) on \(\omega/kJ\) and the fact that \(kJ\) is independent of the system size.
For times much less than \(t_{F}\), the system evolves approximately according to an effective Hamiltonian which is close to, but not the same as, the original Hamiltonian \(H\). More precisely, the effective Hamiltonian is local and it is given by the \(n_{0}\)-th order Magnus expansion [42, 43] of the Floquet Hamiltonian, where \(n_{0}=\mathcal{O}(\omega/kJ)\) (see Appendix B for details). Observables start to equilibrate under the effective Hamiltonian before eventually heating up. If the equilibration time \(t_{0}\) is much shorter than \(t_{F}\), then there exists a prethermal plateau \(t_{0}\leq t\ll t_{F}\), during which the expectation value of the observable is approximately constant. We provide a formal definition of a plateau in Appendix A.
The above observations motivate the definition of the PEVP:
**Problem 2** (Prethermalized expectation value problem).: _Given a unitary \(U_{\text{Trotter}}(\tau)\), a state \(\ket{\psi}\), and a local observable \(A\), assume that a prethermal plateau exists between times \(t_{1}\) and \(t_{2}\), such that \(\max_{t\in[t_{1},t_{2})}\langle A\rangle_{t}-\min_{t\in[t_{1},t_{2})}\langle A\rangle_{t}\leq\epsilon\|A\|\) for some positive constant \(\epsilon\). Find the value of \(\langle A\rangle_{t}\) to within additive error \(2\epsilon\|A\|\) for any \(t\in[t_{1},t_{2})\)._
This problem reduces to solving Problem 1 at time \(t=t_{1}\). In the following sections, we show using the example of the two-dimensional XY model that the prethermal plateau is indeed accessible and that the properties of the effective Hamiltonian closely resemble those of the initial Hamiltonian. We further demonstrate that the PEVP can be solved on a noisy quantum device with realistic parameters up to system sizes for which classical simulation of the dynamics is intractable.
### PEVP with the XY model
We focus on the two-dimensional quantum XY model on a square lattice for the remainder of this work. We emphasize, however, that the approach can be readily applied to many other models. The Hamiltonian of the XY model is given by
\[H_{\text{XY}}=-J\sum_{\langle ij\rangle}\left(S_{i}^{x}S_{j}^{x}+S_{i}^{y}S_{j }^{y}\right), \tag{5}\]
where \(J\) is the interaction strength, \(S_{i}^{\alpha}\) (\(\alpha\in\{x,y,z\}\)) are spin-1/2 operators on site \(i\), and the sum runs over all pairs of nearest neighbors. The model is convenient for digital quantum computers as its two-site interaction generates a partial iSWAP gate,
\[e^{-iJ\left(S_{i}^{x}S_{j}^{x}+S_{i}^{y}S_{j}^{y}\right)\tau}=\text{iSWAP}_{ij }^{-J\,\tau/\pi}. \tag{6}\]
A single Trotter step in a first-order decomposition consists of applying a partial iSWAP gate to each nearest-neighbor pair of qubits. As non-overlapping gates can be performed in parallel, these operations can be carried out in a circuit whose depth is equal to the number of nearest neighbors (4 in the case of the square lattice).
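For illustration, one such Trotter step can be assembled in Cirq, the library we use for the noisy simulations below. The sketch assumes a small \(n\times n\) grid and the convention of Eq. (6); the function name and the assignment of bonds to the four layers are our own choices and not a reference implementation:

```python
import math
import cirq

def xy_trotter_step(n=4, J=1.0, omega_over_J=8.0):
    """One first-order Trotter step of the 2D XY model: four layers of partial iSWAP gates."""
    tau = 2 * math.pi / (omega_over_J * J)            # Trotter step size for driving frequency omega
    gate = cirq.ISWAP ** (-J * tau / math.pi)         # iSWAP_{ij}^{-J tau / pi}, cf. Eq. (6)
    q = {(r, c): cirq.GridQubit(r, c) for r in range(n) for c in range(n)}
    layers = []
    for parity in (0, 1):                             # horizontal bonds, even / odd columns
        layers.append([gate.on(q[r, c], q[r, c + 1])
                       for r in range(n) for c in range(parity, n - 1, 2)])
    for parity in (0, 1):                             # vertical bonds, even / odd rows
        layers.append([gate.on(q[r, c], q[r + 1, c])
                       for c in range(n) for r in range(parity, n - 1, 2)])
    return cirq.Circuit(cirq.Moment(layer) for layer in layers)
```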
The XY model in two dimensions can be solved with quantum Monte Carlo algorithms [44, 45] and thus serves as a good benchmark to our method. It is known to undergo the Kosterlitz-Thouless (KT) transition [45, 46] at nonzero temperature. This phase transition can be characterized by the mean-squared in-plane magnetization per site,
\[m_{x}^{2}+m_{y}^{2}=4\cdot\frac{\left(\sum_{i}S_{i}^{x}\right)^{2}+\left(\sum_ {i}S_{i}^{y}\right)^{2}}{N^{2}}, \tag{7}\]
which is an approximation to the in-plane susceptibility [45]. The mean-squared magnetization can be written as the sum of two-site correlators, which decay exponentially with the distance between the two sites at high temperature. Hence, \(m_{x}^{2}+m_{y}^{2}\) decreases with the system size as \(1/N\) in the thermodynamic limit. Below the critical temperature, the system exhibits quasi long-range order. The mean-squared magnetization decays only as \(1/N^{1/8}\) and its value remains non-negligible for moderately large systems [45].
In analogy to the long-time average that gives rise to the diagonal ensemble, we probe the prethermal plateaus using the Floquet time average as in Definition 1, where the Trotterization is shown in the appendix in Fig. 6a. We explore this quantity using exact diagonalization on a square lattice with \(N=4\times 4\) spins and open boundary conditions. Figure 2a shows the values of the mean-squared in-plane magnetization for the initial state \(\ket{\psi}=\ket{X+}=\left[\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{1}\right) \right]^{\otimes N}\). The different colors indicate the Trotter step size \(\tau\) or, equivalently, the driving frequency \(\omega=2\pi/\tau\). The initial state is close to the ground state of the XY Hamiltonian. We therefore expect the in-plane magnetization to remain high in the prethermal plateau, provided the effective Hamiltonian does not differ too much from the XY model.
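The Floquet time averages shown in Fig. 2a can be obtained from the matrix of a single Trotter step by repeated application to the state vector. A minimal exact-diagonalization-style sketch follows; `U_step`, `A`, and `psi0` are assumed to be given as dense arrays, and the running-average convention is an assumption for illustration:

```python
import numpy as np

def floquet_time_average(U_step, A, psi0, n_steps):
    """Running Floquet time average (1/n) sum_{m=1}^{n} <psi(m tau)|A|psi(m tau)>."""
    psi, running, averages = psi0.copy(), 0.0, []
    for m in range(1, n_steps + 1):
        psi = U_step @ psi                      # one Trotter step, i.e. one Floquet period
        running += np.real(np.vdot(psi, A @ psi))
        averages.append(running / m)
    return np.array(averages)
```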
We indeed observe prethermal plateaus for large driving frequencies (\(\omega\geq 8J\)), and these last for \(t>10^{3}/J\) when
\(\omega\geq 9J\). The plateau values approach the diagonal ensemble value (black dashed line) with increasing driving frequencies. They deviate only slightly due to the correction in the Magnus expansion, which will be discussed later in this subsection. This confirms that the dynamics with fast Floquet drive are similar to the dynamics of the original Hamiltonian in this prethermal regime. By contrast, no plateaus are observed at low driving frequencies, where the time average of the mean-squared magnetization quickly drops to the expected value at infinite temperature, \(2/N\).
We may perform the same analysis for different initial states. We choose product states in which the spins on the two sublattices of the square lattice are in the respective states \(\ket{\theta,0}\) and \(\ket{\pi-\theta,\phi}\), where \(\ket{\theta,\phi}=\cos(\theta/2)\ket{0}+\sin(\theta/2)e^{i\phi}\ket{1}\) parametrizes an arbitrary state of a qubit (spin-1/2). This choice of states allows us to cover a wide range of the spectrum while ensuring that the total magnetization in the \(z\) direction vanishes. The latter constraint is convenient because the Hamiltonian conserves the total \(z\)-magnetization, \(m_{z}=\sum_{i=1}^{N}\sigma_{i}^{z}/N\). Thermalization therefore occurs in the eigenspaces of \(m_{z}\). Low-energy product states however are not eigenstates of \(m_{z}\). By choosing the expectation value of \(m_{z}\) to be zero, we maximize the overlap of the product state with the sectors of low \(z\)-magnetization, for which we expect similar equilibration dynamics.
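These two-sublattice product states are straightforward to build explicitly; a small numpy sketch is given below, where the checkerboard assignment of the two single-qubit states to the sublattices is a convention chosen here for illustration:

```python
import numpy as np

def qubit_state(theta, phi):
    """|theta, phi> = cos(theta/2) |0> + sin(theta/2) e^{i phi} |1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])

def sublattice_product_state(n, theta, phi):
    """Product state with |theta, 0> on one checkerboard sublattice and |pi - theta, phi> on the other."""
    state = np.array([1.0 + 0j])
    for r in range(n):
        for c in range(n):
            single = qubit_state(theta, 0.0) if (r + c) % 2 == 0 else qubit_state(np.pi - theta, phi)
            state = np.kron(state, single)
    return state
```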
We find that all product states of the above form exhibit prethermal plateaus at similar driving frequencies and evolution times. We evaluate the prethermal values of the in-plane magnetization by performing the Floquet time average up to time \(t=20/J\) with driving frequency \(\omega=8J\). The result is shown for various initial states as a function of their mean energy in Fig. 2b. For comparison, we also show the diagonal and microcanonical ensemble values of the initial XY model, as well as the diagonal ensemble value of the first-order Magnus expansion of the Floquet Hamiltonian, given by
\[\begin{split} H_{\text{Magnus}}^{(1)}=&\frac{1}{ \tau}\int_{0}^{\tau}\mathrm{d}t_{1}H(t_{1})\\ &+\frac{1}{2i\tau}\int_{0}^{\tau}\mathrm{d}t_{1}\int_{0}^{t_{1}} \mathrm{d}t_{2}\left[H(t_{1}),H(t_{2})\right]\,.\end{split} \tag{8}\]
Here, \(H(t)\) is the piecewise constant Hamiltonian corresponding to the different terms of the Trotter expansion Eq. (1):
\[H(t)=\Gamma H_{j}\text{ for }(j-1)\tau/\Gamma\leq t<j\tau/\Gamma, \tag{9}\]
where \(1\leq j\leq\Gamma\). Definitions of the different ensembles and higher orders of the Magnus expansion can be found in App. A and App. B, respectively.
The values at the prethermal plateau are close to those of the diagonal ensemble of \(H_{\text{Magnus}}^{(1)}\), indicating that the first-order truncation already serves as a good approximation of the Floquet Hamiltonian in the prethermal regime. In Appendix B, we show that the higher orders lead to no significant improvement for \(\omega=8J\). The thermal equilibrium values of the initial XY Hamiltonian, in both the diagonal and the microcanonical ensemble, deviate slightly from the Floquet values. Nevertheless, the comparison indicates that the prethermal properties of the Floquet system can reveal nontrivial thermal properties of the XY Hamiltonian.
## III Error mitigation
### Rescaling of survival probabilities
Without mitigation, noise will frustrate any naive attempts to observe prethermal plateaus on current quantum hardware.
Figure 2: **(a)** Prethermal plateau of the 2D XY model for system size \(N=4\times 4\). The initial state is \(\ket{X+}\). The colored lines show the time averages of the mean-squared in-plane magnetization for different Trotter step sizes \(\tau\), corresponding to different driving frequencies \(\omega=2\pi/\tau\). The large circles mark the start and end points of the plateaus according to Definition 4 with tolerance \(\epsilon=0.05\) and a maximum value of \(t_{2}J\) of \(10^{3}\). The black dashed line represents the value in the diagonal ensemble of the initial Hamiltonian. **(b)** Comparison of the value at the prethermal plateau with the microcanonical and diagonal ensemble values of the initial XY Hamiltonian and with the value in the diagonal ensemble of the first-order Magnus expansion. The system size is \(4\times 4\). The driving frequency is \(\omega=8J\) and the plateau value is taken from the time average at \(t=20/J\), which is on the prethermal plateau for all computed initial states with tolerance \(\epsilon=0.05\). For the microcanonical ensemble, we average over an energy window of width \(\delta=0.5J\) in the \(m_{z}=0\) subspace (see Appendix A).
As we show in Appendix C, noise provides an additional heating source to the Floquet driving already discussed; one that we expect to be far stronger with today's error rates, and one without favourable scaling in the system size. It is therefore desirable to develop an error mitigation technique to estimate the result of a noiseless quantum circuit from multiple measurements in a noisy circuit [47; 48; 49]. However, we do not see a reliable method for extracting the desired noiseless results from measurements of the noisy state, as this would imply the ability to infer low-temperature results from high-temperature ones.
To circumvent this issue, we avoid direct tomography of the time-evolved observables on the noisy state. Instead, we convert observable estimation into a survival probability circuit, in a manner similar to that used in out-of-time-order correlators (OTOC) [50] or echo verification circuits [51; 52]. Following forward evolution, we _apply_ the observable and then evolve backwards in time, followed by a projection onto the initial state (see Fig. 3a). This yields a survival probability of the form
\[L_{A,\psi}(t)=\left|\langle\psi|e^{iHt}Ae^{-iHt}|\psi\rangle\right|^{2}=\langle\psi|A(t)|\psi\rangle^{2}\,. \tag{10}\]
In the following, we drop the label \(\psi\) for notational simplicity. For this procedure to work, \(A\) must be a (local) unitary. For spin systems, it is possible to write any observable as a sum of products of unitary Pauli operators and to measure each Pauli operator separately. Although \(L_{A}(t)\) only gives the expectation value of an observable up to a sign, one can infer the sign by tracking it from the known initial value, assuming \(\left\langle\psi\right|A(t)|\psi\rangle\) is a smooth function [53]. This simplifies previous Loschmidt-echo style methods for learning \(\left\langle\psi\right|A(t)|\psi\rangle\), which required ancilla qubits, the preparation of large Greenberger-Horne-Zeilinger (GHZ) states [51] or intermediate re-preparation and measurement of qubits [52].
As we will now demonstrate, a simple rescaling is remarkably effective at mitigating errors in the estimation of the survival probability. The strategy is based on the observation that the survival probability is approximately proportional to the probability of no error occurring. The reason is that the state becomes highly entangled during the evolution, at which point a single-qubit error results in an orthogonal state with high probability. To be more concrete, consider a single Pauli error \(\sigma_{i}^{\mu}\) occurring at time \(t^{\prime}<t\) at site \(i\) and set the observable \(A\) to be identity. The survival probability is then given by \([\text{Tr}\,\left(\rho_{i}(t^{\prime})\sigma_{i}^{\mu}\right)]^{2}\), where \(\rho_{i}(t^{\prime})\) is the reduced density matrix of \(|\psi(t^{\prime})\rangle\) at site \(i\). If this site is entangled with the other parts of the system, the reduced density matrix will be close to the identity (completely mixed) and the survival probability will be close to zero.
The above discussion suggests that the survival probability with noise is related to the noiseless value, times the probability that no error has occurred. For concreteness, we consider error models in which a single-qubit noise channel \(\mathcal{N}_{p}\) is applied to each qubit after every layer of unitary gates. Here, \(p\) is the probability that the channel causes an error on the qubit. The state-of-the-art gate error rate is around \(0.5\%\) for two-qubit gates [13; 14], motivating our choice of \(p=0.3\%\) per qubit per gate as the reference value in our model [54].
Denoting the survival probability in the presence of noise by \(L_{A}^{\mathcal{N}_{p}}(t)\), we then expect that
\[L_{A}^{\mathcal{N}_{p}}(t)/L_{A}(t)\approx(1-p)^{ND}, \tag{11}\]
where \(N\) is the number of qubits and \(D\) is the circuit depth including both forward and backward evolutions. Crucially, no independent knowledge of the noise channel is required to estimate \(L_{A}(t)\). By setting \(A=\mathds{1}\), we obtain \(L_{\mathds{1}}^{\mathcal{N}_{p}}(t)\approx(1-p)^{ND}\) since the noiseless survival probability satisfies \(L_{\mathds{1}}(t)=1\). Hence,
\[L_{A}(t)\approx L_{A}^{\mathcal{N}_{p}}(t)/L_{\mathds{1}}^{\mathcal{N}_{p}}(t), \tag{12}\]
where the right-hand side can be obtained from measurements on the noisy quantum device.
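In post-processing, Eq. (12) reduces to dividing two empirically estimated survival probabilities. A minimal sketch follows, assuming `counts_A` and `counts_id` are dictionaries of measured bitstrings from the echo circuit with and without the observable inserted, and that the echo circuit ends with the inverse state preparation so that "survival" means returning to the all-zeros bitstring (the 16-qubit default is only an example):

```python
def survival_probability(counts, target="0" * 16):
    """Fraction of shots that return to the prepared computational-basis state."""
    return counts.get(target, 0) / sum(counts.values())

def mitigated_survival(counts_A, counts_id, target="0" * 16):
    """Estimate L_A(t) ~ L_A^noisy / L_1^noisy (Eq. 12), i.e. <psi|A(t)|psi>^2 up to sign."""
    return survival_probability(counts_A, target) / survival_probability(counts_id, target)
```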
We can make this argument more rigorous for channels that can be represented in terms of unitary Kraus operators. For such channels, the probability that a particular error occurs is independent of the state. This class of channels includes depolarizing and dephasing noise as well as all other Pauli channels [55]. The survival probability after the noisy circuit can be expressed as
\[L_{A}^{\mathcal{N}_{p}}(t)=\text{Tr}\,\left[\,\left(A\rho_{\psi}^{\mathcal{N} _{p}}(t)\right)^{2}\,\right], \tag{13}\]
where \(\rho_{\psi}^{\mathcal{N}_{p}}(t)\) is the mixed state after the noisy forward evolution [56]. We write the state \(\rho_{\psi}^{\mathcal{N}_{p}}(t)\) as
\[\rho_{\psi}^{\mathcal{N}_{p}}(t)=q\left|\psi_{t}\right\rangle\left\langle\psi_ {t}\right|+(1-q)\tilde{\rho}, \tag{14}\]
where \(\left|\psi_{t}\right\rangle=U_{\text{Trotter}}^{t/\tau}(\tau)\left|\psi\right\rangle\) is the state after noiseless forward evolution and \(q=(1-p)^{ND/2}\) is the probability that no error occurred during the forward evolution. The density matrix \(\tilde{\rho}\) is the state conditioned on at least one error having occurred. The survival probability in the noisy simulation then becomes
\[\begin{split} L_{A}^{\mathcal{N}_{p}}(t)=& q^{2}|\left\langle\psi_{t}\right|A|\psi_{t}\rangle\,|^{2}+(1-q)^{2} \text{Tr}\,\left[(\tilde{\rho}A)^{2}\right]\\ &+2q(1-q)\left\langle\psi_{t}\right|A\tilde{\rho}A|\psi_{t}\rangle \,.\end{split} \tag{15}\]
Defining \(r=\sqrt{\text{Tr}\,\left[\tilde{\rho}^{2}\right]}\), we can use the Cauchy-Schwarz inequality to obtain (see Appendix F)
\[\left|\frac{L_{A}^{\mathcal{N}_{p}}(t)}{q^{2}}-L_{A}(t)\right|\leq(1-q)^{2} \left(\frac{r}{q}\right)^{2}+2(1-q)\frac{r}{q}. \tag{16}\]
Since \(0<q,r\leq 1\), \(L_{A}^{\mathcal{N}_{p}}(t)/q^{2}\) serves as a good approximation of \(L_{A}(t)\) when \(q\gg r\). This condition can be satisfied over a broad range of parameters because \(r\) typically decays with the system size. In the most extreme case of global depolarizing noise, \(\tilde{\rho}\) is a completely mixed state, for which \(r^{2}=2^{-N}\). The condition \(q\gg r\) then gives rise to
\[(1-p)^{ND}>\frac{C}{2^{N}}\Rightarrow ND<\frac{N\log 2+\log(1/C)}{\log\left[1/(1-p) \right]} \tag{17}\]
for some constant \(C\). For \(p=0.3\%\), this evaluates to \(D<230\) in the thermodynamic limit. For more general types of noise,
we similarly expect the scaling with \(q^{2}\) to hold up to some constant circuit depth in the thermodynamic limit. The noisy survival probability at this constant circuit depth will, however, decay exponentially when increasing the system size such that exponentially many measurements are required to resolve the signal. Nevertheless, we will show below that the number of measurements remains experimentally feasible in superconducting quantum devices for moderately sized systems with realistic error rates.
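The depth bound quoted after Eq. (17) is a one-line computation; in the thermodynamic limit the constant \(C\) drops out and one obtains:

```python
import math

p = 0.003                                        # error probability per qubit per gate layer
max_depth = math.log(2) / math.log(1 / (1 - p))  # Eq. (17) as N -> infinity
print(int(max_depth))                            # 230
```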
Two situations where Eq. (12) fails directly follow from our argument. One is the case when \(q\) approaches \(r\), as already discussed. The other is when the initial state does not thermalize. For example, the product state \(\ket{Z+}=\ket{0}^{\otimes N}\) is invariant under the (Floquet) XY Hamiltonian and thus will not get entangled. However, even in this case Eq. (12) works well for many practical channels because two independent errors are unlikely to cancel each other.
### Numerical results
We now numerically verify these considerations for the Floquet evolution of the XY model described in Sec. II.3 in the presence of local depolarizing noise. For each qubit, the noise channel is given by
\[\mathcal{N}_{p}(\rho)=(1-p)\rho+\sum_{\mu=1}^{3}\frac{p}{3}\sigma^{\mu}\rho \sigma^{\mu}. \tag{18}\]
Other types of noise are discussed in Appendix E. In Fig. 3b and c, we respectively show \(L_{\mathds{1}}^{N_{p}}(t)\) and \(L_{A}^{N_{p}}(t)\) for the initial state \(\ket{\psi}=\ket{X+}\) for different system sizes. The computations were performed using the Monte Carlo wavefunction method with the Cirq library [57]. Each data point in the figure corresponds to an average over 2000 quantum trajectories. This number of trajectories is sufficient to observe convergence of the mean value in the region of our interest. The results agree well with Eq. (11). This also holds for different types of noise, as we show in Appendix E. We note that the data points start to deviate from the estimated black dashed lines once \(ND\) exceeds a value that grows approximately linearly in \(N\), in line with the expectation from Eq. (17).
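For completeness, a simplified sketch of how such a trajectory average can be set up with Cirq's state-vector simulator, which samples the Kraus operators of the inserted channels stochastically. Here `echo_circuit` is assumed to already contain the forward evolution, the observable, the backward evolution, and the inverse state preparation of Fig. 3a; the function name and arguments are placeholders rather than the actual simulation code:

```python
import cirq

def noisy_survival_probability(echo_circuit, qubits, p=0.003, trajectories=2000, seed=0):
    """Monte Carlo wavefunction estimate of the survival probability under depolarizing noise."""
    noisy = echo_circuit.with_noise(cirq.depolarize(p))   # channel on every qubit after every moment
    sim = cirq.Simulator(seed=seed)
    survival = 0.0
    for _ in range(trajectories):
        state = sim.simulate(noisy, qubit_order=qubits).final_state_vector
        survival += abs(state[0]) ** 2                    # overlap with |0...0>
    return survival / trajectories
```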
To quantify the error of the mitigation strategy, we define
\[s_{A}^{N_{p}}(t)=L_{A}^{N_{p}}(t)\big{/}L_{\mathds{1}}^{N_{p}}(t)-L_{A}(t). \tag{19}\]
Figure 4a shows the distribution of \(s\) of the mitigated data from Fig. 3. The error remains small for depths up to \(D\approx 100\). To compare different noise rates, we plot in Fig. 4b the square root of the moving average of \(s^{2}\) for different values of \(p\). Similar plots for types of noise other than depolarizing noise are presented in Appendix E. For reference, the typical value of \(L_{A}(t)\) in the simulation is around \(0.3\), which indicates that for circuit depth \(D=80\), the relative error is around \(10\%\) for \(p=0.3\%\).
Although these results confirm the effectiveness of our error mitigation strategy, we also observe a systematic shift of \(s\) towards positive values. This can be explained by the error terms in Eq. (15). Let us assume for simplicity that \(\tilde{\rho}=\mathds{1}/2^{N}\), from which it follows that
\[\frac{L_{A}^{N_{p}}(t)}{L_{\mathds{1}}^{N_{p}}(t)}=\frac{q^{2}L_{A}(t)+(1-q^{ 2})/2^{N}}{q^{2}+(1-q^{2})/2^{N}}, \tag{20}\]
where we used the fact that \(A^{2}=\mathds{1}\) since \(A\) is hermitian and
Figure 3: **(a)** Quantum circuit to map the expectation value of a (unitary) observable onto a survival probability. The initial state is prepared with \(V\), \(U=U_{1},U_{2},U_{3}\) or \(U_{4}\) is a single step in the Trotter decomposition, and \(\mathcal{N}\) denotes a local noise channel. **(b), (c)** Dependence of \(L_{\mathds{1}}^{N_{p}}(t)\) and \(L_{A}^{N_{p}}(t)\) on the circuit depth \(D\) and system size \(N\) in the presence of depolarizing noise with error probability \(p=0.3\%\). The initial state is \(\ket{\psi}=\ket{X+}\). The observable \(A=4S_{i}^{x}S_{i+1}^{x}\) is a correlator in the center of the lattice. The black dashed lines represent the scaling predicted by Eq. (11).
unitary. Hence,
\[s_{A}^{N_{p}}(t)=\left[1-L_{A}(t)\right]\frac{(1-q^{2})}{q^{2}\cdot 2^{N}+(1-q^{2} )}>0. \tag{21}\]
For certain error models, it may be possible to remove this systematic error by using a more complicated rescaling formula instead of (12). Nevertheless, the systematic error remains small as long as \(q^{2}\gg\mathrm{Tr}\ (\tilde{\rho}^{2})\).
We will now argue that our mitigation strategy enables the observation of prethermalization on current and near-term quantum devices. After Trotterization, the total required circuit depth \(D\) to simulate time evolution of the two-dimensional XY model up to time \(t_{\mathrm{max}}\) is
\[D=4\cdot 2\cdot t_{\mathrm{max}}/\tau, \tag{22}\]
which, from left to right, represents the number of layers per Trotter step, back and forward evolution, and the number of Trotter steps. To see prethermalization of the Floquet XY model, Fig. 2 indicates that \(t_{\mathrm{max}}\) should be at least \(8/J\) for \(\omega=8J\), which yields \(D\approx 80\). The estimation is within the limit of the maximum circuit depth from Eq. (17) and Fig. 4 for \(p=0.3\%\), showing that our proposal is suitable for current and near-term quantum devices.
We have now gathered all the ingredients for the full simulation of the PEVP on a noisy quantum device. We consider the two-dimensional XY model on a \(4\times 4\) square lattice in the presence of depolarizing noise with noise rate \(p=0.3\%\). For the observable, we focus on the correlator \(A=4S_{i}^{x}S_{i+1}^{x}\) of a pair of neighboring sites at the center of the lattice. In Fig. 5, we plot the time averages of \(\langle A(t)\rangle^{2}\) at driving frequency \(\omega=8J\) as a function of the initial state energy \(E\) up to \(t=10\tau\), corresponding to circuit depth \(D=80\). The initial states were chosen from the same set as in Fig. 2b. The black crosses represent the noise-free results, whereas for the red points the experiment was simulated including noise and error mitigation. The error bars show statistical errors due to fluctuations of different Monte Carlo trajectories, propagated from the standard deviations of \(L_{A}^{N_{p}}(t)\) and \(L_{\mathbf{I}}^{N_{p}}(t)\). Note that the sign of \(\langle\psi|A(t)|\psi\rangle\) turns out to be constant during the Floquet time evolution in our range of simulations. In the long-time limit, the time average of the square is therefore equivalent to the square of the time average, given that they converge to a constant.
We find that the noise-free results lie within the error bars for all initial states and that the trend of the observable is well reproduced. This shows that our error mitigation procedure is viable to solve the PEVP. We note that the deviation between the noisy and noise-free results is biased since the red points are systematically above the black crosses, consistent with the expectation from Eq. (21).
### Implementation
The results of the previous section show that our error mitigation strategy enables the solution of the PEVP for the XY model at a depolarizing noise rate of \(p=0.3\%\). One more step remains to assess the experimental viability: an estimate of the number of required measurements.
In experiments, the survival probabilities are estimated from binary outcomes (success / failure). This gives rise to shot noise, which in turn sets a lower bound on the necessary number of samples. To achieve a statistical uncertainty of \(\epsilon\), roughly \(1/\epsilon^{2}\) samples are needed. For the error mitigation scheme to work, the shot noise must be smaller than the survival probability. As the noisy survival probability is suppressed by the factor \((1-p)^{ND}\), it follows that the number of needed measurements scales as \((1-p)^{-2ND}\). We note that this number of samples is typically orders of magnitude larger than the number needed to suppress the fluctuations in Monte Carlo trajectories due to noisy dynamics.
Since the sample complexity scales exponentially with the number of qubits, this is an important limitation to the system size that can realistically be reached. Nevertheless, classically hard regimes are accessible with realistic parameters. For instance, setting \(N=50\) while keeping \(p=0.3\%\) and \(D=80\), we find that \((1-p)^{-2ND}\approx 3\times 10^{10}\) samples are needed. This is inconveniently large as current superconducting quantum devices can collect millions of samples on the time scale of minutes. However, a modest improvement in the error rate to \(p=0.2\%\) reduces the number of samples to a much more realistic value of \(9\times 10^{6}\).
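The quoted shot counts follow directly from the scaling \((1-p)^{-2ND}\); as a quick check:

```python
def required_samples(p, N, D):
    """Shots needed so that shot noise stays below the noise-suppressed survival probability."""
    return (1 - p) ** (-2 * N * D)

print(f"{required_samples(0.003, 50, 80):.0e}")   # ~3e10
print(f"{required_samples(0.002, 50, 80):.0e}")   # ~9e6
```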
We have so far neglected the role of measurement errors, which occur with probability \(p_{m}\approx 1\%-2\%\) for each single qubit measurement in current devices [58; 14]. Fortunately, these errors are automatically remedied by our error mitigation strategy. The measurement errors simply suppress the survival probability by another factor \((1-p_{m})^{N}\), which is independent of the circuit depth. For system sizes up to \(N=50\), this increases the required number of measurements by at most an order of magnitude.
Figure 5: The time average of \(L_{\sigma_{i}^{x}\sigma_{i+1}^{x},\psi}(t)\) at \(Jt\approx 7.85\) with \(\omega=8J\), corresponding to \(10\) Trotter steps. The black crosses show the noiseless result. The red points were obtained by applying our error mitigation strategy to noisy simulations with a single-qubit depolarization rate \(p=0.3\%\). Error bars indicate the statistical errors due to fluctuations of different Monte Carlo trajectories, propagated from the standard deviations of \(L_{A}^{N_{p}}(t)\) and \(L_{\mathds{1}}^{N_{p}}(t)\). The system size is \(N=4\times 4\).
## IV Summary and outlook
We have proposed the prethermal expectation value problem as a way to study thermal observables on noisy, intermediate-scale quantum devices. Our approach relies on the observation that relatively large Trotter steps, which do not permit a rigorous bound on the Trotter error, can give rise to prethermalization. We showed that in the prethermal regime, the equilibration of observables is similar to the expected dynamics under the original Hamiltonian. It may be possible to approximate evolution under the original Hamiltonian even better by cancelling higher-order terms of the Magnus expansion at the cost of more complex circuits. The range of energies at which the observables can be probed is set by the range of energies of the used initial states. We restricted ourselves to product states for this work, but the protocol can straightforwardly be extended to different initial states, which may increase the range of accessible energies.
We further demonstrated that the prethermal regime is experimentally accessible with noise rates of near-term devices using an error-mitigation scheme based on measuring and rescaling survival probabilities. This scheme is not limited to the PEVP but can be applied much more broadly in the context of quantum simulation. Our work provides all necessary ingredients to also study the approach to equilibrium and to extract, for instance, diffusion constants. Alternatively, one could consider the quantum dynamics of models which do not thermalize, such as quantum scars [59; 60] or many-body localized systems [61; 62].
Our work creates a new avenue to demonstrating useful quantum advantage on noisy devices. Although the XY model studied here can be efficiently simulated on classical computers with quantum Monte Carlo methods [45], our approach can be readily adapted to more complex Hamiltonians. As a simple modification of the XY model, one might consider adding a site-dependent sign to the interaction strength \(J\). This renders classical simulation of this model much harder since it causes a sign problem in quantum Monte Carlo methods [63; 64; 65]. The complexity of our proposed approach to quantum simulation however remains unaffected by this modification. Hence, quantum advantage may be within reach for studying the equilibrium properties of Hamiltonians with a sign problem.
## Acknowledgements
TEO and VS thank Yaroslav Herasymenko, Robin Kothari and Rolando Somma for useful discussions. We acknowledge the support from the German Federal Ministry of Education and Research (BMBF) through FermiQP (Grant No. 13N15890) and EQUAUHMO (Grant No. 13N16066) within the funding program quantum technologies - from basic research to market. This research is part of the Munich Quantum Valley (MQV), which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. YY was funded by a grant from Google Quantum AI. DSW has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 101023276. The work was partially supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2111 - 390814868.
|
2303.17541 | Nonlinear Approximation with Subsampled Rank-1 Lattices | In this paper we approximate high-dimensional functions $f\colon\mathbb
T^d\to\mathbb C$ by sparse trigonometric polynomials based on function
evaluations. Recently it was shown that a dimension-incremental sparse Fourier
transform (SFT) approach does not require the signal to be exactly sparse and
is applicable in this setting. We combine this approach with subsampling
techniques for rank-1 lattices. This way our approach benefits from the
underlying structure in the sampling points making fast Fourier algorithms
applicable whilst achieving the good sampling complexity of random points
(logarithmic oversampling). In our analysis we show detection guarantees of the
frequencies corresponding to the Fourier coefficients of largest magnitude. In
numerical experiments we make a comparison to full rank-1 lattices and
uniformly random points to confirm our findings. | Felix Bartel, Fabian Taubert | 2023-03-30T17:10:45Z | http://arxiv.org/abs/2303.17541v2 | # Nonlinear Approximation
###### Abstract
In this paper we approximate high-dimensional functions \(f:\mathbb{T}^{d}\to\mathbb{C}\) by sparse trigonometric polynomials based on function evaluations. Recently it was shown that a dimension-incremental sparse Fourier transform (SFT) approach does not require the signal to be exactly sparse and is applicable in this setting. We combine this approach with subsampling techniques for rank-1 lattices. This way our approach benefits from the underlying structure in the sampling points making fast Fourier algorithms applicable whilst achieving the good sampling complexity of random points (logarithmic oversampling).
In our analysis we show detection guarantees of the frequencies corresponding to the Fourier coefficients of largest magnitude. In numerical experiments we make a comparison to full rank-1 lattices and uniformly random points to confirm our findings.
## I Introduction
The recovery of sparse signals or compressed sensing is a thoroughly studied problem in signal processing. While many one-dimensional approaches exist [10], we consider the multivariate problem on the \(d\)-dimensional torus \(\mathbb{T}^{d}=(\mathbb{R}/\mathbb{Z})^{d}\). Given an \(s\)-sparse signal \(f=\sum_{\mathbf{k}\in I}\hat{f}_{\mathbf{k}}\exp(2\pi\mathrm{i}\langle\mathbf{k},\cdot\rangle)\), \(|I|=s\), the problem is to recover \(I\subset\mathbb{Z}^{d}\) from function evaluations of the function \(f\). Here, a sparse Fourier transform (SFT) approach can be generalized to work in higher dimensions, cf. [14]. However, when the signal is not exactly sparse and we approximate by \(g=\sum_{\mathbf{k}\in I}\hat{g}_{\mathbf{k}}\exp(2\pi\mathrm{i}\langle\mathbf{k},\cdot\rangle)\) for some \(I\subset\mathbb{Z}^{d}\), we obtain an additional error
\[\|f-g\|_{L_{2}}^{2}=\|f-P_{I}f\|_{L_{2}}^{2}+\|P_{I}f-g\|_{L_{2}}^{2}\,.\]
In this setting we have to find
* a suitable frequency set \(I\subset\mathbb{Z}^{d}\) to bound the first summand and
* \(\hat{g}_{\mathbf{k}}\in\mathbb{C}\) approximating the true Fourier coefficients \(\hat{f}_{\mathbf{k}}=\langle f,\exp(2\pi\mathrm{i}\langle\mathbf{k},\cdot\rangle) \rangle_{L_{2}}\in\mathbb{C}\) in order to bound the second summand.
Given a frequency set \(I\), i.e., a suitable linear approximation space, the corresponding Fourier coefficients can be computed via least squares where error bounds are known, cf. [1]. Thus, the main task is the detection of a frequency set \(I\), which should optimally be the support of the Fourier coefficients \(\hat{f}_{\mathbf{k}}\) with largest magnitude like in the best \(m\)-term approximation, cf. [25, Section 1.7].
Recent approaches include [11] or [17] where arbitrary bounded orthonormal product basis were considered. As an application rank-1 lattices were used for the sampling similar to [14] in order to make use of fast Fourier algorithms. This was then compared to random points, which have a better sampling complexity but lack fast algorithms.
In this paper we modify the approach from [17] to work with subsampled rank-1 lattices utilizing recent subsampling techniques from [2] combining the good sampling complexity with the fast algorithms.
The SFT techniques from [17] work for other bounded orthonormal product basis as well and the subsampling methods from [2] for arbitrary \(L_{2}\)-Marcinkiewicz-Zygmund inequalities. Therefore, the presented theory can be generalized but for the sake of readability we restrict ourselves to the torus \(\mathbb{T}^{d}\) and rank-1 lattices.
We will recap the ideas of an SFT approach in Section II-A followed by the subsampling techniques for rank-1 lattices in Section II-B, where we will give an \(L_{2}\)-error bound for least squares approximation. In Section III we will combine the SFT approach with the subsampled rank-1 lattices and show detection guarantees for the Fourier coefficients of largest magnitude in Theorem III.3. Finally, we conclude with a numerical experiment in Section IV comparing rank-1 lattices and random points with the subsampled rank-1 lattices with respect to sampling complexity and runtime. The proofs can be found in the supplementary material.
## II Prerequisites
### _Sparse Fourier Transform_
We briefly recall the key idea of a sparse Fourier transform (SFT) approach. For a more detailed explanation see [22, 14, 16], or [17] for a more general version. As stated in the introduction, the goal is to find frequencies \(I\subset\mathbb{Z}^{d}\) such that the target function \(f\colon\mathbb{T}^{d}\to\mathbb{C}\) can be approximated well from \(\mathrm{span}\{\exp(2\pi\mathrm{i}\langle\mathbf{k},\cdot\rangle)\}_{\mathbf{k}\in I}\). In order to do so, we choose a suitable search space \(\Gamma\subset\mathbb{Z}^{d}\) and proceed in a dimension-incremental way:
**One-dimensional frequencies.** We use the projections of \(\Gamma\) to its \(t\)-th component \(\mathcal{P}_{\{t\}}(\Gamma)\coloneqq\{k_{t}:\mathbf{k}\in\Gamma\}\), \(t=1,\ldots,d\) for the candidate sets. From these we construct frequency sets \(I_{\{t\}}\subset\mathcal{P}_{\{t\}}(\Gamma)\), \(t=1,\ldots,d\), consisting of the "most important", one-dimensional frequency components in the respective dimensions.
**Dimension-incremental step.** We construct the next frequency set \(I_{\{1,\ldots,t\}}\) as a subset of the candidate set
\((I_{\{1,\ldots,t-1\}}\times I_{\{t\}})\cap\mathcal{P}_{\{1,\ldots,t\}}(\Gamma)\) consisting of the "most important", \(t\)-dimensional frequency components.
The output is the final frequency set \(I\coloneqq I_{\{1,\ldots,d\}}\) and it is left to refine the formulation of "most important".
Let \(I^{\star}\) be the set of frequencies corresponding to the \(s\) Fourier coefficients \(\hat{f}_{\mathbf{k}}\) of largest magnitude for some sparsity \(s\in\mathds{N}\). Ideally, in the step \(t-1\to t\) we want to find the frequencies \(\mathcal{P}_{\{1,\ldots,t\}}(I^{\star})\). The idea is to use an approximation of so-called projected Fourier coefficients
\[\hat{f}_{\{1,\ldots,t\},\mathbf{k}}(\mathbf{\xi})=\int_{\mathds{T}^{t}}f(\mathbf{x},\mathbf{ \xi})\exp(-2\pi\mathrm{i}\langle\mathbf{k},\mathbf{x}\rangle)\;\mathrm{d}\mathbf{x}\,, \tag{1}\]
where \(\mathbf{x}=(x_{1},\ldots,x_{t})\in\mathds{T}^{t}\), \(\mathbf{\xi}=(\xi_{1},\ldots,\xi_{d-t})\in\mathds{T}^{d-t}\), and \(f(\mathbf{x},\mathbf{\xi})=f(x_{1},\ldots,x_{t},\xi_{1},\ldots,\xi_{d-t})\). The name is based on the fact, that those values may be seen as the Fourier coefficients of the function \(f(\cdot,\mathbf{\xi})\in L_{2}(\mathds{T}^{t})\) with a fixed anchor \(\mathbf{\xi}\in\mathds{T}^{d-t}\). By the orthonormality of the Fourier basis we have
\[\hat{f}_{\{1,\ldots,t\},\mathbf{k}}(\mathbf{\xi})=\sum_{\mathbf{l}\in\mathds{Z}^{d-t}} \hat{f}_{(\mathbf{k},\mathbf{l})}\exp(2\pi\mathrm{i}\langle\mathbf{l},\mathbf{\xi}\rangle)\,,\]
i.e., the projected Fourier coefficient \(\hat{f}_{\{1,\ldots,t\},\mathbf{k}}(\mathbf{\xi})\) contains information on the Fourier coefficients with \(\mathbf{k}\in\mathds{Z}^{t}\) in the first \(t\) components of their frequencies.
The frequency \(\mathbf{k}\) is likely to be important and should be included in \(I_{\{1,\ldots,t\}}\), if the absolute value \(|\hat{f}_{\{1,\ldots,t\},\mathbf{k}}(\mathbf{\xi})|\) is larger than some detection threshold \(\delta^{\prime}\). In the algorithm, we carry out \(r\) detection iterations with different, randomly drawn anchors \(\mathbf{\xi}^{i}\), \(i=1,\ldots,r\), to avoid cases where the factors \(\exp(2\pi\mathrm{i}\langle\mathbf{l},\mathbf{\xi}\rangle)\) cause an annihilation (which results in small projected Fourier coefficients, even though the corresponding frequency components \(\mathbf{k}\) are important). The detection of the most important one-dimensional components \(k_{t}\) in the first step of the SFT approach works analogously.
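Once (approximate) projected Fourier coefficients are available, the detection rule itself is a simple thresholded union over the \(r\) anchors. A sketch, assuming `coeffs` is an \(r\times|\text{candidates}|\) array holding one row of projected coefficients per anchor \(\mathbf{\xi}^{i}\):

```python
import numpy as np

def detect_frequencies(coeffs, candidates, threshold):
    """Keep every candidate whose projected coefficient exceeds the threshold for at least one anchor."""
    keep = (np.abs(coeffs) >= threshold).any(axis=0)
    return [candidates[j] for j in np.nonzero(keep)[0]]
```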
Finally, we need to discuss the approximation of (1). A favorable method \(\mathcal{A}\) should combine the following properties:
* have small sample complexity;
* computationally fast, that is, both the construction of the sampling points \(\mathbf{\xi}\) and the evaluation of the projected Fourier coefficients \(\hat{f}_{\{1,\ldots,t\},\mathbf{k}}\) using the samples \(f(\mathbf{x},\mathbf{\xi})\) can be performed efficiently.
* small error, such that the relative magnitudes of the projected Fourier coefficients are preserved.
Note that \(\mathcal{A}\) has to be performed several times throughout the SFT approach in different dimensions up to \(d\). It is favorable to use different methods in the one- and multivariate steps, exploiting the advantages of the respective methods.
### _Subsampling of rank-1 lattices_
Rank-1 lattices \(\mathbf{X}_{M}=\{\mathbf{x}^{1},\ldots,\mathbf{x}^{M}\}\subset\mathds{T}^{d}\) consist of equispaced points on a line which wraps around the \(d\)-dimensional torus \(\mathds{T}^{d}\); more precisely, for a generating vector \(\mathbf{z}\in\mathds{Z}^{d}\) and a lattice size \(M\in\mathds{N}\) they are defined via
\[\mathbf{X}_{M}\coloneqq\Big{\{}\frac{1}{M}(i\mathbf{z}\operatorname{mod}\,M\mathds{1 })\in\mathds{T}^{d}:i=0,\ldots,M-1\Big{\}}\,,\]
where the modulus operation is used entry-wise. We will use them in the least squares approximation
\[S_{\mathbf{X}_{M}}f=\operatorname*{arg\,min}_{g\in V}\sum_{i=1}^{M}|g(\mathbf{x}^{i})- f(\mathbf{x}^{i})|^{2}\,,\]
where \(V=\operatorname{span}\{\exp(2\pi\mathrm{i}\langle\mathbf{k},\cdot\rangle)\}_{k\in I}\). By simple calculus we have for the Fourier coefficients \(\mathbf{\hat{g}}=(\hat{g}_{\mathbf{k}})_{\mathbf{k}\in I}\) of \(S_{\mathbf{X}_{M}}f=\sum_{\mathbf{k}\in I}\hat{g}_{\mathbf{k}}\exp(2\pi\mathrm{i}\langle\bm {k},\cdot\rangle)\) the equation \(\mathbf{\hat{g}}=(\mathbf{L}^{\star}\mathbf{L})^{-1}\mathbf{L}^{\star}\mathbf{f}\), where
\[\mathbf{L}=(\exp(2\pi\mathrm{i}\langle\mathbf{k},\mathbf{x}^{i}\rangle))_{i=1,\ldots,M,\bm {k}\in I}\,.\]
We will solve this system of equations iteratively, using only matrix-vector multiplications. Because of the one-dimensional structure of rank-1 lattices, a one-dimensional FFT can be used to compute the matrix-vector product with the corresponding Fourier matrix \(\mathbf{L}\) in \(\mathcal{O}(M\log M+d|I|)\) instead of the naive \(\mathcal{O}(M\cdot|I|)\), where \(I\subset\mathds{Z}^{d}\) is an arbitrary frequency index set. For approximating functions with rank-1 lattices we rely on the following feature: we say a rank-1 lattice \(\mathbf{X}_{M}\) has the _reconstructing property_ for a frequency index set \(I\), if
\[\frac{1}{M}\sum_{i=1}^{M}\exp(2\pi\mathrm{i}\langle\mathbf{k},\mathbf{x}^{i}\rangle)= \delta_{\mathbf{0},\mathbf{k}}\quad\text{for all}\quad\mathbf{k}\in\mathcal{D}(I)\,, \tag{2}\]
where \(\mathcal{D}(I)=\{\mathbf{k}-\mathbf{l}:\mathbf{k},\mathbf{l}\in I\}\). Approximation bounds and further resources can be found in [23, 20, 12, 13, 21, 7].
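The fast evaluation behind the \(\mathcal{O}(M\log M+d|I|)\) bound only needs the residues \(\langle\mathbf{k},\mathbf{z}\rangle\bmod M\) and a single one-dimensional FFT. A numpy sketch, assuming the frequency set `I` is an integer array of shape \((|I|,d)\) and `z` an integer generating vector:

```python
import numpy as np

def rank1_lattice(z, M):
    """Lattice nodes x^i = (i * z mod M) / M for i = 0, ..., M-1."""
    return (np.arange(M)[:, None] * z[None, :] % M) / M

def lattice_evaluate(ghat, I, z, M):
    """Evaluate g = sum_k ghat_k exp(2 pi i <k, .>) at all M lattice nodes via one 1D FFT."""
    residues = (I @ z) % M                 # <k, z> mod M for every frequency k
    c = np.zeros(M, dtype=complex)
    np.add.at(c, residues, ghat)           # coefficients aliasing to the same residue are summed
    return np.fft.ifft(c) * M              # g(x^i) = sum_r c_r exp(2 pi i r i / M)
```

In particular, checking the reconstructing property (2) amounts to verifying that the residues \(\langle\mathbf{k},\mathbf{z}\rangle\bmod M\) are pairwise distinct for all \(\mathbf{k}\in I\).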
**Example II.1**.: _When approximating functions from Sobolev spaces with mixed smoothness \(H^{s}_{\mathrm{mix}}\) for \(s>1/2\), the best frequency index sets for approximation are the so-called hyperbolic crosses, cf. [6]. We consider the following two scenarios:_
_1. When approximating with samples from a reconstructing rank-1 lattice \(\mathbf{X}_{M}\) the following error bound was shown for the least squares approximation in [3, Theorem 2]:_
\[M^{-s}\lesssim\sup_{\|f\|_{\mathbf{x}_{\mathrm{mix}}}^{s}\leq 1}\|f-S_{\mathbf{X}_{M}}f\| _{L_{2}}^{2}\lesssim M^{-s}(\log M)^{(d-2)s+d-1},\]
_where the lower bound holds for all rank-1 lattices and there exists a rank-1 lattice satisfying the upper one._
_2. In contrast to that, for the same frequencies from the hyperbolic cross \(I\) and using uniformly drawn points \(\mathbf{X}=\{\mathbf{x}^{1},\ldots,\mathbf{x}^{n}\}\) we obtain by [15, Corollary 2]_
\[\sup_{\|f\|_{\mathbf{x}_{\mathrm{mix}}}^{s}\leq 1}\|f-S_{\mathbf{X}}f\|_{L_{2}}^{2} \lesssim n^{-2s}(\log n)^{2ds}\,.\]
Example II.1 demonstrates that the sample complexity loses half the rate of convergence when approximating with rank-1 lattices compared to uniformly random points. The reason for that lies in the reconstructing requirement (2) which are \(|\mathcal{D}(I)|\approx|I|^{2}\) conditions blowing up the size \(M\) of the rank-1 lattice. However, when we use the uniformly random points with the better approximation rate, the lack of structure in the uniformly random points prevents the implementation
of a fast and efficient matrix-vector multiplication with the corresponding Fourier matrix.
It was shown in [2, Theorem 3.1] that the good approximation rates and the fast algorithms can be combined: the approach is to discretely subsample a rank-1 lattice, obtaining points \(\mathbf{X}=\{\mathbf{x}^{1},\ldots,\mathbf{x}^{n}\}\) with \(n\geq 12|I|(\log|I|+t)\). Since the underlying structure is preserved, fast Fourier algorithms are applicable, cf. [2, Eq. 5.5]. Further, we have
\[A\|f\|_{L_{2}}^{2}\leq\frac{1}{n}\sum_{i=1}^{n}|f(\mathbf{x}^{i})|^{2}\leq B\|f\|_{ L_{2}}^{2} \tag{3}\]
for all \(f\in\operatorname{span}\{\exp(2\pi\mathrm{i}\langle\mathbf{k},\cdot\rangle)\}_{ \mathbf{k}\in I}\) with \(A=1/2\), \(B=3/2\), and probability exceeding \(1-2\exp(-t)\). The condition (3) is known as \(L_{2}\)_-Marcinkiewicz-Zygmund inequality_ and is a relaxation of the reconstructing property (2), since for \(A=B=1\) (3) is equivalent to (2) which can be shown using the parallelogram law, cf. [2, Theorem 2.3]. It gives a relation of the continuous \(L_{2}\)-norm and the point evaluations and is used to show error bounds for least squares approximation. For continuously random points this was done for individual functions in [4, 5] and improved by [1]. Note, the existence of a probability density was shown such that (3) holds with merely linear oversampling, cf. [8].
The following result is a combination of the discrete subsampling techniques from [2] and the error bound from [1, Thm. 3.2] for individual function approximation.
**Theorem II.2**.: _Let \(f\colon\mathbb{T}^{d}\to\mathds{C}\) be a fixed function and \(\mathbf{X}_{M}=\{\mathbf{x}^{1},\ldots,\mathbf{x}^{M}\}\subset\mathbb{T}^{d}\) be a reconstructing rank-1 lattice for a frequency set \(I_{M}\subset\mathds{Z}^{d}\). Further, let \(\emptyset\neq I\subset I_{M}\), \(t>0\), and \(n\) be such that \(n\geq 12|I|(\log|I|+t)\). Drawing a set \(\mathbf{X}=\{\mathbf{x}^{i}\}_{i\in J}\), \(|J|=n\) of points i.i.d. and uniform from \(\mathbf{X}_{M}\), we have_
\[\|f-S_{\mathbf{X}}f\|_{L_{2}}^{2}\] \[\leq\Big{(}3\|f-P_{I}f\|_{L_{2}}+\sqrt{\frac{2}{9|I|}}\|P_{I_{M}} f-P_{I}f\|_{\infty}\Big{)}^{2}\] \[\quad+4\|f-P_{I_{M}}f\|_{\infty}^{2}\] \[\leq\Big{(}3+\sqrt{\frac{2|I_{M}\setminus I|}{9|I|}}\Big{)}^{2}\| f-P_{I}f\|_{L_{2}}^{2}+4\|f-P_{I_{M}}f\|_{\infty}^{2}\]
_with probability exceeding \(1-2\exp(-t)\)._
Given the logarithmic oversampling and assuming \(|I_{M}\setminus I|=c|I|\) for some constant \(c>0\), we obtain the projection error in the first summand, which is the best possible from the given approximation space. This has to be balanced with the second term, which decreases for larger \(I_{M}\); the choice of \(I_{M}\) is a degree of freedom that does not affect the sampling complexity. In the numerical experiments we will see that \(I=I_{M}\) is sufficient in practice. Note that in this case the corresponding rank-1 lattice will still be of size \(M\approx|I|^{2}\) and the random subsampling with logarithmic oversampling will improve the sampling complexity.
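A possible realization of the resulting least squares problem on a subsampled lattice, again only for illustration: the text does not fix a particular iterative solver, so the sketch below uses conjugate gradients on the normal equations with a small iteration cap, and every matrix-vector product reuses the one-dimensional FFT structure of the full lattice. Here `J` denotes the indices of the i.i.d. uniformly drawn lattice nodes and `I`, `z`, `M` are as before:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def subsampled_lattice_lsq(f_samples, J, I, z, M, maxiter=10):
    """Least squares Fourier coefficients on a randomly subsampled rank-1 lattice."""
    residues = (I @ z) % M
    m = len(residues)

    def L_mv(ghat):                         # (L ghat)_j = g(x^j): aggregate, 1D iFFT, gather
        c = np.zeros(M, dtype=complex)
        np.add.at(c, residues, ghat)
        return (np.fft.ifft(c) * M)[J]

    def L_rmv(y):                           # (L* y)_k: scatter to the full lattice, 1D FFT, gather
        y_full = np.zeros(M, dtype=complex)
        np.add.at(y_full, J, y)
        return np.fft.fft(y_full)[residues]

    normal = LinearOperator((m, m), matvec=lambda g: L_rmv(L_mv(g)), dtype=complex)
    ghat, _ = cg(normal, L_rmv(np.asarray(f_samples, dtype=complex)), maxiter=maxiter)
    return ghat
```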
## III SFT with subsampled rank-1 lattices
For a function \(f\colon\mathbb{T}^{d}\to\mathds{C}\) and a threshold \(\delta>0\), the final goal is to find \(I_{\delta}\coloneqq\{\mathbf{k}\in\mathds{Z}^{d}:|\hat{f}_{\mathbf{k}}|\geq\delta\}\) or a superset of slightly bigger size. As described in Section II-A, our approach works in a dimension-incremental way and so will the analysis. The goal in the step from dimension \(t-1\) to \(t\) is the detection of \(\mathcal{P}_{\{1,\ldots,t\}}(I_{\delta})\subset\mathds{Z}^{t}\). We first show that using the projected coefficients (1) yields the objective.
**Theorem III.1**.: _Let \(f\colon\mathbb{T}^{d}\to\mathds{C}\), \(\varepsilon,\delta>0\), \(I_{\delta}\coloneqq\{\mathbf{k}\in\mathds{Z}^{d}:|\hat{f}_{\mathbf{k}}|\geq\delta\}\), and_
\[r\geq 4\Big{(}|I_{\delta}|+\frac{1}{\delta^{2}}\Big{(}\sum_{\mathbf{k}\in I_{ \delta}}|\hat{f}_{\mathbf{k}}|\Big{)}^{2}\Big{)}\Big{(}\log|I_{\delta}|+\log\frac{ 1}{\varepsilon}\Big{)}\,.\]
_Further let \(\mathbf{\xi}^{1},\ldots,\mathbf{\xi}^{r}\in\mathbb{T}^{d-t}\) be drawn i.i.d. uniformly random. With probability \(1-\varepsilon\) we detect all important frequencies in dimension \(t\) via the projected Fourier coefficients (1) with \(r\) detection iterations and threshold \(\delta^{\prime}\leq\delta/\sqrt{2}\), i.e.,_
\[\max_{i=1,\ldots,r}|\hat{f}_{\{1,\ldots,t\},\mathbf{k}}(\mathbf{\xi}^{i})|\geq\delta^ {\prime}\quad\forall\mathbf{k}\in\mathcal{P}_{\{1,\ldots,t\}}(I_{\delta})\,.\]
In practice we do not have the exact projected Fourier coefficients \(\hat{f}_{\{1,\ldots,t\},\mathbf{k}}(\mathbf{\xi})\). Rather, we will approximate them by approximating
\[f(\cdot,\mathbf{\xi}^{i})=\sum_{\mathbf{k}\in\mathds{Z}^{t}}\hat{f}_{\{1,\ldots,t\}, \mathbf{k}}(\mathbf{\xi})\exp(2\pi\mathrm{i}\langle\mathbf{k},\cdot\rangle)\]
for fixed anchors \(\mathbf{\xi}^{1},\ldots,\mathbf{\xi}^{r}\in\mathds{T}^{d-t}\) in the last \(d-t\) components and a subsampled rank-1 lattice \(\mathbf{X}\subset\mathds{T}^{t}\) in the first \(t\) components:
\[S_{\mathbf{X}}f(\cdot,\mathbf{\xi})=\sum_{\mathbf{k}\in\mathds{Z}^{t}}\hat{g}_{\{1,\ldots,t\},\mathbf{k}}(\mathbf{\xi})\exp(2\pi\mathrm{i}\langle\mathbf{k},\cdot\rangle)\,. \tag{4}\]
**Theorem III.2**.: _Let the assumptions from Theorem III.1 hold and let \(\mathcal{P}_{\{1,\ldots,t\}}(I_{\delta})\subset I_{\{1,\ldots,t\}}\subset I_{\{1, \ldots,t\}}^{M}\) be frequency index sets such that \(|I_{\{1,\ldots,t\}}^{M}\setminus I_{\{1,\ldots,t\}}|\leq 9/2|I_{\{1,\ldots,t\}}|\). Further, let \(\mathbf{X}^{M}\) be a reconstructing rank-1 lattice for \(I_{\{1,\ldots,t\}}^{M}\) with probability \(1-\varepsilon\) and \(\mathbf{X}\subset\mathbf{X}^{M}\) an i.i.d. uniformly drawn subset with_
\[|\mathbf{X}|\geq 12|I_{\{1,\ldots,t\}}|\Big{(}\log|I_{\{1,\ldots,t\}}|+\log\Big{(} \frac{2r}{\varepsilon}\Big{)}\Big{)}\,.\]
_With probability \(1-3\varepsilon\) we detect all important frequencies in dimension \(t\) via the approximated projected Fourier coefficients \(\hat{g}_{\{1,\ldots,t\},\mathbf{k}}(\mathbf{\xi}^{i})\) from (4) with \(r\) detection iterations and threshold_
\[\delta^{\prime}\leq\frac{\delta}{\sqrt{2}}-4\|f-P_{I_{\delta}}f\|_{L_{2}}-2\|f-P_{I_{\{1,\ldots,t\}}^{M}\times\mathbb{T}^{d-t}}f\|_{\infty}\,,\]
_i.e.,_
\[\max_{i=1,\ldots,r}|\hat{g}_{\{1,\ldots,t\},\mathbf{k}}(\mathbf{\xi}^{i})|\geq\delta^ {\prime}\quad\forall\mathbf{k}\in\mathcal{P}_{\{1,\ldots,t\}}(I_{\delta})\,.\]
Note that choosing \(I_{\{1,\ldots,t\}}^{M}\) large does not affect the sampling complexity but only the initial rank-1 lattice from which we sample, and it diminishes the term \(\|f-P_{I_{\{1,\ldots,t\}}^{M}\times\mathbb{T}^{d-t}}f\|_{\infty}\).
Having shown the successful detection of the important frequencies in one dimension-incremental step it is left to
apply Theorem III.2 iteratively to obtain our main theorem stating the successful detection of all important frequencies \(\mathbf{k}\in I_{\delta}\) using samples in subsampled rank-1 lattices.
**Theorem III.3**.: _Let \(f\colon\mathds{T}^{d}\to\mathds{C}\), \(\varepsilon,\delta>0\), \(\Gamma\supset I_{\delta}\coloneqq\{\mathbf{k}\in\mathbb{Z}^{d}:|\hat{f}_{\mathbf{k}}|\geq\delta\}\), and let \(r\) be as in Theorem III.1._
1. _Let_ \(t=1,\ldots,d\) _and_ \(\mathbf{X}_{\{t\}}^{M}\) _be a reconstructing rank-1 lattice for_ \(J_{\{t\}}\coloneqq\mathcal{P}_{\{t\}}(\Gamma)\) _with probability_ \(1-\varepsilon\) _and_ \(\mathbf{X}_{\{t\}}\subset\mathbf{X}_{\{t\}}^{M}\) _an i.i.d. uniformly drawn subset with_ \[|\mathbf{X}_{\{t\}}|\geq 12|J_{\{t\}}|\Big{(}\log|J_{\{t\}}|+\log\Big{(}\frac{2r}{\varepsilon}\Big{)}\Big{)}\,.\] _Further, let_ \(\mathbf{\Xi}_{\{t\}}=\{\mathbf{\xi}^{1},\ldots,\mathbf{\xi}^{r}\}\subset\mathds{T}^{d-1}\) _be drawn i.i.d. uniformly random._ _Using samples in_ \(\mathbf{X}_{\{t\}}\times\mathbf{\Xi}_{\{t\}}\) _for_ \(r\) _least squares approximations, we construct_ \(I_{\{t\}}\) _such that_ \[J_{\{t\}}\supset I_{\{t\}}\supset\mathcal{P}_{\{t\}}(I_{\delta})\] _with probability exceeding_ \(1-3\varepsilon\)_._
2. _Let_ \(t=2,\ldots,d\) _and_ \(\mathbf{X}_{\{1,\ldots,t\}}^{M}\) _be a reconstructing rank-1 lattice for_ \(J_{\{1,\ldots,t\}}\coloneqq(I_{\{1,\ldots,t-1\}}\times I_{\{t\}})\cap\mathcal{P}_{\{1,\ldots,t\}}(\Gamma)\) _with probability_ \(1-\varepsilon\) _and_ \(\mathbf{X}_{\{1,\ldots,t\}}\subset\mathbf{X}_{\{1,\ldots,t\}}^{M}\) _an i.i.d. uniformly drawn subset with_ \[|\mathbf{X}_{\{1,\ldots,t\}}|\geq 12|J_{\{1,\ldots,t\}}|\Big{(}\log|J_{\{1,\ldots,t\}}|+\log\Big{(}\frac{2r}{\varepsilon}\Big{)}\Big{)}\,.\] _Further, let_ \(\mathbf{\Xi}_{\{1,\ldots,t\}}=\{\mathbf{\xi}^{1},\ldots,\mathbf{\xi}^{r}\}\subset\mathds{T}^{d-t}\) _be drawn i.i.d. uniformly random._ _Using samples in_ \(\mathbf{X}_{\{1,\ldots,t\}}\times\mathbf{\Xi}_{\{1,\ldots,t\}}\) _for_ \(r\) _least squares approximations, we construct_ \(I_{\{1,\ldots,t\}}\) _such that_ \[J_{\{1,\ldots,t\}}\supset I_{\{1,\ldots,t\}}\supset\mathcal{P}_{\{1,\ldots,t\}}(I_{\delta})\] _with probability exceeding_ \(1-3\varepsilon\)_._
_In particular, we have \(I_{\{1,\ldots,d\}}\supset I_{\delta}\) with probability exceeding \(1-6d\varepsilon\)._
Proof.: The assertion follows from repeatedly applying Theorem III.2 and union bound.
## IV Numerical experiments
We consider the \(10\)-dimensional test function \(f\colon\mathds{T}^{10}\to\mathds{R}\),
\[f(\mathbf{x})\coloneqq\prod_{t\in\{1,3,8\}}N_{2}(x_{t})+\prod_{t\in\{2,5,6,10\}}N_{4}(x_{t})+\prod_{t\in\{4,7,9\}}N_{6}(x_{t})\,,\]
where \(N_{m}\colon\mathds{T}\to\mathds{R}\) denotes the B-spline of order \(m\in\mathds{N}\).
**Sampling complexity.** As discussed in the theoretical part, the reconstructing requirement of the rank-1 lattice blows up its size, resulting in the largest number of sampling points. Because of computational infeasibility, we do not have many computations with the i.i.d. uniformly random points and cannot capture their behaviour (in our experience they should behave similarly to the subsampled rank-1 lattices). The subsampled rank-1 lattices have better sampling complexity than the full rank-1 lattices and the graph suggests this advantage will increase for higher sparsity \(s\).
**Computation time.** The fastest computation time can be seen with the full rank-1 lattices since the approximations only need one matrix-vector product each, for which fast Fourier algorithms are utilized. The subsampled rank-1 lattices are slower by a constant factor of \(10\). Here, the same fast Fourier algorithm is used but the approximations use an iterative solver. We capped the maximal number of iterations at \(10\), which explains the constant factor. For the i.i.d. uniformly random points no fast Fourier algorithms are available, making them the slowest approach.
The experiments confirm our theoretical findings of subsampled rank-1 lattices combining the computational and sampling advantages of full rank-1 lattices and random points, respectively.
|
2309.02671 | RLSynC: Offline-Online Reinforcement Learning for Synthon Completion | Retrosynthesis is the process of determining the set of reactant molecules
that can react to form a desired product. Semi-template-based retrosynthesis
methods, which imitate the reverse logic of synthesis reactions, first predict
the reaction centers in the products, and then complete the resulting synthons
back into reactants. We develop a new offline-online reinforcement learning
method RLSynC for synthon completion in semi-template-based methods. RLSynC
assigns one agent to each synthon, all of which complete the synthons by
conducting actions step by step in a synchronized fashion. RLSynC learns the
policy from both offline training episodes and online interactions, which
allows RLSynC to explore new reaction spaces. RLSynC uses a standalone forward
synthesis model to evaluate the likelihood of the predicted reactants in
synthesizing a product, and thus guides the action search. Our results
demonstrate that RLSynC can outperform state-of-the-art synthon completion
methods with improvements as high as 14.9%, highlighting its potential in
synthesis planning. | Frazier N. Baker, Ziqi Chen, Daniel Adu-Ampratwum, Xia Ning | 2023-09-06T02:40:33Z | http://arxiv.org/abs/2309.02671v3 | # RLSynC: Offline-Online Reinforcement Learning for Synthon Completion
###### Abstract
Retrosynthesis is the process of determining the set of reactant molecules that can react to form a desired product. Semi-template-based retrosynthesis methods, which imitate the reverse logic of synthesis reactions, first predict the reaction centers in the products, and then complete the resulting synthons back into reactants. These methods enable necessary interpretability and high practical utility to inform synthesis planning. We develop a new offline-online reinforcement learning method RLSynC for synthon completion in semi-template-based methods. RLSynC assigns one agent to each synthon, all of which complete the synthons by conducting actions step by step in a synchronized fashion. RLSynC learns the policy from both offline training episodes and online interactions, which allows RLSynC to explore new reaction spaces. RLSynC uses a forward synthesis model to evaluate the likelihood of the predicted reactants in synthesizing a product, and thus guides the action search. We compare RLSynC with the state-of-the-art retrosynthesis methods. Our experimental results demonstrate that RLSynC can outperform these methods with improvement as high as 14.9% on synthon completion, and 14.0% on retrosynthesis, highlighting its potential in synthesis planning.
reinforcement learning, retrosynthesis, multi-agent, synthon completion
## I Introduction
Retrosynthesis is the process of determining the set of reactant molecules that can react to form a desired product molecule. Retrosynthesis is essential to drug discovery, where medicinal chemists seek to identify feasible synthesis reactions for desired molecules (i.e., synthesis planning [1]). The recent development of computational retrosynthesis methods using deep learning [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] has enabled high-throughput and large-scale prediction for many products, helping medicinal chemists conduct synthesis planning much more efficiently. Among the existing computational retrosynthesis methods, semi-template-based retrosynthesis methods [11, 12, 13, 14, 15] imitate the reverse logic of synthesis reactions: they first predict the reaction centers in the products, and then transform (complete) the resulting synthons - the molecular structures from splitting the products at the reaction centers - back into reactants. Semi-template-based retrosynthesis methods enable necessary interpretability as to where the reactions happen among the reactants and how the products are synthesized, and thus have high practical utility to inform synthesis planning.
Existing retrosynthesis methods typically train predictive or generative models to transform from products to their reactants, under the supervision of training data with known reactions. The objective during model training is to reproduce the reactions in the training data, and the model performance is evaluated by comparing the predicted reactions of a product against its known reactions. While these models can accurately recover the known reactions for products, they suffer from two issues: (1) they do not have the capability of exploring and learning new reaction patterns not present in the training data; and (2) they do not have the mechanism to enable a more comprehensive evaluation of the predicted reactions - those that are not identical to known reactions may still be viable. These two issues are under-investigated in the literature.
To address these issues, in this manuscript, we develop a new multi-agent reinforcement learning method with offline learning and online data augmentation, denoted as RLSynC, for synthon completion in semi-template-based methods. Fig. 1 presents the overall idea of RLSynC. We focus on semi-template-based methods due to their interpretability, practical utility and state-of-the-art performance [14, 15]. We particularly focus on their synthon completion step, because reaction centers can be predicted very accurately [15], but synthon completion is often more complicated [16].
Specifically, RLSynC assigns one agent to each synthon, all of which complete the synthons by conducting actions step by step in a synchronized fashion. All the agents share the same action selection policy and select the optimal actions with full observation of other agents' states. RLSynC learns the policy from offline training episodes and augmented training data generated through online interactions. The augmented data introduce new reaction patterns not included in training data, and thus allow RLSynC to explore new reaction spaces. RLSynC uses a reward function to evaluate the likelihood of the predicted reactants in synthesizing a product, and thus guides the action search. We compare RLSynC with state-of-the-art retrosynthesis methods. Our experimental results demonstrate that RLSynC can outperform these methods with improvement as high as 14.9% on synthon completion, and 14.0% on retrosynthesis. To the best of our knowledge, RLSynC is the first reinforcement learning method for synthon completion.
## II Related Work
### _Retrosynthesis_
Deep-learning-based retrosynthesis methods can be categorized into three groups: template-based, template-free and semi-template-based. Template-based methods [2, 3, 4, 5] use reaction templates that are extracted from known reactions to transform a product directly into reactants, and thus are limited to reactions covered by the templates. Template-free methods [6, 7, 8, 9, 10] typically utilize the sequence representation of molecules (SMILES) and employ Transformer models to translate product SMILES strings into reactant SMILES strings, without using reaction templates. For example, RSMILES[9] uses a Transformer to decode the reactant SMILES strings from the product SMILES strings. RetroFormer[10] embeds both the SMILES strings and molecular graphs of products, and uses the embeddings to predict reaction center regions and generate reactant SMILES strings. However, these methods may generate SMILES strings that violate SMILES grammars or chemical rules.
Semi-template-based methods [11, 12, 13, 14, 15] have two steps: (1) they first identify the reaction centers and break the product into synthons using reaction centers; and then (2) they complete synthons into reactants. For example, RetroPrime[13] employs two Transformers to first translate the SMILES strings of products to synthons, and then synthons to reactants. GraphRetro[14] predicts reaction centers by learning from the molecular graphs of products, and then completes synthons by classifying the subgraphs based on whether they can realize the difference between synthons and reactants. G2Retro[15] also predicts reaction centers from molecular graphs, and then completes synthons by sequentially adding rings or bonds.
### _Reinforcement Learning_
Deep reinforcement learning methods have been developed to design new small molecules. For example, GCPN [17] uses a graph convolutional policy network to sequentially add new atoms and bonds to construct new molecules. MolDQN [18] uses Morgan fingerprints [19] to represent molecules, and learns a deep Q-network to guide the addition or change of atoms and bonds, modifying molecules to have desired properties. Reinforcement learning has also been applied for biological sequence design. For example, DyNA PPO [20] uses a model-based variant of proximal-policy optimization to generate DNA and peptide sequences with desired properties. TCRPPO [21] learns a mutation policy to mutate sequences of T-cell receptors to recognize specific peptides.
Reinforcement learning has been used for multi-step retrosynthetic planning, which seeks to find an optimal sequence of multiple reactions to synthesize a product. Schreck _et.al._[22] trains an agent to select reactions from a list to construct the sequence backward starting from the product, until all the reactants of the first reaction in the sequence are purchasable. In this method, reinforcement learning is used to select reactions rather than predicting reactions. However, there is very limited work applying reinforcement learning to retrosynthesis. RCSearcher [23] applies a deep \(Q\) network to search a molecular graph for reaction centers. In contrast, RLSynC uses reinforcement learning for synthon completion.
## III Definitions and Notations
A synthesis reaction involves a set of reactants \(\{\texttt{R}_{i}\}\) and a product molecule P that is synthesized from these reactants. Each reactant \(\texttt{R}_{i}\) has a corresponding synthon \(\texttt{M}_{i}\), which represents the substructures of \(\texttt{R}_{i}\) that appear in P. The connection point of these synthons to form the product, typically a bond, is referred to as the reaction center. Fig. 2 presents the retrosynthesis process. In retrosynthesis, a typical semi-template-based method first identifies the reaction center and thus the corresponding synthons \(\{\texttt{M}_{i}\}\), and then completes the synthons back to reactants \(\{\texttt{R}_{i}\}\).
To complete \(\texttt{M}_{i}\) to \(\texttt{R}_{i}\), atoms may be added to \(\texttt{M}_{i}\) one at a time, through establishing new bonds. In each step \(t\), the intermediate molecular structure generated from \(\texttt{M}_{i}\) is denoted as \(\texttt{M}_{i}^{t}\). With abuse of terms, such intermediate molecular structures are referred to as _current_ synthons. In this manuscript, we focus on the reactions with only two reactants, because
Fig. 1: Overview Scheme of RLSynC
Fig. 2: Retrosynthesis Process
this is the most common case in synthesis reactions [16]. For example, in the USPTO-50K benchmark dataset [11], two-reactant synthesis reactions account for 70.8% of the reactions. In this manuscript, the two terms "pair of reactants" and "reaction" for a product are used interchangeably when no ambiguity arises; the term "prediction" refers to the prediction of the two reactants of a product. Table I presents the key definitions and notations.
RLSynC is employed under the assumption that reaction centers are pre-determined or can be accurately predicted. This is because the potential reaction centers are typically limited, especially in the case of small molecules. According to Chen _et al._[15], reaction center prediction can achieve as high as 97.2% accuracy. However, synthon completion is often more complicated and can be realized in a variety of ways [16]. Note that RLSynC can be generalized to one-reactant (29.0% in the USPTO-50K benchmark dataset) or multi-reactant (0.2% in USPTO-50K) cases by having one or multiple agents, given how RLSynC completes synthons (Section V).
## IV RLSynC Model
RLSynC assigns one agent, denoted as A, to each synthon and uses the agent to transform its synthon into a reactant through a sequence of actions. This transformation is achieved through a Markov Decision Process (MDP), denoted as MDP = {\(\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R}\)}, including a state space \(\mathcal{S}\), an action space \(\mathcal{A}\), a transition function \(\mathcal{T}\) and a reward function \(\mathcal{R}\).
### _State Space (\(\mathcal{S}\))_
RLSynC has a discrete state space \(\mathcal{S}\) describing the MDP status. Each state \(\mathbf{s}_{t}\in\mathcal{S}\) is represented as:
\[s_{t,\texttt{p}}=\{\texttt{M}_{1},\texttt{M}_{2},\texttt{M}_{1}^{t},\texttt {M}_{2}^{t},\texttt{P},T-t\}, \tag{1}\]
where \(t\) denotes the steps of actions (Section IV-B); M\({}_{1}\) and M\({}_{2}\) are the synthons from P that are assigned to agent A\({}_{1}\) and A\({}_{2}\), respectively; M\({}_{1}^{t}\) and M\({}_{2}^{t}\) are the _current_ synthons generated from M\({}_{1}\) and M\({}_{2}\) after \(t\) (\(t=0,...,T\)) steps of actions by A\({}_{1}\) and A\({}_{2}\), respectively (M\({}_{1}^{0}=\texttt{M}_{1},\texttt{M}_{2}^{0}=\texttt{M}_{2}\)); P is the product molecule; and \(T\) is the step limit and thus \(\mathbf{s}_{T}\) is a terminal state. The current synthons (M\({}_{1}^{T},M_{2}^{T}\)) in \(\mathbf{s}_{T}\) are the predicted reactants. In RLSynC, \(T\) is set to 3 because 89.10% of the synthons in the benchmark USPTO-50K dataset can be completed with the addition of up to 3 atoms. Empirically, increasing \(T\) could decrease model performance, without significantly increasing coverage over reactions (e.g., \(T=6\) will cover only an additional 2.96% of the synthons in the benchmark dataset). When no ambiguity is raised, \(\mathbf{s}_{t,\texttt{p}}\) is represented as \(\mathbf{s}_{t}\) with P dropped.
### _Action Space (\(\mathcal{A}\))_
RLSynC has two types of actions in its action space \(\mathcal{A}\): (1) adding atoms via bonds, denoted as ADD, and (2) no operation (i.e., doing nothing), denoted as NOOP. For ADD, RLSynC allows 12 types of atoms (B, C, N, O, F, Si, P, S, Cl, Se, Br, and I) via single, double or triple bonds, and thus 36 types of additions. These additions are sufficient to complete 98.42% of the synthons in the two-synthon cases in the benchmark data within 3 steps. Adding more atom and bond types offers very little additional coverage, but expands the action space. Thus, the action space is denoted as follows:
\[\mathcal{A}=\{\texttt{ADD}_{1},\texttt{ADD}_{2},\cdots,\texttt{ADD}_{36}, \texttt{NOOP}\}, \tag{2}\]
where each ADD\({}_{i}\) corresponds to a specific atom type and bond type combination. The atom additions have to satisfy the following constraints:
1. The new atoms are only added to the reaction centers or atoms that are added through the previous actions;
2. The bonds connecting the new added atoms and the current synthons do not violate structural or valency rules;
3. The types of these new bonds exist in the training data.
At each step \(t\) (\(t=0,...,T\)), each agent A\({}_{i}\) selects an action \(\mathbf{a}_{i}^{t}\) from \(\mathcal{A}\), and applies the action to its current synthon \(\mathtt{M}_{i}^{t}\) (\(i\!=\!1,2\)). The two agents act in a synchronized fashion and start the next step \(t+1\) only when both finish step \(t\). Note that each agent at step \(t\) has perfect observations of its own action and current synthon, and also the other agent's current synthon. This full observation allows the agents to share the same policy without exchanging information, and thus simplifies the policy learning (Section V-C).
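As a concrete illustration of the ADD actions and the validity constraints above, here is a minimal RDKit sketch (not the authors' implementation) that enumerates chemically feasible atom additions at the allowed attachment sites. Constraint (3), which restricts bond types to those observed in the training data, is omitted for brevity, and all names are illustrative.

```
from rdkit import Chem

ATOMS = ["B", "C", "N", "O", "F", "Si", "P", "S", "Cl", "Se", "Br", "I"]
BONDS = [Chem.BondType.SINGLE, Chem.BondType.DOUBLE, Chem.BondType.TRIPLE]

def feasible_add_actions(synthon, attach_atom_indices):
    """Enumerate (atom_symbol, bond_type, attach_idx) ADD actions that keep the
    current synthon chemically valid; attach_atom_indices marks reaction-center
    atoms and atoms added by earlier actions (constraint 1)."""
    actions = []
    for idx in attach_atom_indices:
        for sym in ATOMS:
            for bond in BONDS:
                mol = Chem.RWMol(synthon)
                new_idx = mol.AddAtom(Chem.Atom(sym))
                mol.AddBond(idx, new_idx, bond)
                try:
                    Chem.SanitizeMol(mol)           # valency check (constraint 2)
                    actions.append((sym, bond, idx))
                except Exception:
                    pass                            # violates chemical rules -> skip
    return actions

# example: a toy synthon whose reaction-center atom has index 0
synthon = Chem.MolFromSmiles("CCO")
print(len(feasible_add_actions(synthon, attach_atom_indices=[0])))
```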
### _Transition Function (\(\mathcal{T}\))_
The transition function \(\mathcal{T}(\mathbf{s}_{t+1}|\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2 }^{t}\})\) in RLSynC calculates the probability of MDP transitioning to state \(\mathbf{s}_{t+1}\), given the current state \(\mathbf{s}_{t}\) and actions \(\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\}\) at step \(t\). In RLSynC, \(\mathcal{T}\) is deterministic, that is, \(\mathcal{T}(\mathbf{s}_{t+1}|\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2 }^{t}\})=1\).
### _Reward Function (\(\mathcal{R}\))_
RLSynC uses a final binary reward to guide its agents. At the terminal step \(T\), if the predicted reactants M\({}_{1}^{T}\) and M\({}_{2}^{T}\) exactly match the reactants given for the product P in the training data, \(\mathcal{R}\) gives \(\mathbf{s}_{T}\) a reward 1. Otherwise, a stand-alone forward synthesis prediction model is applied to predict the products that can be synthesized from M\({}_{1}^{T}\) and M\({}_{2}^{T}\). If P is among the top-5 predictions by this model, \(\mathcal{R}\) gives \(\mathbf{s}_{T}\) a reward 1; otherwise, reward 0. RLSynC uses Molecular Transformer [24] as the forward synthesis prediction model, because it is the state of the art and achieves very high accuracy for forward synthesis prediction [25]. However, RLSynC is not bound to Molecular Transformer and can be easily adapted to any other forward
\begin{table}
\begin{tabular}{l l} \hline \hline Notation & Meaning \\ \hline P & product molecule \\ (R\({}_{1}\), R\({}_{2}\)) & a pair of reactants \\ (M\({}_{1}\), M\({}_{2}\)) & a pair of synthons \\ (M\({}_{1}^{t}\), M\({}_{2}^{t}\)) & a pair of _current_ synthons at step \(t\) \\ A & an agent \\ \(t\)/\(T\) & time step/time step limit \\ MDP & Markov decision process: MDP = {\(\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R}\)} \\ \(\mathcal{S}\)/\(\mathbf{s}_{t}\) & State space/a state at step \(t\) \\ \(\mathcal{A}\)/\(\mathbf{a}_{1}^{t}\)/\(\mathbf{a}_{2}^{t}\) & Action space/an action used to update M\({}_{1}^{t}\)/M\({}_{2}^{t}\) at step \(t\) \\ \(A_{i}^{t}\) & the set of feasible actions for A\({}_{i}\) at time step \(t\) \\ \(\mathcal{T}\) & State transition function \\ \(\mathcal{R}(s_{T})\) & reward function for terminal states \\ \hline \hline \end{tabular}
\end{table} TABLE I: Key Notations
synthesis prediction models for \(\mathcal{R}\). Predicted reactants that receive positive rewards are referred to as _correct_ predictions.
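The binary terminal reward can be sketched as below; the interface of `forward_model` (a standalone forward-synthesis predictor such as Molecular Transformer) is an assumption made for illustration, and reactants are assumed to be compared as canonical SMILES strings.

```
def terminal_reward(pred_reactants, known_reactants, product, forward_model):
    """Binary reward R(s_T) (sketch). `forward_model` is a placeholder callable
    returning its top-k predicted products as canonical SMILES strings."""
    if set(pred_reactants) == set(known_reactants):
        return 1.0                                 # exact match with the known reaction
    top5 = forward_model(pred_reactants, k=5)      # assumed interface
    return 1.0 if product in top5 else 0.0
```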
### _State-Action Representation_
The state-action pairs will be used to learn a state-action \(Q\)-value function (discussed later in Section V-A). \(\mathsf{RLSynC}\) represents a state-action pair \((\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})\) as follows:
\[\mathbf{h}_{i,t}=\mathbf{m}_{i}\oplus\mathbf{m}_{j}\oplus\mathbf{m}_{i}^{t+1} \oplus\mathbf{m}_{j}^{t+1}\oplus\mathbf{p}\oplus[T-(t+1)], \tag{3}\]
where \(i=1,2\) indexes the agent of interest and \(j=3-i\) indexes the other agent; \(\mathbf{m}\)'s are the Morgan fingerprint vectors for the corresponding synthons \(\mathsf{M}\)'s (\(\mathsf{M}_{i}^{t}\) will be transformed to \(\mathsf{M}_{i}^{t+1}\) by \(\mathbf{a}_{i}^{t}\), and \(\mathsf{M}_{i}^{t+1}\) is represented by \(\mathbf{m}_{i}^{t+1}\)); \(\mathbf{p}\) is the Morgan fingerprint vector for the product P; and \(\oplus\) is the concatenation operation. The use of Morgan fingerprints is inspired by Zhou _et al._[18]. Morgan fingerprints [19] capture molecular substructure information, are easy to construct and do not require representation learning.
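A minimal RDKit/NumPy sketch of this state-action representation is given below, using the 2,048-bit radius-2 Morgan fingerprints without chirality described in the implementation details; function and argument names are illustrative.

```
import numpy as np
from rdkit.Chem import AllChem

def fingerprint(mol, n_bits=2048, radius=2):
    # Morgan fingerprint as a 0/1 float vector (chirality ignored)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(fp, dtype=np.float32)

def state_action_features(synthon_i, synthon_j, current_i_next, current_j_next,
                          product, steps_left):
    """h_{i,t} of Equation 3: fingerprints of the original synthons, the current
    synthons after applying the candidate actions, and the product, concatenated
    with the number of remaining steps."""
    return np.concatenate([
        fingerprint(synthon_i), fingerprint(synthon_j),
        fingerprint(current_i_next), fingerprint(current_j_next),
        fingerprint(product),
        np.array([steps_left], dtype=np.float32),
    ])
```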
## V \(\mathsf{RLSynC}\) Training and Prediction
### _Offline Training_
At a state \(\mathbf{s}_{t}=\{\mathtt{M}_{1},\mathtt{M}_{2},\mathtt{M}_{1}^{t},\mathtt{ M}_{2}^{t},\mathtt{P},T-t\}\), \(\mathsf{RLSynC}\) uses a state-action value function \(Q_{\Theta}(\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})\), parameterized by \(\Theta\), to estimate the future rewards of \(\mathbf{s}_{t}\) if the actions \(\mathbf{a}_{1}^{t}\) and \(\mathbf{a}_{2}^{t}\) are applied by \(\mathtt{A}_{1}\) and \(\mathtt{A}_{2}\) on \(\mathtt{M}_{1}^{t}\) and \(\mathtt{M}_{2}^{t}\), respectively. \(Q_{\Theta}(\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})\) is modeled as a multi-layer fully-connected neural network, with \((\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})\) represented as \(\mathbf{h}_{i,t}\) (Equation 3) as input to the neural network.
#### V-A1 Offline Training Episode Generation
To learn \(Q_{\Theta}\), \(\mathsf{RLSynC}\) uses offline data of pre-computed episodes. An episode refers to a trajectory from \(\mathbf{s}_{0}\) to \(\mathbf{s}_{T}\) for a product \(\mathtt{P}\), that is, \((\mathbf{s}_{0},\{\mathbf{a}_{1}^{0},\mathbf{a}_{2}^{0}\},\mathbf{s}_{1},\{ \mathbf{a}_{1}^{1},\mathbf{a}_{2}^{1}\},\mathbf{s}_{2},...,\{\mathbf{a}_{1}^{ T-1},\mathbf{a}_{2}^{T-1}\},\mathbf{s}_{T},\mathcal{R}(\mathbf{s}_{T}))_{ \mathtt{P}}\). \(\mathsf{RLSynC}\) computes all _true_ episodes from training data that include known reactions to synthesize given products. For each product in the training set, \(\mathsf{RLSynC}\) also computes 4 _random_ episodes from a set of random reactions that are not included in the training data. These random reactions are generated by taking random actions on the synthons of the products in the training data, and their rewards are calculated using \(\mathcal{R}\). While all the _true_ episodes for known reactions have positive rewards, most episodes for random reactions have zero rewards. By training on both positive- and zero-reward episodes, the agents can learn actions to take as well as actions to avoid, thereby improving their overall performance.
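A hedged sketch of generating one such random episode is shown below; the helper callables (`feasible_actions`, `apply_action`, `reward_fn`) are placeholders rather than the paper's API, and the state tuple mirrors Equation 1.

```
import random

def random_episode(m1, m2, product, T, feasible_actions, apply_action, reward_fn):
    """Both agents take uniformly random feasible actions (or NOOP) for T steps;
    the terminal reward then labels every state-action pair in the trajectory."""
    cur1, cur2, trajectory = m1, m2, []
    for t in range(T):
        a1 = random.choice(feasible_actions(cur1) + ["NOOP"])
        a2 = random.choice(feasible_actions(cur2) + ["NOOP"])
        trajectory.append(((m1, m2, cur1, cur2, product, T - t), (a1, a2)))
        cur1 = apply_action(cur1, a1)
        cur2 = apply_action(cur2, a2)
    return trajectory, reward_fn(cur1, cur2, product)
```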
#### V-A2 Q-Value Function Learning
From the offline training episodes, \(\mathsf{RLSynC}\) uses a SARSA [26]-like approach to approximate the Q-value \(\tilde{Q}(\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})\) as follows:
\[\tilde{Q}(\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})\!=\!\begin{cases} \gamma\tilde{Q}(\mathbf{s}_{t+1},\{\mathbf{a}_{1}^{t+1},\mathbf{a}_{2}^{t+1} \})\text{ if }t\!<\!T\!-\!1,\\ \mathcal{R}(\mathbf{s}_{t+1})\hskip 56.905512pt\text{ if }t\!=\!T\!-\!1,\end{cases} \tag{4}\]
where \(0<\gamma<1\) is the discount factor and \(\mathcal{R}\) is the reward function. With the approximate Q-values, \(\mathsf{RLSynC}\) learns \(Q_{\Theta}(\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})\) by minimizing the following loss function:
\[\mathcal{L}(\Theta|\mathcal{E})=\frac{1}{|\mathcal{E}|}\sum_{(\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})\in\mathcal{E}}\Big(Q_{\Theta}(\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})-\tilde{Q}(\mathbf{s}_{t},\{\mathbf{a}_{1}^{t},\mathbf{a}_{2}^{t}\})\Big)^{2}+\alpha\|\Theta\|_{2}^{2}, \tag{5}\]
where \(\mathcal{E}\) denotes the set of state-action pairs collected from the training episodes and \(\alpha\) is a regularization coefficient.
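A short PyTorch sketch of the backed-up targets (Equation 4) and the training loss follows; the exact regularizer is reconstructed from the stated regularization coefficient, so the L2 form is an assumption, and `q_net` is any module mapping the feature vector to a scalar.

```
import torch

def q_targets(terminal_reward, gamma=0.95, T=3):
    """Approximate Q-values of Equation 4: step t receives gamma**(T-1-t) * R(s_T)."""
    return [gamma ** (T - 1 - t) * terminal_reward for t in range(T)]

def q_loss(q_net, features, targets, alpha=1e-5):
    """Squared error between predicted and approximated Q-values plus an L2 penalty."""
    pred = q_net(features).squeeze(-1)
    mse = torch.mean((pred - targets) ** 2)
    l2 = sum((p ** 2).sum() for p in q_net.parameters())
    return mse + alpha * l2
```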
To predict the action \(\mathsf{a}_{i}^{t}\) for agent \(\mathsf{A}_{i}\) at time \(t\), \(\mathsf{RLSynC}\) first identifies the set of all chemically feasible atom additions, denoted as \(A_{i}^{t}\) (\(A_{i}^{t}\subseteq\mathcal{A}\)), for \(\mathsf{M}_{i}^{t}\), satisfying the constraints in Section IV-B. \(\mathsf{A}_{i}\) then selects the best action \(\mathsf{a}_{i}^{t}\in A_{i}^{t}\) (\(i=1,2\)), that is, the action with the maximum predicted \(Q\)-value, as follows:
\[\mathsf{a}_{1}^{t} =\arg\max_{\mathsf{a}\in A_{1}^{t}}Q_{\Theta}(\mathsf{s}_{t}, \{\mathsf{a},\texttt{NOOP}\}), \tag{6}\] \[\mathsf{a}_{2}^{t} =\arg\max_{\mathsf{a}\in A_{2}^{t}}Q_{\Theta}(\mathsf{s}_{t},\{ \texttt{NOOP},\mathsf{a}\}). \tag{7}\]
That is, each agent assumes \(\texttt{NOOP}\) from the other agent, but observes the other agent's _current_ synthon in \(\mathsf{s}_{t}\) in order to select its next optimal action. Please note that the two agents share the same \(Q_{\Theta}\) function and follow the same policy.
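The greedy step of this shared policy can be sketched as follows; `features_for_pair` is a placeholder that builds the \(\mathbf{h}_{i,t}\) representation of the candidate state-action pair, and the other names are illustrative.

```
import numpy as np

def select_action(q_net, features_for_pair, feasible_actions, agent_idx):
    """Equations 6-7: agent i scores each of its feasible actions paired with NOOP
    for the other agent and keeps the action with the highest predicted Q-value."""
    best_action, best_q = None, -np.inf
    for a in feasible_actions:
        pair = (a, "NOOP") if agent_idx == 1 else ("NOOP", a)
        q = float(q_net(features_for_pair(pair)))
        if q > best_q:
            best_action, best_q = a, q
    return best_action, best_q
```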
### _Top-\(N\) Prediction Search_
\(\mathsf{RLSynC}\) uses a novel greedy search algorithm, denoted as \(\mathsf{RLSynC}\)-search, to identify the top-\(N\) predicted reactions with the highest \(Q\)-values. In training data augmentation, such top-\(N\) pairs will be used to augment the training data (Section V-B2). In predicting reactants for new products (e.g., in the test set), the top-\(N\) predictions provide more options for synthesis planning. Specifically, at step \(t=0\), instead of selecting only one action, each agent selects \(k\) actions with the top-\(k\) highest \(Q\)-values calculated based on Equations 6 and 7 (i.e., instead of "\(\max\)" in Equations 6 and 7, use "top-\(k\)"). Thus, there will be \(k^{2}\) possible next states, resulting from all possible action combinations from the two agents. In each of the possible next states, each agent again selects \(k\) actions for its current synthon. Through \(T\) steps, this process will result in \(k^{2T}\) predicted reactions at the terminal state. \(\mathsf{RLSynC}\) sorts all these predicted reactions using their \(Q\)-values and selects the top-\(N\) predictions. While this process can be expensive, the actions on each of the possible next states can be taken independently, and this process can be implemented in parallel. The algorithm is presented in Algorithm 2.
```
0:\(\mathsf{M}_{1}\), \(\mathsf{M}_{2}\), \(\mathsf{P}\), \(T\), \(k\), \(N\), \(Q_{\Theta}\)
0: Top-\(N\) predictions \(\mathcal{E}_{\text{top-}N}\)
1:\(\mathsf{M}_{1}^{0}=\mathsf{M}_{1}\), \(\mathsf{M}_{2}^{0}=\mathsf{M}_{2}\)
2:\(s_{0}=\{\mathsf{M}_{1},\mathsf{M}_{2},\mathsf{M}_{1}^{0},\mathsf{M}_{2}^{0}, \mathsf{P},T\}\), \(\mathsf{S}_{0}=\{s_{0}\}\)
3:for t = 1 to T do
4:\(\mathbb{S}_{t}=\emptyset\)
5:for all\(s_{t-1}\in\mathbb{S}_{t-1}\)do
6:for all agent \(\mathsf{A}_{i}\) (\(i=1,2\)) do
7:\(V_{i}=\emptyset\)
8:for all\(\mathsf{a}\in A_{i}^{t-1}\)do
9:\(V_{i}=V_{i}\cup\{(\mathsf{a}:Q_{\Theta}(s_{t-1},\{\texttt{a},\texttt{NOOP}\}))\}\) {or {NOOP, a}, Equation 4}
10:endfor
11:\(V_{i}=\text{sorted}(V_{i},\text{`decreasing'})[1:k]\)
12:endfor
13:for all\(\{\mathsf{a}_{1}^{t-1},\mathsf{a}_{2}^{t-1}\}\in V_{1}\times V_{2}\)do
14: identify \(s_{t}\) s.t. \(\mathcal{T}\big{(}s_{t-1},\{\mathsf{a}_{1}^{t-1},\mathsf{a}_{2}^{t-1}\})=1\)
15:\(q_{t}=Q_{\Theta}(s_{t-1},\{\mathsf{a}_{1}^{t-1},\mathsf{a}_{2}^{t-1}\})\)
16:\(\mathsf{S}_{t}=\mathbb{S}_{t}\cup\{(\{\mathsf{a}_{1}^{t-1},\mathsf{a}_{2}^{t-1} \},\mathsf{s}_{t}\}:q_{t}\}\)
17:endfor
18:endfor
19:endfor
20:return\(\mathcal{E}_{\text{top-}N}=\text{sorted}(\mathbb{S}_{T},\text{`decreasing'})[1:N]\)
```
**Algorithm 2**\(\mathsf{RLSynC}\)-search
## VI Experimental Settings
### _Data_
We use the benchmark USPTO-50K dataset [28] in our experiments, which contains 50,016 chemical reactions. We use the same training, validation and testing division as in the literature [11], resulting in 40,008 reactions for training, 5,001 for validation, and 5,007 for testing. From each set, we use only the reactions that satisfy the following constraints:
1. The reaction has exactly two reactants;
2. The synthons can be completed to the ground-truth reactants by adding no more than three atoms.
After applying the above filter, our training, validation, and test sets contain 25,225, 3,172 and 3,167 reactions, respectively.
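A hedged RDKit sketch of this filter is given below; the "at most three added atoms" test is approximated by the heavy-atom count difference between each reactant and its synthon, which is an assumption about how the check is realized.

```
from rdkit import Chem

def keep_reaction(reactant_smiles, synthon_smiles, max_added_atoms=3):
    """Keep only two-reactant reactions whose synthons can be completed into the
    ground-truth reactants by adding at most `max_added_atoms` atoms."""
    if len(reactant_smiles) != 2:
        return False
    for r_smi, s_smi in zip(reactant_smiles, synthon_smiles):
        r = Chem.MolFromSmiles(r_smi)
        s = Chem.MolFromSmiles(s_smi)
        if r.GetNumHeavyAtoms() - s.GetNumHeavyAtoms() > max_added_atoms:
            return False
    return True
```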
### _Baselines_
We compare \(\mathsf{RLSynC}\) against five state-of-the-art retrosynthesis methods: \(\mathsf{G}^{2}\mathsf{Retro}\)[15] and \(\mathsf{GraphRetro}\)[14] are graph-based methods that complete synthons through the addition of leaving groups. \(\mathsf{RSMILES}\)[9], \(\mathsf{RetroPrime}\)[13], and \(\mathsf{RetroFormer}\)[10] are sequence-based methods that use Transformer models [29] to generate string representations of reactants. Please note that \(\mathsf{RSMILES}\) offers both template-free and semi-template-based approaches to retrosynthesis.
While we are primarily interested in the semi-template-based approach from RSMILES, we still include results for its template-free approach for completeness. RetroFormer only offers a template-free approach, and all the other methods offer semi-template-based approaches. Note that we do not compare \(\mathsf{RLSynC}\) against the methods that the above baselines outperform for retrosynthesis [15], or if the methods do not have source code available.
### _Evaluation Metrics_
We evaluate different methods in terms of the correctness, diversity and validity of their predicted reactants.
#### Vi-C1 Correctness Metrics
Two predicted reactants of a product are considered _correct_ if they receive reward 1. To measure the correctness of top-\(N\) predicted reactant pairs, we use mean average precision at top \(N\) (MAP@\(N\)), defined as follows:
\[\text{MAP@}N=\frac{1}{|\mathcal{D}_{\text{test}}|}\sum_{\mathsf{p}\in \mathcal{D}_{\text{test}}}\frac{1}{N}\sum_{k=1}^{N}\mathcal{R}(\{\mathsf{M}_{ 1}^{T},\mathsf{M}_{2}^{T}\}_{\mathsf{p},k}), \tag{8}\]
where \(\mathcal{D}_{\text{test}}\) is the test set, \(\mathsf{p}\) is a product in \(\mathcal{D}_{\text{test}}\), \(\mathcal{R}\) is the reward (Section IV-D), \(\{\mathsf{M}_{1}^{T},\mathsf{M}_{2}^{T}\}_{\mathsf{p},k}\) is the \(k\)-th ranked predicted reactants (i.e., at the terminal step \(T\), see Section IV-A) for \(\mathsf{p}\). Higher MAP@\(N\) indicates better correctness among top-\(N\) predictions. Note that MAP@\(N\) is different from accuracy@\(N\) in retrosynthesis prediction [9, 14, 15], which compares the predictions only with the ground-truth reactions.
We also use normalized discounted cumulative gain at top \(N\) (NDCG@\(N\)), which is a popular metric in evaluating ranking. In our experiments, NDCG@\(N\) uses the rewards as gains and captures both the rewards and the ranking positions of the predictions. Higher NDCG@\(N\) indicates that correct predictions tend to be ranked higher.
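A short NumPy sketch of both correctness metrics follows; MAP@\(N\) mirrors Equation 8, while the NDCG@\(N\) code uses one standard logarithmic-discount convention with the 0/1 rewards as gains, which may differ in detail from the authors' implementation.

```
import numpy as np

def map_at_n(rewards_per_product, N):
    """MAP@N (Equation 8): rewards_per_product holds, for each test product,
    the 0/1 rewards of its ranked top-N predictions."""
    return float(np.mean([np.sum(np.asarray(r[:N], dtype=float)) / N
                          for r in rewards_per_product]))

def ndcg_at_n(rewards_per_product, N):
    """NDCG@N with the 0/1 rewards used as gains."""
    scores = []
    for r in rewards_per_product:
        g = np.asarray(r[:N], dtype=float)
        discounts = 1.0 / np.log2(np.arange(2, len(g) + 2))
        ideal = np.sum(np.sort(g)[::-1] * discounts)
        scores.append(np.sum(g * discounts) / ideal if ideal > 0 else 0.0)
    return float(np.mean(scores))

# example: two products with three ranked predictions each
print(map_at_n([[1, 0, 1], [1, 1, 0]], N=3), ndcg_at_n([[1, 0, 1], [1, 1, 0]], N=3))
```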
#### Vi-C2 Diversity Metrics
We also measure the diversity of the correct predictions from different methods. A diverse set of correct predictions enables a broad range of viable options to synthesize a product, and thus is preferred in synthesis planning. We measure diversity using the average pairwise similarity among the top-\(N\) correct predictions, as follows:
\[\text{Diversity@}N=1-\frac{\sum_{\mathtt{P}\in\mathcal{D}_{\text{test}}}\sum_{i<j}\text{sim}\big(\{\mathtt{M}_{1}^{T},\mathtt{M}_{2}^{T}\}_{\mathtt{P},i},\{\mathtt{M}_{1}^{T},\mathtt{M}_{2}^{T}\}_{\mathtt{P},j}\big)}{\sum_{\mathtt{P}\in\mathcal{D}_{\text{test}}}\binom{c_{\mathtt{P}}}{2}},\]
where the inner sum runs over pairs of _correct_ predictions among the top-\(N\) for product \(\mathtt{P}\), \(c_{\mathtt{P}}\) is the number of such correct predictions, and \(\text{sim}(\cdot,\cdot)\) is the pairwise molecular similarity between two predicted reactant pairs. Higher Diversity@\(N\) indicates more structurally diverse correct predictions.
While the baseline methods learn only from known reactions, \(\mathsf{RLSynC}\) benefits from random reactions and online iterations of augmented data; therefore, it can overcome the potential limitations and biases in the known reactions. In addition, \(\mathsf{RLSynC}\) can adapt and improve from past mistakes by learning from online iterations of data augmentation, and thus correct inaccurate \(Q\)-value predictions in zero-reward episodes. These advantages enable \(\mathsf{RLSynC}\) to finally outperform \(\mathsf{G}^{2}\)Retro.
\(\mathsf{RSMILES}\) employs a Transformer model to translate root-aligned SMILES strings into reactant strings. It also augments the training and test synthon SMILES strings with varied atom orders to further improve the performance. However, \(\mathsf{RSMILES}\) is trained to recover the unique SMILES strings of ground-truth reactants from the training data, disregarding other possible reactants to synthesize the same product. In contrast, by augmenting data with the online interactions, particularly through top-\(N\) search, \(\mathsf{RLSynC}\) focuses beyond just the top-1 prediction and aims to maximize the rewards for the overall top-\(N\) predictions. As a result, \(\mathsf{RLSynC}\) achieves better MAP@\(N\) for \(N\in[2,10]\) than \(\mathsf{RSMILES}\). \(\mathsf{GraphRetro}\) formulates synthon completion as a classification problem over subgraphs. However, it ignores the impact of predicted subgraphs on the overall structures of resulting reactants, which may lead to incorrect predictions. Unlike \(\mathsf{GraphRetro}\), \(\mathsf{RLSynC}\) predicts reactants by adding feasible bonds and atoms to _current_ synthons under the guidance of rewards for resulting molecules. As a result, \(\mathsf{RLSynC}\) outperforms \(\mathsf{GraphRetro}\) on MAP@\(N\) at \(N\in[1,10]\).
Similar trends can be observed in Table III: in terms of NDCG@\(N\), \(\mathsf{RLSynC}\) consistently outperforms the best baseline method \(\mathsf{G}^{2}\)Retro at \(N\in[3,10]\), all with statistically significant improvement; the best improvement is 9.8% at \(N=10\), and average improvement 5.2% over \(N\in[3,10]\). \(\mathsf{RLSynC}\) achieves the same performance as \(\mathsf{G}^{2}\)Retro at \(N=2\). \(\mathsf{RSMILES}\) achieves the best NDCG@1 performance among all the methods, but its performance dramatically decreases for larger \(N\). \(\mathsf{GraphRetro}\)'s performance is between \(\mathsf{G}^{2}\)Retro and \(\mathsf{RSMILES}\). The difference between MAP@\(N\) and NDCG@\(N\) is that NDCG@\(N\) discounts the impact of low-ranking correct predictions. The fact that \(\mathsf{RLSynC}\) achieves both high MAP@\(N\) and NDCG@\(N\) indicates that \(\mathsf{RLSynC}\) predicts more correct reactions at high ranks (e.g., top-2, top-3).
#### V-A2 Diversity Evaluation
Table IV presents the performance of different methods in terms of Diversity@\(N\) among their correct predictions. Similar trends to those for MAP@\(N\) and NDCG@\(N\) can be observed for Diversity@\(N\). \(\mathsf{RLSynC}\) outperforms the best baselines, \(\mathsf{GraphRetro}\) and \(\mathsf{G}^{2}\)Retro at \(N\in[4,10]\), all with statistical significance, with the best improvement 13.1% at \(N=6\), and average improvement 10.6% over \(N\in[4,10]\). \(\mathsf{RLSynC}\) ties with \(\mathsf{GraphRetro}\) for best Diversity@3 performance, and only slightly underperforms \(\mathsf{G}^{2}\)Retro at \(N=1\) with no statistical significance.
While all the baseline methods are limited by the synthon completion patterns within known reactions from the training data, \(\mathsf{RLSynC}\) is able to discover patterns not present in the training data by learning from augmented data via online interactions. These newly discovered patterns could contribute to the better Diversity@\(N\) at \(N\in[3,10]\) for \(\mathsf{RLSynC}\). Higher Diversity@\(N\) indicates a higher variety of correctly predicted reactants. Please note that diversity in predicted reactions is always desired, as it can enable the exploration of multiple synthetic options. With a more diverse set of options, chemists can choose the most suitable synthesis reactions based on specific requirements and constraints. This makes \(\mathsf{RLSynC}\) a potentially preferable tool in synthetic design.
#### V-A3 Validity Evaluation
We also evaluate the validity of the completed synthons by the different methods. \(\mathsf{RLSynC}\) and \(\mathsf{G}^{2}\)Retro always achieve 100% validity among their top-\(N\) predictions (\(N\in[1,10]\)). This is because they enforce validity (Section IV-B) and complete synthons by adding atoms and bonds that obey valency rules. \(\mathsf{RSMILES}\) can achieve on average 99.3% validity among top-\(N\) (\(N\in[1,10]\)) predictions, and \(\mathsf{GraphRetro}\) can achieve 98.4% validity. \(\mathsf{RSMILES}\) formulates synthon completion as a sequence-to-sequence translation problem, so it cannot guarantee valency or the validity of the output strings. \(\mathsf{GraphRetro}\) leverages a graph-based model of molecules, but uses a more relaxed set of valency rules when editing the molecular graphs.
### _Evaluation on Retrosynthesis Prediction_
We also evaluate how \(\mathsf{RLSynC}\) can contribute to retrosynthesis prediction. For \(\mathsf{RLSynC}\), we use the top-5 predicted reaction centers from \(\mathsf{G}^{2}\)Retro as given reaction centers and denote \(\mathsf{RLSynC}\) in this method as \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\). \(\mathsf{G}^{2}\)Retro achieves the state-of-the-art performance on reaction center prediction (e.g., >96% accuracy [15]). For the other semi-template-based methods, we use their own reaction center prediction methods.
Since \(\mathsf{RLSynC}\) can only perform synthon completion on two synthons, we limit \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) to only those products for which \(\mathsf{G}^{2}\)Retro predicts a reaction center with exactly two synthons. To keep the comparison fair with other methods, we evaluate only on this subset of the test set, containing 2,607 products with two synthons predicted by \(\mathsf{G}^{2}\)Retro. Both template-free methods, including the template-free version of \(\mathsf{RSMILES}\)[9] (denoted as R-p2r) and RetroFormer, and semi-template-based methods, including RetroPrime, \(\mathsf{RSMILES}\), \(\mathsf{GraphRetro}\) and \(\mathsf{G}^{2}\)Retro, are compared. Due to space limits, we do not present their performance in terms of Diversity@\(N\).
#### V-B1 Correctness Evaluation
Tables V and VI present the performance in terms of MAP@\(N\) and NDCG@\(N\), respectively, of all the methods for retrosynthesis prediction, that is, given a product, to predict the reactants. Table V shows that \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) consistently outperforms the baselines for MAP@\(N\) at \(N\in\)
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \(N\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \(\mathsf{RSSMILES}\) & 0.172 & 0.177 & 0.186 & 0.193 & 0.200 & 0.206 & 0.212 & 0.217 & 0.221 \\ \(\mathsf{GraphRetro}\) & **0.197** & **0.205** & 0.212 & 0.217 & 0.222 & 0.227 & 0.231 & 0.234 & 0.237 \\ \(\mathsf{G}^{2}\)Retro & 0.194 & 0.202 & 0.209 & 0.216 & 0.222 & 0.229 & 0.236 & 0.243 & 0.249 \\ \(\mathsf{RLSynC}\) & 0.193 & **0.205** & **0.231** & **0.243** & **0.251** & **0.256** & **0.260** & **0.265** & **0.272** \\ \hline imprv.(\%) & -2.0 & 0.0 & 9.0* & 12.0* & 13.1* & 11.8* & 10.2* & 9.1* & 9.2* \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Diversity@\(N\) for Correct Synthon Completion
\([2,10]\). RetroFormer performs best among the baselines for MAP@\(N\) at \(N\in[2,10]\). For \(N=1\), RetroPrime outperforms all methods. \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) significantly outperforms RetroFormer on 9 results, with the best improvement 14.0% at \(N=10\) and average improvement 8.8% over \(N\in[2,10]\). We observe that as \(N\) increases, the performance improvement from \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) over other methods also increases, indicating that \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) tends to predict more reactions that receive positive rewards.
RetroFormer embeds both the SMILES strings and molecular graphs of product molecules, and uses the embeddings to predict reaction center regions and generate reactant SMILES strings. By leveraging the rich information in such embeddings, RetroFormer is able to achieve good performance on MAP@\(N\) at \(N\in[2,10]\). RetroPrime first translates the SMILES strings of product into synthons, and then synthons into reactants. Both methods are trained to recover the unique ground-truth synthons and reactants in the training data, leading to superior MAP@1 performance. However, they do not consider any other possible synthons or reactants, and thus, fall short in generating multiple reactions, leading to worse performance on MAP@\(N\) at \(N\in[2,10]\).
In contrast to RetroFormer and RetroPrime, \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) has the potential to discover new patterns by learning from random reactions and online interactions. In addition, RetroFormer and RetroPrime implicitly control the quality of top-\(N\) predictions through likelihood estimation, but \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) takes a more direct approach. It uses the \(Q\)-function to directly evaluate the reward for a specific action (Equation 6 and 7), and then selects the actions with the best rewards so to optimize the quality of top-\(N\) predictions. As a result, \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) improves the overall quality of top-\(N\) predictions and thus outperforms both RetroFormer and RetroPrime on MAP@\(N\) at \(N\in[2,10]\).
Among all the semi-template-based baselines (RetroPrime, RSMILES, GraphRetro and G\({}^{2}\)Retro), on average, GraphRetro and G\({}^{2}\)Retro perform the best - GraphRetro performs better on smaller \(N\) values, and G\({}^{2}\)Retro performs better on larger \(N\) values. As both GraphRetro and G\({}^{2}\)Retro are trained to recover ground-truth reactant molecules, they achieve comparable performance with \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) on MAP@1 (0.896 vs 0.883 vs 0.899). However, \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) outperforms GraphRetro and G\({}^{2}\)Retro in terms of MAP@\(N\) at \(N\in[2,10]\). As mentioned earlier, \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) can directly optimize the overall quality of top-\(N\) predictions, resulting in more correct and higher-quality results among top-\(N\) predictions.
A similar trend can be observed in Table VI, which shows that \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) consistently outperforms all the baselines on NDCG@\(N\) at \(N\in[2,10]\), with an average improvement of 6.9% over the best baseline and the best improvement 10.6% at \(N=10\). This demonstrates that \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) tends to rank correct predicted reactions high, even when they come from different predicted reaction centers. Such capability allows chemists to sift out infeasible synthesis reactions, focusing on the most promising synthesis reactions instead. Please note that \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) is trained on reactants from ground-truth reaction centers given in known reactions, not from predicted reaction centers. Thus, the results also demonstrate that \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) can collaborate well with stand-alone reaction center predictors, enhancing its practical utility in synthetic design.
#### Iv-B2 Validity Evaluation
We evaluate the validity of the predicted reactions from each retrosynthesis method. G\({}^{2}\)Retro and \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) return only valid predicted reactions for retrosynthesis. Among the top-10 predictions, RetroFormer achieves an average 98.58% validity, RSMILES achieves 99.99%, R-p2r achieves 100.00%, GraphRetro achieves 99.77% and RetroPrime achieves 99.99%. RetroFormer, RSMILES, R-p2r and RetroPrime are all sequence-based methods; they do not strictly enforce the validity of their predicted results. GraphRetro adopts a flexible valency checking, resulting in a few molecules with rare valency of 5 for carbon atoms (usually 4). In our study, these molecules are considered invalid due to their rarity in drug molecules.
### _Evaluation on Data Augmentation_
#### Iv-C1 Correctness Evaluation
Fig. 3 and Fig. 4 present the performance in terms of MAP@\(N\) and NDCG@\(N\) from \(\mathsf{RLSynC}\) over different online iterations of data augmentation, respectively, on synthon completion. Note that the number of online data augmentation iterations is determined by the continuity of performance improvement on the validation set (line 16 in Algorithm 1), and therefore, the performance on the test set may not be strictly improving over iterations. Even so, as Fig. 3 shows, with 8 online iterations of data augmentation, \(\mathsf{RLSynC}\) is able to improve its MAP@\(N\) performance over all \(N\) values. For example, \(\mathsf{RLSynC}\) improves its MAP@1 performance from 0.763 at iteration 0 (i.e., the initial model using only known and random reactions) to 0.927 after 8 iterations, that is, a 21.5% improvement. Note that at iteration 6, the online data augmentation is switched from using only 1 new episode (line
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \(N\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline R-p2r & 0.905 & 0.850 & 0.806 & 0.774 & 0.750 & 0.728 & 0.709 & 0.691 & 0.676 & 0.662 \\ RetroFormer & 0.890 & 0.851 & 0.817 & 0.790 & 0.768 & 0.744 & 0.725 & 0.707 & 0.689 & 0.670 \\ \hline RetroPrime & **0.946** & 0.767 & 0.692 & 0.634 & 0.597 & 0.558 & 0.533 & 0.503 & 0.489 & 0.466 \\ RSMMILES & 0.792 & 0.681 & 0.608 & 0.558 & 0.515 & 0.485 & 0.495 & 0.438 & 0.418 & 0.402 \\ GraphRetro & 0.896 & 0.835 & 0.789 & 0.748 & 0.708 & 0.673 & 0.641 & 0.611 & 0.582 & 0.555 \\ G\({}^{2}\)Retro & 0.883 & 0.820 & 0.778 & 0.744 & 0.715 & 0.691 & 0.667 & 0.645 & 0.626 & 0.609 \\ \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) & 0.899 & **0.882** & **0.863** & **0.844** & **0.824** & **0.810** & **0.796** & **0.784** & **0.774** & **0.764** \\ \hline \hline improv.(\%) & -5.0\({}^{\circ}\) & 3.6\({}^{\circ}\) & 5.6\({}^{\circ}\) & 6.8\({}^{\circ}\) & 7.3\({}^{\circ}\) & 8.9\({}^{\circ}\) & 9.8\({}^{\circ}\) & 10.9\({}^{\circ}\) & 12.3\({}^{\circ}\) & 14.0\({}^{\circ}\) \\ \hline \hline \end{tabular}
* The annotations in this table are the same as those in Table II.
\end{table} TABLE V: MAP@\(N\) of Retrosynthesis Prediction
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \(N\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline R-p2r & 0.905 & 0.862 & 0.829 & 0.803 & 0.784 & 0.766 & 0.750 & 0.735 & 0.723 & 0.711 \\ RetroFormer & 0.890 & 0.860 & 0.834 & 0.813 & 0.795 & 0.777 & 0.762 & 0.747 & 0.733 & 0.718 \\ \hline RetroPrime & **0.946** & 0.807 & 0.745 & 0.697 & 0.665 & 0.632 & 0.609 & 0.584 & 0.569 & 0.550 \\ RSMMILES & 0.792 & 0.706 & 0.649 & 0.608 & 0.573 & 0.548 & 0.525 & 0.507 & 0.489 & 0.474 \\ GraphRetro & 0.896 & 0.849 & 0.813 & 0.781 & 0.751 & 0.724 & 0.699 & 0.675 & 0.651 & 0.630 \\ G\({}^{2}\)Retro & 0.883 & 0.834 & 0.801 & 0.775 & 0.751 & 0.732 & 0.713 & 0.695 & 0.680 & 0.665 \\ \(\mathsf{RLSynC}_{\mathsf{G}^{2}}\) & 0.899 & **0.886** & **0.872** & **0.857** & **0.843** & **0.831** & **0.821** & **0.811** & **0.802** & **0.794** \\ \hline improv.(\%) & -5.0\({}^{\circ}\) & 2.8\({}^{\circ}\) & 4.6\({}^{\circ}\) & 5.4\({}^{\circ}\) & 6.0\({}^{\circ}\) & 6.9\({}^{\circ}\) & 7.7\({}^{\circ}\
10 in Algorithm 1) to the top-\(5\) new episodes for each product, which significantly boosts performance.
Fig. 6 and Fig. 7 present the performance improvement in terms of MAP@\(N\) and NDCG@\(N\) from iterations 6, 7 and 8 over the initial model, respectively. From these figures, we see higher improvement from later iterations over the initial model. For example, in Fig. 6, the 8-th iteration can improve MAP@10 at 40.1%, whereas the 6-th iteration can improve MAP@10 at 32.5%. Note that the episodes generated for data augmentation in later iterations are derived from agents that have been trained on data from previous iterations. These episodes are more likely to contain correct reactions, and subsequent agents will benefit from training on these correct reactions, leading to better performance in later iterations.
#### Vii-B2 Diversity Evaluation
Fig. 5 and Fig. 8 present the Diversity@\(N\) for RLSynC across different data augmentation iterations. The best diversities result from iteration 8, with Diversity@10 as 0.272 as an example. Iterations 6, 7, and 8 provide large improvements to diversity. These iterations augment the dataset with 5 unique episodes per product, chosen by agents that have been trained for multiple iterations. In contrast, the first five iterations, which augment the dataset with only one episode per product, do not always improve Diversity@\(N\) but improve NDCG@\(N\) and MAP@\(N\).
## VIII Discussion and Conclusions
We developed RLSynC, a novel multi-agent reinforcement learning method with offline training and online data augmentation for synthon completion. RLSynC has two agents to complete the two synthons of a product into reactants. The two agents share the same action selection policy learned from known reactions, random reactions, and reactions that are generated and deliberately selected during the online data augmentation iterations. Using a forward synthesis prediction model as the reward function, RLSynC achieves superior performance on both synthon completion and retrosynthesis prediction compared to the state-of-the-art methods.
In retrosynthesis prediction, how to evaluate predicted reactions automatically at scale is under-studied [31]. Existing methods compare the predictions with known reactions (i.e., ground truth) of products, and consider only the predictions that exactly match the ground truth to be correct. However, as demonstrated in Chen _et al._[15], many "incorrect" predictions can still be chemically possible, and may even represent more viable options. Thus, only comparing to the ground truth may underestimate the performance. Moreover, making the recovery of known reactions the only optimization objective may result in retrosynthesis prediction models that lack the ability to discover novel reactions.
RLSynC provides a new and versatile framework that can enable a more comprehensive evaluation paradigm. By using a reward function composed of multiple evaluation functions, it allows reaction evaluation with respect to the corresponding evaluation metrics, particularly when the reactions do not match the known reactions. Ideally, if high-throughput synthesis reactions can be conducted in laboratories over the predicted reactants, the reaction outcomes (e.g., yield) can be used as the reward. Even more importantly, RLSynC enables the exploration of new reactions that are not included in the ground truth, but are still feasible based on the reaction evaluation (i.e., the reward function), through online iterations of data augmentation. This feature makes RLSynC especially suitable for new reaction discovery purposes, and for providing reactions tailored to specific evaluation metrics.
In future work, we will explore the following directions. We will generalize RLSynC for products with up to three reactants (i.e., up to three synthons; 100.00% of reactions in benchmark dataset). This can be done by allowing for three agents and empty synthons if there are fewer reactants. In these cases, the agents with empty synthons will only be able to choose
NOOP. We will also incorporate molecular graph representation learning within RLSynC so as to improve its power to represent and learn from synthon and product structures.
## Acknowledgements
This project was made possible, in part, by support from the National Science Foundation grant nos. IIS-2133650 (X.N.), and The Ohio State University President's Research Excellence program (X.N.). Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agency.
We implement RLSynC in Python 3.8.13 using PyTorch 1.12.1, RDKit 2021.03.5, and gymnasium 0.27.0. For computing \(Q\), we use a discount factor \(\gamma=0.95\) after considering \(\{0.90,0.95,0.99\}\). In estimating \(Q\), we use a four-layer feed-forward neural network with 4,096, 2,048, 1,024, and 1 output nodes on each respective layer. The input to the network, \(\mathbf{h}_{i,t}\), uses 2,048-bit Morgan fingerprints of radius 2 without chirality. We used ReLU and 0.7 dropout between each layer, with no activation or dropout on the output. We explored dropout values of \(\{0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8\}\). We used the Adam optimizer with learning rate \(10^{-4}\) after considering \(\{10^{-3},10^{-4},10^{-5}\}\), to optimize the loss function in Equation 5 with a regularization coefficient of \(\alpha=10^{-5}\) after considering \(\{10^{-4},10^{-5}\}\). During training, we iterate over batches of \(B\) products, where a training batch consists of \(B\times T\times 2\) state-action pairs. For iterations 0 (initial model), 1, 2, 3, and 4 (data augmentation), we used \(B=10\). At iteration 5, as validation performance stabilized, we were able to see a small improvement by increasing \(B\) to \(20\). For iterations 5, 6, 7, and 8, we used \(B=20\).
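For illustration, the network described above can be sketched in PyTorch as follows; this is a minimal reconstruction from the stated hyperparameters, not the released code, and the class name is illustrative.

```
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Four-layer fully connected Q-value network (4096 -> 2048 -> 1024 -> 1) with
    ReLU and 0.7 dropout between layers, taking the concatenated fingerprint
    representation h_{i,t} (five 2,048-bit fingerprints plus the step counter)."""
    def __init__(self, in_dim=5 * 2048 + 1, dropout=0.7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 4096), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(4096, 2048), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(2048, 1024), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(1024, 1),     # no activation or dropout on the output
        )

    def forward(self, h):
        return self.net(h)

model = QNetwork()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```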
For online data augmentation iterations 1, 2, 3, 4, and 5, we used one predicted reaction for each unique product in the training data to augment the training data. For iterations 6, 7, and 8, we used the top-5 predicted reactions (\(N=5\), \(k=3\) in Algorithm 2). When evaluating RLSynC, we produced the top-10 predicted reactions for each product in the test set using our top-\(N\) search algorithm (Algorithm 2) with \(N=10\) and \(k=3\).
|
2310.17479 | Signature quasinormal modes of Ellis-Bronnikov wormhole embedded in
warped braneworld background | We examine the quasi normal modes of Ellis-Bronnikov wormholes embedded in a
warped five dimensional braneworld background and compare with it's four
dimensional counterpart. These scalar quasi normal frequencies are obtained
using the WKB formula, Prony method and the direct integration method. The
signature of the warped extra dimension shows up as two distinct quasi normal
ringing era, characterised by two distinct dominant quasi normal modes.
Features of the latter region are similar to that observed earlier for massive
scalar field in black hole background. We also discuss the how steepness of the
neck of the wormhole effects the quasi normal frequencies. | Antariksha Mitra, Suman Ghosh | 2023-10-26T15:31:57Z | http://arxiv.org/abs/2310.17479v2 | # Signature quasinormal modes of Ellis-Bronnikov wormhole embedded in warped braneworld background
###### Abstract
We examine the quasi normal modes of Ellis-Bronnikov wormholes embedded in a warped five dimensional braneworld background and compare them with their four dimensional counterparts. These scalar quasi normal frequencies are obtained using the WKB formula, the Prony method and the direct integration method. The signature of the warped extra dimension shows up as two distinct quasi normal ringing eras, characterised by two distinct dominant quasi normal modes. Features of the latter region are similar to those observed earlier for a massive scalar field in a black hole background. We also discuss how the steepness of the neck of the wormhole affects the quasi normal frequencies.
## I Introduction
Wormholes are known to be solutions of the Einstein field equations, like the Schwarzschild black hole in vacuum, that essentially connect two distinct spacetime points within our Universe (intra-Universe) or two 'parallel universes' (inter-universe), creating a short-cut that allows 'apparently faster than light' travel. Detailed historical accounts of the theoretical discovery/construction of wormholes can be found, for example, in [1; 2]. The original wormhole solutions were found to be _non-traversable_[3; 4; 5; 6] or unstable under perturbation. Violation of the (averaged) null energy condition is required to prevent the wormhole 'throat' from collapsing and to make it _traversable_. This could be realised by introducing exotic matter around the throat [7; 8]. It appears as if such matter may have a quantum origin, but standard model matter seems to be inadequate for the generation of macroscopic wormholes [9]. Remarkably, a plethora of wormhole constructions under the so-called 'modified theories of gravity' exist in the literature that avoid the use of exotic matter [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24].
The four dimensional Ellis-Bronnikov spacetime (4D-EB) [25; 26], which employs a phantom scalar field (a field with a negative kinetic term), is one of the most researched wormhole geometries since its introduction in 1973. Several studies on this class of models can be found in the literature, including the geometry of the spinning 4D-EB spacetime [27], generalized spinning 4D-EB wormholes in scalar-tensor theory [28], hairy Ellis wormhole solutions
[29], Ellis wormholes in anti-de Sitter space [30], stability analysis of the 4D-EB solution in higher dimensional spacetime [31], and so on. Kar et al. presented a generalised version of the 4D-EB spacetime (4D-GEB) [32], where the need for exotic matter is _partially_ evaded by introducing a new wormhole parameter, \(n\geq 2\) (\(n=2\) corresponds to 4D-EB geometry). Quasi-normal modes (QNM), echoes and some other aspects of 4D-GEB wormholes are analysed in [33].
Wormholes are still considered conjectural. However, recent developments in black hole observation [34] have increased the possibility of distinguishing a black hole from a so-called black hole mimicker such as a wormhole. In principle, one may identify wormholes through lensing effects, shadows, Einstein rings, and other phenomena [35; 36; 37; 38; 39; 40; 41], which may in turn favour modified gravity theories over general relativity. QNMs [42; 43; 44; 45; 46; 47] are one such signature, characterising, e.g., the late time response ('ringing') of a black hole (or wormhole) under perturbation. Dominant quasi-normal frequencies (QNFs) can be seen in the gravitational wave signals from black holes (or similar compact objects) at late times. They have been observed recently by the LIGO/VIRGO collaborations [48; 49; 50; 51]. Remarkably, observation of a multi-mode quasi-normal spectrum has been reported in [52]. This allows one to determine the individual black hole/wormhole parameters involved. Determination of QNFs with high accuracy is an important challenge that can constrain various modified gravity theories and test the strong gravity regime.
One class of the modified theories of gravity involves extra spatial dimension(s). In fundamental physics, the emergence of an additional spatial dimension is ubiquitous - Kaluza and Klein [53; 54] first demonstrated it in an effort to combine gravity and electromagnetic theories in a five-dimensional (5D) gravity model in 1921 and 1926, respectively. Be it string theory [55] or the symmetries of particle physics (the octonionic hypotheses) [56; 57; 58; 59; 60; 61], extra dimensions seem to appear _naturally_. String theory also motivated the brane-world scenarios, where our 4 dimensional (4D) Universe (3-brane) is embedded in a higher dimensional bulk. The so-called DGP models produce infra-red modification with extra dimensional gravity dominating at low energy scale [62]. Perhaps the most popular of these models are the 'warped braneworld' models [63; 64; 65; 66; 67], which generate ultra-violet modification to general relativity, with extra dimensional gravity dominating at high energy scale, and address the Hierarchy issue in the fundamental scales of physics. These models feature a non-factorisable curved 5D space-time where the 4D metric is a function of the additional dimension through a warping factor.
Attempts to build wormhole models in higher-dimensional spacetime have begun to
appear recently [68; 69; 70; 71; 72; 73]. Kar [74] has proposed a 5D warped wormhole model where the warping chosen is largely inspired by the non-static Witten bubble. Recently, in [75], an EB spacetime embedded in a 5D warped bulk (5D-WEB) was constructed that _satisfies_ the weak energy conditions (with a decaying warp factor). We further analysed the timelike trajectories and the geodesic congruences in these spacetimes in detail in [76; 77]. The warp factor we assume is that of the well-known thick brane model [78; 79; 80; 81], which is a smooth function of the extra dimension (thus there are no derivative jumps or delta functions in the curvature and connections).
In this work, we determine the QNFs (using multiple techniques/algorithms) for both the 4D-(G)EB and 5D-WEB spacetimes and contrast them to distinguish the effects or signatures of the wormhole parameters and the warped extra dimension. The following is a breakdown of the content of this article. In Section (II), we briefly introduce the novel 5D-W(G)EB wormhole geometry alongside its 4D counterpart. In Section (III), the field equation for (scalar) perturbations of the geometry and the corresponding effective potentials are derived. In Section (IV), we discuss various methods to solve the master equation in order to determine the time domain profile of the perturbation and the QNFs. Note that numerical computations, though they produce accurate values, often fail to provide physical intuition. In Section (V), we report the results and compare the 4D and 5D models to distinguish the signatures of the warped extra dimension and the wormhole parameters. Remarkably, we found two distinct QNM eras with two different dominant QNFs. Finally, in Section (VI) we summarise the work done and the key results.
## II 4D-GEB and 5D-W(G)EB spacetime
The 4D-EB wormhole is a spacetime geometry constructed in the presence of a phantom matter field, one whose action contains a negative kinetic energy term. This solution is a spherically symmetric, static and geodesically complete, horizonless manifold that has a 'throat' (which becomes apparent in an embedding diagram [7]) linking two asymptotically flat regions, and is given by the following line element,
\[ds^{2}=-dt^{2}+\frac{dr^{2}}{1-\frac{b_{0}^{2}}{r^{2}}}+r^{2}d\theta^{2}+r^{2 }\sin^{2}\theta d\phi^{2}. \tag{1}\]
Here \(b_{0}\) is the wormhole's throat radius. The EB spacetime metric can also be written as,
\[ds^{2}=-dt^{2}+dl^{2}+r^{2}(l)\ d\theta^{2}+r^{2}(l)\sin^{2}\theta\ d\phi^{2} \tag{2}\]
\[\mbox{with}\ \ r^{2}(l)=l^{2}+b_{0}^{2} \tag{3}\]
and \(l\) is called the 'tortoise coordinate' or 'proper radial distance'. A generalisation of the EB model (GEB) is proposed in [32] (which is consistent with Morris-Thorne conditions essential for a Lorentzian wormhole), given by
\[ds^{2}=-dt^{2}+\frac{dr^{2}}{1-\frac{b(r)}{r}}+r^{2}d\theta^{2}+r^{2}\sin^{2} \theta\ d\phi^{2} \tag{4}\]
\[\mbox{with}\ \ b(r)=r-r^{(3-2n)}(r^{n}-b_{0}^{n})^{(2-\frac{2}{n})}. \tag{5}\]
The parameter \(n\) takes only even values so that \(r(l)\) is smooth over the complete range \(-\infty<l<\infty\). For \(n=2\), we get the original EB geometry back. The GEB metric looks much simpler in terms of the tortoise coordinate
\[dl^{2}=\frac{dr^{2}}{1-\frac{b(r)}{r}}\ \Longrightarrow\ r(l)=(l^{n}+b_{0}^{n})^{ \frac{1}{n}}. \tag{6}\]
Note that at the wormhole throat (\(l=0\)) the sole non-vanishing derivative of \(r(l)\) is the \(n^{th}\)-order one. The effective potential (elaborated later on) also has a non-zero \(n^{th}\) derivative at \(l=0\), which is negative for the EB model (the \(n=2\) case), while for all other \(n\) values it is positive.
The 5D warped Ellis-Bronnikov model, introduced in [75], is
\[ds^{2}=e^{2f(y)}\Big{[}-dt^{2}+dl^{2}+r^{2}(l)\ \big{(}d\theta^{2}+\sin^{2} \theta\ d\phi^{2}\big{)}\Big{]}+dy^{2}. \tag{7}\]
In this model, \(y\) is an extra dimension (\(-\infty\leq y\leq\infty\)), \(f(y)\) is a warp factor, and the term in square brackets is the GEB space-time. We assume \(f(y)=\pm\log[\cosh(y/y_{0})]\), which represents known thick brane solutions in the presence of bulk matter fields [78]. This choice also avoids jumps or delta functions in the connections and Riemann tensors. In [75], we showed that this class of models indeed satisfies the weak energy condition in the presence of a decaying warp factor. Further, instead of having \(n>2\) as in 4D-GEB, having a warped extra dimension as in the 5D-WGEB model removes the negative energy density matter completely from the 3-brane located at \(y=0\) [76; 77]. Note that for all numerical calculations we have chosen \(y_{0}=1\).
## III Field equation and effective potential
The perturbations or fluctuations in a black hole or wormhole geometry may be caused by mergers or gravitational interactions with other astrophysical objects, or even by so-called test objects, which may represent a spaceship passing through. The scalar perturbations evolve via the massless Klein-Gordon equation, given by,
\[\nabla_{\mu}\nabla^{\mu}\Psi=\frac{-1}{\sqrt{-g}}\partial_{\mu}\left(g^{\mu\nu} \sqrt{-g}\ \partial_{\nu}\Psi\right)=0, \tag{8}\]
where \(\Psi\) is the scalar (field) perturbation and \(g\) is the determinant of the metric tensor involved. This equation does allow (with appropriate boundary conditions) solutions having complex frequencies. These QNFs have a natural interpretation as gravitational radiation where the black hole/wormhole is treated as an open system. The QNFs, by definition, are associated with specific boundary conditions, namely purely outgoing waves at spatial infinities. The real part of a QNF denotes the oscillation while the imaginary part implies damping of the field over time. The vector and tensor perturbations (wherever applicable) also follow a field equation similar to that of the scalar perturbations. These QNFs are also key to testing the stability of a wormhole geometry under perturbation. They certainly depend on the various wormhole parameters involved, and thus could have distinct features in comparison with black holes. Analysis of the effective potential and determination of QNFs for the 4D-GEB model is briefly addressed in [33]. Below we reproduce and extend their results for the 4D scenario and then compare them with the corresponding results derived for the 5D-WEB spacetime.
### 4D scenario
Since the wormhole geometry is static and spherically symmetric, one may use the following separation of variables for the field \(\Psi\) in the 4D-GEB scenario, as
\[\Psi(t,r,\theta,\phi)=\mathcal{Y}(\theta,\phi)\frac{R(r)e^{-i\omega t}}{r}, \tag{9}\]
where \(\mathcal{Y}(\theta,\phi)\) are the spherical harmonics. This leads to a form similar to the Schrodinger equation in the tortoise coordinate \(l\),
\[\omega^{2}+\frac{1}{R}\frac{\partial^{2}R}{\partial l^{2}}-V_{eff}=0. \tag{10}\]
The 'effective potential' \(V_{eff}\) is given by
\[V_{eff}=\left[\frac{(n-1)b_{0}^{n}l^{n-2}}{(l^{n}+b_{0}^{n})^{2}}+\frac{m(m+1)}{(l^{n}+b_{0}^{n})^{2/n}}\right]. \tag{11}\]
where \(m\) represents the azimuthal angular momentum. In terms of the radial coordinate, the effective potential is simply,
\[V_{eff}=\left[\frac{r^{\prime\prime}}{r}+\frac{m(m+1)}{r^{2}}\right]. \tag{12}\]
Before going into the solutions of the field equation and determination of the QNFs, let us analyse the effective potential corresponding to perturbations in 4D and 5D models.
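The potential profiles discussed below can be generated directly from Eq. (11); a minimal Python sketch (with \(b_{0}=1\) as in the text, and illustrative choices of \(n\) and \(m\)) is the following.

```python
import numpy as np
import matplotlib.pyplot as plt

def V_eff(l, n=2, m=1, b0=1.0):
    """Effective potential of Eq. (11) for scalar perturbations of the 4D-GEB wormhole."""
    rn = np.abs(l)**n + b0**n               # r(l)^n, with r(l) = (l^n + b0^n)^(1/n)
    curvature = (n - 1) * b0**n * np.abs(l)**(n - 2) / rn**2
    centrifugal = m * (m + 1) / rn**(2.0 / n)
    return curvature + centrifugal

l = np.linspace(-10, 10, 2001)
for n in (2, 4, 6):                         # n = 2 gives a single barrier, n > 2 twin barriers
    plt.plot(l, V_eff(l, n=n, m=1), label=f"n = {n}")
plt.xlabel("l"); plt.ylabel(r"$V_{eff}$"); plt.legend(); plt.show()
```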
Fig. 1 shows the variation of the effective potential vs \(l\) for the various 4D-GEB models (varying \(n\)) for four different angular momenta, \(m=1,2,5,10\). The plots in Fig. 2 show the variation of the effective potential vs \(l\) for \(n=2\) (the EB case) and \(n=4\) (for various values of \(m\)). 1 A few prominent features observed from these plots are as follows.
Footnote 1: The throat radius \(b_{0}\) is taken as unity for numerical evaluation.
* \(V_{eff}\) exhibits a single barrier for \(n=2\), while a twin barrier exists for all \(n>2\).
Figure 1: Plot of effective Potential for fixed \(m\) and varying \(n\), (top left) \(m=1\), (top right) \(m=2\), (bottom left) \(m=5\), (bottom right) \(m=10\)
This particular feature, in fact, corresponds to removal of the exotic matter from the throat region.
* For higher \(m\), the potential increases, and for \(n>2\) the twin peaks merge to create a plateau-shaped single barrier. In other words, the twin barrier feature is only visible for low values of \(m\). This could have important implications for the stability of the 4D-GEB model, which may be addressed elsewhere.
* It is known that the WKB method to determine QNFs is not suitable for twin barrier potentials. However, the potential profiles suggest that the WKB formula could be useful even for \(n>2\) at high \(m\).
* All potentials vanish asymptotically, which implies trivial boundary conditions for the QNMs.
### 5D scenario
In the 5D-WGEB spacetime given by Eq. 7, we use the following separation of variables
\[\Psi^{5D}=Y(\theta,\phi)e^{-i\omega t}\frac{R(r)}{r}F(y)e^{-f(y)}, \tag{13}\]
where \(F(y)\) and \(f(y)\) are functions only depending on \(y\). Thus the Klein-Gordon equation in 5D leads to,
\[\left(\omega^{2}-\left[\frac{(n-1)b_{0}^{n}l^{n-2}}{(l^{n}+b_{0}^ {n})^{2}}+\frac{m(m+1)}{(l^{n}+b_{0}^{n})^{2/n}}\right]+\frac{1}{R}\frac{ \partial^{2}R}{\partial l^{2}}\right)\] \[=-\left[\frac{1}{F}\frac{\partial^{2}F}{\partial y^{2}}+2\frac{1 }{F}\frac{\partial F}{\partial y}\frac{\partial f}{\partial y}-\frac{ \partial^{2}f}{\partial y^{2}}-3\left(\frac{\partial f}{\partial y}\right)^{2 }\right]e^{2f(y)} \tag{14}\]
Figure 2: Plot of effective Potential for fixed \(n\) and varying \(m\), (left) \(n=2\), (right) \(n=4\).
Now taking
\[-e^{2f(y)}\left[\frac{1}{F}\frac{\partial^{2}F}{\partial y^{2}}+2\frac{1}{F}\frac {\partial F}{\partial y}\frac{\partial f}{\partial y}-\frac{\partial^{2}f}{ \partial y^{2}}-3\left(\frac{\partial f}{\partial y}\right)^{2}\right]=q^{2}, \tag{15}\]
Eq. (14) reduces to the form of Eq. (10) with the _effective potential_ being,
\[V_{eff}=\left[\frac{(n-1)b_{0}^{n}l^{n-2}}{(l^{n}+b_{0}^{n})^{2}}+\frac{m(m+1)}{(l^{n}+b_{0}^{n})^{2/n}}\right]+q^{2}. \tag{16}\]
To find the eigenvalues \(q^{2}\) by solving Eq. (15), let us first perform a coordinate transformation given by \(dz=e^{-f}\,dy\), i.e. \(z=\sinh y\) (for the decaying warp factor), which leads to
\[\frac{\partial^{2}F}{\partial z^{2}}+\frac{\partial f}{\partial z}\frac{ \partial F}{\partial z}-\left[\frac{\partial^{2}f}{\partial z^{2}}+2\left( \frac{\partial f}{\partial z}\right)^{2}\right]F=-q^{2}F. \tag{17}\]
Then, we use the ansatz \(F(z)=G(z)\exp(-f/2)\), leading to the following simpler form
\[-\frac{\partial^{2}G}{\partial z^{2}}+V_{e}(z)G=q^{2}G,\quad\mbox{ where }\quad V_{e}(z)=\frac{3}{2}\frac{\partial^{2}f}{\partial z^{2}}+\frac{9}{4} \left(\frac{\partial f}{\partial z}\right)^{2}=\frac{3(5z^{2}-2)}{4(z^{2}+1)^ {2}}. \tag{18}\]
The potential \(V_{e}(z)\) is plotted in Fig. 3. This potential vanishes as \(z\rightarrow\pm\infty\), which implies that the positive (real \(q\)) eigenvalues form a continuum. This analysis is consistent with the numerical solution found using MATHEMATICA.
It may seem that a discrete spectrum exists for negative eigenvalues. This can be investigated with the following approximation. The series expansion of the potential about \(z=0\) is given by
\[V_{e}(z)=-\frac{3}{2}+\frac{27}{4}z^{2}-12z^{4}+\frac{69}{4}z^{6}+\mathcal{O}(z^{8}) \tag{19}\]
Since the negative part of the potential lies within \(|z|<1\), we choose to ignore the terms of order \(z^{4}\) and onwards. This leaves us with a harmonic oscillator potential whose eigenvalues are given by
\[E_{h.o.}=\left(n+\frac{1}{2}\right)\sqrt{27}-3/2,\ \ \ \ n=0,1,2... \tag{20}\]
The ground state eigenvalue, for \(n=0\), is positive in spite of the factor \(-3/2\). This observation remains unchanged if we include higher order terms and find the eigenvalue numerically. Thus there are no negative eigenvalues or bound states of Eq. (18). 2 The \(q^{2}\)-term therefore effectively contributes as an effective mass in the Schrodinger equation. This is a well-known feature of the massless 5D field equation when projected onto 4D. Such an equation appears in the presence of a massive scalar field in a 4D black hole background [82]. There the authors have shown that for some values of the black hole mass and the scalar field mass, purely real QNM frequencies, or the so-called quasi-resonances, exist.3 We shall see similar results below. Horowitz and Hubeny [83] have addressed a similar problem in the sense that we also have an asymptotically non-vanishing potential (Fig. 4 below). Note that the effect of the extra dimension is encoded in the eigenvalues \(q^{2}\). If \(q^{2}\) takes continuous values, then the information about the functional form of the warp factor would not be imprinted on the QNFs to be determined below.
Footnote 2: Negative eigenvalues would imply imaginary \(q\)-values.
Footnote 3: Naturally, there is a debate whether these frequencies can be called QNF at all.
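The absence of bound states claimed above can also be checked numerically. A minimal finite-difference sketch for Eq. (18) is given below (the box size \(L\) and number of grid points are illustrative choices); the lowest eigenvalue it returns is a small positive number set by the finite box, consistent with a continuum starting at \(q^{2}=0\).

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lowest_eigenvalue(L=30.0, N=2000):
    """Finite-difference approximation to the lowest eigenvalue of Eq. (18)
    on a box z in [-L, L] with Dirichlet boundary conditions."""
    z = np.linspace(-L, L, N)
    dz = z[1] - z[0]
    Ve = 3.0 * (5.0 * z**2 - 2.0) / (4.0 * (z**2 + 1.0)**2)
    diag = 2.0 / dz**2 + Ve                 # tridiagonal form of -d^2/dz^2 + V_e(z)
    off = -np.ones(N - 1) / dz**2
    return eigh_tridiagonal(diag, off, eigvals_only=True)[0]

# Prints a small positive number (set by the finite box): no negative eigenvalues.
print(lowest_eigenvalue())
```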
In what follows, we shall choose suitable values of \(q\) to be put in Eq. (16) and Eq. (10) for numerical evaluation and graphical presentation. Note that \(q\) has the dimension of inverse length; therefore, its exact numerical value is less important for our purpose, and one can set \(y_{0}=1\) without losing any generality. However, \(q=b_{0}^{-1}\) is expected to have a physical significance, as we will see later.
Figure 4: Plot of the effective potential for constant \(m\) and \(n\) at different dimensions (varying \(q\)): (left) \(n=2,m=2\); (right) \(n=4,m=2\).
Fig. 4 shows the effective potential profile vs \(l\) for various values of \(q\) with fixed azimuthal angular momentum (\(m=2\)), for the \(n=2\) (WEB) and \(n=4\) (a WGEB) geometries. Due to the presence of the extra dimension, as \(l\rightarrow\infty\) the potential does not vanish and essentially becomes equal to \(q^{2}\), consistent with Eq. (16). This will be reflected in the choice of boundary conditions used to determine the QNFs with various methods.
## IV Time domain spectrum \(\&\) quasi normal frequencies
WGEB models with a decaying warp factor do satisfy the energy conditions even for \(n=2\), i.e. the original EB spacetime [75], where there is a single barrier only. This in turn suggests that one may use the WKB method to find approximate QNF values not only for 4D-EB wormholes but also for 5D-WEB (\(n=2\)) wormholes. We also employ numerical methods to supplement the WKB approach, for higher accuracy in the cases where the WKB approach is less efficient.
### WKB approach
The semi-analytical WKB approximation to derive QNFs was developed by Schutz and Will [84]. The method is based on matching the asymptotic WKB solutions at spatial infinities and at the neck of the wormhole (the event horizon in the case of a black hole) with the Taylor expansion near the top of the potential barrier, through the two turning points. The QNFs found by taking the WKB solutions up to the eikonal limit are given by the following formula [85; 86]
\[\omega_{p}^{2}=V_{0}-i\left(p+\frac{1}{2}\right)\sqrt{-2V_{0}^{\prime\prime}} \tag{21}\]
where \(V_{0}\) and \(V_{0}^{\prime\prime}\) denote the values of the effective potential and its second derivative at the maximum, and \(p\) denotes the overtone number, with \(p=0\) being the fundamental mode. Here we focus only on the fundamental frequencies and compare the WKB values with the numerical results (derived in the next section) in the tables given below. For a recent comprehensive review of WKB methods one may refer to [44]. Let us now discuss the numerical methods to compute the QNFs for the 4D and 5D geometries.
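Before doing so, we note that Eq. (21) itself is straightforward to evaluate once the peak of the effective potential is located numerically; a minimal Python sketch (illustrative grid, and the 4D-EB potential with \(b_{0}=1\), \(m=1\) as an example) follows. Being a first-order (eikonal) estimate, it is of course less accurate than the numerical methods discussed next.

```python
import numpy as np

def wkb_qnf(V, l_max=20.0, N=20001, p=0):
    """First-order WKB estimate, Eq. (21): omega_p^2 = V0 - i (p + 1/2) sqrt(-2 V0'')."""
    l = np.linspace(-l_max, l_max, N)
    h = l[1] - l[0]
    Vl = V(l)
    i0 = np.argmax(Vl)                                        # location of the potential peak
    V0 = Vl[i0]
    V0pp = (Vl[i0 + 1] - 2.0 * Vl[i0] + Vl[i0 - 1]) / h**2    # finite-difference V'' at the peak
    return np.sqrt(V0 - 1j * (p + 0.5) * np.sqrt(-2.0 * V0pp + 0j))

# 4D-EB example (n = 2, m = 1, b0 = 1): a crude eikonal estimate of the fundamental mode.
V = lambda l: 1.0 / (l**2 + 1.0)**2 + 2.0 / (l**2 + 1.0)
print(wkb_qnf(V))
```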
### Numerical approaches
QNFs are complex frequencies that characterise the damped oscillations of gravitational perturbations of the metric. Many methods have been developed to determine these frequencies. A few numerical approaches, based on convergent procedures, are designed to find QNFs with any desired accuracy (see [44] for a review of methods). Each has its own advantages and disadvantages, and developing efficient methods to compute QNFs remains an active area of research. The analytic methods, e.g. the WKB method, are less accurate than the numerical methods, for example in the presence of multiple barriers (e.g. \(n>2\) in GEB models) [33]. The time dependent wave equation, integrated over the angular coordinates, has the following form,
\[V_{eff}\Psi_{m}(t,l)+\frac{\partial^{2}\Psi_{m}(t,l)}{\partial t^{2}}-\frac{ \partial^{2}\Psi_{m}(t,l)}{\partial l^{2}}=0 \tag{22}\]
In the first method, we determine the time evolution of the scalar perturbation by numerically integrating Eq. (22) using the methodology presented in [87; 44]. The essential steps are as follows. One first adopts the light cone coordinates, \(du=dt+dl\) and \(dv=dt-dl\), which implies,
\[\left(4\frac{\partial^{2}}{\partial u\partial v}+V_{eff}(u,v)\right)\Psi_{m}(u,v)=0 \tag{23}\]
The time evolution operator, using simple two-variable Taylor expansion reads as,
\[\begin{split}&\exp\left(h\frac{\partial}{\partial t}\right)=\exp \left(h\frac{\partial}{\partial u}+h\frac{\partial}{\partial v}\right)\\ &=-1+\exp\left(h\frac{\partial}{\partial u}\right)+\exp\left(h \frac{\partial}{\partial v}\right)+\frac{h^{2}}{2}\left[\exp\left(h\frac{ \partial}{\partial u}\right)+\exp\left(h\frac{\partial}{\partial v}\right) \right]\frac{\partial^{2}}{\partial u\partial v}+\mathcal{O}(h^{4})\end{split} \tag{24}\]
where \(h\) is the step size. Thereafter, we numerically integrate over \(du\) and \(dv\), ideally in the range \([0,\infty]\). We have computed the field amplitude in the region \(0\leq u,v\leq 200\) with a step size \(h=0.01\). The initial condition is taken as a Gaussian distribution along \(v=0\), \(\Psi(u,0)=e^{\frac{-(u-10)^{2}}{10}}\), and as a constant along \(u=0\), \(\Psi(0,v)=1/e\), such that the two agree at \(\Psi(0,0)\). This computation is performed using both Python and MATLAB to cross-check for accuracy. A particular case, with \(n=4,m=2\) in the 4D-GEB model, is shown in the (log-linear) plots in Fig. 5, which shows that the efficiencies of the two computing platforms are comparable. The presence of quasi-normal frequencies is clearly evident from this time domain evolution spectrum.
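A minimal Python sketch of this characteristic integration is given below. It uses the discretisation that follows from Eqs. (23)-(24), with the potential evaluated at the 'south' corner of each null cell, a coarser illustrative grid than the one quoted above, and the constant along \(u=0\) fixed by continuity with the Gaussian at the corner.

```python
import numpy as np

def V_eff(l, n=4, m=2, b0=1.0):
    """4D-GEB effective potential, Eq. (11)."""
    rn = abs(l)**n + b0**n
    return (n - 1) * b0**n * abs(l)**(n - 2) / rn**2 + m * (m + 1) / rn**(2.0 / n)

h, umax = 0.05, 100.0        # the text uses h = 0.01 and 0 <= u, v <= 200; coarser here for speed
N = int(umax / h) + 1
u = np.arange(N) * h         # u = t + l
v = np.arange(N) * h         # v = t - l
psi = np.zeros((N, N))

# Initial data: Gaussian along v = 0; constant along u = 0, matched at the corner (0, 0).
psi[:, 0] = np.exp(-(u - 10.0)**2 / 10.0)
psi[0, :] = psi[0, 0]

# March the null grid with the O(h^4) scheme implied by Eqs. (23)-(24):
# psi_N = psi_W + psi_E - psi_S - (h^2/8) V(S) (psi_W + psi_E).
for i in range(1, N):
    for j in range(1, N):
        VS = V_eff(0.5 * (u[i - 1] - v[j - 1]))        # potential at the 'south' corner
        psi[i, j] = psi[i, j - 1] + psi[i - 1, j] - psi[i - 1, j - 1] \
                    - (h**2 / 8.0) * VS * (psi[i, j - 1] + psi[i - 1, j])

# Time-domain signal seen at a fixed location l_obs (for the log-linear plots).
l_obs = 10.0
t = np.arange(l_obs, umax - l_obs, h)
iu = np.round((t + l_obs) / h).astype(int)
iv = np.round((t - l_obs) / h).astype(int)
signal = psi[iu, iv]
```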
From the log-linear plots in Fig. 5, one may identify three distinct stages in the time domain spectrum: the _initial region_, approximately for \(t=0-50\) s (which depends on the initial condition); the second stage, the region of our interest, namely the exponential damping, roughly during \(t=50-110\) s; followed by the third stage, the 'tail' [87; 44]. Note that with an increase in the \(b_{0}\) value the duration of the quasi-normal ringing increases, which suggests a decrease in the value of the QNF. This feature is also present in the 5D scenario; in fact it can be seen by a straightforward evaluation of the WKB formula, Eq. (21). From the damped region one can extract the QNF values by the _Prony fitting_ method (discussed below). We have also used the _direct integration_ method to determine the QNFs. Below, we briefly discuss these methodologies, followed by the tabulated results.
### Determination of QNFs
#### iv.3.1 Prony Method
In the Prony method, the time domain profile is fitted by the function
\[f(t)=\sum_{n=0}^{\infty}A_{n}e^{\alpha_{n}t}\cos{(\beta_{n}t)}, \tag{25}\]
where the QNFs are given by \(\omega_{QNF}=\alpha\pm i\beta\). This technique is similar to the Fourier method but is also valid for complex frequencies; it was first developed by G.R.D. Prony in 1795 [44]. There is another variation of this technique [46], where the function is equated with the time domain spectrum and converted into a matrix form whose roots (eigenvalues) are the QNFs. This technique is used in virtually every field involving damped oscillatory signal processing. We used both of these approaches for reliability (using both Python and MATLAB). An example of the Prony fitting is depicted in Fig. 6.
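A minimal sketch of the matrix (linear-prediction) variant of the Prony fit is given below: the damped window of the signal is modelled as a sum of complex exponentials, the linear-prediction coefficients are obtained by least squares, the roots of the prediction polynomial give the candidate frequencies, and the modes are then ranked by their fitted amplitudes. The number of modes \(K\) and the fitting window are user choices; the self-test at the end uses a purely synthetic two-mode signal, not data from this paper.

```python
import numpy as np

def prony_qnfs(signal, h, K=6):
    """Matrix (linear-prediction) Prony fit for a real, uniformly sampled signal.
    Returns (frequencies, amplitudes) sorted by decreasing |amplitude|,
    with the convention psi ~ exp(-i omega t), i.e. Im(omega) < 0 means damping."""
    x = np.asarray(signal, dtype=float)
    M = len(x)
    # Linear prediction x[k] = sum_j a[j] x[k-1-j], solved by least squares.
    A = np.column_stack([x[K - 1 - j : M - 1 - j] for j in range(K)])
    a, *_ = np.linalg.lstsq(A, x[K:], rcond=None)
    # Roots of the prediction polynomial are z_j = exp(-i omega_j h).
    z = np.roots(np.concatenate(([1.0], -a)))
    omega = 1j * np.log(z) / h
    # Amplitudes from a Vandermonde least-squares fit; rank modes by |amplitude|.
    c, *_ = np.linalg.lstsq(np.vander(z, M, increasing=True).T, x.astype(complex), rcond=None)
    order = np.argsort(-np.abs(c))
    return omega[order], c[order]

# Self-test on a synthetic two-mode damped signal:
h = 0.05
t = np.arange(0.0, 60.0, h)
test = np.exp(-0.25 * t) * np.cos(1.6 * t) + 0.3 * np.exp(-0.4 * t) * np.cos(2.9 * t)
freqs, amps = prony_qnfs(test, h, K=4)
print(freqs[:2])   # dominant pair, approximately +/-1.6 - 0.25i
```

In practice the fit is restricted to the exponentially damped window of the time-domain profile identified from the log-linear plot.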
The Matrix (Prony) method returns a set of complex frequencies which are further
Figure 5: Time domain Spectrum for \(n=4,m=2\), using Python (left) and MATLAB (right).
analysed and sorted with respect to their magnitudes, similar to the Fourier technique. In Fig. 7, we graphically present the amplitude of each mode obtained in the matrix method (using Python). It is evident that only two frequencies (conjugates of each other) have the highest magnitude and are hence the most dominant. Thus the QNFs can be identified in their order of dominance. For brevity, we shall not show the amplitude plots for any other cases.
#### iii.2.2 Direct Integration
In the Direct Integration method, the characteristic or master differential equation, Eq. (10), is numerically integrated using purely outgoing boundary conditions. This technique was first used by Chandrasekhar and Detweiler [88] in 1975. We essentially follow the steps described in [89; 33]. As our potential is symmetric about the wormhole throat (\(l=0\)) (in both the 4D and 5D cases), the solution can be of symmetric or antisymmetric kind. For a symmetric (anti-symmetric) solution we should use \(R^{\prime}(0)=0\) (\(R(0)=0\)). Note that the
Figure 6: Time domain Spectrum fit with dominant \(\omega_{QNF}\) for n=4, m=2 using Python.
Figure 7: Amplitudes \(A_{n}\) of fitting frequencies with dominant \(\omega_{QNF}\) having the greatest value.
asymptotic solution near \(l\rightarrow\infty\) can be expanded as
\[R^{+}=e^{i\Omega l}\sum_{k=0}^{\infty}\frac{A_{k}^{+}}{l^{k}};\ \ \ \ \Omega^{2}= \omega^{2}-q^{2}, \tag{26}\]
which represents a purely outgoing wave. However, near the throat, or at some finite distance from the throat, the expansion should contain both ingoing and outgoing waves, given by,
\[R(l)=e^{i\Omega l}\sum_{k=0}^{\infty}\frac{A_{k}^{+}}{l^{k}}+e^{-i\Omega l} \sum_{k=0}^{\infty}\frac{A_{k}^{-}}{l^{k}}. \tag{27}\]
By putting \(R(l)\) in the field equation, one gets the following recurrence relations (at large \(l_{0}^{2}\gg b_{0}^{2}\)),
\[A_{k+1}^{\pm}=\pm\frac{\{k(k+1)-m(m+1)\}A_{k}^{\pm}+(n-1)b_{0}^{n}A_{k-n}^{\pm}} {2i\Omega(k+1)} \tag{28}\]
This gives all the \(A_{k}^{\pm}\) in terms of \(A_{0}^{\pm}\). After integrating Eq. (10) from \(l=0\) to \(l=l_{0}\), we match the numerically found \(R_{num}(l)\) and \(R_{num}^{\prime}(l)\) with Eq. (27) and its derivative at \(l_{0}\). This leads to the following matching conditions
\[R_{num}(l_{0})=e^{i\Omega l_{0}}\sum_{k=0}^{\infty}\frac{A_{k}^{+}}{l_{0}^{k}} +e^{-i\Omega l_{0}}\sum_{k=0}^{\infty}\frac{A_{k}^{-}}{l_{0}^{k}}, \tag{29}\]
\[R_{num}^{\prime}(l_{0})=e^{i\Omega l_{0}}\sum_{k=0}^{\infty}\frac{A_{k}^{+}}{l _{0}^{k}}\left(i\Omega-\frac{k}{l_{0}}\right)+e^{-i\Omega l_{0}}\sum_{k=0}^{ \infty}\frac{A_{k}^{-}}{l_{0}^{k}}\left(-i\Omega-\frac{k}{l_{0}}\right). \tag{30}\]
Eliminating \(A_{0}^{+}\) from Eq. (29) and Eq. (30), we get an expression for \(A_{0}^{-}\) as a function of \(l_{0}\) and \(\omega\). The roots of the equation \(A_{0}^{-}=0\), in the large \(l_{0}\) limit, give us the QNFs. The stability of the solutions is checked by verifying that varying \(l_{0}\) does not considerably change the QNF values. We have only considered QNFs corresponding to the symmetric solutions, which have low damping.
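A rough Python sketch of this procedure is given below. For simplicity it keeps only the leading (\(k=0\)) term of Eq. (27) when imposing the outgoing-only condition at a finite \(l_{0}\), uses a fixed-step RK4 integrator for Eq. (10) with the symmetric data \(R(0)=1\), \(R^{\prime}(0)=0\), and refines an initial guess (e.g. from the WKB or Prony estimates) by a complex secant iteration; the values of \(l_{0}\) and the step number are illustrative, and stability under variations of \(l_{0}\) should be checked as described above.

```python
import numpy as np

def V_eff(l, n=2, m=1, b0=1.0, q=0.0):
    """Effective potential of Eq. (16) (Eq. (11) plus q^2)."""
    rn = abs(l)**n + b0**n
    return (n - 1) * b0**n * abs(l)**(n - 2) / rn**2 + m * (m + 1) / rn**(2.0 / n) + q**2

def ingoing_amplitude(omega, l0=30.0, steps=15000, q=0.0, **pars):
    """Integrate Eq. (10) from the throat with R(0)=1, R'(0)=0 (fixed-step RK4) and return
    a quantity proportional to the ingoing amplitude A_0^- at l0, keeping only the
    leading (k=0) term of Eq. (27). QNFs are its complex zeros."""
    h = l0 / steps
    y = np.array([1.0 + 0j, 0.0 + 0j])                          # (R, R')
    rhs = lambda l, y: np.array([y[1], (V_eff(l, q=q, **pars) - omega**2) * y[0]])
    l = 0.0
    for _ in range(steps):
        k1 = rhs(l, y); k2 = rhs(l + h / 2, y + h / 2 * k1)
        k3 = rhs(l + h / 2, y + h / 2 * k2); k4 = rhs(l + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        l += h
    Omega = np.sqrt(omega**2 - q**2 + 0j)
    return 1j * Omega * y[0] - y[1]                             # vanishes for a purely outgoing wave

def find_qnf(guess, **kw):
    """Complex secant iteration on the matching condition A_0^- = 0."""
    w0, w1 = guess, guess * (1 + 1e-3)
    f0, f1 = ingoing_amplitude(w0, **kw), ingoing_amplitude(w1, **kw)
    for _ in range(40):
        w2 = w1 - f1 * (w1 - w0) / (f1 - f0)
        w0, f0 = w1, f1
        w1 = w2
        f1 = ingoing_amplitude(w1, **kw)
        if abs(w1 - w0) < 1e-8:
            break
    return w1

# Example usage: refine a WKB/Prony estimate for the 4D-EB fundamental mode,
# e.g. find_qnf(1.6 - 0.25j, n=2, m=1).
```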
## V Results
Our focus will be on \(n=2\), the pure EB model, as we are going to compare these results with the 5D scenario. However, we will also briefly address higher-\(n\) geometries (4D-GEB) for completeness and to extend the results presented in [33].
### 4D wormhole: varying \(n\) and \(m\)
Fig. 8 shows that for the 4D-EB wormhole (\(n=2\)) the damped oscillatory region is less prominent for lower values of \(m\). Therefore, the QNF values extracted from these evolutions using Prony fitting are sensitive to the choice of the beginning and end of the QNM oscillation.
Fig. 9 shows the time evolution for a _steep-neck_ 4D-GEB geometry with \(n=10\) (for \(m=2\)) and \(n=4\) (for \(m=5\)). Comparison with Fig. 5 shows that, with increasing \(n\), the beginning of the QN ringing domain does not change much but the end is delayed considerably, i.e. the tail appears much later for a higher value of \(n\). With increasing \(m\), on the other hand, the QNM oscillation gets triggered earlier. These particular features could be useful signatures for detecting the shape of GEB wormholes, apart from those reported in [33].
The (dominant) quasi-normal frequencies for various \(m\) (angular momentum) and \(n\) (steep-neck parameter) values in 4D-GEB are plotted (real part versus the absolute value of the imaginary part) in Fig. 10. We clearly see that the features of the \(n=2\) case are markedly different from the \(n>2\) scenario. Fig. 10 essentially reproduces the results found in [33] and establishes the accuracy of our numerical computation. For a detailed discussion of Fig. 10, we urge the reader to consult [33].
Figure 8: Time domain Spectrum for \(n=2,m=4\) and \(n=2,m=8\).
### 5D wormhole: \(n=2\), varying \(m\) and \(q\)
In the context of the 5D model, as argued earlier, we focus on the \(n=2\) scenario. Incidentally, the effects of the extra dimension (i.e. of a varying \(q\)-value, or the _massiveness_ coming from the warped extra dimension) on the time evolution are more striking for higher values of \(m\). Let us look at the time domain profile for \(n=2\), \(m=8\) (any other value of \(m\) will do) with varying \(q\) values. Fig. 11 shows the remarkable changes in the time evolution profile of the wave amplitude for four different values of \(q\).
Even for a small (but non-zero) value of \(q=0.5\), the 4D behaviour (which is equivalent to setting \(q=0\)) is lost. This is expected, as there is an interplay between \(m\) and \(q\). Interestingly, for \(q>b_{0}^{-1}\) (here we have taken \(b_{0}=1\)), the semi-log plots clearly reveal that the QNM era is divided into _two_ parts (almost as if two linear regions with different slopes are joined at a kink) that are dominated by two different QNF modes. Notably, in the latter region, the most dominant QNM is characterised by \(Re(\omega)\sim q\) with a small imaginary part, as depicted in the tables given below. As time evolves, eventually, when the early dominant modes decay, the late-QNM emerges. To further reveal the late-QNM
Figure 11: Time domain spectrum for n=2, m=8 and q=0.5 (left), q=2 (right).
Figure 10: Plot of \(\omega_{QNF}\), real versus (magnitude of) imaginary part for different \(n\) and \(m\) values. From left to right, the \(m\) value increases.
region, we present Fig. 12.
Here, in the left plot we show a perfect fit of the early-QNM region using the dominant QNF, found to be \(\omega^{(E)}=5.335+i0.41\). In the right plot, we have fitted the wave amplitude, after subtracting that dominant early-QNM, with the dominant late-QNM given by \(\omega^{(L)}=2.0078+i0.0043\). The order of their dominance has also been confirmed from their amplitudes using the matrix Prony method, as mentioned earlier. The (almost) purely real frequencies in the late QNM era are similar to the quasi-resonances found in [90]. Though the imaginary part is non-zero but very small for the QNMs we report here, it is easy to see (from Tables 1 and 2) that for larger and larger \(q\) the imaginary part indeed tends to zero and we have exact quasi-resonances. Note that it is not the existence of the quasi-resonances, but the existence of two QNM eras, where the _late_ QNM region is dominated by the almost quasi-resonant QNM, that we emphasize as a signature of the warped extra dimension in the ringing of an effective 4D-EB wormhole.
In Table 1 and Table 2, we tabulate the early and late dominant QNFs, respectively, for \(n=2\) and various values of the \(q\)-momentum and the angular momentum \(m\), determined using different methods. For brevity, we denote the early dominant QNF as \(\omega^{(E)}\) and the late dominant QNF as \(\omega^{(L)}\). Note that the WKB method only matches the early QNMs. As pointed out in [82], the WKB formula is valid only for small \(q\)-values (or effective mass). For a large field mass the WKB approach needs modifications.
In Table 1, the QNF values for \(q=0\) match up to three digits after the decimal point with the 4D-GEB QNF values reported in [33], proving the accuracy of our numerical computation. The Prony method fails to provide an accurate determination of the early QNFs for low \(m\) values because of the small duration of that era. However, using a larger numerical value for \(b_{0}\), the duration increases and the Prony method gives better results for low \(m\) values as well. Also, as \(q\) increases, the accuracy improves. For non-zero \(q\), the dominant QNF in the early QNM era gets larger with increasing \(q\). Table 2 shows that as the \(q\) value increases, the imaginary part of the fundamental mode tends to zero asymptotically while the real part approaches \(q\). This behaviour is in fact expected if one takes the \(q\gg m\) limit in Eq. (16). This
Figure 12: Time domain spectrum and dominant QNM fit at early (left) and late (right) QNM era.
property matches exactly with that of QNMs for massive fields in a black hole background [90]. However, the short-lived early QNM era was not reported there. It is crucial to identify the duration of the QNM era in order to determine accurate values of the QNF; note that for low \(m\) values the duration of the early QNM era is small and thus difficult to detect. Thus it could
\begin{table}
\begin{tabular}{|c|l|l|l|l|} \hline
**m** & **q** & **Prony** & **Direct Integration** & **WKB** \\ \hline \hline \multirow{8}{*}{1} & 0 & 0.8 32130 -i0.226690 & 1.624584 -i0.219350 & 1.617350 - i0.250251 \\ \cline{2-5} & 0.5 & 0.750281 -i0.188820 & 1.851582 -i0.188352 & 1.879640 - i0.242017 \\ \cline{2-5} & 1 & 1.335939 -i0.215243 & 2.053928 -i0.215267 & 2.058170 - i0.185868 \\ \cline{2-5} & 2 & 2.621701 -i0.123791 & 2.621708 -i0.123782 & 2.672090 - i0.124239 \\ \cline{2-5} & 5 & 5.396131 -i0.036641 & 5.396137 -i0.036642 & 5.394870 -i0.038862 \\ \cline{2-5} & 0 & 2.250079 -i0.457395 & 2.712579 -i0.445679 & 2.629721 - i0.424325 \\ \cline{2-5} & 0.5 & 1.505729 -i0.317217 & 2.810257 -i0.312587 & 2.741552 - i0.315845 \\ \cline{2-5} & 1 & 2.528403 -i0.279339 & 2.948405 -i0.279358 & 2.871002 - i0.282586 \\ \cline{2-5} & 2 & 3.195762 -i0.231845 & 3.194526 -i0.221053 & 3.343490 -i0.222976 \\ \cline{2-5} & 5 & 6.089024 -i0.304504 & 6.129243 -i0.315208 & 6.171861 - i0.303122 \\ \hline \multirow{8}{*}{5} & 0 & 5.590286 -i0.527592 & 5.582526 -i0.512567 & 5.590768 - i0.505916 \\ \cline{2-5} & 0.5 & 5.608272 -i0.515912 & 5.611273 -i0.518386 & 5.612845 - i0.503921 \\ \cline{2-5} & 1 & 5.732418 -i0.514532 & 5.727413 -i0.503953 & 5.678741 - i0.498073 \\ \cline{2-5} & 2 & 5.913451 -i0.477296 & 5.922475 -i0.476795 & 5.935240 - i0.476548 \\ \cline{2-5} & 5 & 7.480281 -i0.367137 & 7.480376 -i0.369172 & 7.492832 - i0.377485 \\ \cline{2-5} & 0 & 8.579942 -i0.492730 & 8.529875 -i0.491728 & 8.558772 - i0.492544 \\ \cline{2-5} & 0.5 & 8.621930 -i0.486822 & 8.624282 -i0.482853 & 8.573310 - i0.491692 \\ \cline{2-5} & 1 & 8.643952 -i0.481875 & 8.647258 -i0.484326 & 8.616820 - i0.489164 \\ \cline{2-5} & 2 & 8.979098 -i0.468332 & 8.979096 -i0.478331 & 8.788610 - i0.469402 \\ \cline{2-5} & 5 & 9.979701 -i0.312355 & 9.978305 -i0.327316 & 9.90901 - i0.324066 \\ \hline \end{tabular}
\end{table}
Table 2: Late time dominant QNF- \(\omega^{(L)}\) values for various modes at n=2.
be possible that even in the case of a massive scalar field there exists an early QNM era. Also, for \(q<b_{0}^{-1}\), the duration of the early QNM era decreases indefinitely with decreasing \(q\). For such low values of \(q\), the timespan of the dominant QNF in the early QNM era becomes smaller than our algorithm's precision limit in the Prony method. This limitation does not show up for the other methods, so they give reliable values of the QNF in those cases.
## VI Discussion
The Ellis-Bronnikov wormhole (and its generalised versions) embedded in a warped braneworld background has been shown to satisfy the so-called energy conditions in the presence of a decaying warp factor. Earlier, we studied particle trajectories and geodesic congruences in such spacetimes. The recent observations suggest that one way to understand the true nature of ultra-compact objects is through their quasi-normal ringing. This method could potentially distinguish black holes from possible black hole mimickers such as wormholes. Here we analyse the QNMs of the 5D-WEB wormhole spacetime while looking for distinguishing features of the warped extra dimension and the wormhole parameter. The results, which reveal the effects of the wormhole (steep-neck) parameter \(n\) and the decaying warp factor on the time domain profile and the QNFs, are summarised below in a systematic manner.
* The nature of the effective potential is similar in both four and five dimensions, except that in 5D, the potential does not vanish asymptotically.
* The momentum eigenvalue along the fifth dimension \(q\), projected on the 4D geometry acts as an effective mass. We solved the corresponding eigenvalue problem and found that \(q\) takes non-negative continuous values.
* Assuming suitable values of \(q\), we then determined the QNFs analytically using the WKB formula and numerically using the Prony method and the direct integration method. The results for the 4D-GEB model match earlier reports to the third decimal place. We have used both Python and MATLAB for the numerical computation of the QNFs.
* For 4D-GEB spacetimes, the time domain profile has three prominent regions (the initial portion, the QNM era and an asymptotic tail). Apart from the observations made in [33], we notice that the QNM ringing appears earlier for higher angular momentum \(m\) and the tail appears later for higher values of \(n\).
* Remarkably, the time domain profile changes considerably in the 5D-WEB scenario. The QNM era is divided into two parts with two different dominant QNFs. The real part of the 'early QNM' (for fixed \(m\)) increases with increasing \(q\), whereas the real part of the 'late QNM' is close to the \(q\) value. Also, the damping (decided by the imaginary part of the QNF) of the late QNM is much slower than that of the early QNM.
* With increasing \(q\), the late QNM eventually becomes purely real. Such so-called quasi-resonances were observed earlier for massive fields in black hole backgrounds [82]. Further, the tail appears much later compared to the 4D scenario.
Hence, one may conclude that this feature of two different dominating QNFs in the ringing profile, if observed, could provide indirect evidence of the existence of a wormhole as well as of a five-dimensional warped geometry. It is always difficult to build physical intuition about the nature of the QNMs once they are determined numerically. However, we have seen that it is the interplay between \(m\) and \(q\) that eventually leads to the splitting of the QNM era. Further, one might find similar splitting in other equivalent scenarios [82] if revisited. We look forward to reporting on this in the future.
## VI Acknowledgement
We thank Dr. Poulami Dutta Roy and Prof. Sayan Kar for useful discussions and correspondence. SG thanks BIT Mesra for financial assistance through seed money scheme.
|
2308.02558 | The Paradigm Shifts in Artificial Intelligence | Kuhn's framework of scientific progress (Kuhn, 1962) provides a useful
framing of the paradigm shifts that have occurred in Artificial Intelligence
over the last 60 years. The framework is also useful in understanding what is
arguably a new paradigm shift in AI, signaled by the emergence of large
pre-trained systems such as GPT-3, on which conversational agents such as
ChatGPT are based. Such systems make intelligence a commoditized general
purpose technology that is configurable to applications. In this paper, I
summarize the forces that led to the rise and fall of each paradigm, and
discuss the pressing issues and risks associated with the current paradigm
shift in AI. | Vasant Dhar | 2023-08-02T19:38:24Z | http://arxiv.org/abs/2308.02558v1 | # The Paradigm Shifts in Artificial Intelligence
###### Abstract
Kuhn's framework of scientific progress (Kuhn, 1962) provides a useful framing of the paradigm shifts that have occurred in Artificial Intelligence over the last 60 years. The framework is also useful in understanding what is arguably a new paradigm shift in AI, signaled by the emergence of large pre-trained systems such as GPT-3, on which conversational agents such as ChatGPT are based. Such systems make intelligence a commoditized general purpose technology that is configurable to applications. In this paper, I summarize the forces that led to the rise and fall of each paradigm, and discuss the pressing issues and risks associated with the current paradigm shift in AI.
2023
## 1 Introduction
Artificial Intelligence (AI) captured the world's attention in 2023 with the emergence of pre-trained models such as GPT-3, on which the conversational AI system ChatGPT is based. For the first time, we can converse with an entity, however imperfectly, about anything, as we do with humans. This new capability provided by pre-trained models has created a paradigm shift in AI, transforming it from an application to a general purpose technology that is configurable to specific uses. Whereas historically an AI model was trained to do one thing well, it is now usable for a variety of tasks such as general conversations, assistance, decision-making, or code generation - for which it wasn't explicitly trained. The scientific history of AI provides a backdrop for evaluating and discussing the capabilities and limitations of this new technology, and the challenges that lie ahead.
Economics Nobel Laureate Herbert Simon, one of the fathers of Artificial Intelligence, described Artificial Intelligence as a "science of the artificial." (Simon, 1970). In contrast to the natural sciences, which describe the world as it exists, a science of the artificial is driven by a goal, of creating machine intelligence.
According to Simon, this made AI a science of design and engineering. Pre-trained models have greatly expanded the design aspirations of AI, from crafting high performing systems in narrowly-specified applications, to becoming general-purpose and without boundaries, applicable to anything involving intelligence.
The evolution of AI can be understood through Kuhn's (1962) theory of scientific progress in terms of "paradigm shifts." A paradigm is essentially a set of theories and methods accepted by the community to guide inquiry. It's a way of thinking. Kuhn describes science as a process involving occasional "revolutions" stemming from crises faced by the dominant theories, followed by periods of "normal science" where the details of the new paradigm are fleshed out. Over time, as the dominant paradigm fails to address an increasing number of important anomalies or challenges, we see a
paradigm shift to a new set of theories and methods - a new way of thinking that better addresses them.
A key feature of paradigms is that they have "exemplars" that guide problem formulation and solution. In physics, for example, the models describing the laws of motion, like Kepler's or Newton's laws of motion, could serve as exemplars that drive hypothesis formulation, observation, and hypothesis testing. In AI, exemplars define the core principles and methods for knowledge extraction, representation, and use. Early approaches favored methods for declaring human-specified knowledge as rules using symbols to describe the world, and an "inference engine" to manipulate the symbols - which was viewed as "reasoning." In contrast, current methods have shifted towards learning more complex statistical representations of the world that are derived almost entirely from data. The latter tend to be better at dealing with the contextual subtleties and complexity that we witness in problems involving language, perception and cognition.
The paradigm shifts in AI have been driven by methods that broke through major walls that were considered to be significant at the time. The first generation of AI research in the late 50s and 60s was dominated by game playing search algorithms (Samuel, 1959, 2000) that led to novel ways for searching various kinds of graph structures. But this type of mechanical search provided limited insight into intelligence, where real-world knowledge seemed to play a major role in solving problems, such as in medical diagnosis and planning. Expert Systems provided a way forward, by representing domain expertise and intuition in the form of explicit rules and relationships that could be invoked by an inference mechanism. But these systems were hard to create and maintain. A knowledge engineer needed to define each relationship manually and consider how it would be invoked in making inferences.
The practical challenges of the knowledge acquisition bottleneck led to the next paradigm shift. As more data became available, researchers developed learning algorithms that could automatically create rules or models directly from the data using mathematical, statistical, or logical, methods, guided by a user-specified objective function.
That's where we are today. Systems such as ChatGPT employ variants of neural networks called Transformers that provide the architecture of large language models (LLMs), which are trained directly from the collection of human expression available on the Internet. They use complex mathematical models with billions of parameters that are estimated from large amounts of publicly available data. While language has been a key area of advancement in recent years, these approaches have been used to enable machines to learn from other modalities of data including vision, sound, smell, and touch. What is particularly important today is the shift from building specialized applications of AI to one where knowledge and intelligence don't have specific boundaries, but transfer seamlessly across applications and to novel situations.
## 2 The Paradigm Shifts in AI
To understand the state of the art of AI and where it is heading, it is important to understand its scientific history, including the bottlenecks that stalled progress in each paradigm and the degree to which they were addressed by each paradigm shift.
Figure 1 sketches out the history of Artificial Intelligence from the Expert Systems era - which spanned the late sixties to the late 80s - to the present.
**Expert Systems and Symbolic AI**
Expert systems are attractive in narrow, well-circumscribed domains in which human expertise is identifiable and definable. They perform well at specific tasks where this expertise can be extracted through interactions with humans, and it is typically represented in terms of relationships among situations and outcomes. The driving force in that paradigm was to apply AI to diagnosis, planning, and design across a number of domains including healthcare, science, engineering, and business. The thinking was that if such systems performed at the level of human experts, they were intelligent.
An early success in medicine was the Internist system [14], which performed diagnosis in the field of internal medicine.
Internist represented expert knowledge using causal graphs and hierarchies relating diseases to symptoms. The rule-based expert system Mycin [1] was another early demonstration of diagnostic reasoning involving blood diseases. Other medical applications included the diagnosis of renal failure [15] and glaucoma [16].
In addition to applications in medicine, expert systems were also successful in a number of other domains such as engineering [17], accounting [18], tax planning [19]., configuration of computer systems [15], monitoring industrial plants [14], mineral prospecting [16], and identifying new kinds of chemical molecules [10].
The prototypical exemplar for representing knowledge in this paradigm were symbolic relationships expressed in the form of "IF/THEN" rules [1], "semantic networks," [17] or structured object representations [14]. But it was difficult to express uncertainty in terms of these representations, let alone combine such uncertainties during inference, which prompted the development of more principled graphical models for representing uncertainty in knowledge using probability theory [11].
The exemplar was shaped by the existing models of cognition from Psychology, which viewed humans as having a long-term and a short-term memory, and a mechanism for evoking them in a specific context. The knowledge declared by humans in expert systems, such as the rule "_excess bilirubin \(\xrightarrow{}\)high pallor_" constituted their long-term memory. An interpreter, also known as the inference engine or "control regime," evoked the rules depending on the context, and updated its short-term memory accordingly. If a patient exhibited unusually high pallor for example, this symptom was noted in short-term memory, and the appropriate rule was evoked from long-term memory to hypothesize its cause, such as "excess bilirubin." In effect, symbolic AI separated the declaration of knowledge from its application.
Research in natural language processing was along similar lines, with researchers seeking to discover the rules of language. The expectation was that once these were fully specified, a machine would follow these rules in order to understand and generate language [1, 18]. This turned out to be exceedingly difficult to achieve.
The major hurdle of this paradigm of top-down knowledge specification was the "knowledge engineering bottleneck." It was challenging to extract reliable knowledge from experts, and equally difficult to represent and combine uncertainty in terms of rules. Collaborations
Figure 1: The History of Artificial Intelligence
between experts and knowledge engineers could take years or even decades, and the systems became brittle at scale. Furthermore, researchers found that expert systems would often make errors in common-sense reasoning, which seemed intertwined with specialized knowledge. Evaluating such systems was also difficult, if one ever got to that stage. Human reasoning and language seemed much too complex and heterogenous to be captured by top-down specification of relationships. Progress stalled, as the reality, both in research and practice, fell short of expectations.
### Machine Learning
The supervised machine learning paradigm emerged in the late 80s and 90s, with the maturing of database technology, the emergence of the Internet, and the increasing abundance of observational and transactional data [14, 15]. AI thinking shifted away from spoon-feeding highly specified human abstractions to the machine, and towards automatically learning rules from data, guided by human intuition. While symbolic expert systems required humans to specify a model, machine learning enabled the machine to learn the model automatically from curated examples. Model discovery was guided by a "loss function," designed to directly or indirectly to minimize the system's overall prediction error, which by virtue of the data, could be measured in terms of the differences between predictions and empirical reality.
Empirics provided the ground truth for supervision. For example, to learn how to predict pneumonia, also called the _target_, one could collect historical medical records of people with and without pneumonia, intuit and engineer the features that might be associated with the target, and let the machine figure out the relationships from the data to minimize the overall prediction error. Instead of trying to specify the rules, the new generation of algorithms could learn them from data using optimization. Many such algorithms emerged, but the common thread among them was that they belonged to the broad class of "function approximation" methods that used data and a user-defined objective function to guide knowledge discovery.
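As a minimal illustration of this exemplar (the feature values, labels and model choice below are purely hypothetical), the following sketch fits a model to labelled (X, y) examples by minimizing a loss function, rather than by hand-specifying rules:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical engineered features per patient record, e.g.
# [temperature, cough_severity, white_cell_count]; label 1 = pneumonia, 0 = not.
X = np.array([[39.1, 3, 14.2], [36.8, 0, 6.1], [38.5, 2, 11.9], [36.6, 1, 5.4]])
y = np.array([1, 0, 1, 0])

# The learner searches for parameters that minimize a loss function (here log-loss)
# measuring the gap between its predictions and the empirical labels.
model = LogisticRegression().fit(X, y)
print(model.predict([[38.9, 2, 12.5]]))   # prediction for a new, unseen case
```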
This shift in perspective transformed the machine into a generator and tester of hypotheses that used optimization - the loss function - to focus knowledge discovery. This ability made machines capable of automated inquiry without a human in the loop. Instead of a being a passive repository of knowledge, the machine became an active "what if" explorer, capable of asking and evaluating its own questions. This enabled data-driven scientific discovery [16].
The epistemic criterion in machine learning for something to count as knowledge was accurate _prediction_[17]. This conforms to Popper's view of using the predictive power theories as a measure of their goodness. Popper argued that theories that sought only to explain a phenomenon were weaker than those that made "bold" _ex-ante_ predictions" that were objectively falsifiable. Good theories stood the test of time. In his 1963 treatise on this subject, _Conjectures and Refutations_, Popper characterized Einstein's theory of relativity as a "good" one, since it made bold predictions that can be falsified easily, yet all attempts at falsification of the theory have failed.
The exemplars for supervised machine learning are relationships derived from data that is specified in (X,y) pairs, where "y" are data about the target to be predicted based on a situation described by the vector of observable features "X." This exemplar has a very general form: the discovered relationships can be "IF/THEN" rules, graph structures such as Bayesian networks [18, 19], or implicit mathematical functions expressed via weights in a neural network [19, 20]. Once
learned, this knowledge could be viewed as analogous to memory, invoked depending on context, and updatable over time.
But there's no free lunch with machine learning. There is a loss of transparency in what the machine has learned. Neural networks, including large language models, are particularly opaque in that it is difficult to assign meanings to the connections among neurons, let alone combinations of them. Even more significantly, the machine learning paradigm introduced a new bottleneck, namely, requiring the curation of available data using some sort of vocabulary that the machine can understand. This required that the _right features_ be created from the raw data. For example, to include an MRI image as input into the diagnostic reasoning process, the contents of the image had to be expressed in terms of features of the vocabulary such as "inflammation" and "large spots on the liver." Similarly, a physician's notes about a case had to be condensed into features that the machine could process. This type of _feature engineering_ was cumbersome. Specifying the labels accurately could also be costly and time-consuming. These were major bottlenecks for the paradigm.
What was direly needed was the ability of the machine to deal directly with the raw data emanating from the real world, instead of relying on humans to perform the often difficult translation of feature engineering. Machines needed to ingest raw data such as numbers, images, notes or sounds directly, ideally without curation by humans.
### Deep Learning
The next AI paradigm, "Deep learning," made a big dent in the feature engineering bottleneck by providing a solution to perception, such as seeing, reading, and hearing. Instead of requiring humans to describe the world for the machine, this generation of algorithms could consume the raw input similar to what humans use, in the form of images, language, and sound. "Deep neural nets" (DNNs), which involve multiple stacked layers of neurons, form the foundation of vision and language models Hinton (1992); LeCun and Bengio (1998). While learning still involves adjusting the weights among the neurons, the "deep" part of the neural architecture is important in translating the raw sensory input automatically into machine-computable data.
The exemplar in deep learning is a multi-level neural network architecture. Adjusting the weights among the neurons makes it a _universal function approximator_Cybenko (1989), where the machine can approximate any function, regardless of its complexity, to an acceptable degree of precision. What is unique about DNNs is the organization of hidden layers between the input and output that _learn the features_ implicit in the raw data instead of requiring that they be specified by humans. A vision system, for example, might learn to recognize features common to all images, such as lines, curves and colors from the raw images that make up its training data. These can be combined variously to make up more complex image parts such as windows, doors and street signs that are represented by "downstream" layers of the deep neural network. In other words, the DNN tends to have an organization, where more abstract and latent concepts that are closer to its output are composed from more basic features represented in the layers that are closer to the input.
The same ideas have been applied to large language models (LLMs) from which systems like ChatGPT are built. They learn the implicit relationships among things in the world from large amounts of text from books, magazines, web-posts etc. As in vision, we would expect layers of the neural network that are closer to the output to represent more abstract concepts, relative to layers that are closer to the input. However, we don't currently understand how DNNs organize and use such knowledge, or how
they represent relationships in general. This depends on what they are trained for.
In language modeling, for example, the core learning task is typically to predict the next occurrence of an input sequence. This requires a considerable amount of knowledge and understanding of the relationships among the different parts of the input. Large language models use a special configuration of the "Transformer" neural architecture, which represents language as a contextualized sequence, where context is represented by estimating dependencies between each pair of the input sequence [21]. Because this pairwise computation grows sharply with the length of the input, engineering considerations constrain the length of the input sequences - its span of attention, for which LLMs are able to maintain context.
This Transformer architecture holds both long term memory, represented by the connections between neurons, as well as the context of a conversation - the equivalent of short-term memory - using its "attention mechanism," which captures the relationships between all parts of the input. For example, it is able to tell what the pronoun "it" refers to in the sentences "The chicken didn't cross the road because it was wet" and "The chicken didn't cross the road because it was tired." While humans find such reasoning easy by invoking common-sense, previous paradigms failed at such kinds of tasks that require understanding context. The architecture also works remarkably well in vision, where it is able to capture the correlation structure between the various parts of an image.
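As a rough sketch of the pairwise computation described above (a generic, simplified scaled dot-product self-attention in plain NumPy, without the separate query/key/value projections or multiple heads of an actual large language model):

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention: every position attends to every other,
    so the score matrix is n x n and the cost grows quadratically with length n."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                       # pairwise dependencies
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # softmax over the context window
    return w @ X                                        # context-weighted mixture

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))                        # 5 toy token embeddings, dimension 8
contextualised = self_attention(tokens)
```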
The downside is that DNNs are large and complex. What pre-trained language models learn as a by-product of learning sequence prediction is unclear because their knowledge - the meanings and relationships among things - is represented in a "distributed" way, in the form of weighted connections among the layers of neurons. In contrast to Expert Systems, where relationships are specified in "localized" self-contained chunks, the relationships in a DNN are smeared across the weights in the network and much harder to interpret.
Nevertheless, the complexity of the neural network architecture - typically measured by the number of layers and connections in the neural network (its parameters) - is what allows the machine to recognize context and nuance. It is remarkable that the pre-trained LLM can be used to explain why a joke is funny, summarize or interpret a legal document, answer questions, and do all kinds of other things that it wasn't explicitly trained to do.
Bowman (2023) conjectures that the autocomplete task was serendipitous: it was just at the right level of difficulty, where doing well conversationally forced the machine to learn a large number of other things about the world. In other words, a sufficiently deep understanding about the world, including common-sense, is _necessary_ for language fluency. However, current-day machines can't match humans in terms of common sense. As of this writing, ChatGPT fails at the Winograd Schema task [22], which involves resolving an ambiguous pronoun in a sentence. For example, when asked what the "it" refers to in the sentence "the trophy wouldn't fit into the suitcase because it was too small," ChatGPT thinks that the "it" refers to the trophy. The right answer requires the use of common-sense, and cannot be determined by structure alone.
These are the challenges for the new and still emerging paradigm of AI, namely one of _General Intelligence_, where expertise and common-sense can blend together more seamlessly, as can different modalities of information.
Table 1 summarizes the properties of each paradigm in terms of how knowledge is acquired (its source), the exemplar that guides problem formulation, its capability, and the degree to which the input is curated. The "+" prefix means "in addition to the previous case."
## 3 General Intelligence
Pre-trained models are the foundation for the General Intelligence paradigm. Previous AI applications were tuned to a task. In order to predict pneumonia in a hospital, for example, the AI model was trained using cases from that hospital alone, and wouldn't necessarily transfer to a nearby hospital, let alone a different country. In contrast, General Intelligence is about the ability to integrate knowledge about pneumonia with other diseases, conditions, geographies, etc., like humans are able to do, and to apply the knowledge to unforeseen situations. In other words, General Intelligence refers to an integrated set of essential mental skills that include spatial, numerical, mechanical, verbal, reasoning, and common sense abilities, which underpin performance on _all_ mental tasks [14]. Such knowledge is easily _transferrable_ across tasks, and can be applied to novel situations.
Each paradigm shift greatly expanded the scope of applications. Machine Learning brought structured databases to life. Deep Learning went further, enabling the machine to deal with structured and unstructured data about an application directly from the real world, as humans are able to do.
Pre-trained models provide the building blocks for General Intelligence by virtue of being domain-independent, requiring minimal curation,2 and being transferrable across applications.
Footnote 2: The data curation in pre-trained LLMs like GPT-3 is primarily in the choice of sources and tokenization. AI systems like ChatGPT use additional conversational training and RLHF (reinforcement learning from human feedback) to curate their responses to be socially acceptable.
The shift to pre-trained models represents a fundamental departure from the previous paradigms, where knowledge was carefully extracted and represented. AI was an application, and tacit knowledge and commonsense reasoning were add-ons that were separate from expertise. The CYC project [15] was the first major effort to explicitly teach the machine common sense. It didn't work as the designers had hoped. There's too much tacit knowledge and common sense in human interaction that is evoked depending on context, and intelligence is much too complex and heterogeneous to be compartmentalized and specified in the form of rules.
In contrast, pre-trained models eschew boundaries, as in the pneumonia example. Rather, they integrate specialized and general knowledge, including data about peoples' experiences across a range of subjects. Much of this type of knowledge became available because of the Internet, where in the short span of a few decades, humanity expressed thousands
\begin{table}
\begin{tabular}{|l||l|l|l|l|} \hline
 & **Knowledge Source** & **Exemplar** & **Capability** & **Data Curation** \\ \hline
Expert Systems & Human & Rules & Follows & **High** \\ \hline
Machine Learning & + Databases & Rules/Networks & + Discovers Relationships & Medium \\ \hline
Deep Learning & + Sensory & Deep Neural Networks & + Senses Relationships & Low \\ \hline
General Intelligence & + StarWhit & Pre-trained Deep Neural Networks & + Undegement-like Model & Minimal \\ \hline
\end{tabular}
\end{table}
Table 1: The Paradigm Shifts in AI
of years of its history in terms of language, along with social media and conversational data on a wide array of subjects. All humans became potential publishers and curators, providing the training data for AI to learn how to communicate fluently. Hinton describes large language models like ChatGPT as akin to an alien species that has enthralled us because they speak such good English.
It is important to appreciate that in learning to communicate in natural language, AI has broken through two fundamental bottlenecks simultaneously. First, we are now able to communicate with machines on _our_ terms. This required solving a related problem, of integrating and transferring knowledge about the world, including common sense, seamlessly into a conversation about any subject. Achieving this capability has required the machine to acquire the various types of knowledge simultaneously - expertise, common sense, and tacit knowledge - all of which are embedded in language. Things are connected in subtle ways, which provides the basis for "meaning" and "understanding," which AI pioneer Marvin Minsky describes in terms of "associations" and "perspectives:"
_What is the difference between merely knowing (or remembering, or memorizing) and understanding? We all agree that to understand something, we must know what it means, and that is about as far as we ever get. A thing or idea seems meaningful only when we have several different ways to represent it - different perspectives and different associations. Then we can turn it around in our minds, so to speak: however it seems at the moment, we can see it another way and we never come to a full stop. In other words, we can 'think' about it. If there were only one way to represent this thing or idea, we would not call this representation thinking._ (Minsky, 1981)
Conversational agents such as ChatGPT display a remarkable ability to adapt and combine contexts in maintaining conversational coherence. This capability, where the machine can understand what we are saying well enough to maintain a conversation, enables a new kind of interaction, where the machine is able to acquire high quality training data seamlessly "from the wild" and learn in parallel with its operation.
As in deep learning, the exemplar in the General Intelligence paradigm is the deep neural network, whose properties we are now trying to understand, along with the general principles that underpin its performance. One such principle in the area of LLMs is that performance improves with increasing model complexity, data size, and compute power across a wide range of tasks (Kaplan et al., 2020). These "scaling laws of AI" indicate that predictive accuracy on the autocompletion task improves with increased compute power, model complexity, and data. If this measure of performance on autocompletion is a good proxy for General Intelligence, the scaling laws predict that LLMs should continue to improve with increases in compute power and data. A related phenomenon to performance improvement with scaling may be the "emergence" of _new abilities_ at certain tipping points of model size (Wei et al., 2022) that don't exist at smaller model sizes.
At the moment, there are no obvious limits to these dimensions in the development of General Intelligence. On the data front, for example, in addition to additional language data that will be generated by humans on the Internet, other modalities of data such as video are now becoming more widely available. Indeed, a fertile area of research is how machines will _integrate_ data from across multiple sensory modalities including vision, touch and smell, like humans are able to do. In short, we are in the early innings of the new paradigm, where we should see continued improvement of pre-trained models and General Intelligence with increases in the volume and variety of data and computing power. However, this should not distract us from the fact that several fundamental aspects of intelligence are still mysterious, and
unlikely to be answered solely by making existing models larger and more complex.
Nevertheless, it is worth noting that the DNN exemplar of General Intelligence has been adopted by a number of disciplines, including psychology, neuroscience, linguistics, and philosophy, that seek to _explain_ intelligence, meaning, and understanding. This has arguably made AI more interdisciplinary by unifying these disciplines with its engineering and design perspectives. Explaining and understanding the behavior of DNNs in terms of a set of core principles of the underlying disciplines is an active area of research in the current paradigm.
The progression towards General Intelligence has followed a path of increasing scope of machine intelligence. The first paradigm was "Learn from humans." The next one was "Learn from curated data." This was followed by "Learn from any kind of data." The current paradigm is "Learn from all kinds of data in a way that transfers to novel situations." The latest paradigm shift makes AI a general purpose technology and a commodity, one that should keep improving in quality with increasing amounts of data and computing power.
## 4 AI as a General-Purpose Technology
Paradigm shifts as defined by Kuhn are followed by periods of "normal science," where the details of the new paradigm are fleshed out. We are in the early stages of one such period.
Despite their current limitations, pre-trained LLMs and conversational AI have unleashed applications in language and vision, ranging from support services that require conversational expertise to creative tasks such as creating documents or videos. As the capability provided by these pre-trained models grows and becomes embedded in a broad range of industries and functions, AI is transitioning from a bespoke set of tools to a "General Purpose Technology," from which applications are assembled. Like electricity, intelligence becomes a commodity.
Economists use the term _general-purpose technology_ - of which electricity and the Internet are examples - to mean a new method for producing and inventing that is important enough to have a protracted aggregate economic impact across the economy (Jovanovic and Rousseau, 2005).
Bresnahan and Trajtenberg (1995) describe general purpose technologies in terms of three defining properties:
"_pervasiveness_ - _they are used as inputs by many downstream sectors), inherent potential for technical improvements, and innovational complementarities_ - the productivity of R&D in downstream sectors multiplies as a consequence of innovation in the general purpose technology, creating productivity gains throughout the economy."
How well does the General Intelligence paradigm of AI meet these criteria?
Arguably, AI is already pervasive, embedded increasingly in applications without our realization. And with the new high bandwidth human-machine interfaces enabled by conversational AI, the quality and volume of training data that machines like ChatGPT can now acquire as they operate is unprecedented. Other sensory data from video and other sources will continue to lead to improvements in pre-trained models and their downstream applications.
The last of the three properties, innovation complementarities, may take time to play out at the level of the economy. With previous technologies such as electricity and IT, growth rates were _below_ those attained in the decades immediately preceding their arrival (Jovanovic and Rousseau, 2005). This phenomenon was also observed by the economist Robert Solow
who famously commented that "IT was everywhere except in the productivity statistics" [14]. Erik Brynjolfsson and his colleagues subsequently explained Solow's observation in terms of the substantial complementary investments required to realize the benefits of IT [1], where productivity emerged after a significant lag. With electricity, for example, it took decades for society to realize its benefits, since motors needed to be replaced, factories needed redesign, and workforces needed to be reskilled. IT was similar, as was the Internet.
AI is similarly in its early stages, where businesses are scrambling to reorganize business processes and rethink the future of work. Just as electricity required the creation of an electric grid and the redesign of factories, AI will similarly require a redesign of business processes in order to realize productivity gains from this new general purpose technology [1]. Such improvements take time to play out, and depend on effective complementary investments in processes and technologies.
## 5 Challenges of Current Paradigm: Trust and Law
We should not assume that we have converged on the "right paradigm" for AI. The current paradigm will undoubtedly give way to one that addresses its shortcomings.
Indeed, paradigm shifts do not always improve on previous paradigms in every way, especially in their early stages, and the current paradigm is no exception. New theories often face resistance and challenges initially, while their details are being filled in [1]. For example, the Copernican revolution faced numerous challenges in explaining certain recorded planetary movements that were explained by the existing theory, until new methods and measurements emerged that provided strong support for the new theory [11]. Despite the current optimism about AI, the current paradigm faces a serious challenge of trust, that stems in large part from its representation of knowledge that is opaque to humans. Systems such as ChatGPT can be trained on orders of magnitude more cases than a human expert will encounter in their lifetime, but their ability to explain themselves and introspect is very limited relative to humans. And we can never be sure that they are correct, and not "hallucinating," that is, filling in their knowledge gaps with answers that look credible but are incorrect. It's like talking to someone intelligent that you can't always trust.
These problems will need to be addressed if we are to trust AI. Since the data for pre-trained models are not curated, they pick up on the falsehoods, biases, and noise in their training data. Systems built on LLMs can also be unpredictable and can exhibit racist or other kinds of undesirable social behavior that their designers didn't intend. While designers might take great care to prohibit undesirable behavior via training using "reinforcement learning from human feedback" (RLHF), such guardrails don't always work as intended. The machine is relatively inscrutable.
Making AI explainable and truthful is a big challenge. At the moment, it isn't obvious whether this problem is addressable solely by the existing paradigm, whether it will require a new paradigm, or whether it will be addressed via an integration of the symbolic and neural approaches to computation.
The unpredictability of AI systems built on pre-trained models also poses new problems for trust. The output of LLM-based AI systems on the same input can vary, a behavior we associate with humans but not machines [13]. To the contrary, we expect machines to be deterministic, not "noisy" or inconsistent like humans. Until now, we have expected consistency from machines.
While we might consider the machine's variance in decision-making as an indication of creativity - a human-like behavior - it poses severe risks, especially when combined with its inscrutability and an uncanny ability to mimic humans. Machines are already able to create "deep fakes" which can be indistinguishable from human creations. We are seeing the emergence of things like fake pornography, art, and documents. It is exceedingly difficult to detect plagiarism, or even to define plagiarism or intellectual property theft, given the large corpus of public information on which LLMs have been trained. Will such risks lie with the creators of pre-trained models, the applications that use them, or their users? Existing laws are not designed to address such problems, and will need to be expanded to recognize them, to limit their risks, and to specify culpability.
Finally, inscrutability also creates a larger, existential risk to humanity, which could become a crisis for the current paradigm. For example, in trying to achieve goals that we give the AI, such as "save the planet," we have no idea about the sub-goals the machine will create in order to achieve its larger goals. This is known as "the alignment problem," in that it is impossible to determine whether the machine's hidden goals are aligned with ours. In saving the planet, for example, the AI might determine that humans pose the greatest risk to its survival, and hence they should be contained or eliminated [1, 2, 3].
So, even as we celebrate AI as a technology that will have far-reaching impacts on society, economics, and humanity - potentially exceeding that of other general purpose technologies such as electric power and the Internet - trust and alignment remain disconcertingly unaddressed. They are the most pressing challenges that humanity faces today.
|
2310.16767 | Inversion Sets and Quotient Root Systems | We provide a recursive description of all decompositions of the positive
roots $R^+$ of a quotient root system $R$ into disjoint unions of inversion
sets. Our description is type-independent and generalizes the analogous result
for type $\mathbb A$ root systems in [USRA]. The main tool is the notion of an
inflation of a subset of a quotient root system. This new notion allows us to
treat all root systems (and their quotients) uniformly. We also obtain some
numerical results about the number of special decompositions. The new sequences
we obtain may be considered as extensions of Catalan numbers. | Ivan Dimitrov, Cole Gigliotti, Etan Ossip, Charles Paquette, David Wehlau | 2023-10-25T16:57:56Z | http://arxiv.org/abs/2310.16767v1 | # Inversion sets and quotient root systems
###### Abstract.
We provide a recursive description of all decompositions of the positive roots \(R^{+}\) of a quotient root system \(R\) into disjoint unions of inversion sets. Our description is type-independent and generalizes the analogous result for type \(\mathbb{A}\) root systems in [USRA]. The main tool is the notion of an inflation of a subset of a quotient root system. This new notion allows us to treat all root systems (and their quotients) uniformly. We also obtain some numerical results about the number of special decompositions. The new sequences we obtain may be considered as extensions of Catalan numbers.
Keywords: Root system, Quotient root system, Inversion set.
Footnote †: 2020 _Mathematics Subject Classification._ Primary 17B22; Secondary 17B20, 17B25, 22F30.
## Introduction
Let \(\Delta\) be a root system with Weyl group \(W\) and set of positive roots \(\Delta^{+}\). The inversion set of \(w\in W\) is the set
\[\Phi(w):=\{\alpha\in\Delta^{+}\,|\,w(\alpha)\in\Delta^{-}\}=\Delta^{+}\cap w^ {-1}(\Delta^{-})\,\]
where \(\Delta^{-}=-\Delta^{+}\). If \(\Delta\) is of type \(\mathbb{A}_{n}\), then \(\Delta^{+}\) can be identified with the set
\[\{(i,j)\in\mathbb{Z}\times\mathbb{Z}\,|\,1\leqslant i<j\leqslant n+1\}\,\]
\(W=S_{n+1}\) is the symmetric group on \(n+1\) elements, and the inversion set of \(\sigma\in S_{n+1}\) is the set
\[\Phi(\sigma)=\{(i,j)\in\Delta^{+}\,|\,\sigma(i)>\sigma(j)\}\.\]
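For concreteness, the following short computational sketch (purely illustrative, not part of the argument below) evaluates \(\Phi(\sigma)\) directly from this description:

```python
from itertools import combinations

def inversion_set(sigma):
    """Inversion set of a permutation of {1, ..., n+1} in type A_n.

    sigma is a list with sigma[i-1] equal to the image of i. A positive
    root is identified with a pair (i, j), i < j, and it belongs to
    Phi(sigma) exactly when sigma reverses the order of i and j.
    """
    n_plus_1 = len(sigma)
    return {(i, j)
            for i, j in combinations(range(1, n_plus_1 + 1), 2)
            if sigma[i - 1] > sigma[j - 1]}

# The permutation 1 -> 3, 2 -> 1, 3 -> 2 inverts the pairs (1, 2) and (1, 3).
print(sorted(inversion_set([3, 1, 2])))   # [(1, 2), (1, 3)]
```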
The problem of describing the decompositions of \(\Delta^{+}\) as a disjoint union of inversion sets arises in connection with studying the Littlewood-Richardson cone, see [USRA] and the references therein for details. In [USRA], a complete description of such decompositions was obtained in terms of inflations of permutations. The results for type \(\mathbb{A}\) are also carried over to types \(\mathbb{B}\) and \(\mathbb{C}\) by exploiting a realization of each of the latter root systems as a root system of type \(\mathbb{A}\) with additional symmetries. Unfortunately, the methods developed there do not apply to root systems of type \(\mathbb{D}\) and to exceptional root systems. The goal of this paper is to provide a uniform approach to studying decompositions of \(\Delta^{+}\) into a disjoint union of inversion sets.
The uniform approach to inversion sets and inflations requires expanding the class of root systems to include quotient root systems (QRSs for short). Roughly speaking, if \(\Delta\) is a root system with a base \(\Sigma\) and \(I\) is a subset of \(\Sigma\), the QRS \(\Delta/I\) is the image of \(\Delta\) under the natural projection \(\pi_{I}:\operatorname{span}\Delta\to\operatorname{span}\Delta/\operatorname{ span}I\). QRSs have been studied extensively in connection with Lie theory, [K], simplicial hyperplane arrangements, [Cu] and the references therein, and 3-fold flopping contractions, [IW], to mention a few. The notions of positive roots, bases, etc. extend to QRSs and the counterparts of Weyl groups are certain groupoids called Weyl groupoids.
Let \(R\) be a QRS with positive roots \(R^{+}\). By definition, a subset \(\Phi\subset R^{+}\) is an inversion set if \(\Phi\) is both closed and coclosed. Of course, for root systems, this definition is equivalent to the usual definition of an inversion set. The main result of this paper is Theorem 5.9 which describes recursively all decompositions of \(R^{+}\) into a disjoint union of inversion sets. The main tool in our description is the notion of inflation of a subset of a quotient of \(R^{+}\), see Section 3. In preparation for Theorem 5.9, we prove that every subset of \(R^{+}\) (not necessarily an inversion set) can be represented in a canonical way as the inflation of the empty subset, a primitive subset, or the whole of a quotient of \(R^{+}\), see Theorem 3.14.
If \(R\) is a root system of type \(\mathbb{A}\), the canonical form of an inversion set and the decomposition provided by Theorem 5.9 coincide with the ones from [USRA]. The reason for this is that every quotient of a root system of type \(\mathbb{A}\) is equivalent to a root system of type \(\mathbb{A}\) of smaller rank. For root systems of types \(\mathbb{B}\) and \(\mathbb{C}\) the situation is a bit more complicated since the quotients do not remain equivalent to root systems of the same type; however, as far as inversion sets are concerned these new QRSs behave exactly as root systems of types \(\mathbb{B}\) and \(\mathbb{C}\), see Remark 1.4. The situation is dramatically different for root systems of type \(\mathbb{D}\) and for exceptional root systems, which explains why these were beyond reach with the methods developed in [USRA].
The paper is organized as follows: In Section 1 we provide the necessary background on QRSs and inversion sets. In Section 2 we introduce and study paths of roots in QRSs. In Section 3 we define inflations, study their properties, and prove Theorem 3.14, which shows that every subset of \(R^{+}\) admits a canonical representation as an inflation. In Section 4 we introduce and study the properties of the main tool for proving the main theorem, namely the graph \(G_{\Phi}^{\Phi^{c}}\) associated to an inversion set \(\Phi\subset R^{+}\). In Section 5 we prove the main theorem of the paper, Theorem 5.9. In Section 6 we evaluate the number of fine decompositions of \(R^{+}\) into inversion sets. Note that root systems of type \(\mathbb{A}\) give rise to Catalan numbers, so the sequences we obtain in Section 6 may be considered as analogs and extensions of Catalan numbers.
## 1. Preliminaries
### Quotient Root Systems
In this section, after briefly recalling the notion of classical root system, we define quotient root systems. The idea is that quotient root systems are general enough to allow nice reductive arguments to understand root systems, but still close enough to classical root systems to share many similarities with them.
Let us now fix some notation and recall the notion of a root system. The reader is referred to [B] and [H] for basic properties of root systems. We fix a finite dimensional real vector space \(E\) equipped with the standard bilinear form \(\langle\cdot,\cdot\rangle\), called a Euclidean space. A _root system_\(\Delta\) is a finite subset of non-zero vectors in \(E\), called _roots_, satisfying the following properties with respect to the bilinear form:
1. The roots span \(E\);
2. If \(\alpha\in\Delta\) and \(r\) is a non-zero integer with \(r\alpha\) a root, then \(r=\pm 1\);
3. For \(\alpha\in\Delta\), the reflection with respect to the hyperplane \(H_{\alpha}\) of \(E\) perpendicular to \(\alpha\) sends a root to a root;
4. For \(\alpha,\beta\in\Delta\), the number \(2\frac{\langle\alpha,\beta\rangle}{\langle\alpha,\alpha\rangle}\) is an integer.
We let \(\Sigma\subset\Delta\) be a fixed _base_, that is, the set \(\Sigma\) forms a basis of \(E\) and each root of \(\Delta\) can be written as an integral combination of the elements from \(\Sigma\) with either all coefficients
non-negative, or all coefficients non-positive. When a base is given, a root such that all of its coefficients in that base are non-negative is called a _positive_ root (with respect to the given base). The other roots are called _negative_ roots. We let \(\Delta^{+}\) denote the set of positive roots with respect to \(\Sigma\) and \(\Delta^{-}\) that of negative roots. We have a disjoint union decomposition \(\Delta=\Delta^{+}\cup\Delta^{-}\). We say that \(\Delta\) is _irreducible_ when \(\Delta\) cannot be partitioned into two subsets of mutually orthogonal roots.
Next we introduce and briefly discuss quotient root systems. For more details on quotient root systems, we refer the reader to [K] and [DR]. For a subset \(I\subset\Sigma\), we let \(\Delta_{I}\) denote the subsystem \(\Delta_{I}=\Delta\cap\operatorname{span}I\). Clearly, \(\Delta_{I}\) is a root system (not necessarily irreducible) with a base \(I\) and a fixed set of positive roots \(\Delta_{I}^{+}=\Delta_{I}\cap\Delta^{+}\).
Now, we would like to endow the quotient space \(E/\operatorname{span}I\) with a bilinear form that makes the projection of the roots into a system with rich combinatorics. In order to do this, consider the canonical projection \(\pi_{I}:E\to E/\operatorname{span}I\). We identify the quotient \(E/\operatorname{span}I\) with the orthogonal complement of \(\operatorname{span}I\). Hence, \(E/\operatorname{span}I\) is endowed with a corresponding bilinear form that we also denote \(\langle\cdot,\cdot\rangle\). We let \(\Delta/I\) denote the non-zero images of elements of \(\Delta\). In other words, we set
\[\Delta/I:=\{\pi_{I}(\beta)\,|\,\beta\in\Delta\backslash\operatorname{span}I\}.\]
We call the set \(\Delta/I\) a _quotient root system_ or QRS for short while a given element from \(\Delta/I\) is called a _root_. As for classical root systems, a _base_\(\Sigma^{\prime}\) of such a QRS is a subset of the set of roots that forms a basis of the underlying vector space \(E/\operatorname{span}I\), and such that every root can be written as an integral combination of \(\Sigma^{\prime}\), and where the coefficients are either all non-negative (the root is then called positive with respect to \(\Sigma^{\prime}\)) or all non-positive (the root is then called negative). It is easy to see that
\[\Sigma/I:=\{\pi_{I}(\beta)\,|\,\beta\in\Sigma\backslash I\}\]
is a base of \(\Delta/I\). This gives rise to the corresponding sets \((\Delta/I)^{+}\) of positive roots and \((\Delta/I)^{-}\) of negative roots. We observe that we have the disjoint union decomposition \(\Delta/I=(\Delta/I)^{+}\cup(\Delta/I)^{-}\), where \((\Delta/I)^{+}=\pi_{I}(\Delta^{+})\backslash\{0\}\) and \((\Delta/I)^{-}=\pi_{I}(\Delta^{-})\backslash\{0\}\). The _rank_ of a QRS is just the dimension of the ambient vector space, or equivalently, the number of roots in a base.
**Remark 1.1**.:
1. Note that when \(I=\emptyset\), the QRS \(\Delta/I\) is canonically identified with the root system \(\Delta\). When considering QRSs, we will assume that \(I\neq\Sigma\), so that the vector space \(E/\operatorname{span}I\) is not trivial.
2. Unlike root systems, QRSs allow for multiples of roots which are also roots, i.e., a QRS may contain both \(\alpha\) and \(2\alpha\). A root \(\alpha\) for which \(\alpha/r\) is not a root for any \(r\) with \(|r|>1\) is called _primitive_.
We will usually denote a QRS by \(R\), with a given fixed base by \(\Sigma\). With respect to this base, \(R^{+}\) denotes the set of positive roots and \(R^{-}\) that of negative roots. The underlying root system giving rise to the QRS \(R\) will be denoted \(\Delta\).
#### 1.1.1. Coefficient vectors, partial order
Let \(R\) be a QRS with base \(\Sigma=\{\theta_{1},\ldots,\theta_{r}\}\). We can identify any root by its _coefficient vector_, which is just the vector in \(\mathbb{Z}^{r}\) collecting the coefficients of the root when writing it as an integral combination of the elements of the base. In other words, for a root \(\alpha\), we consider the expression \(\alpha=\sum_{i=1}^{r}d_{i}\theta_{i}\) and define
\(d(\alpha)=(d_{1},\ldots,d_{r})\) to be the coefficient vector of \(\alpha\), with respect to the base \(\Sigma\). By definition of a base, any coefficient vector is in \((\mathbb{Z}_{\geqslant 0})^{r}\) or in \((\mathbb{Z}_{\leqslant 0})^{r}\).
We can use these to define a partial order on roots (which again depends on \(\Sigma\)). For two roots \(\alpha,\beta\), we write \(\alpha\leqslant\beta\) if \(d(\alpha)\leqslant d(\beta)\), where for two vectors \(\vec{u},\vec{v}\in\mathbb{Z}^{r}\), we write \(\vec{u}\leqslant\vec{v}\) when \(u_{i}\leqslant v_{i}\) for \(i=1,2,\ldots,r\).
#### 1.1.2. Adjacency diagram and support
Let \(R\) be a QRS with base \(\Sigma=\{\theta_{1},\ldots,\theta_{r}\}\). The _adjacency diagram_ of \(R\) with respect to \(\Sigma\) is the graph on vertices \(\Sigma\) and such that there is an edge connecting \(\theta_{i}\) with \(\theta_{j}\) if and only if \(\langle\theta_{i},\theta_{j}\rangle<0\). We note that a QRS is connected precisely when its adjacency diagram is connected.
The _support_ of a root \(\alpha\), with respect to a given base \(\Sigma\), is
\[\operatorname{supp}(\alpha)=\{\theta_{i}\in\Sigma\mid d(\alpha)_{i}\neq 0\}.\]
If \(S\) is a set of roots, then we define its support \(\operatorname{supp}(S)\) to be the union of the supports of the roots in \(S\). It is not hard to see that the full subgraph of the adjacency diagram corresponding to the support of a root is always connected. This follows from the similar fact on classical root systems.
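These notions are elementary to compute once roots are encoded by their coefficient vectors. The helper functions below are a purely illustrative sketch of ours (roots are tuples of integers over a fixed base) and are not part of the development.

```python
def leq(alpha, beta):
    """The partial order: alpha <= beta iff d(alpha) <= d(beta) componentwise."""
    return all(a <= b for a, b in zip(alpha, beta))

def support(alpha):
    """Indices of the base elements occurring in alpha with a non-zero coefficient."""
    return {i for i, d in enumerate(alpha) if d != 0}

def support_of_set(S):
    """Union of the supports of the roots in S."""
    return set().union(*(support(a) for a in S)) if S else set()

# In type A_3: alpha_1 = (1,0,0) is below alpha_1 + alpha_2 = (1,1,0).
print(leq((1, 0, 0), (1, 1, 0)))                # True
print(support_of_set({(1, 1, 0), (0, 0, 1)}))   # {0, 1, 2}
```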
### Inversion Sets
In this subsection, we fix a QRS \(R\) with base \(\Sigma\). For a subset \(\Phi\) of \(R^{+}\), we denote by \(\Phi^{c}\) its complement in \(R^{+}\).
**Definition 1.2**.: Let \(\Phi\subseteq R^{+}\). We say that
1. \(\Phi\) is _closed_ if \(\alpha+\beta\in\Phi\) whenever \(\alpha,\beta\in\Phi\).
2. \(\Phi\) is _co-closed_ if \(\alpha+\beta\in\Phi^{c}\) whenever \(\alpha,\beta\in\Phi^{c}\).
3. \(\Phi\) is an _inversion set_ if it is both closed and co-closed.
**Remark 1.3**.: It is easy to see that intersections of closed sets are closed and unions of co-closed sets are co-closed.
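These conditions are finite and can be checked by brute force for a QRS given by an explicit list of positive roots. The sketch below is ours and follows the usual reading of the definition, namely that the sum condition only constrains pairs whose sum is again a positive root.

```python
from itertools import product

def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def is_closed(Phi, R_plus):
    """alpha, beta in Phi with alpha + beta in R^+ forces alpha + beta in Phi."""
    Phi = set(Phi)
    return all(vec_add(a, b) in Phi
               for a, b in product(Phi, repeat=2)
               if vec_add(a, b) in R_plus)

def is_coclosed(Phi, R_plus):
    return is_closed(set(R_plus) - set(Phi), R_plus)

def is_inversion_set(Phi, R_plus):
    return is_closed(Phi, R_plus) and is_coclosed(Phi, R_plus)

# Type A_2 with positive roots alpha_1, alpha_2, alpha_1 + alpha_2:
R_plus = {(1, 0), (0, 1), (1, 1)}
print(is_inversion_set({(1, 0), (1, 1)}, R_plus))   # True
print(is_inversion_set({(1, 0), (0, 1)}, R_plus))   # False: closure fails
```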
For classical root systems, the notion of inversion set takes its origin from the fact that \(\Phi\) is an inversion set precisely when there is an element \(w\) in the Weyl group of the root system such that
\[\Phi=\{\alpha\in R^{+}\mid w(\alpha)\in R^{-}\}.\]
As we will be interested in decomposing inversion sets, we need the following definition.
**Remark 1.4**.: A set \(\Phi\subset R^{+}\) is an inversion set if and only if it can be realized as the set of elements of \(R^{+}\) lying on one side of some hyperplane of \(E\). In particular, if two QRSs \(R_{1}\) and \(R_{2}\) have the same set of primitive elements, cf. Remark 1.1, then there is a natural bijection between the inversion sets in \(R_{1}^{+}\) and \(R_{2}^{+}\).
**Definition 1.5**.: Let \(\Phi\subseteq R^{+}\) be a non-empty inversion set. We say that \(\Phi\) is _irreducible_ if, whenever \(\Phi=\Phi_{1}\cup\Phi_{2}\) as a disjoint union of inversion sets, then \(\Phi_{1}=\Phi\) or \(\Phi_{2}=\Phi\).
### Decomposition of inversion sets
We will be interested in decomposing an inversion set into disjoint union of inversion sets. We have the following definition.
**Definition 1.6**.: Let \(\Phi\) be an inversion set in \(R^{+}\). A _decomposition_ of \(\Phi\) is an expression
\[\Phi=\Phi_{1}\sqcup\cdots\sqcup\Phi_{r}\]
where all \(\Phi_{i}\) are non-empty pairwise disjoint inversion sets. Such a decomposition is _fine_ if \(r\) is the rank of \(R\).
**Proposition 1.7**.: _Let \(\Phi_{1}\sqcup\cdots\sqcup\Phi_{r}=\Phi\) be a decomposition of an inversion set \(\Phi\). Then for each \(1\leqslant k\leqslant r\), \(\Phi_{1}\sqcup\cdots\sqcup\Phi_{k}\) is an inversion set._
Proof.: It suffices to prove the statement for \(k=2\); the general case follows by the same argument. It is clear that \(\Phi_{1}\sqcup\Phi_{2}\) is co-closed as the union of co-closed sets. Further, \(\Phi_{1}\sqcup\Phi_{2}\) is closed as the complement of \(\Phi_{3}\sqcup\cdots\sqcup\Phi_{r}\sqcup\Phi^{c}\), which is co-closed.
## 2. Paths in QRSs
One difficulty of working with QRSs (as well as in root systems themselves) is that sums of roots are not in general roots, and it is unclear whether given a system \(R\), roots \(\alpha,\beta\in R\), and a set \(K\subseteq R\) with \(\beta-\alpha\in\operatorname{span}K\), there is some way to add roots in \(K\) to get from \(\alpha\) to \(\beta\) without leaving \(R\). This question motivates the definition of paths in QRSs and is answered in the affirmative in Proposition 2.7.
**Definition 2.1**.: A _path_ between two roots \(\alpha\) and \(\beta\) is a collection of roots \(\kappa_{1},\kappa_{2},\ldots,\kappa_{n}\) such that \(\alpha+\kappa_{1}+\cdots+\kappa_{n}=\beta\) and \(\alpha+\kappa_{1}+\cdots+\kappa_{i}\in R\) for all \(i\). We denote this path \([\alpha;\kappa_{1},\ldots,\kappa_{n};\beta]\) and call each \(\kappa_{i}\) a _step_.
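Checking whether a given sequence of steps forms a path only requires a membership test for the QRS. The following small sketch is ours (`is_root` is an oracle supplied by the caller, and roots are again coefficient vectors) and only makes the definition concrete.

```python
def is_path(alpha, steps, beta, is_root):
    """Check that [alpha; steps; beta] is a path: every partial sum is a root."""
    current = alpha
    for kappa in steps:
        current = tuple(x + y for x, y in zip(current, kappa))
        if not is_root(current):
            return False
    return current == beta

# Type A_2: [alpha_1; alpha_2; alpha_1 + alpha_2] is a path.
A2 = {(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)}
print(is_path((1, 0), [(0, 1)], (1, 1), A2.__contains__))   # True
```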
### Existence of Paths
Seeing as the key condition in a path is that the partial sums are roots, it will be essential to have some sufficient condition for the sum of two roots to be a root. The following proposition is part of Theorem 2.3 in [K].
**Proposition 2.2**.: _Let \(\alpha,\beta\in R\). If \(\langle\alpha,\beta\rangle<0\), then \(\alpha+\beta\in R\). If \(\langle\alpha,\beta\rangle>0\), then \(\alpha-\beta\in R\)._
**Proposition 2.3**.: _Let \(\alpha,\beta,\kappa_{1},\ldots,\kappa_{n}\in R\). If \(\beta=\alpha+\kappa_{1}+\cdots+\kappa_{n}\) where \(\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}\notin R\), then \(\alpha+\kappa_{i}\in R\) for some \(1\leqslant i\leqslant n\)._
Proof.: Since \(\beta\) is a root but \(\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}\) is not, \(\alpha\neq 0\). Next, because \(\beta-\alpha\notin R\), \(0\geqslant\langle\alpha,\beta\rangle=\langle\alpha,\alpha\rangle+\langle \alpha,\kappa_{1}\rangle+\cdots+\langle\alpha,\kappa_{n}\rangle\). Since \(\langle\alpha,\alpha\rangle>0\), there must be at least one \(\kappa_{i}\) such that \(\langle\alpha,\kappa_{i}\rangle<0\), so that \(\alpha+\kappa_{i}\in R\).
**Lemma 2.4**.: _Let \(\alpha,\beta,\gamma\) be roots such that \(\alpha+\beta+\gamma\in R\) but \(\beta+\gamma\notin R\). If \(\alpha+\gamma\neq 0\), then \(\alpha+\beta\in R\). If \(\alpha+\beta\neq 0\), then \(\alpha+\gamma\in R\)._
Proof.: We assume that \(\alpha+\gamma\neq 0\) and show that \(\alpha+\beta\in R\). It suffices to show that either \(\langle\alpha,\beta\rangle<0\) or \(\langle\alpha+\beta+\gamma,\gamma\rangle>0\). For that reason, we will assume \(\langle\alpha,\beta\rangle\geqslant 0\) and show that \(\langle\alpha+\beta+\gamma,\gamma\rangle>0\). Note that because \(\beta+\gamma\notin R\), both \(\langle\beta,\gamma\rangle\geqslant 0\) and \(\langle\alpha+\beta+\gamma,\alpha\rangle\leqslant 0\). We extract from these inequalities that \(\langle\alpha+\gamma,\alpha\rangle\leqslant-\langle\beta,\alpha\rangle\leqslant 0\). Because \(\langle\alpha+\gamma,\alpha+\gamma\rangle>0\) and \(\langle\alpha+\gamma,\alpha\rangle\leqslant 0\), we now have that \(\langle\alpha+\gamma,\gamma\rangle>0\). Finally, we note that \(\langle\alpha+\beta+\gamma,\gamma\rangle=\langle\alpha+\gamma,\gamma\rangle+ \langle\beta,\gamma\rangle>0\), and so we conclude that \(\alpha+\beta\in R\). By the symmetry of the statement, \(\alpha+\gamma\) is a root provided \(\alpha+\beta\neq 0\).
**Remark 2.5**.: In cases where \(\alpha+\beta\neq 0\) and \(\alpha+\gamma\neq 0\), for example when \(\alpha,\beta,\gamma>0\), the lemma amounts to the statement that if \(\alpha,\beta,\gamma,\) and \(\alpha+\beta+\gamma\) are roots, then at least two of \(\alpha+\beta\), \(\alpha+\gamma\), and \(\beta+\gamma\) are roots.
**Proposition 2.6**.: _Let \(\alpha,\beta,\kappa_{1},\ldots,\kappa_{n}\in R\). If \(\beta=\alpha+\kappa_{1}+\cdots+\kappa_{n}\) where \(\mu:=\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}\in R\) and \(\mu\neq 0\), then there exists some \(\kappa_{i}\) such that \(\alpha+\kappa_{i}\in R\) or \(\alpha+(\mu-\kappa_{i})\in R\)._
Proof.: Since \(\langle\mu,\kappa_{1}\rangle+\langle\mu,\kappa_{2}\rangle+\cdots+\langle\mu, \kappa_{n}\rangle=\langle\mu,\mu\rangle>0\), there is some \(\kappa_{i}\) such that \(\langle\mu,\kappa_{i}\rangle>0\) and \(\mu-\kappa_{i}\in R\). We then apply Lemma 2.4 to the expression \(\beta=(\mu-\kappa_{i})+\alpha+\kappa_{i}\) to conclude
that if \(\alpha+\kappa_{i}\notin R\), then \(\mu-\kappa_{i}+\alpha\in R\), provided that \((\mu-\kappa_{i})+\kappa_{i}\neq 0\), which is true by assumption.
**Proposition 2.7**.: _Let \(\beta=\alpha+\kappa_{1}+\cdots+\kappa_{n}\) for roots \(\alpha,\beta,\) and \(\kappa_{i}\), where no subcollection of \(\kappa_{1},\kappa_{2},\ldots,\kappa_{n}\) sums to 0. There is a path \([\alpha;\mu_{1},\mu_{2},\ldots,\mu_{n};\beta]\) from \(\alpha\) to \(\beta\) for some permutation \(\mu_{1},\mu_{2},\ldots,\mu_{n}\) of \(\kappa_{1},\kappa_{2},\ldots,\kappa_{n}\)._
Proof.: We prove the sufficient claim that, given any expression \(\beta^{\prime}=\alpha^{\prime}+\kappa_{1}^{\prime}+\cdots+\kappa_{n}^{\prime}\) for roots \(\alpha^{\prime},\beta^{\prime},\kappa_{1}^{\prime},\ldots,\kappa_{n}^{\prime}\), there is some \(1\leqslant i\leqslant n\) such that \(\alpha^{\prime}+\kappa_{i}^{\prime}\in R\). We prove this by induction on \(n\), the length of the expression, with the base case being trivial. In the inductive step, we write \(\beta^{\prime}=\alpha^{\prime}+\kappa_{1}^{\prime}+\cdots+\kappa_{n}^{\prime}\) and let \(\mu:=\kappa_{1}^{\prime}+\kappa_{2}^{\prime}+\cdots+\kappa_{n}^{\prime}\). If \(\mu\notin R\), then by Proposition 2.3, there exists some \(\kappa_{i}^{\prime}\) such that \(\alpha^{\prime}+\kappa_{i}^{\prime}\in R\). If \(\mu\in R\), we use Proposition 2.6 to select some \(\kappa_{i}^{\prime}\) such that either \(\alpha^{\prime}+\kappa_{i}^{\prime}\in R\) or \(\alpha^{\prime}+(\mu-\kappa_{i}^{\prime})\in R\). If \(\alpha^{\prime}+\kappa_{i}^{\prime}\in R\), we are done. Otherwise, \(\alpha^{\prime}+(\mu-\kappa_{i}^{\prime})\in R\) is an expression with one fewer term than \(\alpha^{\prime}+\mu\), so we obtain our result by our inductive hypothesis.
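The proof above is constructive: at every stage at least one of the remaining steps can be added without leaving \(R\). A naive search exploiting this (our own sketch, with the same conventions as before) is shown below; it can return `None` only when the hypotheses of Proposition 2.7 are violated.

```python
def find_path(alpha, kappas, is_root):
    """Order the given steps into a path starting at alpha (cf. Proposition 2.7)."""
    def add(a, b):
        return tuple(x + y for x, y in zip(a, b))

    remaining, current, order = list(kappas), alpha, []
    while remaining:
        for kappa in remaining:
            if is_root(add(current, kappa)):   # Proposition 2.7 guarantees such a step exists
                current = add(current, kappa)
                order.append(kappa)
                remaining.remove(kappa)
                break
        else:
            return None                        # only reachable if the hypotheses fail
    return order

# Type A_2: starting at alpha_1, the step alpha_2 must be taken before -alpha_1.
A2 = {(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)}
print(find_path((1, 0), [(-1, 0), (0, 1)], A2.__contains__))   # [(0, 1), (-1, 0)]
```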
### Reduced Paths
While Proposition 2.7 describes the existence of paths between roots, there are often multiple ways to rearrange the steps in a path without breaking the condition that partial sums be roots. Having the freedom to permute the steps in a path may allow us to ensure that all partial sums are positive, for example, or stay within an inversion set. We therefore introduce the notion of a reduced path, and show in Proposition 2.12 that, under some conditions, every permutation of the steps of a reduced path defines another reduced path.
**Definition 2.8**.: A path \([\alpha;\kappa_{1},\ldots,\kappa_{n};\beta]\) is _reduced_ if \(\kappa_{i}+\kappa_{j}\notin R\) for any \(1\leqslant i\neq j\leqslant n\).
**Proposition 2.9**.: _No non-trivial sum of steps in a reduced path is a root. More strongly, if \(\kappa_{1},\kappa_{2},\ldots,\kappa_{n}\in R\) with \(\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}\in R\) and \(n>1\), then for any \(i\leqslant n\), there exists some \(i\neq j\leqslant n\) such that \(\kappa_{i}+\kappa_{j}\in R\)._
Proof.: This follows from Proposition 2.7, by forming a path from \(\kappa_{i}\) to \(\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}\) with steps in \(\{\kappa_{j}\mid j\neq i\}\).
**Proposition 2.10**.: _Let \([\alpha;\kappa_{1},\ldots,\kappa_{n};\beta]\) be a path. There exists a reduced path \([\alpha;\mu_{1},\ldots,\mu_{m};\beta]\) between \(\alpha\) and \(\beta\) with each \(\mu_{i}\) being a sum of a subcollection of the steps \(\kappa_{1},\kappa_{2},\ldots,\kappa_{n}\) of the original path._
Proof.: We define an iterative process on the collection of roots \(\kappa_{1},\kappa_{2},\ldots,\kappa_{n}\). At each step, we replace two roots whose sum is a root with that sum, decreasing the size of the collection but preserving its sum. When this process terminates, which it necessarily does since it decreases the length of the collection at each step, we are left with some \(\mu_{1},\mu_{2},\ldots,\mu_{m}\) with \(m\leqslant n\) and \(\mu_{i}\in R\) for all \(i\). By Proposition 2.7, we can order these \(\mu_{i}\) as steps to form a path from \(\alpha\) to \(\beta\), avoiding the condition that any subset of them sum to 0 through Proposition 2.9. This is our desired reduced path.
**Remark 2.11**.: Given some initial path, the procedure outlined in the above proof does not produce a unique output. There may be more than one way to reduce a path.
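The reduction procedure in the proof of Proposition 2.10 is effective. The sketch below (ours) merges pairs of steps whose sum is a root until no such pair remains; as noted in Remark 2.11, the outcome can depend on the order in which pairs are merged.

```python
from itertools import combinations

def reduce_steps(steps, is_root):
    """Merge pairs of steps whose sum is a root (cf. Proposition 2.10).

    Returns a collection of roots with the same total sum in which no two
    entries add up to a root; by Proposition 2.7 it can then be reordered
    into a reduced path.
    """
    steps = list(steps)
    merged = True
    while merged:
        merged = False
        for i, j in combinations(range(len(steps)), 2):
            s = tuple(x + y for x, y in zip(steps[i], steps[j]))
            if is_root(s):
                steps = [k for t, k in enumerate(steps) if t not in (i, j)] + [s]
                merged = True
                break
    return steps

# Type A_2: the steps alpha_1 and alpha_2 merge into the single step alpha_1 + alpha_2.
A2 = {(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)}
print(reduce_steps([(1, 0), (0, 1)], A2.__contains__))   # [(1, 1)]
```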
**Proposition 2.12**.: _Let \([\alpha;\kappa_{1},\ldots,\kappa_{n};\beta]\) be a reduced path with \(\alpha\neq-\kappa_{i}\) for any \(1\leqslant i\leqslant n\). For any permutation \(\mu_{1},\mu_{2},\ldots,\mu_{n}\) of \(\kappa_{1},\kappa_{2},\ldots,\kappa_{n}\), \([\alpha;\mu_{1},\ldots,\mu_{n};\beta]\) is a reduced path._
Proof.: We prove that any adjacent transposition of steps in \([\alpha;\kappa_{1},\ldots,\kappa_{n};\beta]\) results in another reduced path, from which the desired result follows. To that end, we take some \(1\leqslant i<n\) and consider swapping the positions of \(\kappa_{i}\) and \(\kappa_{i+1}\). The resulting sequence of partial sums is the same as that of the original path, except \(\alpha+\kappa_{1}+\cdots+\kappa_{i-1}+\kappa_{i}\) is replaced by \(\alpha+\kappa_{1}+\cdots+\kappa_{i-1}+\kappa_{i+1}\). We therefore need only to show that this is a root. To that end, we apply Lemma 2.4 to the expression \((\alpha+\kappa_{1}+\cdots+\kappa_{i-1})+\kappa_{i}+\kappa_{i+1}\) to conclude that \(\alpha+\kappa_{1}+\cdots+\kappa_{i-1}+\kappa_{i+1}\in R\) provided \(\alpha+\kappa_{1}+\cdots+\kappa_{i-1}+\kappa_{i}\neq 0\). If \(i=1\), this is guaranteed by the condition that \(\alpha\neq-\kappa_{j}\) for any \(1\leqslant j\leqslant n\). If \(i>1\), then we would have a non-trivial sum of steps in a reduced path equal to \(-\alpha\), which is impossible by Proposition 2.9. We therefore conclude that \([\alpha;\kappa_{1},\ldots,\kappa_{i+1},\kappa_{i},\ldots,\kappa_{n};\beta]\) is another reduced path.
**Example 2.13**.: In the root system \(D_{5}\) we have the following equation,
\[010^{0}_{0}-000^{0}_{1}+001^{0}_{1}+001^{1}_{1}+100^{0}_{0}=112^{1}_{1}\]
and indeed these roots can be arranged to form the path
\[\left[010^{0}_{0};100^{0}_{0},001^{1}_{1},-000^{0}_{1},001^{0}_{1};112^{1}_{1} \right].\]
We can reduce this path, following the procedure outlined in Proposition 2.10, obtaining
\[\left[010^{0}_{0};100^{0}_{0},001^{1}_{0},001^{0}_{1};112^{1}_{1}\right],\]
which is reduced. To verify Proposition 2.12, we check that each permutation of the steps in the above path properly defines a path.
\[\begin{gathered}\left[010^{0}_{0};100^{0}_{0},001^{1}_{0},001^{0}_{1};112^{1}_{1}\right]\qquad\left[010^{0}_{0};100^{0}_{0},001^{0}_{1},001^{1}_{0};112^{1}_{1}\right]\\ \left[010^{0}_{0};001^{1}_{0},100^{0}_{0},001^{0}_{1};112^{1}_{1}\right]\qquad\left[010^{0}_{0};001^{0}_{1},100^{0}_{0},001^{1}_{0};112^{1}_{1}\right]\\ \left[010^{0}_{0};001^{1}_{0},001^{0}_{1},100^{0}_{0};112^{1}_{1}\right]\qquad\left[010^{0}_{0};001^{0}_{1},001^{1}_{0},100^{0}_{0};112^{1}_{1}\right]\end{gathered}\]
It is not too difficult to see that every partial sum along each of these paths is a root in \(D_{5}\).
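The partial sums above are easy to verify mechanically. In the sketch below (ours) we use the standard realization of \(D_{5}\) with roots \(\pm e_{i}\pm e_{j}\), \(i\neq j\), take \(\alpha_{i}=e_{i}-e_{i+1}\) for \(i\leqslant 4\) and \(\alpha_{5}=e_{4}+e_{5}\), and read the superscript and subscript in the notation of the example as the coefficients of the two fork nodes; only the fact that all partial sums are roots is being checked.

```python
from itertools import permutations

# Simple roots of D_5 in e-coordinates (a standard choice, assumed here).
SIMPLE = [(1, -1, 0, 0, 0), (0, 1, -1, 0, 0), (0, 0, 1, -1, 0),
          (0, 0, 0, 1, -1), (0, 0, 0, 1, 1)]

def to_e(coeffs):
    """Coefficient vector (c_1, ..., c_5) -> sum c_i * alpha_i in e-coordinates."""
    return tuple(sum(c * s[k] for c, s in zip(coeffs, SIMPLE)) for k in range(5))

def is_root(v):
    """Roots of D_5 are +-e_i +- e_j with i != j."""
    nonzero = [x for x in v if x != 0]
    return len(nonzero) == 2 and all(abs(x) == 1 for x in nonzero)

start = to_e((0, 1, 0, 0, 0))                     # 010^0_0
steps = [to_e((1, 0, 0, 0, 0)),                   # 100^0_0
         to_e((0, 0, 1, 1, 0)),                   # 001^1_0
         to_e((0, 0, 1, 0, 1))]                   # 001^0_1

def partial_sums(alpha, kappas):
    sums, current = [], alpha
    for kappa in kappas:
        current = tuple(x + y for x, y in zip(current, kappa))
        sums.append(current)
    return sums

# Every ordering of the three steps yields a path: all partial sums are roots.
print(all(all(is_root(p) for p in partial_sums(start, order))
          for order in permutations(steps)))       # True
```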
The following proposition will be useful in Section 5. Its proof uses some properties of reduced paths that we just proved.
**Proposition 2.14**.: _If \(\Phi\) is a co-closed subset of \(R^{+}\), then \(\operatorname{span}_{\mathbb{Z}}\Phi=\operatorname{span}_{\mathbb{Z}} \operatorname{supp}\Phi\)._
Proof.: It follows from the definition of \(\operatorname{supp}\Phi\) that \(\operatorname{span}_{\mathbb{Z}}\Phi\subseteq\operatorname{span}_{\mathbb{Z} }\operatorname{supp}\Phi\). To demonstrate the reverse inclusion, we take \(\theta\in\operatorname{supp}\Phi\) and show that it is in \(\operatorname{span}_{\mathbb{Z}}\Phi\). This is obvious if \(\theta\in\Phi\), so we consider the case where \(\theta\notin\Phi\). Since \(\theta\in\operatorname{supp}\Phi\), there is some \(\alpha\in\Phi\) such that \(\theta<\alpha\). We may therefore, appealing to Proposition 2.7, form a path from \(\theta\) to \(\alpha\) with positive simple roots as steps. By Proposition 2.10, we may reduce this path into \([\theta;\kappa_{1},\ldots,\kappa_{n};\alpha]\) with positive steps. Using Proposition 2.12, we take the ordering of the \(\kappa_{i}\)'s to be as follows, with those in \(\Phi\) first, and later those in \(\Phi^{c}\).
\[[\theta;\underbrace{\kappa_{1},\ldots,\kappa_{m}}_{\kappa_{i}\in\Phi}, \underbrace{\kappa_{m+1},\ldots,\kappa_{n}}_{\kappa_{i}\in\Phi^{c}};\alpha]\]
Co-closure implies that \(\theta+\kappa_{1}+\ldots+\kappa_{m}\in\Phi\), from which it follows that \(\theta\in\operatorname{span}_{\mathbb{Z}}\Phi\).
## 3. Inflations
**Definition 3.1**.: Let \(I\subseteq S\) and \(\Phi\subseteq R^{+}\). We say that \(\Phi\) is an inflation from \(I\) of \(\Psi\subseteq(R/I)^{+}\) by \(X\subseteq R^{+}_{I}\) if \(\Phi=\pi_{I}^{-1}(\Psi)\cup X\) where \(\pi_{I}\) is the canonical projection from \(E\) onto \(E/\mathrm{span}\,I\). We write \(\Phi=\inf_{I}^{S}(\Psi,X)\).
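In terms of coefficient vectors, \(\pi_{I}\) simply forgets the coordinates indexed by \(I\), since the images of the remaining base elements form a base of \(R/I\). The following brute-force sketch (ours; \(I\) is a set of base indices and \(\Psi\) is given by coefficient vectors over the base of \(R/I\)) computes an inflation directly from the definition.

```python
def project(alpha, I):
    """pi_I on the level of coefficient vectors: drop the coordinates indexed by I."""
    return tuple(a for i, a in enumerate(alpha) if i not in I)

def inflate(Psi, X, I, R_plus):
    """inf_I^S(Psi, X) = pi_I^{-1}(Psi) union X, computed inside the given R^+."""
    preimage = {alpha for alpha in R_plus
                if any(c != 0 for c in project(alpha, I))   # pi_I(alpha) is non-zero
                and project(alpha, I) in Psi}
    return preimage | set(X)

# Type A_2 with S = {alpha_1, alpha_2} and I = {alpha_2} (index 1):
R_plus = {(1, 0), (0, 1), (1, 1)}
print(inflate({(1,)}, set(), {1}, R_plus))   # {(1, 0), (1, 1)}: the full preimage of the root of R/I
```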
**Definition 3.2**.: Let \(\alpha,\beta\in R^{+}\) and \(\Phi\subseteq R^{+}\). Then we write \(\alpha\sim_{\Phi}\beta\) if either both \(\alpha,\beta\in\Phi\) or both \(\alpha,\beta\notin\Phi\).
**Remark 3.3**.: We will often use the following simple properties of inflations without explicit reference.
1. Every \(\Phi\subseteq R^{+}\) is an inflation \(\Phi=\inf_{S}^{S}(\emptyset,\Phi)\) as well as \(\Phi=\inf_{\emptyset}^{S}(\Phi,\emptyset)\).
2. Let \(I\subseteq S\). If \(\inf_{I}^{S}(\Psi_{1},X_{1})=\inf_{I}^{S}(\Psi_{2},X_{2})\), then \(\Psi_{1}=\Psi_{2}\) and \(X_{1}=X_{2}\).
3. If \(\Phi=\inf_{I}^{S}(\Psi,X)\), then \(\Phi^{c}=\inf_{I}^{S}(\Psi^{c},X^{c})\).
4. \(\inf_{I}^{S}(\Psi_{1},X_{1})\cup\inf_{I}^{S}(\Psi_{2},X_{2})=\inf_{I}^{S}(\Psi _{1}\cup\Psi_{2},X_{1}\cup X_{2})\)
The following two propositions are easily verified.
**Proposition 3.4**.: _Let \(\Phi\subseteq R^{+}\) and \(I\subseteq S\). The following are equivalent:_
1. \(\Phi\) _is inflated from_ \(I\)_._
2. _For any_ \(\alpha,\beta\in R^{+}\) _such that_ \(\pi_{I}(\alpha)=\pi_{I}(\beta)\neq 0\)_,_ \(\alpha\sim_{\Phi}\beta\)_._
3. _For any_ \(\alpha\in R^{+}\) _and_ \(\theta\in\pm I\) _such that_ \(\pi_{I}(\alpha)\neq 0\) _and_ \(\alpha+\theta\in R^{+}\)_,_ \(\alpha+\theta\sim_{\Phi}\alpha\)_._
4. _For any_ \(\overline{\alpha}\in R/I\)_,_ \(\pi_{I}^{-1}(\overline{\alpha})\subseteq\Phi\) _or_ \(\pi_{I}^{-1}(\overline{\alpha})\subseteq\Phi^{c}\)_._
5. \(\pi_{I}(\Phi)\cap\pi_{I}(\Phi^{c})\subseteq\{0\}\)_._
**Proposition 3.5**.: _Let \(I\subseteq J\subseteq S\) and let \(X\subseteq R^{+}_{I}\), \(T\subseteq R^{+}_{J/I}\), and \(\Psi\subseteq R^{+}_{S/J}\). Then_
\[\inf_{I}^{S}(\inf_{J/I}^{S/I}(\Psi,T),X)=\inf_{J}^{S}(\Psi,\inf_{I}^{J}(T,X)) \tag{3.1}\]
Here we use the natural isomorphism \((S/I)/(J/I)\cong S/J\) to make sense of \(\inf_{J/I}^{S/I}(\Psi,T)\).
**Proposition 3.6**.: _Assume \(I\subseteq J\) and \(\Phi=\inf_{I}^{S}(\Psi,X)=\inf_{J}^{S}(\Theta,Y)\). Then \(\Psi=\inf_{J/I}^{S/I}(\Theta,Z)\) for some \(Z\subseteq R_{J/I}\)._
Proof.: Note that \(Y\subset\mathrm{span}\,J\subset\mathrm{span}\,S\). Since \(Y=\Phi\cap\mathrm{span}\,J\), using Proposition 3.4 (ii), one verifies easily that \(Y=\inf_{I}^{J}(Z,X)\) for some \(Z\subset\mathrm{span}\,J/I\). Then (3.1) above implies
\[\Phi=\inf_{I}^{S}(\Psi,X)=\inf_{J}^{S}(\Theta,Y)=\inf_{J}^{S}(\Theta,\inf_{I}^ {J}(Z,X))=\inf_{I}^{S}(\inf_{J/I}^{S/I}(\Theta,Z),X)\,\]
proving that \(\Psi=\inf_{J/I}^{S/I}(\Theta,Z)\).
**Lemma 3.7**.: _Let \(I\subseteq S\) and \(\overline{\alpha},\overline{\beta},\overline{\gamma}\) be non-zero elements in \(R/I\) such that \(\overline{\gamma}=\overline{\alpha}+\overline{\beta}\). Then_
1. _If_ \(\gamma\in\pi_{I}^{-1}(\overline{\gamma})\)_, then there are_ \(\alpha\in\pi_{I}^{-1}(\overline{\alpha})\) _and_ \(\beta\in\pi_{I}^{-1}(\overline{\beta})\) _with_ \(\gamma=\alpha+\beta\)_._
2. _If_ \(\alpha\in\pi_{I}^{-1}(\overline{\alpha})\)_, then there are_ \(\beta\in\pi_{I}^{-1}(\overline{\beta})\) _and_ \(\gamma\in\pi_{I}^{-1}(\overline{\gamma})\) _with_ \(\gamma=\alpha+\beta\)_._
Proof.: Statement (i) follows from [K, Theorems 1.9 and 2.3] and statement (ii) follows from (i) by considering the elements \(\overline{\alpha}=\overline{\gamma}+(-\overline{\beta})\).
**Proposition 3.8**.: _Let \(\Phi\subseteq R^{+}\) with \(\Phi=\inf_{I}^{S}(\Psi,X)\). Then \(\Phi\) is an inversion set if and only if both \(\Psi\) and \(X\) are inversion sets._
### The set \(\operatorname{Gen}(\Phi)\)
**Definition 3.9**.: Given \(\Phi\subseteq R^{+}\), define
\[\operatorname{Gen}(\Phi):=\{K\subseteq S\,|\,\Phi=\inf_{K}^{S}(\Theta,Y)\text{ for some }\Theta\text{ and some }Y\}\,\]
the sets from which \(\Phi\) is inflated.
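Characterization (v) of Proposition 3.4 gives a direct membership test, so \(\operatorname{Gen}(\Phi)\) can be enumerated by brute force for small examples. The sketch below is ours and uses the same coefficient-vector encoding as the earlier sketches.

```python
from itertools import chain, combinations

def project(alpha, K):
    return tuple(a for i, a in enumerate(alpha) if i not in K)

def is_inflated_from(Phi, K, R_plus):
    """Proposition 3.4(v): pi_K(Phi) and pi_K(Phi^c) share no non-zero image."""
    def nonzero_images(S):
        return {project(a, K) for a in S if any(c != 0 for c in project(a, K))}
    return not (nonzero_images(Phi) & nonzero_images(set(R_plus) - set(Phi)))

def Gen(Phi, R_plus, rank):
    subsets = chain.from_iterable(combinations(range(rank), r) for r in range(rank + 1))
    return [set(K) for K in subsets if is_inflated_from(Phi, set(K), R_plus)]

# Type A_2: Phi = {alpha_1, alpha_1 + alpha_2} is inflated from the empty set, {alpha_2}, and S.
R_plus = {(1, 0), (0, 1), (1, 1)}
print(Gen({(1, 0), (1, 1)}, R_plus, 2))   # [set(), {1}, {0, 1}]
```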
**Remark 3.10**.: The following properties of \(\operatorname{Gen}(\Phi)\) are obvious:
1. \(\emptyset,S\in\operatorname{Gen}(\Phi)\)
2. \(\operatorname{Gen}(\Phi)=\{\emptyset,S\}\) if and only if \(\Phi\) is primitive.
3. \(\operatorname{Gen}(\Phi)=2^{S}\) if and only if \(\Phi=\emptyset\) or \(\Phi=R^{+}\).
4. \(\operatorname{Gen}(\Phi)=\operatorname{Gen}(\Phi^{c})\).
**Proposition 3.11**.: _Let \(\Phi\subseteq R^{+}\), and \(I,J\in\operatorname{Gen}(\Phi)\). Then \(I\cup J\in\operatorname{Gen}(\Phi)\) and \(I\cap J\in\operatorname{Gen}(\Phi)\)._
Proof.: The statement about \(I\cap J\) is straightforward. The proof about \(I\cup J\) is an immediate consequence of the fact that, for any \(K\subset S\) and any non-zero \(\nu\in R/K\), the set \(\pi_{K}^{-1}(\nu)\) is connected by elements of \(K\), which is a consequence of Proposition 2.7.
### Canonical Form
The aim of this subsection is to provide a canonical way to describe a subset of \(R^{+}\) as an inflation.
**Definition 3.12**.: \(\Phi\subset R^{+}\) is _primitive_ if \(\Phi\neq\emptyset,\Phi\neq R^{+}\), and \(\Phi=\inf_{I}^{S}(\Psi,X)\) implies that either \(I=\emptyset\) or \(I=S\).
**Remark 3.13**.: \(\Phi\subset R^{+}\) is primitive if and only if \(\Phi^{c}\) is primitive.
**Theorem 3.14**.: _Let \(R\) be a connected QRS, and let \(\Phi\subseteq R^{+}\). Then \(\Phi=\inf_{I}^{S}(\Psi,X)\) where \(I\) is a proper subset of \(S\) and one of the following mutually exclusive alternatives holds:_
1. \(\Psi\) _is primitive;_
2. \(\Psi=\emptyset\) _or_ \(\Psi=(R/I)^{+}\)_, and_ \(I\) _is smallest with this property._
_Moreover, this decomposition is unique._
We call the expression \(\Phi=\inf_{I}^{S}(\Psi,X)\) described in Theorem 3.14 the _canonical form of \(\Phi\)_. Note that in [USRA], the notion of a "simple form of a permutation" was studied. The canonical form of an inversion set in type \(\mathbb{A}\) is the counterpart of the simple form of the corresponding permutation.
Before proving the theorem, we establish two useful results.
**Lemma 3.15**.: _Let \(\Phi\subseteq R^{+}\). There exists a unique set \(I\subseteq S\) with the following property: \(\Phi=\inf_{K}^{S}(\emptyset,X)\) for some set \(X\) if and only if \(I\subseteq K\)._
Proof.: Set \(I:=\operatorname{supp}\Phi\). Clearly, \(\Phi\subset\operatorname{span}I\) and \(\Phi=\inf_{I}^{S}(\emptyset,\Phi)\). Hence, for any \(I\subset K\subset S\), we have \(\Phi=\inf_{I}^{S}(\emptyset,\Phi)=\inf_{K}^{S}(\emptyset,\Phi)\). Conversely, if \(\Phi=\inf_{K}^{S}(\emptyset,X)\) for some \(X\subset\operatorname{span}K\), then \(\Phi=X\subset\operatorname{span}K\). In particular, \(\operatorname{supp}(\alpha)\subset K\) for every \(\alpha\in\Phi\), proving that \(I\subset K\).
**Lemma 3.16**.: _Assume that \(I\cap J=\emptyset\) and \(I\cup J=S\). Define_
\[I^{\prime}:=\{\theta\in I\,|\,\theta\perp\lambda\text{ for any }\lambda\in J\} \quad\text{and}\quad J^{\prime}:=\{\theta\in J\,|\,\theta\perp\lambda\text{ for any }\lambda\in I\}\.\]
_If \(\Phi=\inf_{I}^{S}(\Psi,X)=\inf_{J}^{S}(\Theta,Y)\), then \(\Phi=\inf_{I^{\prime}\cup J^{\prime}}^{S}(\Xi,Z)\), where \(\Xi=\emptyset\) or \(\Xi=(R/(I^{\prime}\cup J^{\prime}))^{+}\)._
Proof.: Let \(\tau\) denote the highest root of \(R^{+}\). If \(\gamma\in R^{+}\backslash(R_{I}\cup R_{J})\), then \(\tau\) and \(\gamma\) are linked by a sequence of roots, none of which belongs to \(R_{I}\cup R_{J}\) and the difference between any two consecutive roots in the sequence is an element of \(S=I\cup J\). Hence \(\gamma\sim_{\Phi}\tau\).
Next we prove that if \(\gamma\in R^{+}\backslash(R_{I^{\prime}}\cup R_{J^{\prime}})\), then \(\gamma\sim_{\Phi}\tau\). Without loss of generality we may assume that \(\gamma\in R_{I}\backslash R_{I^{\prime}}\). The definition of \(I^{\prime}\) implies that there is \(\beta\in J\) which is connected to the support of \(\gamma\). Then \(\langle\gamma,\beta\rangle<0\) and hence \(\gamma+\beta\in R^{+}\). Since \(\pi_{J}(\gamma)=\pi_{J}(\gamma+\beta)\neq 0\), we conclude that \(\gamma\sim_{\Phi}\gamma+\beta\sim_{\Phi}\tau\). Since the support of a root is connected, we get \(R_{I^{\prime}\cup J^{\prime}}=R_{I^{\prime}}\cup R_{J^{\prime}}\), which proves that \(\Phi\) is an inflation from \(I^{\prime}\cup J^{\prime}\) of \((R/I^{\prime}\cup J^{\prime})^{+}\) or \(\emptyset\) depending on whether \(\tau\) belongs to \(\Phi\) or not.
Proof of Theorem 3.14.: _Existence._ If \(\Phi\) is primitive, \(\Phi=\emptyset\), or \(\Phi=R^{+}\), then we observe that \(\Phi=\inf_{\emptyset}^{S}(\Phi,\emptyset)\) and we are done. If \(\Phi\) is not primitive, then \(\Phi=\inf_{K}^{S}(\Xi,Y)\) for some proper non-empty subset \(K\) of \(S\). If \(\Xi\) is primitive, \(\Xi=\emptyset\), or \(\Xi=(R/K)^{+}\), we are done (invoking Lemma 3.15 if necessary). If not, Proposition 3.5 implies that \(\Phi\) is an inflation from a proper subset \(K^{\prime}\) of \(S\) which contains \(K\) as a proper subset. This process is finite since, for every \(L\subset S\) with \(\#L=\operatorname{rk}R-1\), every subset of \((R/L)^{+}\) is either primitive, the empty set, or \((R/L)^{+}\) itself, proving the existence part of the statement.
_Uniqueness._ If \(\Phi=\inf_{I}^{S}(\emptyset,X)\) for some \(X\), then the highest root of \(R^{+}\) does not belong to \(\Phi\) in contrast to the case when \(\Phi\) is an inflation of \((R/I)^{+}\). This proves that \(\Phi\) cannot be an inflation of both \(\emptyset\) and \((R/I)^{+}\).
Now assume that \(\Phi=\inf_{I}^{S}(\Psi,X)=\inf_{J}^{S}(\Theta,Y)\), where \(\Psi\) is primitive and \(\Theta\) is either primitive or equals \(\emptyset\) or \((R/J)^{+}\). By Proposition 3.11, we have that \(\Phi=\inf_{I\cup J}^{S}(\Gamma,Z)\) for some \(\Gamma\) and \(Z\). Now, since \(\Phi=\inf_{I}^{S}(\Psi,X)=\inf_{I\cup J}^{S}(\Gamma,Z)\), by Proposition 3.6, it follows that \(\Psi=\inf_{I\cup J/I}^{S/I}(\Gamma,T)\) for some \(T\). Since \(\Psi\) is primitive, we have \(I\cup J=I\) or \(I\cup J=S\). In the first case, uniqueness follows from Proposition 3.6. Hence, we may assume that \(I\cup J=S\). Moreover, if \(I\cap J\neq\emptyset\), then \(\Phi=\inf_{I\cap J}^{S}(\Xi,Z)\), implying that \(\Xi\) is an inflation of \(\Psi\) and \(\Theta\) which proves the uniqueness by induction on \(\operatorname{rk}R\).
It remains to deal with the case \(I\cap J=\emptyset\), \(I\cup J=S\). Lemma 3.16 implies that \(\Phi\) is an inflation from \(I^{\prime}\cup J^{\prime}\). If \(J^{\prime}\) is not empty, then \(\Phi\) is also an inflation from \(I\cup J^{\prime}\), contradicting the assumption that \(\Psi\) is primitive. If both \(I^{\prime}\) and \(J^{\prime}\) are empty, Lemma 3.16 implies that \(\Phi=\emptyset\) or \(\Phi=R^{+}\), again contradicting the assumption that \(\Psi\) is primitive.
Finally we consider the case when \(I^{\prime}\neq\emptyset\), \(J^{\prime}=\emptyset\), \(\Theta=\emptyset\) (the case \(\Theta=(R/J)^{+}\) follows because of Remark 3.13). Lemma 3.16 implies that \(\Phi\) is an inflation of the empty set from \(I^{\prime}\) and \(J\) and Lemma 3.15 implies that \(\Phi=\emptyset\), completing the proof of the uniqueness.
**Remark 3.17**.: If \(\Phi\) is canonically inflated from \(I\), then so is \(\Phi^{c}\).
**Proposition 3.18**.: _In the language of Theorem 3.14,_
1. \(\Psi\) _is primitive if and only if_ \(\operatorname{supp}\Phi=\operatorname{supp}\Phi^{c}=S\)_;_
2. \(\Psi=\emptyset\) _if and only if_ \(\operatorname{supp}\Phi\) _is a proper subset of_ \(S\) _and_ \(\operatorname{supp}\Phi^{c}=S\)_. Respectively,_ \(\Psi=(R/I)^{+}\) _if and only if_ \(\operatorname{supp}\Phi^{c}\) _is a proper subset of_ \(S\) _and_ \(\operatorname{supp}\Phi=S\)_._
Proof.: If \(\Phi=\inf_{I}^{S}(\emptyset,X)\), then \(\operatorname{supp}\Phi=I\). Conversely, if \(\operatorname{supp}\Phi=J\) is a proper subset of \(S\), then \(\Phi\subset R_{J}^{+}\) and \(\Phi=\inf_{J}^{S}(\emptyset,\Phi)\). This proves that \(\operatorname{supp}\Phi=S\) unless \(\Phi=\inf_{I}^{S}(\emptyset,X)\). The statement for \(\Psi=(R/I)^{+}\) follows by passing to \(\Phi^{c}\) (cf. Remark 3.13), and (1) then follows from (2) together with Theorem 3.14.
We complete this section by describing the set \(\operatorname{Gen}(\Phi)\) when, in the canonical form \(\Phi=\inf_{I}^{S}(\Psi,X)\), the set \(\Psi\) is primitive. When \(\Psi\) is not primitive, the description of \(\operatorname{Gen}(\Phi)\) is more complicated and we omit it.
**Proposition 3.19**.: _Let \(\Phi=\inf_{I}^{S}(\Psi,X)\) be the canonical form of \(\Phi\subseteq R^{+}\) where \(\Psi\) is primitive. Then \(\operatorname{Gen}(\Phi)=\operatorname{Gen}(X)\cup\{S\}\)._
Proof.: First we prove that \(\operatorname{Gen}(\Phi)\backslash\{S\}\) contains a unique maximal element. Assume, by way of contradiction, that \(\operatorname{Gen}(\Phi)\backslash\{S\}\) contains two distinct maximal elements \(K\) and \(L\). Then \(K\cap L\) and \(K\cup L\) are both elements of \(\operatorname{Gen}(\Phi)\) by Proposition 3.11 and \(K\cup L=S\) by the maximality of \(K\) and \(L\). Write
\[\Phi=\inf_{K}^{S}(\Psi^{\prime},X^{\prime})=\inf_{L}^{S}(\Psi^{\prime\prime},X ^{\prime\prime})=\inf_{K\cap L}^{S}(\Gamma,Y)\;.\]
We define \(K^{\prime\prime}:=K/(K\cap L)\) and \(L^{\prime\prime}:=L/(K\cap L)\) and \(S^{\prime\prime}:=S/(K\cap L)\). Then \(K^{\prime\prime}\cap L^{\prime\prime}=\emptyset\) and \(K^{\prime\prime}\cup L^{\prime\prime}=S^{\prime\prime}\). By Proposition 3.6, there exists \(\theta^{\prime},\theta^{\prime\prime},Z^{\prime},Z^{\prime\prime}\) such that
\[\Gamma=\inf_{K^{\prime\prime}}^{S^{\prime\prime}}(\theta^{\prime},Z^{\prime})= \inf_{L^{\prime\prime}}^{S^{\prime\prime}}(\theta^{\prime\prime},Z^{\prime \prime})\;.\]
Applying Lemma 3.16, we define \(K^{\prime}:=K^{\prime\prime}\cap(L^{\prime\prime})^{\perp}\subset S^{\prime\prime}\) and \(L^{\prime}:=L^{\prime\prime}\cap(K^{\prime\prime})^{\perp}\subset S^{\prime\prime}\) to get \(\Gamma=\inf_{K^{\prime}\cup L^{\prime}}^{S^{\prime\prime}}(\Xi,W)\) where \(\Xi=\emptyset\) or \(\Xi=R/(K^{\prime}\cup L^{\prime})\).
Note that \(K^{\prime}\cup L^{\prime}\neq S^{\prime\prime}\). To see this, recall that \(S^{\prime\prime}\) is connected and thus there exist \(\alpha\in K^{\prime\prime}\) and \(\beta\in L^{\prime\prime}\) with \(\alpha\) and \(\beta\) not perpendicular. Thus both \(\alpha\) and \(\beta\) lie in \(S^{\prime\prime}\setminus(K^{\prime}\cup L^{\prime})\).
Therefore
\[\Phi=\inf_{K\cap L}^{S}(\Gamma,Y)=\inf_{K\cap L}^{S}(\inf_{K^{\prime}\cup L^{ \prime}}^{S^{\prime\prime}}(\Xi,W),Y)=\inf_{K^{\prime}\cup L^{\prime}}^{S}(\Xi,U)\]
for some \(U\) where \(\Xi=\emptyset\) or \(\Xi=R/(K^{\prime}\cup L^{\prime})\). But this contradicts the hypothesis that \(\Phi=\inf_{I}^{S}(\Psi,X)\) is the canonical form of \(\Phi\) where \(\Psi\) is primitive. This contradiction shows that \(\operatorname{Gen}(\Phi)\backslash\{S\}\) has a unique maximal element.
Next we note that this maximal element is \(I\). Suppose, to the contrary, that \(\Phi=\inf_{I}^{S}(\Psi,X)=\inf_{K}^{S}(\Psi^{\prime},X^{\prime})\) where \(I\subsetneq K\subsetneq S\). Then by Proposition 3.6, we have \(\Psi=\inf_{K/I}^{S/I}(\Psi^{\prime},Z)\) for some \(Z\subset\operatorname{span}(R/I)\). This contradicts the assumption that \(\Psi\) is primitive, proving that \(I\) is the unique maximal element of \(\operatorname{Gen}(\Phi)\backslash\{S\}\).
To complete the proof that \(\operatorname{Gen}(\Phi)=\operatorname{Gen}(X)\cup\{S\}\), assume that \(J\in\operatorname{Gen}(\Phi)\backslash\{S\}\). Then \(J\subset I\) and
\[\Phi=\inf_{I}^{S}(\Psi,X)=\inf_{J}^{S}(K,Y)\;.\]
By Proposition 3.6, \(K=\inf_{I/J}^{S/J}(\theta,Z)\) and
\[\Phi=\inf_{J}^{S}(K,Y)=\inf_{J}^{S}(\inf_{I/J}^{S/J}(\theta,Z),Y)=\inf_{I}^{S}(\theta,\inf_{J}^{I}(Z,Y))\;.\]
Comparing the above with \(\Phi=\inf_{I}^{S}(\Psi,X)\), we conclude that \(\theta=\Psi\) and \(X=\inf_{J}^{I}(Z,Y)\), proving that \(J\in\operatorname{Gen}(X)\). This shows that \(\operatorname{Gen}(\Phi)\subset\operatorname{Gen}(X)\cup\{S\}\). The converse inclusion is obvious.
## 4. The graph \(G_{\Phi}^{\Phi^{c}}\)
For the rest of the paper, the symbol \(\Phi\) will denote an inversion set of \(R\), and in particular, a closed and co-closed subset of \(R^{+}\).
While Section 2 deals with the existence and properties of paths between roots in \(R\), it is often useful to restrict the possible partial sums and steps associated with a path. To that end, we introduce the family of graphs \(G_{A}^{B}\) and in particular the graph \(G_{\Phi}^{\Phi^{c}}\), the structure of whose components gives important information about the properties of \(\Phi\).
**Definition 4.1**.: For \(A,B\subseteq R\), we define \(G_{A}^{B}\) to be the graph whose vertices are roots in \(A\), where \(\alpha\) and \(\alpha^{\prime}\) are connected by an edge if \(\alpha-\alpha^{\prime}\in\pm B\).
Many statements about paths can be translated into statements about these graphs. The statement that there is a path between any two roots with simple roots as steps, for example, is the statement that \(G_{R}^{S}\) is connected, while the statement that \(\operatorname{span}_{\mathbb{Z}}\Phi=\operatorname{span}_{\mathbb{Z}} \operatorname{supp}\Phi\) can be understood to mean that the components of \(G_{R}^{\Phi}\) are inflated from \(\operatorname{supp}\Phi\).
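For readers who wish to experiment with these graphs, the following is a minimal computational sketch in Python (the helper names `positive_roots`, `diff`, and `components` are ad hoc and not part of the formal development). It encodes the positive roots of type \(\mathbb{A}_{n}\) as coefficient vectors over the simple roots, builds \(G_{A}^{B}\) as in Definition 4.1, and computes its connected components.

```python
# A minimal sketch (not part of the formal development): components of the
# graph G_A^B from Definition 4.1, for a root system of type A_n whose
# positive roots are encoded as 0/1 coefficient vectors over the simple roots.

def positive_roots(n):
    # theta_i + ... + theta_j is the vector with 1's in positions i..j.
    return [tuple(1 if i <= k <= j else 0 for k in range(n))
            for i in range(n) for j in range(i, n)]

def diff(a, b):
    return tuple(x - y for x, y in zip(a, b))

def components(A, B):
    # Vertices are the roots in A; two roots are joined by an edge when
    # their difference lies in +-B.
    A = sorted(A)
    pm_B = set(B) | {tuple(-x for x in b) for b in B}
    comps, seen = [], set()
    for start in A:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in A if w not in comp and diff(v, w) in pm_B)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

# Demonstration: a small inversion set Phi = {theta_1, theta_1 + theta_2}
# in A_3, with its complement taken inside the positive roots.
R_plus = positive_roots(3)
Phi = {(1, 0, 0), (1, 1, 0)}
Phi_c = [r for r in R_plus if r not in Phi]
print(components(Phi, Phi_c))
```

In the demonstration at the end, \(\Phi=\{\theta_{1},\theta_{1}+\theta_{2}\}\) is an inversion set in \(\mathbb{A}_{3}\) whose two roots differ by \(\theta_{2}\in\Phi^{c}\), so \(G_{\Phi}^{\Phi^{c}}\) has a single component.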
**Proposition 4.2**.: _Let \(\Phi=\Phi_{1}\sqcup\Phi_{2}\). If \(C\) is a component of \(G_{\Phi}^{\Phi^{c}}\), then \(C\subseteq\Phi_{1}\) or \(C\subseteq\Phi_{2}\)._
Proof.: Suppose to the contrary that neither \(C\cap\Phi_{1}\) nor \(C\cap\Phi_{2}\) is empty. By definition we would have \(\alpha+\kappa=\beta\) for some \(\alpha\in\Phi_{1},\beta\in\Phi_{2},\) and \(\kappa\in\pm\Phi^{c}\). Without loss of generality we may take \(\kappa>0\), from which we obtain the contradiction that \(\beta\notin\Phi_{2}\) by co-closure.
**Remark 4.3**.: Let \(\Phi=\inf_{I}^{S}(\Psi,X)\) and \(C\) be a component of \(G_{\Phi}^{\Phi^{c}}\). Then \(C\subseteq\inf_{I}^{S}(\Psi,\emptyset)\) or \(C\subseteq X\).
Proposition 4.2 hints that when studying decompositions of an inversion set \(\Phi\), the relevant objects to examine are not arbitrary subsets of \(\Phi\), but rather collections of components of \(G_{\Phi}^{\Phi^{c}}\). For that reason, understanding the structure of the set of these components is the focus of the following subsections.
### Addition of Components
Somewhat surprisingly, the components of \(G_{\Phi}^{\Phi^{c}}\) admit an approximately additive structure whose properties we investigate.
**Lemma 4.4**.: _Let \(\Phi\subseteq R^{+}\) and \(A,B,C\) be components of \(G_{\Phi}^{\Phi^{c}}\) with \(\alpha\in A\), \(\beta\in B\), and \(\alpha+\beta\in C\)._
1. _If_ \(A\neq C\)_, then for all_ \(\alpha^{\prime}\in A\) _there exists_ \(\beta^{\prime}\in B\) _such that_ \(\alpha^{\prime}+\beta^{\prime}\in C\)__
2. _If_ \(A\neq C\) _and_ \(B\neq C\)_, then for all_ \(\gamma^{\prime}\in C\)_, there exist_ \(\alpha^{\prime}\in A\) _and_ \(\beta^{\prime}\in B\) _such that_ \(\gamma^{\prime}=\alpha^{\prime}+\beta^{\prime}\)_._
Proof.: First we prove (i). It suffices to verify (i) for every neighbour \(\alpha^{\prime}\in A\) of \(\alpha\). Write \(\alpha^{\prime}=\alpha-\kappa\) where \(\kappa\in\pm\Phi^{c}\). Then \(\alpha^{\prime}+\kappa+\beta=\alpha+\beta\). Since the right hand side is a root, either \(\alpha^{\prime}+\beta\in R\) or \(\beta+\kappa\in R\).
If \(\alpha^{\prime}+\beta\) is a root then it lies in \(\Phi\) by the closure of \(\Phi\). Since \((\alpha^{\prime}+\beta)-(\alpha+\beta)=-\kappa\in\pm\Phi^{c}\) we see that \(\alpha^{\prime}+\beta\in C\) verifying (i).
Conversely suppose that \(\beta+\kappa\in R\). If \(\beta+\kappa\in\pm\Phi^{c}\) then \((\alpha+\beta)-\alpha^{\prime}\in\pm\Phi^{c}\) and thus \(C=A\). This contradiction shows that \(\beta+\kappa\notin\pm\Phi^{c}\). If \(\beta+\kappa\in-\Phi\) then \(-\kappa=\beta+(-\beta-\kappa)\in\Phi\). This contradiction shows that \(\beta+\kappa\notin-\Phi\). Therefore \(\beta+\kappa\in\Phi\) and thus \(\beta+\kappa\in B\). Then \(\alpha^{\prime}+(\beta+\kappa)=\alpha+\beta\in C\) again verifying (i).
To prove (ii), since \(\alpha+\beta\in C\), it suffices to show that if \(\gamma^{\prime}\in C\) can be written as \(\alpha^{\prime}+\beta^{\prime}\) with \(\alpha^{\prime}\in A\) and \(\beta^{\prime}\in B\), then so can every neighbour \(\gamma\) of \(\gamma^{\prime}\). Write \(\gamma=\gamma^{\prime}+\kappa\) where \(\kappa\in\pm\Phi^{c}\). Consider the equation \(\alpha^{\prime}+\beta^{\prime}+\kappa=\gamma\). Since the right hand side is a root, either \(\alpha^{\prime}+\kappa\in R\) or \(\beta^{\prime}+\kappa\in R\). Without loss of generality we may suppose that \(\alpha^{\prime}+\kappa\in R\).
If \(\alpha^{\prime}+\kappa\in\pm\Phi^{c}\) then \(\gamma-\beta^{\prime}=\alpha^{\prime}+\kappa\in\pm\Phi^{c}\) and thus \(C=B\). This contradiction shows that \(\alpha^{\prime}+\kappa\notin\pm\Phi^{c}\). If \(\alpha^{\prime}+\kappa\in-\Phi\) then \(-\kappa=\alpha^{\prime}+(-\alpha^{\prime}-\kappa)\in\Phi\). This contradiction shows that \(\alpha^{\prime}+\kappa\notin-\Phi\). Therefore \(\alpha^{\prime}+\kappa\in\Phi\) and thus \(\alpha^{\prime}+\kappa\in A\). Then \((\alpha^{\prime}+\kappa)+\beta^{\prime}=\gamma\) as required.
**Lemma 4.5**.: _Suppose that \(A,B,C,D\) are components of \(G_{\Phi}^{\Phi^{c}}\) with \(B\neq C\), \(D\neq C\), and \(A\neq D\). Further suppose that \(n\) is minimal such that there exist \(\alpha^{\prime}\in A\) and \(\beta^{\prime},\beta^{\prime\prime}\in B\) with \(\alpha^{\prime}+\beta^{\prime}\in C\) and \(\alpha^{\prime}+\beta^{\prime\prime}\in D\), and there exists a path \([\beta^{\prime};\kappa^{\prime}_{1},\kappa^{\prime}_{2},\ldots,\kappa^{\prime}_{n};\beta^{\prime\prime}]\) of length \(n\) in \(G_{\Phi}^{\Phi^{c}}\). Then_
1. \(\alpha^{\prime}+\beta^{\prime}+\kappa^{\prime}_{1}\) _is not a root,_
2. \(\alpha^{\prime}+\beta^{\prime\prime}-\kappa^{\prime}_{1}\) _is not a root,_
3. \(\alpha^{\prime}_{1}:=\alpha^{\prime}-\kappa^{\prime}_{1}\in A\)__
4. \([\beta^{\prime}+\kappa^{\prime}_{1};\kappa^{\prime}_{2},\ldots,\kappa^{\prime}_{n},\kappa^{\prime}_{1};\beta^{\prime\prime}+\kappa^{\prime}_{1}]\) _is a path in_ \(B\) _of length_ \(n\)_, and_ \(\alpha^{\prime}_{1}+\beta^{\prime}+\kappa^{\prime}_{1}=\alpha^{\prime}+\beta^{\prime}\in C\) _and_ \(\alpha^{\prime}_{1}+\beta^{\prime\prime}+\kappa^{\prime}_{1}=\alpha^{\prime}+\beta^{\prime\prime}\in D\)_._
Proof.: Since \([\beta^{\prime};\kappa^{\prime}_{1},\kappa^{\prime}_{2},\ldots,\kappa^{\prime}_{n};\beta^{\prime\prime}]\) is a path in \(G_{\Phi}^{\Phi^{c}}\), each difference \(\kappa^{\prime}_{i}\in\pm\Phi^{c}\). Define \(\beta^{\prime}_{0}=\beta^{\prime}\) and \(\beta^{\prime}_{i}=\beta^{\prime}_{i-1}+\kappa^{\prime}_{i}\) for \(i=1,2,\ldots,n\). Then each \(\beta^{\prime}_{i}\in B\).
Consider the equation:
\[\beta^{\prime}_{1}+\alpha^{\prime}-\kappa^{\prime}_{1}=\beta^{\prime}+\alpha^{\prime}\]
Since the right hand side is a root, at least one of \(\beta^{\prime}_{1}+\alpha^{\prime}\) or \(\alpha^{\prime}-\kappa^{\prime}_{1}\) is a root.
Assume by way of contradiction that \(\beta^{\prime}_{1}+\alpha^{\prime}\) is a root. Then \(\beta^{\prime}_{1}+\alpha^{\prime}\in\Phi\) by the closure of \(\Phi\). Then \(\beta^{\prime}_{1}+\alpha^{\prime}=(\beta^{\prime}+\alpha^{\prime})+\kappa^{\prime}_{1}\) lies in \(C\). This violates the minimality of \(n\), as the path \([\beta^{\prime}_{1};\kappa^{\prime}_{2},\kappa^{\prime}_{3},\ldots,\kappa^{\prime}_{n};\beta^{\prime\prime}]\) in \(G_{\Phi}^{\Phi^{c}}\) shows. Therefore \(\beta^{\prime}_{1}+\alpha^{\prime}\) is not a root, which proves (1). Furthermore, it implies that \(\alpha^{\prime}_{1}:=\alpha^{\prime}-\kappa^{\prime}_{1}\) is a root; we show next that it lies in \(A\), proving (3).
If \(\alpha^{\prime}-\kappa^{\prime}_{1}=(\alpha^{\prime}+\beta^{\prime})-\beta^{ \prime}_{1}\in\pm\Phi^{c}\) then \(C=B\). This contradiction shows \(\alpha^{\prime}-\kappa^{\prime}_{1}\notin\pm\Phi^{c}\). If \(\alpha^{\prime}-\kappa^{\prime}_{1}\in-\Phi\) then \(\kappa^{\prime}_{1}=\alpha^{\prime}+(-\alpha^{\prime}+\kappa^{\prime}_{1})\in\Phi\). This contradiction shows \(\alpha^{\prime}-\kappa^{\prime}_{1}\notin-\Phi\). Therefore \(\alpha^{\prime}-\kappa^{\prime}_{1}\in\Phi\) and thus \(\alpha^{\prime}_{1}:=\alpha^{\prime}-\kappa^{\prime}_{1}\in A\).
Consider the equation:
\[\beta^{\prime\prime}+\alpha^{\prime}_{1}+\kappa^{\prime}_{1}=\beta^{\prime \prime}+\alpha^{\prime}\]
Since the right hand side is a root at least one of \(\beta^{\prime\prime}+\alpha^{\prime}_{1}\) or \(\beta^{\prime\prime}+\kappa^{\prime}_{1}\) is a root.
Assume by way of contradiction that \(\beta^{\prime\prime}+\alpha^{\prime}_{1}\) is a root. Then \(\beta^{\prime\prime}+\alpha^{\prime}_{1}\in\Phi\) by the closure of \(\Phi\) and \(\beta^{\prime\prime}+\alpha^{\prime}_{1}=(\beta^{\prime\prime}+\alpha^{\prime})- \kappa_{1}\) lies in \(D\). Hence we have a path \([\beta^{\prime}+\kappa^{\prime}_{1};\kappa^{\prime}_{2},\kappa^{\prime}_{3}, \ldots,\kappa^{\prime}_{n};\beta^{\prime\prime}]\) of length \(n-1\) in \(G_{\Phi}^{\Phi^{c}}\) where \(\alpha^{\prime}_{1}+(\beta^{\prime}+\kappa^{\prime}_{1})\in C\) and \(\alpha^{\prime}_{1}+\beta^{\prime\prime}\in D\). This contradicts the minimality of \(n\) and thus \(\beta^{\prime\prime}+\alpha^{\prime}_{1}\) cannot be a root. This proves (2). Furthermore it implies that \(\beta^{\prime\prime}+\kappa^{\prime}_{1}\) must be a root.
If \(\beta^{\prime\prime}+\kappa^{\prime}_{1}\in\pm\Phi^{c}\) then \((\alpha^{\prime}+\beta^{\prime\prime})-(\alpha^{\prime}-\kappa^{\prime}_{1})= \beta^{\prime\prime}+\kappa^{\prime}_{1}\in\pm\Phi^{c}\) which implies that \(\alpha^{\prime}+\beta^{\prime\prime}\) and \(\alpha^{\prime}-\kappa^{\prime}_{1}\) lie in the same component, and thus \(D=A\). This contradiction shows \(\beta^{\prime\prime}+\kappa^{\prime}_{1}\notin\pm\Phi^{c}\). If \(\beta^{\prime\prime}+\kappa^{\prime}_{1}\in-\Phi\) then \(-\kappa^{\prime}_{1}=\beta^{\prime\prime}+(-\beta^{\prime\prime}-\kappa^{ \prime}_{1})\in\Phi\). This contradiction shows \(\beta^{\prime\prime}+\kappa^{\prime}_{1}\notin-\Phi\). Therefore \(\beta^{\prime\prime}+\kappa^{\prime}_{1}\in\Phi\). Hence \(\beta^{\prime}_{n+1}:=\beta^{\prime\prime}+\kappa^{\prime}_{1}\in B\).
Thus we have the path \([\beta^{\prime}_{1};\kappa^{\prime}_{2},\kappa^{\prime}_{3},\ldots,\kappa^{\prime}_{n},\kappa^{\prime}_{1};\beta^{\prime}_{n+1}]\) of length \(n\) in \(G_{\Phi}^{\Phi^{c}}\) such that \(\alpha^{\prime}_{1}+\beta^{\prime}_{1}=\alpha^{\prime}-\kappa^{\prime}_{1}+\beta^{\prime}+\kappa^{\prime}_{1}=\alpha^{\prime}+\beta^{\prime}\in C\) and \(\alpha^{\prime}_{1}+\beta^{\prime}_{n+1}=\alpha^{\prime}-\kappa^{\prime}_{1}+\beta^{\prime\prime}+\kappa^{\prime}_{1}=\alpha^{\prime}+\beta^{\prime\prime}\in D\), which proves (4).
**Proposition 4.6**.: _Let \(A,B,C,D\) be components of \(G_{\Phi}^{\Phi^{c}}\). Suppose that \(\alpha\in A\) and \(\beta,\beta^{\prime}\in B\) with \(\alpha+\beta\in C\) and \(\alpha+\beta^{\prime}\in D\). Further suppose that either \(A\neq B\) or that \(A=B\notin\{C,D\}\). Then \(C=D\)._
Proof.: We assume that \(C\neq D\). The hypotheses guarantee that we may assume that \(B\neq C\) and \(A\neq D\). There exists a path \([\beta;\kappa_{1},\kappa_{2},\ldots,\kappa_{n};\beta^{\prime}]\) in \(B\). Each of the differences \(\kappa_{i}\in\pm\Phi^{c}\). Put \(\beta_{0}:=\beta\) and \(\beta_{i}=\beta_{i-1}+\kappa_{i}\) for \(i=1,2,\ldots,n\). We are required to prove that \(\alpha+\beta\) and \(\alpha+\beta^{\prime}\) are connected by a path in the graph \(G_{\Phi}^{\Phi^{c}}\).
We proceed by induction on \(n\), the length of the path in \(G_{\Phi}^{\Phi^{c}}\) joining \(\beta\) to \(\beta^{\prime}\). If \(n=0\), then \(\beta=\beta^{\prime}\) and the result is true.
The induction hypothesis is that \(C=D\) whenever there exists \(\alpha^{\prime}\in A\), \(\beta^{\prime\prime},\beta^{\prime\prime\prime}\in B\) with \(\alpha^{\prime}+\beta^{\prime\prime}\in C\), and \(\alpha^{\prime}+\beta^{\prime\prime\prime}\in D\) such that there is a path in \(G_{\Phi}^{\Phi^{c}}\) of length less than \(n\) joining \(\beta^{\prime\prime}\) to \(\beta^{\prime\prime\prime}\in B\).
If there exists a path in \(G_{\Phi}^{\Phi^{c}}\) of length less than \(n\) joining \(\beta\) to \(\beta^{\prime}\), then by the induction hypothesis we are done. We assume by way of contradiction that no such path exists. This means that Lemma 4.5 applies.
Thus both \(\alpha+\beta+\kappa_{1}\) and \(\alpha+\beta^{\prime}-\kappa_{1}\) are not roots. Furthermore \([\beta+\kappa_{1};\kappa_{2},\kappa_{3},\ldots,\kappa_{n},\kappa_{1};\beta^{ \prime}+\kappa_{1}]\) is a path in \(B\) of length \(n\) where \(\alpha_{1}+\beta+\kappa_{1}\in C\) and \(\alpha_{1}+\beta^{\prime}+\kappa_{1}\in D\) with \(\alpha_{1}:=\alpha-\kappa_{1}\in A\).
We may apply Lemma 4.5 to this new path. Thus \(\alpha_{1}+(\beta+\kappa_{1})+\kappa_{2}=\alpha-\kappa_{1}+\beta+\kappa_{1}+\kappa_{2}=\alpha+\beta+\kappa_{2}\) is not a root. Similarly \(\alpha+\beta^{\prime}-\kappa_{2}\) is not a root. Furthermore \([\beta+\kappa_{1}+\kappa_{2};\kappa_{3},\ldots,\kappa_{n},\kappa_{1},\kappa_{2};\beta^{\prime}+\kappa_{1}+\kappa_{2}]\) is a path in \(B\) of length \(n\), where \(\alpha_{2}+\beta+\kappa_{1}+\kappa_{2}\in C\) and \(\alpha_{2}+\beta^{\prime}+\kappa_{1}+\kappa_{2}\in D\) where \(\alpha_{2}:=\alpha_{1}-\kappa_{2}=\alpha-\kappa_{1}-\kappa_{2}\in A\).
Applying Lemma 4.5 successively \(n\) times, we discover that both \(\alpha+\beta+\kappa_{i}\) and \(\alpha+\beta^{\prime}-\kappa_{i}\) are not roots for all \(i=1,2,\ldots,n\).
Since \((\alpha+\beta)+\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}=\alpha+\beta^{\prime}\) is a root, Proposition 2.3 implies that \(\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}\) is a root. Hence Proposition 2.6 applies. This implies that there is some \(i\) such that either \(\alpha+\beta+\kappa_{i}\) or \(\alpha+\beta^{\prime}-\kappa_{i}\) is a root. This contradiction shows that a path satisfying the induction hypothesis must exist and thus \(C=D\).
**Proposition 4.7**.: _Let \(A,B\) be components of \(G_{\Phi}^{\Phi^{c}}\). Let \(\alpha,\alpha^{\prime}\in A\), \(\beta,\beta^{\prime}\in B\) with \(\alpha+\beta,\alpha^{\prime}+\beta^{\prime}\in R\). If \(A\neq B\) or \(A=B\) but \(\alpha+\beta,\alpha^{\prime}+\beta^{\prime}\notin A\), then \(\alpha+\beta\) and \(\alpha^{\prime}+\beta^{\prime}\) belong to the same component of \(G_{\Phi}^{\Phi^{c}}\)._
Proof.: Suppose that \(A\) and \(B\) are components of \(G_{\Phi}^{\Phi^{c}}\) and \(\alpha,\alpha^{\prime}\in A\), \(\beta,\beta^{\prime}\in B\), with both \(\alpha+\beta,\alpha^{\prime}+\beta^{\prime}\in R\). Let \(D\) denote the component of \(G_{\Phi}^{\Phi^{c}}\) which contains \(\alpha^{\prime}+\beta^{\prime}\). By Lemma 4.4 there is a root \(\beta^{\prime\prime}\in B\) such that \(\alpha+\beta^{\prime\prime}\in D\). Let \(C\) denote the component of \(G_{\Phi}^{\Phi^{c}}\) which contains \(\alpha+\beta\). Then \(\alpha+\beta\in C\) and \(\alpha+\beta^{\prime\prime}\in D\). We can apply Proposition 4.6 to conclude \(C=D\) since \(A\neq D\) and \(B\neq D\).
We are now ready to define addition of components of \(G_{\Phi}^{\Phi^{c}}\).
**Definition 4.8**.: Consider two components \(A,B\) of \(G_{\Phi}^{\Phi^{c}}\) and the set of roots \(Z:=\{\alpha+\beta\mid\alpha\in A,\beta\in B\}\cap R\).
1. If \(Z\neq\emptyset\) and is contained in a single component \(C\), then we define \(A+B=C\). Note that this includes the cases where \(A\neq B\).
2. If \(Z\neq\emptyset\) and intersects more than one component, then \(B=A\), and \(Z\) meets exactly two components, which are \(A,C\) (see Proposition 4.7). In this case, we define \(A+A=C\).
3. If \(Z=\emptyset\), then we leave \(A+B\) undefined.
**Remark 4.9**.: In the above definition, we have no example where case (2) occurs. Although we suspect it never occurs, we do not yet know how to exclude this possibility.
**Remark 4.10**.: Let \(A,B,C\) be components of \(G_{\Phi}^{\Phi^{c}}\). If \(\alpha+\beta=\gamma\) for \(\alpha\in A,\beta\in B\), and \(\gamma\in C\), then \(A+B=C\) or \(A=B=C\) (or both).
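As a computational companion to Definition 4.8 and to the addition tables in the examples below, here is a minimal Python sketch; the helpers repeat those from the sketch after Definition 4.1, and all names (`add`, `components`, and so on) are ad hoc rather than taken from the formal development. The exceptional case (2) of the definition is not treated: whenever the root sums meet more than one component, the sketch simply reports the sum as undefined.

```python
def positive_roots(n):
    return [tuple(1 if i <= k <= j else 0 for k in range(n))
            for i in range(n) for j in range(i, n)]

def diff(a, b):
    return tuple(x - y for x, y in zip(a, b))

def components(A, B):
    A = sorted(A)
    pm_B = set(B) | {tuple(-x for x in b) for b in B}
    comps, seen = [], set()
    for start in A:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in A if w not in comp and diff(v, w) in pm_B)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def add(A, B, comps, roots):
    # Definition 4.8, case (1): if all root sums alpha + beta land in a single
    # component C, return C; otherwise report the sum as undefined (None).
    sums = {tuple(x + y for x, y in zip(a, b)) for a in A for b in B} & set(roots)
    hit = [C for C in comps if sums & C]
    return hit[0] if len(hit) == 1 else None

# Demonstration: Phi = {theta_1, theta_2, theta_1 + theta_2} in A_3, whose
# components turn out to be three singletons; print its addition table.
R_plus = positive_roots(3)
Phi = {(1, 0, 0), (0, 1, 0), (1, 1, 0)}
Phi_c = [r for r in R_plus if r not in Phi]
comps = components(Phi, Phi_c)
for i, A in enumerate(comps, 1):
    row = []
    for B in comps:
        C = add(A, B, comps, R_plus)
        row.append(comps.index(C) + 1 if C is not None else '-')
    print(i, row)
```

For the inversion set \(\Phi=\{\theta_{1},\theta_{2},\theta_{1}+\theta_{2}\}\) used in the demonstration, the only defined sum (up to order) is that of the two simple-root components, whose root sum \(\theta_{1}+\theta_{2}\) lies in the third component; all other pairs have no root sums at all.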
**Example 4.11**.: Here is an inversion set \(\Phi\) in \(\mathbb{D}_{7}\) which is the inflation of a primitive \(\Psi\). It is given by \(\Phi=\inf_{I}^{S}(\Psi,X)\) where the graph \(G_{\Phi}^{\Phi^{c}}\) has two components outside \(X\). Here
\(\Psi=\{100,001,111,012,112\}\) and \(I\) is the set consisting of the last \(4\) simple roots, namely, \(\{\theta_{4},\theta_{5},\theta_{6},\theta_{7}\}\).
\[\Phi=\left\{\begin{array}{cccc}10000^{0}_{0},&00100^{0}_{0},&00010^{0}_{0},& 00001^{0}_{0},&00110^{0}_{0}\\ 00011^{0}_{0},&00001^{0}_{1},&11100^{0}_{0},&00111^{0}_{0},&00011^{0}_{1},\\ 11110^{0}_{0},&00111^{0}_{0},&00111^{0}_{1},&11111^{0}_{0},&00111^{1}_{1},\\ 11111^{1}_{0},&11111^{0}_{1},&00112^{1}_{1},&11111^{1}_{1},&00122^{1}_{1},\\ 11112^{1}_{1},&11122^{1}_{1},&01222^{1}_{1},&11222^{1}_{1}\end{array}\right\}\]
**Example 4.12**.: Here is an inversion set \(\Phi\) in \(\mathbb{E}_{6}\) which is \(\Phi=\inf_{I}^{S}(\Psi,X)\) where \(I=\{\theta_{2},\theta_{4},\theta_{5},\theta_{6}\}\), \(\Psi=\mathbb{C}_{2}=\{10,01,11,12\}\) (which is not primitive) and
\(X=\left\{\begin{array}{cccc}0&0&1&0\\ 00100\cdot 00001\cdot 00100\cdot 00111\end{array}\right\}\).
The graph \(G_{\Phi}^{\Phi^{c}}\) has five components of which the first four are outside \(X\). Here
\[\Phi=\left\{\begin{array}{cccc}0&0&0&0&0&0\\ 10000\cdot&01000\cdot&00100\cdot&00001\cdot&11000\cdot\\ 1&0&0&1&0\\ 00100\cdot&01100\cdot&11100\cdot&01100\cdot&01110\cdot\\ 0&1&0&1&0\\ 00111\cdot&11100\cdot&11110\cdot&01110\cdot&01111\cdot\\ 1&0&1&1\\ 11110\cdot&11111\cdot&01210\cdot&01111\cdot&11210\cdot\\ 1&1&1&1\\ 11111\cdot&01211\cdot&12210\cdot&11211\cdot&01221\cdot\\ 1&1&1&1&2\\ 12211\cdot&11221\cdot&12221\cdot&12321\cdot&12321\cdot\end{array}\right\}\]
The first four components are:
\(C_{1}=\left\{\begin{array}{cccc}0\\ 10000\end{array}\right\}\),
\(C_{2}=\left\{\begin{array}{cccc}1&0&1&1&0&0&1&1&1&0\\ 01211\cdot 01000\cdot 01110\cdot 01221\cdot 01111\cdot 01100\cdot 01210\cdot 0111 \cdot 01100\cdot 01110\end{array}\right\}\),
\(C_{3}=\left\{\begin{array}{cccc}0&1&1&0&1&0&1&1&0\\ 11111\cdot 11210\cdot 1111\cdot 11000\cdot 11211\cdot 11100\cdot 11221\cdot 11100\cdot 11110 \cdot 11110\cdot 1110\end{array}\right\}\), and
\(C_{4}=\left\{\begin{matrix}1&1&1\\ 12210\cdot 12211\cdot 12221\cdot 12321\cdot 12321\end{matrix}\right\}\)
The fifth component is \(C_{5}=X\). The addition table is given by
\[\begin{array}{c|ccccc}&1&2&3&4&5\\ \hline 1&-&3&-&-&-\\ 2&3&-&4&-&2\\ 3&-&4&-&-&3\\ 4&-&-&-&-&4\\ 5&-&2&3&4&-\end{array}\]
**Example 4.13**.: Here is an inversion set \(\Phi\) in \(\mathbb{E}_{6}\) which is \(\Phi=\inf_{I}^{S}(\Psi,X)\) where \(I=\{\theta_{1},\theta_{3},\theta_{4},\theta_{5}\}\), \(\Psi=\mathbb{B}_{2}=\{10,01,11,21\}\) (which is not primitive) and
\(X=\left\{\begin{matrix}0&0&0\\ 00100\cdot 00010\cdot 00110\end{matrix}\right\}\).
The graph \(G_{\Phi}^{\Phi^{c}}\) has seven components of which the first four are outside \(X\). Here
\[\Phi=\left\{\begin{array}{ccccc}1&0&0&0&1\\ 00000\cdot&00100\cdot&00010\cdot&00001\cdot&00100\cdot\\ 0&0&1&1&0\\ 00110\cdot&00011\cdot&01100\cdot&00110\cdot&00111\cdot\\ 1&1&1&0&1\\ 11100\cdot&01110\cdot&00111\cdot&01111\cdot&11110\cdot\\ 0&1&1&1&1\\ 11111\cdot&01210\cdot&01111\cdot&11210\cdot&1111\cdot\\ 1&1&1&1&1\\ 01211\cdot&12210\cdot&11211\cdot&01221\cdot&12211\cdot\\ 1&1&1&2\\ 11221\cdot&12221\cdot&12321\cdot&12321\end{array}\right\}\]
The first four components are:
\[C_{1} =\left\{\begin{matrix}1&1&1\\ 01210\cdot 0000\cdot&11210\cdot 0010\cdot 12210\cdot 01100\cdot 00110\cdot 1 11100\cdot 01110\cdot 11110\end{matrix}\right\},\] \[C_{2} =\left\{\begin{matrix}0&0&0&0&0\\ 011111\cdot 00001\cdot 11111\cdot&00011\cdot 00111\end{matrix}\right\},\] \[C_{3} =\left\{\begin{matrix}1&1&1\\ 11211\cdot 00111\cdot 01221\cdot 12211\cdot 11221\cdot 12221\cdot 12321\cdot 0111 \cdot 11111\cdot 01211\end{matrix}\right\},\] \[C_{4} =\left\{\begin{matrix}2\\ 12321\end{matrix}\right\}.\]
The last three components are
\[C_{5}=\left\{\begin{matrix}0\\ 00100\end{matrix}\right\},C_{6}=\left\{\begin{matrix}0\\ 00010\end{matrix}\right\},C_{7}=\left\{\begin{matrix}0\\ 00110\end{matrix}\right\}\text{where }X=C_{5}\sqcup C_{6}\sqcup C_{7}.\]
The addition table is given by
\[\begin{array}{c|ccccccc}&1&2&3&4&5&6&7\\ \hline 1&-&3&4&-&1&1&1\\ 2&3&-&-&-&2&2&2\\ 3&4&-&-&-&3&3&3\\ 4&-&-&-&-&-&-&-\\ 5&1&2&3&-&-&7&-\\ 6&1&2&3&-&7&-&-\\ 7&1&2&3&-&-&-&-\end{array}\]
**Example 4.14**.: Here is an inversion set \(\Phi\) in \(\mathbb{B}_{5}\) which is \(\Phi=\inf_{I}^{S}(\Psi,X)\) where \(I=\{\theta_{4},\theta_{5}\}\), \(\Psi=\{100,001,111,012,112\}\), which is primitive, and \(X=\{00010,00001,00011,00012\}\). The graph \(G_{\Phi}^{\Phi^{c}}\) has six components of which the first two are outside \(X\). Here
\[\Phi=\left\{\begin{array}{llll}10000,&00100,&00010,&00001,&00110,\\ 00011,&11100,&00111,&00012,&11110,\\ 00112,&11111,&00122,&11112,&11122,\\ 01222,&11222&\end{array}\right\}.\]
The first two components are:
\(C_{1}=\{10000,00100,00110,11100,00111,11110,00112,11111,00122,11112,11122,01222\}\), and \(C_{2}=\{11222\}\).
The last four components are
\(C_{3}=\{00010\}\), \(C_{4}=\{00001\}\), \(C_{5}=\{00011\}\), \(C_{6}=\{00012\}\), where \(X=C_{3}\sqcup C_{4}\sqcup C_{5}\sqcup C_{6}\).
The addition table is given by
\[\begin{array}{c|cccccc}&1&2&3&4&5&6\\ \hline 1&2&-&1&1&1&1\\ 2&-&-&-&-&-&-\\ 3&1&-&-&5&-&-\\ 4&1&-&5&-&6&-\\ 5&1&-&-&6&-&-\\ 6&1&-&-&-&-&-\end{array}\]
### Partial Order of Components
In this subsection, we define a partial order on the components of \(G_{\Phi}^{\Phi^{c}}\) in two equivalent ways: one using the notion of addition from Subsection 4.1 (Definition 4.16), and one using the usual order on roots (Proposition 4.21).
The following definition explains our convention for adding multiple components and omitting bracketing.
**Definition 4.15**.: Given components \(C_{1},C_{2},\ldots,C_{n}\), a _bracketed sum_ of \(C_{1},C_{2},\ldots,C_{n}\) is a sum in which the components appear in this order and which contains \(n-2\) pairs of parentheses and these pairs of parentheses are properly nested. Moreover every subsum of components in this expression must be defined. The _standard sum_ of \(C_{1},C_{2},\ldots,C_{n}\) is the bracketed sum \((\ldots((C_{1}+C_{2})+C_{3})+\ldots)+C_{n}\). We denote the standard sum by \(C_{1}+C_{2}+\ldots+C_{n}\).
As usual we denote the sum \(A+A+\cdots+A\) of \(k\) copies of \(A\) by \(kA\), when it is defined.
**Definition 4.16**.: Let \(A,B\) be components of \(G_{\Phi}^{\Phi^{c}}\) for some \(\Phi\). We say \(A\leqslant B\) if there exists a (possibly empty) collection of components \(C_{1},C_{2},\ldots,C_{n}\) such that \(A+C_{1}+\ldots+C_{n}=B\).
**Proposition 4.17**.: \(\leqslant\) _is a partial order on the components of \(G_{\Phi}^{\Phi^{c}}\)._
Proof.: Reflexivity and transitivity are clear from the definition. To see that \(\leqslant\) is antisymmetric, suppose \(A<B\) and \(B<A\). We may then write \(B=A+C_{1}+\ldots+C_{m}\) and \(A=B+C_{1}^{\prime}+\ldots+C_{n}^{\prime}\) with \(n,m>0\). Without loss of generality, we may take \(A+C_{1}+\ldots+C_{i}\neq A+C_{1}+\ldots+C_{i+1}\) and \(B+C_{1}^{\prime}+\ldots+C_{i}^{\prime}\neq B+C_{1}^{\prime}+\ldots+C_{i+1}^{\prime}\) for all \(i\). Starting with any root \(\alpha\in A\), Lemma 4.4(i) allows us to find a root \(\alpha+\gamma_{1}+\ldots+\gamma_{m}\in B\) where \(\gamma_{i}\in C_{i}\). Similarly, for any root \(\beta\in B\), we find a root \(\beta+\gamma_{1}^{\prime}+\ldots+\gamma_{n}^{\prime}\in A\) with \(\gamma_{i}^{\prime}\in C_{i}^{\prime}\). Given a root \(\alpha^{\prime}\in A\), we can thus find roots \(\beta^{\prime}\in B\) with \(\beta^{\prime}>\alpha^{\prime}\) and \(\alpha^{\prime\prime}\in A\) with \(\alpha^{\prime\prime}>\beta^{\prime}\). There are thus no maximal roots in \(A\), contradicting the fact that \(R\) is finite. We conclude that if \(A\leqslant B\) and \(B\leqslant A\), then \(A=B\).
**Example 4.18**.: Below are Hasse diagrams that correspond to the posets of components in Examples 4.12, 4.13, and 4.14, respectively.
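The order of Definition 4.16 can also be computed mechanically from an addition table. The following minimal Python sketch (with ad hoc names `add_table` and `above`; not part of the formal development) transcribes the table of Example 4.14 and lists, for each component, the components lying above it: by Definition 4.16, \(A\leqslant B\) exactly when \(B\) can be reached from \(A\) by repeatedly adding components.

```python
# Addition table of Example 4.14 (undefined entries omitted); keys are
# ordered pairs of component labels.
add_table = {
    (1, 1): 2,
    (1, 3): 1, (3, 1): 1, (1, 4): 1, (4, 1): 1,
    (1, 5): 1, (5, 1): 1, (1, 6): 1, (6, 1): 1,
    (3, 4): 5, (4, 3): 5, (4, 5): 6, (5, 4): 6,
}
labels = range(1, 7)

def above(A):
    # Components B with A <= B (Definition 4.16): B is reachable from A along
    # edges D -> D + X, over all components X for which D + X is defined.
    reach, stack = {A}, [A]
    while stack:
        D = stack.pop()
        for X in labels:
            E = add_table.get((D, X))
            if E is not None and E not in reach:
                reach.add(E)
                stack.append(E)
    return reach

for A in labels:
    print(A, sorted(above(A)))
```

For this table, component 2 comes out as the unique maximal element of the resulting poset.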
**Proposition 4.19**.: _Let \(A,B,C\) be components of \(G_{\Phi}^{\Phi^{c}}\) with \(A+B=C\). Then \(\operatorname{supp}C=\operatorname{supp}A\cup\operatorname{supp}B\)._
Proof.: First we show that \(\operatorname{supp}A\subseteq\operatorname{supp}C\). If \(A=C\), the result is immediate. Otherwise, for any \(\theta\in\operatorname{supp}A\), we pick \(\alpha\in A\) supported on \(\theta\). By Lemma 4.4 (i), there exists a \(\beta\in B\) such that \(\alpha+\beta\in C\). Since \(\theta\in\operatorname{supp}(\alpha+\beta)\), \(\theta\in\operatorname{supp}C\) as well, so \(\operatorname{supp}A\subseteq\operatorname{supp}C\). By symmetry, this also shows that \(\operatorname{supp}B\subseteq\operatorname{supp}C\), so that \(\operatorname{supp}A\cup\operatorname{supp}B\subseteq\operatorname{supp}C\).
To show that \(\operatorname{supp}C\subseteq\operatorname{supp}A\cup\operatorname{supp}B\) we note that the statement is trivial if \(C=A\) or \(C=B\). Otherwise, by Lemma 4.4 (ii), each \(\gamma\in C\) can be written as \(\alpha+\beta\) for \(\alpha\in A\) and \(\beta\in B\), so that if \(\theta\in\operatorname{supp}\gamma\), then \(\theta\in\operatorname{supp}\alpha\) or \(\theta\in\operatorname{supp}\beta\), which implies that \(\theta\in\operatorname{supp}A\cup\operatorname{supp}B\) and concludes the argument.
**Proposition 4.20**.: _Let \(A,B\) be components of \(G_{\Phi}^{\Phi^{c}}\) with \(A\leqslant B\). Then \(\operatorname{supp}A\subseteq\operatorname{supp}B\)._
Proof.: This follows immediately from Proposition 4.19.
**Proposition 4.21**.: \(A\leqslant B\) _if and only if there exist \(\alpha\in A,\beta\in B\) with \(\alpha\leqslant\beta\) in the usual sense._
Proof.: Suppose that \(A\leqslant B\) and \(\alpha\in A\). We construct a root \(\beta\in B\) such that \(\alpha\leqslant\beta\). Writing \(A+C_{1}+\ldots+C_{n}=B\) with \(A+C_{1}+\ldots+C_{i}\neq A+C_{1}+\ldots+C_{i+1}\) for all \(i\), we find through Lemma 4.4(i) a root \(\beta:=\alpha+\gamma_{1}+\ldots+\gamma_{n}\) with \(\gamma_{i}\in C_{i}\). We observe that \(\beta\in B\) and \(\alpha\leqslant\beta\). In
the trivial case where \(A=B\) this process simply chooses \(\beta=\alpha\). Conversely, we suppose that there is a pair \(\alpha\in A\) and \(\beta\in B\) with \(\alpha\leqslant\beta\). By definition, this means that there is some expression \(\beta=\alpha+\theta_{1}+\ldots+\theta_{n}\) with \(\theta_{i}\in S\). By Proposition 2.7, we may form a path \([\alpha;\kappa_{1},\ldots,\kappa_{n};\beta]\) with \(\kappa_{1},\kappa_{2},\ldots,\kappa_{n}\) being a permutation of \(\theta_{1},\theta_{2},\ldots,\theta_{n}\). By Proposition 2.10, we may reduce this path into a reduced path of the form \([\alpha;\mu_{1},\ldots,\mu_{m};\beta]\), with \(\mu_{i}>0\). Since Proposition 2.12 guarantees that any rearrangement of the steps in this path gives another valid path, we may first put steps in \(\Phi\) and then those in \(\Phi^{c}\), obtaining a path like the one below.
\[[\alpha;\underbrace{\nu_{1},\ldots,\nu_{k}}_{\nu_{i}\in\Phi},\underbrace{\nu _{k+1},\ldots,\nu_{m}}_{\nu_{i}\in\Phi^{c}};\beta]\]
First by closure, then by co-closure, the partial sums \(\alpha+\nu_{1}+\ldots+\nu_{i}\) are in \(\Phi\). Let \(\beta^{\prime}=\alpha+\nu_{1}+\cdots+\nu_{k}\). Observe that \(\beta^{\prime},\beta\) lie in the same component. Hence we have a reduced path of elements in \(\Phi\) of length \(k\) from a root in \(A\) to a root in \(B\). We proceed by induction on \(k\). If \(k=0\), then \(A=B\) and we are done. Assume that \(k\geqslant 1\). Let \(\gamma=\alpha+\nu_{1}+\cdots+\nu_{k-1}\), which is a root in \(\Phi\). Let this root be in a component \(D\). By induction, there are components \(C_{1},\ldots C_{s}\) such that \(D=A+C_{1}+\cdots+C_{s}\). We may further assume that \(A+C_{1}+\cdots+C_{i}\neq A+C_{1}+\cdots+C_{i}+C_{i+1}\) for all \(i\). If \(D=B\), then we are done. Otherwise, consider the component \(C\) of \(G_{\Phi}^{\Phi^{c}}\) containing \(\nu_{k}\). Since \(\gamma+\nu_{k}\) is a root in \(B\) and \(D\neq B\), this means that \(D+C=B\). That means that \(A+C_{1}+\cdots+C_{s}+C=B\). Hence, \(A\leqslant B\) as required.
### Further Properties of Component Addition
The addition of components is not associative, since one can find components \(A,B,C\) such that \(A+B\) and \((A+B)+C\) are defined, but \(B+C\) is not. However, we will prove below that weaker forms of associativity hold; cf. Proposition 4.23.
We will use the following Lemma in the proof of Proposition 4.24.
**Lemma 4.22**.: _Suppose that \(A,B,C,D\) are components of \(G_{\Phi}^{\Phi^{c}}\) with \(A\neq D\) and \(B\neq D\). Further suppose that \(n\) is minimal such that there exist \(\alpha^{\prime}\in A\), \(\beta^{\prime}\in B\), and \(\gamma^{\prime}\in C\) with both \(\alpha^{\prime}+\beta^{\prime}\in D\) and \(\alpha^{\prime}+\gamma^{\prime}\in D\), and a path \([\alpha^{\prime}+\beta^{\prime};\kappa^{\prime}_{1},\kappa^{\prime}_{2},\ldots,\kappa^{\prime}_{n};\alpha^{\prime}+\gamma^{\prime}]\) of length \(n\) in \(G_{\Phi}^{\Phi^{c}}\)._
_Then_
1. \(\beta^{\prime}+\kappa_{1}^{\prime}\) _is not a root,_
2. \(\gamma^{\prime}-\kappa_{1}^{\prime}\) _is not a root,_
3. \(\alpha_{1}^{\prime}:=\alpha^{\prime}+\kappa_{1}^{\prime}\in A\)__
4. \([\alpha_{1}^{\prime}+\beta^{\prime};\kappa_{2}^{\prime},\ldots,\kappa_{n}^{ \prime},\kappa_{1}^{\prime};\alpha_{1}^{\prime}+\gamma^{\prime}]\) _is a path in_ \(G_{\Phi}^{\Phi^{c}}\) _of length_ \(n\) _and_ \(\alpha_{1}^{\prime}+\beta^{\prime},\alpha_{1}^{\prime}+\gamma^{\prime}\in D\)_._
Proof.: Since \([\alpha^{\prime}+\beta^{\prime};\kappa_{1},\kappa_{2},\ldots,\kappa_{n}; \alpha^{\prime}+\gamma^{\prime}]\) is a path in \(G_{\Phi}^{\Phi^{c}}\), each difference \(\kappa_{i}^{\prime}\in\pm\Phi^{c}\). Define \(\delta_{0}^{\prime}=\alpha^{\prime}+\beta^{\prime}\) and \(\delta_{i}^{\prime}=\delta_{i-1}^{\prime}+\kappa_{i}\) for \(i=1,2,\ldots,n\). Then each \(\delta_{i}^{\prime}\in D\).
Consider the equation:
\[\delta_{1}^{\prime}-\alpha^{\prime}-\kappa_{1}^{\prime}=\beta^{\prime}\]
Since the right hand side is a root, at least one of \(\delta_{1}^{\prime}-\alpha^{\prime}\) or \(\alpha^{\prime}+\kappa_{1}^{\prime}\) is a root.
Assume by way of contradiction that \(\delta_{1}^{\prime}-\alpha^{\prime}\) is a root. If \(\delta_{1}^{\prime}-\alpha^{\prime}\in\pm\Phi^{c}\) then \(\delta_{1}^{\prime}\) and \(\alpha^{\prime}\) are in the same component. This contradiction shows \(\delta_{1}^{\prime}-\alpha^{\prime}\notin\pm\Phi^{c}\). If \(\delta_{1}^{\prime}-\alpha^{\prime}=\beta^{\prime}+\kappa_{1}^{\prime}\in-\Phi\) then \(-\kappa_{1}^{\prime}=(-\kappa_{1}^{\prime}-\beta^{\prime})+\beta^{\prime}\in\Phi\) by the closure of \(\Phi\). This contradiction shows \(\delta_{1}^{\prime}-\alpha^{\prime}\notin-\Phi\). If \(\delta_{1}^{\prime}-\alpha^{\prime}=\beta^{\prime}+\kappa_{1}^{\prime}\in\Phi\) then it lies in the component \(B\). Then we have a path \([\alpha^{\prime}+(\beta^{\prime}+\kappa_{1}^{\prime});\kappa_{2}^{\prime},\kappa_{3}^{\prime},\ldots,\kappa_{n}^{\prime};\delta_{n}^{\prime}=\gamma^{\prime}+\alpha^{\prime}]\) of length \(n-1\) in \(D\). By the minimality of \(n\)
this cannot happen and thus \(\delta^{\prime}_{1}-\alpha^{\prime}=\beta^{\prime}+\kappa^{\prime}_{1}\) is not a root. This proves (1). Moreover it implies that \(\alpha^{\prime}_{1}:=\alpha^{\prime}+\kappa^{\prime}_{1}\) is a root.
If \(\alpha^{\prime}+\kappa^{\prime}_{1}=\delta^{\prime}_{1}-\beta^{\prime}\in\pm\Phi^{c}\) then we find that \(\delta^{\prime}_{1}\) and \(\beta^{\prime}\) lie in the same component. This contradiction shows \(\alpha^{\prime}+\kappa^{\prime}_{1}\notin\pm\Phi^{c}\). If \(\alpha^{\prime}+\kappa^{\prime}_{1}\in-\Phi\) then \(-\kappa^{\prime}_{1}=(-\kappa^{\prime}_{1}-\alpha^{\prime})+\alpha^{\prime}\in\Phi\) by the closure of \(\Phi\). This contradiction shows \(\alpha^{\prime}+\kappa^{\prime}_{1}\notin-\Phi\). Hence we must have \(\alpha^{\prime}_{1}=\alpha^{\prime}+\kappa^{\prime}_{1}\in\Phi\) and thus \(\alpha^{\prime}_{1}\in A\), proving (3).
Consider the equation:
\[\delta^{\prime}_{n}-\alpha^{\prime}_{1}+\kappa^{\prime}_{1}=\gamma^{\prime}\]
Since the right hand side is a root at least one of \(\delta^{\prime}_{n}-\alpha^{\prime}_{1}\) or \(\delta^{\prime}_{n}+\kappa^{\prime}_{1}\) is a root.
Assume by way of contradiction that \(\delta^{\prime}_{n}-\alpha^{\prime}_{1}\) is a root. If \(\delta^{\prime}_{n}-\alpha^{\prime}_{1}\in\pm\Phi^{c}\) then \(\delta^{\prime}_{n}\) and \(\alpha^{\prime}_{1}\) are in the same component. This contradiction shows \(\delta^{\prime}_{n}-\alpha^{\prime}_{1}\notin\pm\Phi^{c}\). If \(\delta^{\prime}_{n}-\alpha^{\prime}_{1}=\gamma^{\prime}-\kappa^{\prime}_{1}\in-\Phi\) then \(\kappa^{\prime}_{1}=(\kappa^{\prime}_{1}-\gamma^{\prime})+\gamma^{\prime}\in\Phi\) by the closure of \(\Phi\). This contradiction shows \(\delta^{\prime}_{n}-\alpha^{\prime}_{1}\notin-\Phi\). If \(\delta^{\prime}_{n}-\alpha^{\prime}_{1}=\gamma^{\prime}-\kappa^{\prime}_{1}\in\Phi\) then it lies in the component \(C\). We now have a path \([\alpha^{\prime}_{1}+\beta^{\prime}=\delta^{\prime}_{1};\kappa^{\prime}_{2},\kappa^{\prime}_{3},\ldots,\kappa^{\prime}_{n};\delta^{\prime}_{n}=\alpha^{\prime}_{1}+(\gamma^{\prime}-\kappa^{\prime}_{1})]\) of length \(n-1\) in \(D\). By the minimality of \(n\), this cannot happen and thus \(\delta^{\prime}_{n}-\alpha^{\prime}_{1}=\gamma^{\prime}-\kappa^{\prime}_{1}\) is not a root. This proves (2). Moreover it implies that \(\delta^{\prime}_{n}+\kappa^{\prime}_{1}\) is a root.
The root \(\delta^{\prime}_{n}+\kappa^{\prime}_{1}=\gamma^{\prime}+\alpha^{\prime}_{1}\) lies in \(\Phi\) by the closure of \(\Phi\). Moreover it is an element of \(D\).
We now have a path \([\alpha^{\prime}_{1}+\beta^{\prime}=\delta^{\prime}_{1};\kappa^{\prime}_{2},\kappa^{\prime}_{3},\ldots,\kappa^{\prime}_{n},\kappa^{\prime}_{1};\delta^{\prime}_{n+1}=\gamma^{\prime}+\alpha^{\prime}_{1}]\) in \(D\) of length \(n\), with differences \(\kappa^{\prime}_{2},\kappa^{\prime}_{3},\ldots,\kappa^{\prime}_{n},\kappa^{\prime}_{1}\), which proves (4).
As noted above, addition of components is not associative, but the following weaker statements hold.
**Proposition 4.23**.: _Let \(A,C_{1},C_{2},\ldots,C_{n}\) be components of \(G_{\Phi}^{\Phi^{c}}\)._
1. _Suppose that_ \(C_{1}+C_{2}+C_{3}\) _is defined. Then there exist distinct_ \(r,s\) _with_ \(\{r,s\}\neq\{1,2\}\) _such that_ \(C_{r}+C_{s}\) _is defined. Furthermore, we can pick such_ \(r,s\) _with the property that_ \((C_{r}+C_{s})+C_{t}=C_{1}+C_{2}+C_{3}\) _where_ \(t\in\{1,2,3\}\backslash\{r,s\}\)_._
2. _Suppose a bracketed sum of components_ \(C_{1},C_{2},\ldots,C_{n}\) _exists and equals_ \(A\)_. Then there is a permutation_ \(\{i_{1},i_{2},\ldots,i_{n}\}\) _of_ \(\{1,2,\ldots,n\}\) _such that_ \(A=C_{i_{1}}+C_{i_{2}}+\ldots+C_{i_{n}}\)_._
Proof.: First we prove (i). Let \(\alpha_{i}\in C_{i}\) for \(i=1,2,3\) be such that both \(\alpha_{1}+\alpha_{2}\in C_{1}+C_{2}\) and \(\delta=(\alpha_{1}+\alpha_{2})+\alpha_{3}\in C_{1}+C_{2}+C_{3}\) are roots. By Lemma 2.4 there exist at least two distinct unordered pairs \(\{i,j\}\), \(\{j,k\}\subset\{1,2,3\}\) such that both \(\alpha_{i}+\alpha_{j}\) and \(\alpha_{j}+\alpha_{k}\) are roots. Thus both \(C_{i}+C_{j}\) and \(C_{j}+C_{k}\) are defined. The remainder of the proof of (i) considers several cases and subcases.
Case A. Assume first that the \(C_{i}\) are pairwise distinct. Since \((C_{1}+C_{2})+C_{3}\) is defined, there are roots \(\alpha_{i}\in C_{i}\) with \(\alpha_{1}+\alpha_{2}\in C_{1}+C_{2}\) and \(\alpha_{1}+\alpha_{2}+\alpha_{3}\in(C_{1}+C_{2})+C_{3}\). That includes the case where \(C_{1}+C_{2}=C_{3}\) and \(C_{3}+C_{3}=C_{3}\). Now, either \(\alpha_{1}+\alpha_{3}\) or \(\alpha_{2}+\alpha_{3}\) is a root. Without loss of generality, assume that the first case occurs. Then \(C_{1}+C_{3}\) is defined. Since \(C_{1}\neq C_{3}\), we know that \(\alpha_{1}+\alpha_{3}\in C_{1}+C_{3}\). The fact that \((\alpha_{1}+\alpha_{3})+\alpha_{2}\) is a root shows that \((C_{1}+C_{3})+C_{2}\) is defined. If \(C_{1}+C_{3}\neq C_{2}\), then it is clear that we get \((C_{1}+C_{3})+C_{2}=C_{1}+C_{2}+C_{3}\), as required. Assume that \(C_{1}+C_{3}=C_{2}\). In this case, \(C_{1}+C_{2}\neq C_{3}\), as it would otherwise yield \(C_{2}\leq C_{3}\leq C_{2}\), a contradiction. We take \(\beta_{2}\in C_{2}\) maximal such that there is \(\beta_{1}\in C_{1}\) with \(\beta_{1}+\beta_{2}\in C_{1}+C_{2}\), and then we pick \(\beta_{3}\in C_{3}\) such that \(\beta_{1}+\beta_{2}+\beta_{3}\in C_{1}+C_{2}+C_{3}\).
Subcase A.1. If \(\beta_{1}+\beta_{3}\) is a root, then \(\beta_{1}+\beta_{3}\in(C_{1}+C_{3})=C_{2}\). However, we know that \((\beta_{1}+\beta_{3})+\beta_{2}\) is defined and by maximality of \(\beta_{2}\), we know that \(C_{2}+C_{2}\neq C_{2}\). In this case, we get \((C_{1}+C_{3})+C_{2}=C_{1}+C_{2}+C_{3}\).
Subcase A.2. If \(\beta_{1}+\beta_{3}\) is not a root, then \(\beta_{2}+\beta_{3}\) is a root. In this case, \(C_{2}+C_{3}\) is defined. As above, the conclusion follows if \(C_{2}+C_{3}\neq C_{1}\). Otherwise, we get \(C_{1}\leq C_{2}\leq C_{1}\), a contradiction.
Now, assume that two of the \(C_{i}\) coincide. The conclusion is clear if they are all the same. Denote by \(A\) the component that appears twice and by \(B\) the remaining component. We consider the cases where the sums are \(A+A+B\), \(A+B+A\), or \(B+A+A\).
Case B. Consider first the case \(A+A+B\).
Subcase B.1. Assume that \(A+A=A\). The expression \(A+B\) is defined, and we need to prove that \(A+A+B=A+(A+B)\). If \(A+B=A\), then the result follows. Otherwise, we can find a root of the form \(\alpha_{1}+\alpha_{2}\) in \(A+A\) and a root \(\beta\) in \(B\) such that \(\alpha_{1}+\alpha_{2}+\beta\) is a root. Now, one of \(\alpha_{1}+\beta,\alpha_{2}+\beta\) is a root. In the first case, \((\alpha_{1}+\beta)+\alpha_{2}\) being a root and \(A+B\neq A\) shows what we want. Similarly in the other case.
Subcase B.2. Assume that \(A+A=2A\neq A\). We can pick \(\alpha_{1},\alpha_{2}\) and \(\beta\) such that \(\alpha_{1}+\alpha_{2}\in 2A\) and \(\alpha_{1}+\alpha_{2}+\beta\in A+A+B\). That includes the case where \(B=2A\). The 2 out of 3 rule shows that \(A+B\) is defined and \(\alpha_{1}+\beta\) or \(\alpha_{2}+\beta\) lies in \(A+B\). Assume the first case. The only problematic case is when \(A+B=A\). But then \((\alpha_{1}+\beta)+\alpha_{2}\) is a sum of two roots in \(A\). If it is not in \(2A\), then it is in \(A\) but this would yield \(2A<A\), a contradiction.
Case C. We have \(B+A+A\). This case is the same as case B up to relabeling.
Case D. The only remaining case is \(A+B+A\). Obviously, we have that \(B+A\) is defined and \((B+A)+A=(A+B)+A=A+B+A\), as wanted.
This finishes the proof of (i).
Now we prove (ii). Let \(A=X+Y\), with \(X\) being a bracketed sum of the components \(C_{1},\dots,C_{k}\) and \(Y\) being a bracketed sum of the components \(C_{k+1},\dots,C_{n}\). Induction on \(n\), \(n=1\) being trivial. Assume the statement is true for fewer than \(n\) components. Induction on \(j:=\min(k,n-k)\). The case \(j=1\) follows from the induction on \(n\). Assume \(j>1\) and, without loss of generality, \(j=k\). Then \(X=X^{\prime}+X^{\prime\prime}\) and, by (i) we have
\[X+Y=(X^{\prime}+X^{\prime\prime})+Y=(X^{\prime}+Y)+X^{\prime\prime}\text{ or }(X^{\prime\prime}+Y)+X^{\prime}\;.\]
Since both \(X^{\prime}\) and \(X^{\prime\prime}\) involve fewer components than \(X\), the statement follows by induction on \(j\).
The addition of components lacks some of the usual properties of addition. For example, \(A+B=A+C\) does not necessarily imply that \(B=C\). In Example 4.13, \(C_{1}+C_{5}=C_{1}+C_{6}=C_{1}+C_{7}=C_{1}\). However, the following weaker cancellation rules apply:
**Proposition 4.24**.: _Let \(A,B,C\) be components of \(G_{\Phi}^{\Phi^{c}}\)._
1. _If_ \(A+B=A+C\neq A\)_, then_ \(B=C\)_._
2. _If_ \(A+B+C=A+B\)_, then_ \(A+C=A\) _or_ \(B+C=B\)_._
3. _If_ \(A+B+C=B+C\)_, then_ \(A+B=B\) _or_ \(B+C=C\)
Proof.: First we prove (i). Let \(D\coloneqq A+B=A+C\neq A\). If \(B=C\) we are done, and thus without loss of generality we may assume \(B\neq D\). Fix \(\alpha\in A\). By Lemma 4.4 there exist \(\beta\in B\) and \(\gamma\in C\) such that both \(\alpha+\beta\) and \(\alpha+\gamma\) are elements of \(D\). Moreover there exists a path \([\alpha+\beta;\kappa_{1},\kappa_{2},\ldots,\kappa_{n};\alpha+\gamma]\) in \(D\). Each of the differences \(\kappa_{i}\in\pm\Phi^{c}\). Put \(\delta_{0}\coloneqq\alpha+\beta\) and \(\delta_{i}=\delta_{i-1}+\kappa_{i}\) for \(i=1,2,\ldots,n\). We are required to prove that \(\beta\) and \(\gamma\) are connected by a path in the graph \(G_{\Phi}^{\Phi^{c}}\).
We proceed by induction on \(n\), the length of the path in \(G_{\Phi}^{\Phi^{c}}\) joining \(\alpha+\beta\) to \(\alpha+\gamma\). If \(n=0\), then \(\beta=\gamma\) and the result is true.
The induction hypothesis is that \(B=C\) whenever there exist \(\alpha^{\prime}\in A\), \(\beta^{\prime}\in B\), and \(\gamma^{\prime}\in C\) such that there is a path in \(G_{\Phi}^{\Phi^{c}}\) of length less than \(n\) joining \(\alpha^{\prime}+\beta^{\prime}\in D\) to \(\alpha^{\prime}+\gamma^{\prime}\in D\).
If there exist \(\alpha^{\prime}\in A\), \(\beta^{\prime}\in B\), and \(\gamma^{\prime}\in C\) such that there is a path in \(G_{\Phi}^{\Phi^{c}}\) of length less than \(n\) joining \(\alpha^{\prime}+\beta^{\prime}\in D\) to \(\alpha^{\prime}+\gamma^{\prime}\in D\), then by the induction hypothesis we are done. We assume by way of contradiction that no such path exists. This means that Lemma 4.22 applies.
Thus both \(\beta+\kappa_{1}\) and \(\gamma-\kappa_{1}\) are not roots. Furthermore \([\beta+\alpha_{1};\kappa_{2},\kappa_{3},\ldots,\kappa_{n},\kappa_{1};\gamma+ \alpha_{1}]\) is a path in \(D\) of length \(n\) where \(\alpha_{1}\coloneqq\alpha+\kappa_{1}\).
We may apply Lemma 4.22 to this new path. Thus both \(\beta+\kappa_{2}\) and \(\gamma-\kappa_{2}\) are not roots and \([\beta+\alpha_{2};\kappa_{3},\ldots,\kappa_{n},\kappa_{1},\kappa_{2};\gamma+\alpha_{2}]\) is a path in \(D\) of length \(n\), where \(\alpha_{2}\coloneqq\alpha_{1}+\kappa_{2}=\alpha+\kappa_{1}+\kappa_{2}\).
Applying Lemma 4.22 successively \(n\) times, we discover that both \(\beta+\kappa_{i}\) and \(\gamma-\kappa_{i}\) are not roots for all \(i=1,2,\ldots,n\).
Since \(\alpha+\beta+\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}=\alpha+\gamma\) we have \(\beta+\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}=\gamma\). By Proposition 2.3 we must have that \(\kappa_{1}+\kappa_{2}+\cdots+\kappa_{n}\) is a root. Hence Proposition 2.6 applies. This implies that there is some \(i\) such that either \(\beta+\kappa_{i}\) or \(\gamma-\kappa_{i}\) is a root. This contradiction shows that a path satisfying the induction hypothesis must exist and thus \(B=C\).
We now prove (ii). If \(A+B=A\) this becomes \(A+C=A\). Similarly if \(A+B=B\) this becomes \(B+C=B\). Therefore we may suppose that \(A+B\neq A\) and \(A+B\neq B\). By Proposition 4.23 (i), either \((A+C)+B=A+B\) or \(A+(B+C)=A+B\). By (i) we can cancel \(B\) in the former case and \(A\) in the latter to obtain either \(A+C=A\) or \(B+C=B\) as required.
Now we prove the third assertion (iii). Suppose that \(A+B+C=B+C\). If \(B+C=C\), there is nothing to prove. If \(B+C\neq C\), then we can use part (i) to cancel \(C\) from this equation to get \(A+B=B\), as required.
### Simple Components
**Definition 4.25**.: A component \(C\) of \(G_{\Phi}^{\Phi^{c}}\) is simple if, whenever \(A+B=C\), \(A=C\) or \(B=C\).
The next proposition tells us that the notion of simple for component addition is compatible with the same notion for their underlying QRS.
**Proposition 4.26**.: _A component \(C\) of \(G_{\Phi}^{\Phi^{c}}\) is simple if and only if it contains a simple root._
Proof.: \((\implies)\): Suppose \(C\) is a simple component. Let \(\gamma\in C\) be non-simple. Since \(\gamma\) is not simple, there exist positive roots \(\alpha,\beta\) such that \(\alpha+\beta=\gamma\). If one of \(\alpha,\beta\) is in \(\Phi^{c}\), we would have that the other is in \(\Phi\), and in particular, in \(C\). If instead \(\alpha,\beta\) are both in \(\Phi\), they belong to components \(A,B\) respectively with \(A+B=C\) or \(A=B=C\) (Remark 4.10). Since \(C\) is simple, either \(A\) or \(B\) is \(C\). In either case, there is a root in \(C\) strictly smaller than \(\gamma\). By induction, \(C\) contains a simple root.
\((\Leftarrow)\): Suppose that \(C\) is a component which contains a simple root, but is not simple. Let \(\theta\in C\) be a simple root and \(A,B\) be components not equal to \(C\) such that \(A+B=C\). By Lemma 4.4(ii), there exist \(\alpha\in A\) and \(\beta\in B\) such that \(\alpha+\beta=\theta\), a contradiction.
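Simplicity can likewise be tested against an addition table. The following minimal Python sketch (ad hoc names, not part of the formal development) applies Definition 4.25 to the table of Example 4.14; the components it reports as simple, namely 1, 3, and 4, are exactly the ones of that example containing a simple root, as the proposition above predicts.

```python
# Addition table of Example 4.14 (undefined entries omitted).
add_table = {
    (1, 1): 2,
    (1, 3): 1, (3, 1): 1, (1, 4): 1, (4, 1): 1,
    (1, 5): 1, (5, 1): 1, (1, 6): 1, (6, 1): 1,
    (3, 4): 5, (4, 3): 5, (4, 5): 6, (5, 4): 6,
}
labels = range(1, 7)

def is_simple(C):
    # Definition 4.25: C is simple unless A + B = C for some A != C and B != C.
    return not any(add_table.get((A, B)) == C
                   for A in labels for B in labels
                   if A != C and B != C)

print([C for C in labels if is_simple(C)])  # components 1, 3 and 4
```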
**Proposition 4.27**.: _There is at most one simple component \(C\) of \(G_{\Phi}^{\Phi^{c}}\) with \(\operatorname{supp}C=\operatorname{supp}\Phi\)._
Proof.: Suppose \(C_{1}\) and \(C_{2}\) are simple components in \(G_{\Phi}^{\Phi^{c}}\) with \(\operatorname{supp}C_{1}=\operatorname{supp}C_{2}=\operatorname{supp}\Phi\). By Proposition 4.26, there is a simple root \(\theta\in C_{1}\), and since \(C_{2}\) is of full support in \(\Phi\), there exists a root \(\alpha\in C_{2}\) supported on \(\theta\), that is \(\theta\leqslant\alpha\). This gives us by Proposition 4.21 that \(C_{1}\leqslant C_{2}\). Symmetrically, \(C_{2}\leqslant C_{1}\), and we conclude that \(C_{1}=C_{2}\).
**Proposition 4.28**.: _Any component \(A\) can be written as a standard sum of simple components, (cf. Definition 4.15)._
Proof.: If \(A\) is simple, we are done. Otherwise there exist components \(B\) and \(C\) such that \(A=B+C\) with \(B<A\) and \(C<A\). Using induction and Proposition 4.23 (ii), the result is clear.
## 5. Proof of Main Theorem
### Primitive implies Irreducible
**Proposition 5.1**.: _Let \(\Phi\) be an inversion set. If \(C\) is a component of \(G_{\Phi}^{\Phi^{c}}\), then \(\operatorname{supp}C\in\operatorname{Gen}(\Phi)\)._
Proof.: Let \(I=\operatorname{supp}C\) and \(D=\bigcup_{C^{\prime}\leqslant C}C^{\prime}\). It follows from Proposition 4.20 that \(\operatorname{supp}D=I\). Next, suppose \(\alpha+\beta\in D\) for some \(\alpha,\beta\in R^{+}\). We will show that either \(\alpha\) or \(\beta\) is in \(D\) to prove that \(D\) is co-closed. Since \(D\subseteq\Phi\) and \(\Phi\) is co-closed, either \(\alpha\) or \(\beta\) is in \(\Phi\), so without loss of generality we let \(\alpha\in\Phi\). We then give the labels \(C_{\alpha+\beta}\) and \(C_{\alpha}\) to the components respectively containing \(\alpha+\beta\) and \(\alpha\). Since \(\alpha\leqslant\alpha+\beta\), we have by Proposition 4.21 that \(C_{\alpha}\leqslant C_{\alpha+\beta}\leqslant C\), so \(C_{\alpha}\subseteq D\), \(\alpha\in D\), and \(D\) is co-closed.
Having constructed the co-closed set \(D\), we use it to prove our result. We suppose toward a contradiction that there exist \(\gamma,\delta\in R^{+}\) with \(\pi_{I}(\gamma)=\pi_{I}(\delta)\neq 0\), \(\gamma\in\Phi\), and \(\delta\in\Phi^{c}\). By Proposition 2.14, \(\delta-\gamma\in\operatorname{span}_{\mathbb{Z}}(D)\), and so Proposition 2.7 guarantees a path \([\gamma;\kappa_{1},\ldots,\kappa_{n};\delta]\) with \(\kappa_{i}\in\pm D\). Since \(\gamma\in\Phi\), \(\delta\in\Phi^{c}\), and all partial sums are positive, there will be some \(i\) such that the \(i\)-th partial sum \(\gamma_{i}\) is in \(\Phi\) while the \((i+1)\)-th partial sum \(\gamma_{i+1}\) is in \(\Phi^{c}\). We would then have \(\gamma_{i}+\kappa_{i+1}=\gamma_{i+1}\). We note that \(\kappa_{i+1}\) must be negative since \(\Phi\) is closed and \(\gamma_{i+1}\in\Phi^{c}\). But then \(\gamma_{i}-(-\kappa_{i+1})=\gamma_{i+1}\), so that \(\gamma_{i}\) and \(-\kappa_{i+1}\) are linked in \(G_{\Phi}^{\Phi^{c}}\). Since \(-\kappa_{i+1}\in D\) and \(D\) is a union of components of \(G_{\Phi}^{\Phi^{c}}\), this implies \(\gamma_{i}\in D\), which is not possible because \(\pi_{I}(\gamma_{i})\notin\pi_{I}(D)=\{0\}\).
**Theorem 5.2**.: _Let \(\Phi\) be an inversion set. The following are equivalent:_
1. \(\Phi\) _is irreducible._
2. _Every component_ \(C\) _of_ \(G_{\Phi}^{\Phi^{c}}\) _has_ \(\operatorname{supp}C=\operatorname{supp}\Phi\)_._
3. \(G_{\Phi}^{\Phi^{c}}\) _has a unique simple component._
4. _There exists a component_ \(A\) _of_ \(G_{\Phi}^{\Phi^{c}}\) _such that for all other components_ \(C\) _of_ \(G_{\Phi}^{\Phi^{c}}\)_,_ \(C=kA\) _for some_ \(k\geqslant 1\)_._
Proof.: We show that \((i)\implies(ii)\) by contrapositive. If there were a component \(C\) with \(\operatorname{supp}C\subsetneq\operatorname{supp}\Phi\), then \(\Phi=\inf_{\operatorname{supp}C}^{S}(\Psi,X)\) by Proposition 5.1. Because \(\operatorname{supp}C\neq\operatorname{supp}\Phi\), \(\Psi\) is non-empty. Since \(C\subset X\), \(X\) is also non-empty, and thus \(\Phi\) is reducible. That \((ii)\implies(iii)\) follows immediately from Proposition 4.27. \((iii)\implies(iv)\) because by Proposition 4.28 every component can be written as the sum of simple components, and by \((iii)\) there is a unique simple component. To see that \((iv)\implies(i)\), suppose that \(\Phi=\Phi_{1}\sqcup\Phi_{2}\) and, without loss of generality, let \(A\subseteq\Phi_{1}\). By closure of \(\Phi_{1}\), every other component also belongs to \(\Phi_{1}\), so \(\Phi_{1}=\Phi\), \(\Phi_{2}=\emptyset\), and \(\Phi\) is irreducible.
**Corollary 5.3**.: _Primitive inversion sets are irreducible._
### Proof of the Main Theorem
**Proposition 5.4**.: _Let \(A,B\) be components of \(G_{\Phi}^{\Phi^{c}}\) such that \(A+B\) is defined. If \(I\in\operatorname{Gen}(A)\cap\operatorname{Gen}(B)\), then \(I\in\operatorname{Gen}(A+B)\)._
Proof.: If \(A+B\) is equal to \(A\) or \(B\) the result is immediate. Otherwise, for any \(\gamma\in A+B\) such that \(\pi_{I}(\gamma)\neq 0\) and \(\theta\in I\), we show that \(\gamma\pm\theta\in A+B\) given that \(\gamma\pm\theta\in R\). To that end, we apply Lemma 4.4 (ii) and write \(\gamma=\alpha+\beta\) for \(\alpha\in A\), \(\beta\in B\). Then \(\gamma\pm\theta=\alpha+\beta\pm\theta\). Using Lemma 2.4, we conclude that either \(\beta\pm\theta\) is a root or \(\alpha\pm\theta\) is a root, provided that \(\alpha+\beta\neq 0\), which is true because both \(\alpha,\beta>0\). We can therefore take \(\alpha\pm\theta\in R\) without loss of generality. Note that \(\pi_{I}(\alpha)\neq 0\), since otherwise \(\beta\) and \(\alpha+\beta\) would be in the same fibre, and since \(I\in\operatorname{Gen}(B)\), we would have \(A+B=B\). This means that \(\alpha\pm\theta\in A\), since \(I\in\operatorname{Gen}(A)\), and therefore \((\alpha\pm\theta)+\beta\in A+B\). We thus conclude that \(I\in\operatorname{Gen}(A+B)\).
**Proposition 5.5**.: _Let \(\Phi=\inf_{I}^{S}(\Psi,X)\) be expressed canonically with \(\Psi\) primitive. There is a unique simple component of full support in \(G_{\Phi}^{\Phi^{c}}\), while all other simple components are contained in \(X\)._
Proof.: Pick a simple root \(\theta\in\inf_{I}^{S}(\Psi,\emptyset)\) and let \(C\) be the component containing \(\theta\) in \(G_{\Phi}^{\Phi^{c}}\). By Proposition 5.1, \(\operatorname{supp}C\in\operatorname{Gen}(\Phi)\), and by Proposition 3.19, either \(\operatorname{supp}C\subseteq I\) or \(\operatorname{supp}C=S\). Since \(\theta\in\operatorname{supp}C\) but \(\theta\notin I\), we conclude that \(C\) is of full support. Because we chose \(\theta\) arbitrarily, it follows that any component containing a simple root of \(\inf_{I}^{S}(\Psi,\emptyset)\) is of full support, and by Proposition 4.27 there is only one such component. All other simple components therefore must be contained in \(X\).
**Proposition 5.6**.: _Let \(\Phi\) be an inversion set canonically inflated from \(I\). Then all components of \(G_{\Phi}^{\Phi^{c}}\) are inflated from \(I\)._
Proof.: We write \(\Phi=\inf_{I}^{S}(\Psi,X)\) and split the argument into three mutually exclusive cases.
1. \(\Psi=\emptyset\): Here the result is trivial, since \(\operatorname{supp}C\subseteq\operatorname{supp}\Phi=I\) for every component \(C\) of \(G_{\Phi}^{\Phi^{c}}\), so that \(I\in\operatorname{Gen}(C)\).
2. \(\Psi=(R/I)^{+}\): Since \(I\) is canonical, \(\operatorname{supp}\Phi^{c}=I\). This implies that each component of \(G_{\Phi}^{\Phi^{c}}\) is contained in a single fibre of \(\pi_{I}\). To show that each is inflated from \(I\) we need only to show that those components that are contained in a fibre over a non-zero root of
\(R/I\) are exactly the fibre that contains them. This is a consequence of Propositions 2.14 and 2.7, that state respectively that roots in \(\Phi^{c}\) span the difference between any two roots \(\alpha,\beta\) in the same fibre, and that roots of \(\Phi^{c}\) can therefore be used to construct a path between \(\alpha\) and \(\beta\). This path stays in the same fibre and therefore in \(\Phi\), so that components of \(G_{\Phi}^{\Phi^{c}}\) are either fibres of \(\pi_{I}\) over non-zero elements or contained in the zero fibre. In both cases, they are inflated from \(I\).
3. \(\Psi\) is primitive: Let \(C\) be the unique simple component of full support defined in Proposition 5.5. Since all other simple components are contained in \(X\), we label them \(X_{1},X_{2},\ldots,X_{n}\). Each \(X_{i}\) contains a simple root \(\theta_{i}\) such that \(C\) is supported at \(\theta_{i}\). It follows that \(X_{i}\leqslant C\) for all \(i\), and so \(X_{i}+A_{1}+\ldots+A_{n}=C\) for some components \(A_{1},A_{2},\ldots,A_{n}\). If all \(A_{j}\subseteq X\), then \(\operatorname{supp}C\subseteq I\), which we know is not the case. Thus, there is some \(A_{k}\) that is not contained in \(X\). It is clear that \(A_{k}\leqslant C\), but because \(C\) is minimal among components not contained in \(X\), we must have that \(A_{k}=C\). We consider now the sum \(X_{i}+A_{1}+\ldots+A_{k-1}+C\). Once again, it is clear that \(C\leqslant X_{i}+A_{1}+\ldots+A_{k-1}+C\), but because \((X_{i}+A_{1}+\ldots+A_{k-1}+C)+A_{k+1}+\ldots+A_{n}=C\), we also have that \(X_{i}+A_{1}+\ldots+A_{k-1}+C\leqslant C\), from which we conclude that \(X_{i}+A_{1}+\ldots+A_{k-1}+C=C\). We thus have \(X_{i}+A_{1}+\cdots+A_{k-1}+C=C\) with \(X_{i}\subseteq X\), and we may assume that for each \(j>1\), \(X_{i}+A_{1}+\cdots+A_{j-1}\neq X_{i}+A_{1}+\cdots+A_{j}\). It follows from Lemma 4.4 that there exist \(\kappa_{i}\in A_{i}\) and \(\gamma\in C\) such that we have a path \([\alpha;\kappa_{1},\ldots,\kappa_{k-1},\gamma;\beta]\) where \(\alpha\in X_{i}\) and \(\beta\in C\). We proceed by induction on \(k\). If \(k=2\), then \(X_{i}+A_{1}=C\) forces \(A_{1}\) to have support outside of \(I\). Hence \(A_{1}=tC\) for some \(t\). If \(A_{1}\neq C\), this would yield \(tC=A_{1}<C\), a contradiction. Hence, \(X_{i}+C=C\). Now, assume \(k\geqslant 3\). In particular, we have \[(\alpha+\kappa_{1}+\cdots+\kappa_{k-2})+\kappa_{k-1}+\gamma=\beta\] where all three terms on the left-hand side are roots. It follows that either \(\kappa_{k-1}+\gamma\) is a root or that \((\alpha+\kappa_{1}+\cdots+\kappa_{k-2})+\gamma\) is a root. In the first case, \(\kappa_{k-1}+\gamma\) belongs to a component larger than or equal to \(C\), hence to \(tC\) for some \(t\geqslant 1\). Given that adding \(\alpha+\kappa_{1}+\cdots+\kappa_{k-2}\) gives a root in \(C\), this yields \(tC\leqslant C\), so that \(tC=C\). Hence, \(\kappa_{k-1}+\gamma=\gamma^{\prime}\) for some \(\gamma^{\prime}\in C\), and we proceed by induction. Assume the second case, that is, that \((\alpha+\kappa_{1}+\cdots+\kappa_{k-2})+\gamma\) is a root. Again, this root has to be in a component of the form \(tC\) for some \(t\geqslant 1\), but we get \(tC\leqslant C\), showing that \(tC=C\). Hence, we can use induction as well. With this knowledge, we show that \(I\in\operatorname{Gen}(C)\). To that end, let \(\gamma\in C\) and \(\theta\in\pm I\) such that \(\gamma+\theta\in R\). First, we note that \(C\subseteq\inf_{I}^{S}(\Psi,\emptyset)\subseteq\Phi\), so that \(\gamma+\theta\in\Phi\). If \(\theta\in\pm\Phi^{c}\), then \(\gamma+\theta\) is adjacent to \(\gamma\) in \(G_{\Phi}^{\Phi^{c}}\), so \(\gamma+\theta\in C\). If \(\theta\in\Phi\), then the component \(X_{\theta}\) of \(G_{\Phi}^{\Phi^{c}}\) containing \(\theta\) is simple and contained in \(X\), so that \(\gamma+\theta\in X_{\theta}+C=C\). Otherwise, if \(\theta\in-\Phi\), \(\gamma+\theta\leqslant\gamma\) and so the component containing it is less than or equal to \(C\).
Since \(C\) is minimal among components not contained in \(X\) and \(\pi_{I}(\gamma+\theta)=\pi_{I}(\gamma)\neq 0\), \(\gamma+\theta\in C\). We therefore find that \(I\in\operatorname{Gen}(C)\). Note that this means that all simple components are inflated from \(I\), since \(I\in\operatorname{Gen}(C)\) and all other simple components are contained in \(X\subseteq R_{I}^{+}\). Since all components are sums of simple components by Proposition 4.28 and sums of inflations from \(I\) are inflations from \(I\) by Proposition 5.4, every component is inflated from \(I\).
Since the three preceding cases cover all possibilities for \(\Psi\) in a canonical expression, the proof is complete.
**Corollary 5.7**.: _Suppose \(\inf_{I}^{S}(\Psi,X)=\Phi_{1}\sqcup\Phi_{2}\) with \(I\) canonical. Then \(I\in\operatorname{Gen}(\Phi_{1})\cap\operatorname{Gen}(\Phi_{2})\)._
**Remark 5.8**.: Note that if \(R\) is a QRS with connected components \(R_{1},R_{2},\ldots,R_{l}\), every decomposition of \(R^{+}\) into inversion sets yields a decomposition of each \(R_{i}\) into inversion sets and vice-versa. In particular, Theorem 5.9 provides a recursive description of all decompositions of a QRS into inversion sets.
**Theorem 5.9**.: _Let \(R\) be connected and \(R^{+}=\Phi_{1}\sqcup\Phi_{2}\sqcup\cdots\sqcup\Phi_{n}\) for inversion sets \(\Phi_{1},\Phi_{2},\ldots,\Phi_{n}\) where \(\Phi_{1}\) contains the highest root of \(R\). If \(\Phi_{1}=\inf_{I}^{S}(\Psi,X_{1})\) when expressed canonically, then up to a relabelling, \(\Phi_{2}=\inf_{I}^{S}(\Psi^{c},X_{2})\) and \(\Phi_{i}=\inf_{I}^{S}(\emptyset,X_{i})\) for \(i>2\)._
Proof.: We first note that \(\Psi\neq\emptyset\): if it were empty, then \(\operatorname{supp}\Phi_{1}=I\subsetneq S\), and \(\Phi_{1}\) would contain no roots of full support, contradicting the assumption that \(\Phi_{1}\) contains the highest root. We therefore split our argument into the two remaining cases:
1. \(\Psi=(R/I)^{+}\): In this case the result is clear, since \(\Phi_{i}\subseteq\Phi_{1}^{c}=\inf_{I}^{S}(\emptyset,X_{1}^{c})\) for each \(i\geqslant 2\).
2. \(\Psi\) is primitive: Note that \(\inf_{I}^{S}(\Psi^{c},X_{1}^{c})=\Phi_{1}^{c}=\Phi_{2}\sqcup\Phi_{3}\sqcup \ldots\sqcup\Phi_{n}\), so by Corollary 5.7, \(I\in\operatorname{Gen}(\Phi_{i})\) for each \(i\geqslant 2\). Writing \(\Phi_{i}=\inf_{I}^{S}(\Xi_{i},X_{i})\), we find that \(\Psi^{c}=\Xi_{2}\sqcup\Xi_{3}\sqcup\ldots\sqcup\Xi_{n}\). Since \(\Psi^{c}\) is primitive and therefore irreducible by Corollary 5.3, all but one of the \(\Xi_{i}\) are empty. Labelling the non-empty set \(\Xi_{2}\), we find that \(\Phi_{2}=\inf_{I}^{S}(\Psi^{c},X_{2})\) and \(\Phi_{i}=\inf_{I}^{S}(\emptyset,X_{i})\) for \(i>2\).
## 6. Fine decompositions of root systems
Recall that a decomposition \(R^{+}=\Phi_{1}\sqcup\Phi_{2}\sqcup\cdots\sqcup\Phi_{n}\) of the positive roots of a QRS \(R\) is said to be fine if \(n=\operatorname{rk}R\) and each inversion set is non-empty.
The aim of this section is to use Theorem 5.9 to count the number of fine decompositions of the classical root systems. The paper [USRA] derived formulae for the number of fine decompositions of \(\mathbb{A}_{n}\) and \(\mathbb{B}_{n}\) but not for \(\mathbb{D}_{n}\). With the hindsight of the more natural definition of inflation we see that part of the difficulty arises from the fact that there are multiple types of rank 2 quotients of \(\mathbb{D}_{n}\), which have different numbers of primitive inversion sets. As we will see, Theorem 5.9 allows us to treat each root system with a uniform approach. We begin by introducing a few relevant propositions.
**Proposition 6.1**.: _Let \(R^{+}=\Phi_{1}\sqcup\Phi_{2}\sqcup\cdots\sqcup\Phi_{n}\) be a fine decomposition of a QRS \(R\) with base \(S\). Then:_
1. _For each_ \(1\leqslant i\leqslant n\)_,_ \(|\Phi_{i}\cap S|=1\)_._
2. _There exists an_ \(I\subseteq S\) _such that for all_ \(1\leqslant i\leqslant n\)_,_ \(\Phi_{i}=\inf_{I}^{S}(\Psi_{i},\emptyset)\) _or_ \(\Phi_{i}=\inf_{I}^{S}(\emptyset,X_{i})\) _for some_ \(\Psi_{i}\subseteq(R/I)^{+}\) _or_ \(X_{i}\subseteq R_{I}^{+}\)_. Further,_ \(\operatorname{rk}\left(R/I\right)=1\) _or_ \(\operatorname{rk}\left(R/I\right)=2\)_._
Proof.: \((i)\) By co-closure, \(|\Phi_{i}\cap S|\geqslant 1\) for each \(i\). The \(n\) simple roots of \(S\) are divided into \(n\) pairwise disjoint inversion sets in such a way that each inversion set contains at least one simple root. This forces \(|\Phi_{i}\cap S|=1\) for all \(i\). \((ii)\) By Theorem 5.9, there exists some \(I\in\bigcap_{1\leqslant i\leqslant n}\operatorname{Gen}(\Phi_{i})\), so we may write \(\Phi_{i}=\inf_{I}^{S}(\Psi_{i},X_{i})\). By \((i)\), however, either
\(\Psi_{i}\) or \(X_{i}\) does not contain a simple root and is thus empty. Theorem 5.9 also implies that \(\Psi_{1}\sqcup\Psi_{2}=(R/I)^{+}\), but because both \(\Psi_{1}\) and \(\Psi_{2}\) contain at most one simple root, \(\operatorname{rk}\left(R/I\right)\leqslant 2\).
**Remark 6.2**.: The converse to (i) also holds, since if each inversion set in a decomposition contains exactly \(1\) simple root, there must be \(n=\operatorname{rk}R\) inversion sets in the decomposition.
Proposition 6.1 allows us to count fine decompositions of a QRS \(R\) with rank \(n\) as follows. We first take \(I\subseteq S\) where \(|I|=n-1\) or \(|I|=n-2\). If \(|I|=n-1\), we let \(\Phi_{1}=\inf_{I}^{S}((R/I)^{+},\emptyset)\) and \(\Phi_{2},\Phi_{3},\ldots,\Phi_{n}\) be a fine decomposition of \(R_{I}\). If \(|I|=n-2\), we choose a primitive inversion set \(\Psi\subset(R/I)^{+}\) and let \(\Phi_{1}=\inf_{I}^{S}(\Psi,\emptyset)\) and \(\Phi_{2}=\inf_{I}^{S}(\Psi^{c},\emptyset)\). Once again we let \(\Phi_{3},\Phi_{4},\ldots,\Phi_{n}\) be a fine decomposition of \(R_{I}\). It is clear that in both cases \(\Phi_{1},\Phi_{2},\ldots,\Phi_{n}\) is a fine decomposition of \(R\), and by Proposition 6.1, this characterizes all fine decompositions. Introducing some notation, we let \(S=\{\theta_{1},\theta_{2},\ldots,\theta_{n}\}\) be a base of the QRS \(R\) and let \(F(R)\) denote the number of fine decompositions of \(R\). Then
\[F(R)=\sum_{i=1}^{n}F(R_{\{\theta_{i}\}^{c}})+\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \Pi(R/\{\theta_{i},\theta_{j}\}^{c})F(R_{\{\theta_{i},\theta_{j}\}^{c}}) \tag{6.1}\]
where \(\Pi(R)\ :=\ \frac{1}{2}\left|\{\Psi\subset R^{+}\mid\Psi\text{ is a primitive inversion set}\}\right|\). We introduce one more proposition before counting the fine decompositions of the classical root systems.
Recall that if \(R_{1},R_{2}\) are QRSs with ambient Euclidean spaces \(E_{1},E_{2}\), then we have a QRS \(R_{1}\times R_{2}\) with ambient Euclidean space \(\operatorname{E_{1}}\oplus E_{2}\). Its roots are identified with \(R_{1}\cup R_{2}\). It is realized as a quotient of the root system \(\Delta_{1}\times\Delta_{2}\) where \(R_{i}\) is a quotient of \(\Delta_{i}\) for \(i=1,2\).
In the following, we write \(\mathbb{A}_{0}=\mathbb{B}_{0}=\mathbb{D}_{0}\) to denote the rank zero root system with trivial ambient Euclidean space. We also use the convention that \(\mathbb{D}_{1}=\mathbb{B}_{1}=\mathbb{A}_{1}\), and \(\mathbb{D}_{2}=\mathbb{A}_{1}\times\mathbb{A}_{1}\) and \(\mathbb{D}_{3}=\mathbb{A}_{3}\). The proof of the following follows from Remark 5.8.
**Proposition 6.3**.: _Suppose \(R_{1}\) and \(R_{2}\) are QRSs. Then \(F(R_{1}\times R_{2})=F(R_{1})F(R_{2})\)._
### Type \(\mathbb{A}_{n}\)
We take some \(n\geqslant 1\) and consider the fine decompositions of the root system \(\mathbb{A}_{n}\). To do this, we first consider the quotients and subsystems of \(\mathbb{A}_{n}\). It is not hard to see that all quotients of \(\mathbb{A}_{n}\) are also of type \(\mathbb{A}_{k}\) for some \(k\leqslant n\). Moreover, if \(I=\{\theta_{i_{1}},\theta_{i_{2}},\ldots,\theta_{i_{m}}\}\) are simple roots for some \(1\leqslant i_{1}<i_{2}<\ldots<i_{m}\leqslant n\), then \((\mathbb{A}_{n})_{I}=\mathbb{A}_{i_{1}-1}\times\mathbb{A}_{i_{2}-i_{1}-1} \times\cdots\times\mathbb{A}_{n-i_{m}}\). Defining \(a_{n}:=F(\mathbb{A}_{n})\) to be the number of fine decompositions of \(\mathbb{A}_{n}\), we apply Equation (6.1) to find that
\[a_{n}=\sum_{i=1}^{n}a_{i-1}a_{n-i}+\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\Pi(\mathbb{ A}_{2})a_{i-1}a_{j-i-1}a_{n-j}\]
It is quickly verified that there are no primitive inversion sets in \(\mathbb{A}_{2}\), so \(\Pi(\mathbb{A}_{2})=0\). Thus,
\[a_{n}=\sum_{i=1}^{n}a_{i-1}a_{n-i}\]
along with the fact that \(a_{0}=1\), this recurrence relation implies that \(a_{n}=\binom{2n}{n}/(n+1)\), the \(n\)-th Catalan number.
### Type \(\mathbb{B}_{n}\)
We proceed similarly for \(\mathbb{B}_{n}\). Unlike type \(\mathbb{A}\), the quotients of root systems of type \(\mathbb{B}\) are not necessarily of type \(\mathbb{B}\). However, the primitive elements in a quotient of a root system of type \(\mathbb{B}\) form a root system of type \(\mathbb{B}\) (see [DR]). Thus Remark 1.4 implies that counting the inversion sets (and the respective decompositions) in a quotient of a root system of type \(\mathbb{B}\) is equivalent to counting the inversion sets (and the respective decompositions) in the corresponding root system of type \(\mathbb{B}\).
Taking \(n\geqslant 1\), if \(I=\{\theta_{i_{1}},\theta_{i_{2}},\ldots,\theta_{i_{m}}\}\) are simple roots for some \(1\leqslant i_{1}<i_{2}<\ldots<i_{m}\leqslant n\), then \((\mathbb{B}_{n})_{I}\cong\mathbb{A}_{i_{1}-1}\times\mathbb{A}_{i_{2}-i_{1}-1} \times\cdots\times\mathbb{B}_{n-i_{m}}\). Unlike in \(\mathbb{A}_{2}\), there are two primitive inversion sets in \(\mathbb{B}_{2}\), so \(\Pi(\mathbb{B}_{2})=1\). Once again applying Equation (6.1), we define \(b_{n}:=F(\mathbb{B}_{n})\) to be the number of fine decompositions of \(\mathbb{B}_{n}\) and find
\[b_{n} =\sum_{i=1}^{n}a_{i-1}b_{n-i}+\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}a_{ i-1}a_{j-i-1}b_{n-j}\] \[=\sum_{i=1}^{n}a_{i-1}b_{n-i}+\sum_{\begin{subarray}{c}k+\ell+m= n-2\\ k,\ell,m\geqslant 0\end{subarray}}a_{k}a_{\ell}b_{m}\] \[=\sum_{i=1}^{n}a_{i-1}b_{n-i}+\sum_{i=2}^{n}\sum_{j=1}^{i-1}a_{i-j -1}a_{j-1}b_{n-i}\] \[=\sum_{i=1}^{n}a_{i-1}b_{n-i}+\sum_{i=2}^{n}b_{n-i}\sum_{j=1}^{i-1 }a_{i-j-1}a_{j-1}\] \[=\sum_{i=1}^{n}a_{i-1}b_{n-i}+\sum_{i=2}^{n}b_{n-i}a_{i-1}\] \[=b_{n-1}+2\sum_{i=2}^{n}a_{i-1}b_{n-i}\]
where the second-to-last step follows from the recurrence relation \(a_{i-1}=\sum_{j=1}^{i-1}a_{i-j-1}a_{j-1}\) from previous subsection. The first few values of the sequence \((b_{n})_{n\geqslant 1}\) are as follows: \(1,3,9,29,97,333,1165\).
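Both recurrences are easy to check numerically. The following minimal Python sketch (our illustration; it assumes the convention \(a_{0}=b_{0}=1\) for the rank-zero root system) reproduces the Catalan numbers \(a_{n}\) and the values of \(b_{n}\) listed above.

```python
# Sketch: verify the recurrences for a_n = F(A_n) and b_n = F(B_n).
# Convention (assumed): a_0 = b_0 = 1 for the rank-zero root system.
N = 7

a = [1] * (N + 1)          # a_n = F(A_n), the Catalan numbers
for n in range(1, N + 1):
    a[n] = sum(a[i - 1] * a[n - i] for i in range(1, n + 1))

b = [1] * (N + 1)          # b_n = F(B_n)
for n in range(1, N + 1):
    b[n] = b[n - 1] + 2 * sum(a[i - 1] * b[n - i] for i in range(2, n + 1))

print(a[1:])   # [1, 2, 5, 14, 42, 132, 429]
print(b[1:])   # [1, 3, 9, 29, 97, 333, 1165]
```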
### Type \(\mathbb{D}_{n}\)
Let \(I:=\{\theta_{i},\theta_{j}\}^{c}\), where \(i<j\). Then for \(i,j\in\{1,n-1,n\}\), the quotient is of type \(\mathbb{A}_{2}\), which has no primitive inversion sets, whereas in the other cases the quotient has the same number of primitive inversion sets as \(\mathbb{B}_{2}\). The remaining simple roots fall into a root system isomorphic to
\[\mathbb{D}_{n}^{(i,j)}:=\begin{cases}\mathbb{A}_{i-1}\times\mathbb{A}_{j-i-1 }\times\mathbb{D}_{n-j}&\text{ if }j<n-1\\ \mathbb{A}_{i-1}\times\mathbb{A}_{n-i-1}&\text{ if }j=n-1\text{ or }j=n\end{cases}\]
If instead \(I=\{\theta_{i}\}^{c}\), the remainder of the roots fall into a root system isomorphic to
\[\mathbb{D}_{n}^{(i)}:=\begin{cases}\mathbb{A}_{i-1}\times\mathbb{D}_{n-i}& \text{ if }i<n-1\\ \mathbb{A}_{n-1}&\text{ if }i=n-1\text{ or }i=n\end{cases}\]
We now let \(d_{n}:=F(\mathbb{D}_{n})\) for \(n\geqslant 4\) and \(d_{0}=d_{1}=d_{2}=1\) and \(d_{3}=a_{3}=5\). Then for \(n\geqslant 3\) we have
\[d_{n}= \sum_{i=1}^{n}F(\mathbb{D}_{n}^{(i)})+\sum_{i=1}^{n-1}\sum_{j=i+1}^ {n}\Pi(\mathbb{D}_{n}/\{\theta_{i},\theta_{j}\}^{c})F(\mathbb{D}_{n}^{(i,j)})\] \[= \sum_{i=1}^{n-2}a_{i-1}d_{n-i}+2a_{n-1}+\sum_{i=1}^{n-3}\sum_{j=i+ 1}^{n-2}\Pi(\mathbb{D}_{n}/\{\theta_{i},\theta_{j}\}^{c})F(\mathbb{D}_{n}^{(i,j )})\] \[+\sum_{i=1}^{n-2}\Pi(\mathbb{D}_{n}/\{\theta_{i},\theta_{n-1}\}^{ c})F(\mathbb{D}_{n}^{(i,n-1)})+\sum_{i=1}^{n-1}\Pi(\mathbb{D}_{n}/\{\theta_{i}, \theta_{n}\}^{c})F(\mathbb{D}_{n}^{(i,n)})\] \[= \sum_{i=1}^{n-2}a_{i-1}d_{n-i}+2a_{n-1}+\sum_{i=1}^{n-3}\sum_{j=i +1}^{n-2}a_{i-1}a_{j-i-1}d_{n-j}\] \[+\sum_{i=2}^{n-2}\Pi(\mathbb{D}_{n}/\{\theta_{i},\theta_{n-1}\}^{ c})F(\mathbb{D}_{n}^{(i,n-1)})+\sum_{i=2}^{n-2}\Pi(\mathbb{D}_{n}/\{\theta_{i}, \theta_{n}\}^{c})F(\mathbb{D}_{n}^{(i,n)})\] \[= \sum_{i=1}^{n-2}a_{i-1}d_{n-i}+2a_{n-1}+\sum_{i=1}^{n-3}\sum_{j=i +1}^{n-2}a_{i-1}a_{j-i-1}d_{n-j}\] \[+\sum_{i=2}^{n-2}a_{i-1}a_{n-i-1}+\sum_{i=2}^{n-2}a_{i-1}a_{n-i-1}\;.\]
Now
\[\sum_{i=1}^{n-3}\sum_{j=i+1}^{n-2}a_{i-1}a_{j-i-1}d_{n-j}=\sum_{j =2}^{n-2}\sum_{i=1}^{j-1}a_{i-1}a_{j-i-1}d_{n-j}\] \[=\sum_{j=2}^{n-2}\sum_{i=0}^{j-2}a_{i}a_{j-i-2}d_{n-j}=\sum_{j=0} ^{n-4}\sum_{i=0}^{j}a_{i}a_{j-i}d_{n-j-2}\] \[=\sum_{j=0}^{n-4}\left(\sum_{i=0}^{j}a_{i}a_{j-i}\right)d_{n-j-2 }=\sum_{j=0}^{n-4}a_{j+1}d_{n-j-2}\;.\]
Therefore
\[d_{n}= \sum_{i=1}^{n-2}a_{i-1}d_{n-i}+2a_{n-1}+\sum_{i=1}^{n-3}\sum_{j=i+1} ^{n-2}a_{i-1}a_{j-i-1}d_{n-j}\] \[+\sum_{i=2}^{n-2}a_{i-1}a_{n-i-1}+\sum_{i=2}^{n-2}a_{i-1}a_{n-i-1}\] \[= a_{0}d_{n-1}+\sum_{i=2}^{n-2}a_{i-1}d_{n-i}+2a_{n-1}+\sum_{j=0}^{ n-4}a_{j+1}d_{n-j-2}+2\sum_{i=2}^{n-2}a_{i-1}a_{n-i-1}\] \[= d_{n-1}+\sum_{i=0}^{n-4}a_{i+1}d_{n-i-2}+2a_{n-1}+\sum_{j=0}^{n-4 }a_{j+1}d_{n-j-2}+2\sum_{i=2}^{n-2}a_{i-1}a_{n-i-1}\] \[= d_{n-1}+2\sum_{i=0}^{n-4}a_{i+1}d_{n-i-2}+2a_{n-1}+2\sum_{i=1}^{ n-3}a_{i}a_{n-i-2}\] \[= d_{n-1}-2a_{n-2}d_{1}-2a_{n-1}d_{0}+2\sum_{i=1}^{n-1}a_{i}d_{n-i -1}+2a_{n-1}\] \[-2a_{0}a_{n-2}-2a_{n-2}a_{0}+2\sum_{i=0}^{n-2}a_{i}a_{n-i-2}\] \[= d_{n-1}+2\sum_{i=0}^{n-2}a_{i+1}d_{n-i-2}-6a_{n-2}+2\sum_{i=0}^{ n-2}a_{i}a_{n-i-2}\] \[= d_{n-1}+2\sum_{i=0}^{n-2}a_{i+1}d_{n-i-2}+2a_{n-1}-6a_{n-2}\;.\]
The first few values of the sequence \((d_{n})_{n\geqslant 1}\) are: 1, 1, 5, 19, 69, 249.
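The recurrence for \(d_{n}\) can be checked numerically in the same way. The short Python sketch below (our illustration, using the stated conventions \(d_{0}=d_{1}=d_{2}=1\) together with the Catalan values \(a_{n}\)) reproduces the values listed above.

```python
# Sketch: verify the closed-form recurrence for d_n = F(D_n) derived above.
N = 6

a = [1] * (N + 1)                       # Catalan numbers a_n = F(A_n)
for n in range(1, N + 1):
    a[n] = sum(a[i - 1] * a[n - i] for i in range(1, n + 1))

d = [1, 1, 1] + [0] * (N - 2)           # d_0 = d_1 = d_2 = 1
for n in range(3, N + 1):
    d[n] = (d[n - 1]
            + 2 * sum(a[i + 1] * d[n - i - 2] for i in range(0, n - 1))
            + 2 * a[n - 1] - 6 * a[n - 2])

print(d[1:])   # [1, 1, 5, 19, 69, 249]
```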
### Exceptional Types
For \(\mathbb{G}_{2}\) there are 5 fine decompositions. For \(\mathbb{F}_{4}\) there are 46 fine decompositions. The root systems \(\mathbb{E}_{6}\), \(\mathbb{E}_{7}\), and \(\mathbb{E}_{8}\) have 320, 1534 and 8932 fine decompositions respectively.
**Acknowledgement**. I.D., C.G., E.O., C.P., and D.W. were partially supported by the Natural Sciences and Engineering Research Council of Canada. In addition, C.P. and D.W. were partially supported by the Canadian Defence Academy Research Programme
|
2302.08990 | Efficiently Forgetting What You Have Learned in Graph Representation
Learning via Projection | As privacy protection receives much attention, unlearning the effect of a
specific node from a pre-trained graph learning model has become equally
important. However, due to the node dependency in the graph-structured data,
representation unlearning in Graph Neural Networks (GNNs) is challenging and
less well explored. In this paper, we fill in this gap by first studying the
unlearning problem in linear-GNNs, and then introducing its extension to
non-linear structures. Given a set of nodes to unlearn, we propose PROJECTOR
that unlearns by projecting the weight parameters of the pre-trained model onto
a subspace that is irrelevant to features of the nodes to be forgotten.
PROJECTOR could overcome the challenges caused by node dependency and enjoys a
perfect data removal, i.e., the unlearned model parameters do not contain any
information about the unlearned node features which is guaranteed by
algorithmic construction. Empirical results on real-world datasets illustrate
the effectiveness and efficiency of PROJECTOR. | Weilin Cong, Mehrdad Mahdavi | 2023-02-17T16:49:10Z | http://arxiv.org/abs/2302.08990v1 | # Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection
###### Abstract
As privacy protection receives much attention, unlearning the effect of a specific node from a pre-trained graph learning model has become equally important. However, due to the _node dependency_ in the graph-structured data, representation unlearning in Graph Neural Networks (GNNs) is challenging and less well explored. In this paper, we fill in this gap by first studying the unlearning problem in linear-GNNs, and then introducing its extension to non-linear structures. Given a set of nodes to unlearn, we propose Projector that unlearns by projecting the weight parameters of the pre-trained model onto a subspace that is irrelevant to features of the nodes to be forgotten. Projector could overcome the challenges caused by node dependency and enjoys a perfect data removal, i.e., the unlearned model parameters do not contain any information about the unlearned node features which is guaranteed by algorithmic construction. Empirical results on real-world datasets illustrate the effectiveness and efficiency of Projector. [Code].
## 1 Introduction
As graph representation learning has achieved great success in real-world applications (e.g., social networks Kipf and Welling (2017); Hamilton et al. (2017), knowledge graphs Wang et al. (2019, 2019), and recommender systems Berg et al. (2017)), privacy protection in graph representation learning has become equally important. Recently, as the "_Right to be forgotten_" is gradually implemented in multiple jurisdictions, users are empowered with the right to request any organization or company to remove the effect of their private data from a machine learning model, which is known as "_machine unlearning_". For example, when a Twitter user deletes a post, the user may not only require Twitter to permanently remove the post from its database, but also require Twitter to eliminate its impact on any machine learning models pre-trained on the deleted post, so as to prevent the private information in the deleted post from being inferred by any malicious third party.
Existing unlearning approaches can be roughly classified into exact unlearning and approximate unlearning. The goal of "_exact unlearning_" is to exactly produce the model parameters trained without the deleted data. The most straightforward unlearning approach is to retrain the model from scratch using the remaining data, which could be computationally prohibitive when the dataset is large or even infeasible if not all the data are available to retrain. To avoid re-training on large data, SISA Bourtoule et al. (2021) proposes to split the original dataset into multiple shards and train a model on each data shard, then aggregate their predictions during inference. Upon receiving unlearning requests, they only need to re-train the specific shard model that the unlearned data belongs to. While more efficient than retraining from scratch, this approach hurts model performance because each model is trained on less data, and data heterogeneity further deteriorates the performance. To further reduce the computation overhead, "_approximate unlearning_" is proposed to trade off between the unlearning efficiency and the data removal effectiveness. For example, Influence Guo et al. (2020) proposes to approximate the unlearned model using first-order Taylor approximation and Fisher Golatkar et al. (2020) proposes to directly fine-tune with Newton's method on the remaining data. Since approximate unlearning methods lack a guarantee on whether all information associated with the deleted data is eliminated, it is necessary to inject random noise into the model parameters or objective functions to amplify privacy, which could significantly hurt the performance of the unlearned model. Employing these methods on graph-structured data is even more challenging due to the dependency among nodes. Motivated by the importance of unlearning graph-structured data, we aim at answering the following questions in the context of GNNs:
_Q1. Can existing machine unlearning methods be utilized
to solve the graph unlearning problem?_ Most of the existing methods are designed for settings where the loss function can be decomposed over individual training samples, and the node dependency in graph-structured data renders these methods inapplicable to GNNs and makes them sub-optimal. For example, the exact graph unlearning method GraphEraser Chen et al. (2021) extends Bourtoule et al. (2021) by partitioning the original graph into multiple subgraphs. However, graph partitioning results in losing part of the structural information because the edges that span subgraphs are ignored, which could further hurt the model performance. Moreover, the data heterogeneity issue on homophily graphs is more severe because nodes with similar properties/categories are more likely to be partitioned into the same subgraph. Applying approximate unlearning to graph-structured data is also non-trivial. For example, most of these methods require that "_the objective function before data deletion_" can be formulated as a summation of "_the objective after data deletion_" and "_the loss on deleted data_". However, this is not the case on graph-structured data because the representations of the deleted nodes' multi-hop neighbors \(\mathcal{V}_{\text{affect}}\) are also affected after node deletion. Please refer to Figure 1 for how node dependency affects the GNN model's output after deleting a single node from the graph, and to Appendix C for a detailed mathematical explanation of node dependency. To overcome this issue, we need to update all affected nodes \(\mathcal{V}_{\text{affect}}\) in parallel, which results in massive computation overhead because \(|\mathcal{V}_{\text{affect}}|\) grows exponentially with the number of layers.
_Q2. If not, can we effectively unlearn representations in GNNs in a computationally efficient manner?_ We propose a projection-based unlearning approach for linear-GNNs that not only "_bypasses the node dependency issue_" but also "_enjoys a perfect data removal guarantee_". More specifically, we propose to unlearn node features by orthogonally projecting the linear-GNN's weight parameters onto a subspace that is irrelevant to the unlearned node features (Section 3). The projection step guarantees that our weight parameters do not carry any information about the deleted node features; please refer to Figure 2 for an illustration of our main idea. Projector can bypass the node dependency issue because the graph convolutions in the linear-GNN can be re-formulated as a linear combination of the input node features, and the projection step is applied directly to the node features (Section 3.2). Notice that this is different from most approximate unlearning approaches, because their gradient and Hessian are computed on the output of the GNN models and are therefore affected by the node dependency.
_Q3. How to assess the effectiveness of unlearning in GNNs?_ We consider two criteria to evaluate the effectiveness of unlearning. Our first criterion is "_the distance between the unlearned weights and the exactly retrained weights_". We evaluate this criterion by theoretically upper bounding the distance between the two models. We show that Projector enjoys a tighter upper bound than approximate unlearning methods Guo et al. (2020); Golatkar et al. (2020) (Section 3.3). Although this criterion has become the de facto way to measure the success of unlearning for approximate unlearning methods, it has been pointed out by Thudi et al. (2021) that we cannot infer "whether the data have been deleted" solely from it. Our theoretical explanation on this point is deferred to Appendix E. Therefore, we introduce our second criterion by checking "_whether the unlearned weights contain the deleted node features_". To achieve this, we introduce the "feature injection test" in the experiment section to rigorously verify this criterion.
**Contributions.** The main contributions of the present paper are summarized as follows:
* We propose an efficient graph representation unlearning method Projector, which could overcome the node dependency issue and is guaranteed to remove the trace of the deleted node features (Section 3.2).
* We theoretically show that the _unlearned model_ of Projector is closer to the _model retrained from scratch_ than those of other approximate unlearning methods, which indicates that Projector is preferable if only approximate unlearning is required (Section 3.3).
Figure 1: An illustration of how the node representations of a \(2\)-layer GNN (with neighbor average aggregation) are affected after deleting the node \(v_{4}\) from the graph. After removing node \(v_{4}\), the node representation of nodes \(\{v_{2},v_{3}\}\) are also affected since these nodes require node \(v_{4}\) to compute their representations. Such dependency grows exponentially with respect to the number of GNN layers.
* To improve the expressiveness of the linear-GNN used with Projector, we introduce two unlearning-favorable extensions, i.e., a non-linearity extension and an adaptive diffusion graph convolution (Section 3.4).
* We introduce the "feature injection test" to rigorously verify whether an unlearning method could perfectly remove the trace of the deleted node features. Our results show that Projector could perfectly remove the trace of the deleted node features, however, other approximate unlearning methods cannot, which emphasizes the importance of Projector (Section 4).
* Empirical results on large-scale real-world datasets of different sizes illustrate the effectiveness, efficiency, and robustness of Projector (Section 4 and Appendix A).
## 2 Related work and backgrounds
Exact unlearning. The most straightforward way is to retrain the model from scratch, which is computationally demanding, except for some model-specific problems such as SVM Cauwenberghs and Poggio (2000), K-means Ginart et al. (2019), and decision trees Brophy and Lowd (2021). To reduce the computation cost, Bourtoule et al. (2021) proposes to split the dataset into multiple shards and train an independent model on each data shard, then aggregate their predictions during inference. A similar idea is explored in Aldaghri et al. (2021); He et al. (2021). GraphEraser Chen et al. (2021) extends Bourtoule et al. (2021) to graph-structured data by proposing a graph partition method that can preserve the structural information as much as possible and weighted prediction aggregation for evaluation. Chen et al. (2022) further generalize Chen et al. (2021) to recommender systems. Although the data partition scheme allows for more efficient retraining of models on smaller fragments of data, the model performance suffers because each model is trained on less data and data heterogeneity can also deteriorate the performance. Moreover, if a large set of deleted nodes is selected at random, it could still result in massive retraining efforts. Ullah et al. (2021) proposes to retrain from the iteration at which the deleted data first appears, which is not suitable if training requires iterating over the full dataset for multiple rounds. Neel et al. (2020); Ullah et al. (2021); Sekhari et al. (2021) study unlearning from the generalization theory perspective, while Fu et al. (2022); Nguyen et al. (2022) study unlearning for Bayesian inference; both directions are orthogonal to the main focus of this paper.
Approximate unlearning. The main idea is to approximate the model trained without the deleted data in the parameter space. For example, Guo et al. (2020) proposes to unlearn by removing the influence of the deleted data on the model parameters via first-order Taylor approximation, where the Hessian is computed on the remaining data and the gradient is computed on the deleted data. Chien et al. (2022) generalize the analysis in Guo et al. (2020) to graphs. A similar idea has been explored in Wu et al. (2022) but requires the objective function to have a finite-sum formulation, which is non-trivial to extend onto graph-structured data. Golatkar et al. (2020) performs Fisher forgetting by taking a single step of Newton's method on the remaining training data. Golatkar et al. (2021) generalizes the idea to deep neural networks by assuming a subset of training samples are never forgotten, which can be used to pre-train a neural network as a feature extractor and only unlearn the last layer. Izzo et al. (2021) speeds up Guo et al. (2020) by using the leave-one-out residuals for the linear model update, which reduces the time complexity to linear in the dimension of the deleted data and is independent of the size of the dataset. Wu et al. (2020) proposes to first save all the intermediate weight parameters and gradients during training, then utilize such information to efficiently estimate the optimization path. A similar idea has been explored in Wu et al. (2020) for logistic regression. Notice that due to the nature of approximate unlearning, these methods only approximately unlearn the information of deleted data, require adding random noise, and lack a perfect data removal guarantee in practice Thudi et al. (2021).
Linearity requirement in unlearning. Linearity is required in most unlearning methods Guo et al. (2020); Golatkar et al. (2020); Wu et al. (2020) to verify whether the trace of deleted data has been perfectly unlearned. Unless one retrains from scratch, it is still an open problem to verify this theoretically or rigorously empirically in non-linear models Thudi et al. (2021); Guo et al. (2020). Therefore, we initiate our study on linear-GNNs in Section 3.2 and provide its non-linearity extension in Section 3.4. We rigorously test whether the information is perfectly unlearned on linear-GNNs and demonstrate the application of Projector with non-linear GNNs.
Relation between unlearning and differential privacy. Unlearning and differential privacy (DP) are two concepts that could be used in parallel. More specifically, _DP_ aims to prevent the privacy leakage issue, while _unlearning_ seeks to remove some data points' effect on the pre-trained model.
Figure 2: The orthogonal projection unlearning in Projector. The original weight \(\mathbf{w}\) exists inside the subspace defined by node feature vectors \(\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3}\}\). We can unlearn \(\mathbf{x}_{3}\) and obtain the new weight \(\mathbf{w}_{p}\) by projecting \(\mathbf{w}\) onto the subspace defined without \(\mathbf{x}_{3}\).
Recently, a number of approximate unlearning methods Guo et al. (2020); Golatkar et al. (2020); Chien et al. (2022) are inspired by DP to unlearn by injecting random noise and to derive a DP-like approximate unlearning upper bound. However, not all unlearning methods require random noise or can be evaluated under a DP-like framework. For example, Ullah et al. (2021); Chen et al. (2021) unlearn by re-training from scratch and Projector unlearns by orthogonal projection; therefore, adding random noise is not required. Please refer to Appendix D for more details. In this paper, we only consider fully removing the trace of data from the model by unlearning, but do not consider preventing the privacy leakage issue with DP.
## 3 Graph representation unlearning
We first introduce backgrounds on graph learning and unlearning in Section 3.1. Then, we introduce our graph representation unlearning approach Projector on linear-GNNs in Section 3.2 and theoretically analyze its effectiveness in Section 3.3. Finally, we introduce Projector's non-linearity extension in Section 3.4.
### Backgrounds
We consider solving semi-supervised binary node classification using the linear-GNN, which could be easily extended to multi-class classification. More specifically, given a graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\) with \(n=|\mathcal{V}|\) nodes and \(|\mathcal{E}|\) edges, let us suppose each node \(v_{i}\in\mathcal{V}\) is associated with a node feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\). Let \(\mathbf{A},\mathbf{D}\in\mathbb{R}^{n\times n}\) denote the adjacency matrix and its associated degree matrix. Then, an \(L\)-layer linear-GNN1 computes the node representation \(\mathbf{H}=\mathbf{P}^{L}\mathbf{X}\in\mathbb{R}^{n\times d}\) by applying the propagation matrix \(\mathbf{P}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\) \(L\) times to the node feature matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\). During training, only the training set nodes \(\mathcal{V}_{\text{train}}\subset\mathcal{V}\) are labeled with a binary label \(y_{i}\in\{-1,+1\}\), and our goal is to estimate the labels of the unlabeled nodes \(\mathcal{V}_{\text{eval}}=\mathcal{V}\setminus\mathcal{V}_{\text{train}}\). More specifically, we want to find the weight parameters \(\mathbf{w}\in\mathbb{R}^{d}\) that minimize
Footnote 1: Non-linear GNNs usually add activation function and weight matrix after each graph convolution. For example, the GCN’s hidden representation is computed by \(\mathbf{H}^{(\ell)}=\sigma(\mathbf{PH}^{(\ell-1)}\mathbf{W}^{(\ell)})\).
\[F(\mathbf{w})=\frac{\lambda}{2}\|\mathbf{w}\|_{2}^{2}+\frac{1}{|\mathcal{V}_{\text{train}}|}\sum_{v_{i}\in\mathcal{V}_{\text{train}}}f_{i}(\mathbf{w}), \tag{1}\] \[f_{i}(\mathbf{w})=\log\left(1+\exp(-y_{i}\mathbf{w}^{\top}\mathbf{h}_{i})\right),\quad\mathbf{h}_{i}=[\mathbf{P}^{L}\mathbf{X}]_{i}.\]
For graph representation unlearning, let \(\mathcal{V}_{\text{delete}}\subset\mathcal{V}_{\text{train}}\) denote the set of deleted nodes and \(\mathcal{V}_{\text{remain}}=\mathcal{V}_{\text{train}}\setminus\mathcal{V}_{ \text{delete}}\) denote the remaining nodes. Our goal is to unlearn the node feature information \(\{\mathbf{x}_{i}\mid v_{i}\in\mathcal{V}_{\text{delete}}\}\) of the deleted nodes \(\mathcal{V}_{\text{delete}}\). In terms of the notations, we denote \(\mathbf{w}\) as the solution before unlearning, \(\mathbf{w}_{p}\) as the solution obtained by Projector, and \(\mathbf{w}_{u}\) as the solution obtained by re-training from scratch on the dataset without the deleted nodes.
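As a concrete illustration of this setup, the following minimal NumPy sketch (ours; the dense adjacency matrix and the variable names are only illustrative) computes the node representations \(\mathbf{H}=\mathbf{P}^{L}\mathbf{X}\) and evaluates the regularized logistic objective in Eq. 1.

```python
import numpy as np

def propagation_matrix(A):
    """Symmetrically normalized propagation matrix P = D^{-1/2} A D^{-1/2}."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def linear_gnn_objective(w, A, X, y, train_idx, L=2, lam=1e-6):
    """Regularized logistic loss of an L-layer linear-GNN (Eq. 1), y_i in {-1, +1}."""
    P = propagation_matrix(A)
    H = np.linalg.matrix_power(P, L) @ X           # H = P^L X
    margins = y[train_idx] * (H[train_idx] @ w)    # y_i * w^T h_i on training nodes
    return 0.5 * lam * w @ w + np.log1p(np.exp(-margins)).mean()
```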
### Graph representation unlearning via Projector
The main idea behind Projector is as follows: "_If the weight parameters of linear-GNN are located inside the linear span of all node features (precondition), then we can unlearn a set of node features by projecting the weight parameters onto a subspace that is irrelevant to the node features that we want to unlearn (how to unlearn)_." In the following, we will first explain why the precondition holds in linear-GNNs, then introduce how to unlearn, and explain why Projector can bypass the node dependency.
_Why does the precondition hold in linear-GNNs?_ The precondition holds because the graph convolution in the linear-GNN is a linear operator on node features. As a result, all gradients are inside the linear span of all node features. Therefore, if we optimize the linear-GNN (Eq. 1) using SGD with a weight initialization satisfying \(\mathbf{w}_{\text{init}}\in\text{span}\{\mathbf{x}_{1},\dots,\mathbf{x}_{n}\}\), then regardless of how many gradient update steps are taken, \(\mathbf{w}\in\text{span}\{\mathbf{x}_{1},\dots,\mathbf{x}_{n}\}\) still holds. To see this, let us first recall that the gradient of Eq. 1 with respect to any \(\mathbf{w}\) is
\[\begin{split}\nabla F(\mathbf{w})&=\lambda\mathbf{w}+\frac{1}{|\mathcal{V}_{\text{train}}|}\sum_{j\in\mathcal{V}_{\text{train}}}\nu_{j}\mathbf{x}_{j},\\ \nu_{j}&=\underbrace{\sum_{i\in\mathcal{V}_{\text{train}}}\mu_{i}[\mathbf{P}^{L}]_{ij}}_{(a)},\;\mu_{i}=-y_{i}\sigma(-y_{i}\mathbf{w}^{\top}\mathbf{h}_{i}),\end{split} \tag{2}\]
where \([\mathbf{P}^{L}]_{ij}\) denotes the \(i\)-th row \(j\)-th column of \(\mathbf{P}^{L}\) and \(\sigma(\cdot)\) is the Sigmoid function. Then, Eq. 2 implies that the gradient \(F(\mathbf{w})\) is inside the linear span of all node features, i.e., \(\nabla F(\mathbf{w})\in\text{span}\{\mathbf{x}_{1},\dots,\mathbf{x}_{n}\}\). Therefore, when using gradient update rule \(\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\nabla F(\mathbf{w}_{t})\), the weight after gradient updates still stays inside the same subspace defined by the linear span of all node features.
_How to unlearn?_ Recall that our goal is to unlearn the node features \(\mathbf{X}_{\text{delete}}=\{\mathbf{x}_{i}\mid v_{i}\in\mathcal{V}_{\text{delete}}\}\) of size \(m=|\mathcal{V}_{\text{delete}}|\) by making sure the unlearned solution does not carry any information about \(\mathbf{X}_{\text{delete}}\). This can be achieved by finding an alternative solution \(\mathbf{w}_{p}\) in a subspace that is irrelevant to \(\mathbf{X}_{\text{delete}}\). Meanwhile, we hope \(\mathbf{w}_{p}\) is close to \(\mathbf{w}\) because small changes in the input data are expected to lead to small changes in the optimal solutions. Formally, let us define \(\mathcal{U}=\text{span}\{\mathbf{x}_{i}\mid v_{i}\in\mathcal{V}_{\text{remain}}\}\) as the linear subspace spanned by all remaining samples; our goal is then to find \(\mathbf{w}_{p}=\operatorname*{arg\,min}_{\mathbf{v}\in\mathcal{U}}\|\mathbf{v}-\mathbf{w}\|_{2}^{2}\). Because the perpendicular distance is the shortest, we can obtain \(\mathbf{w}_{p}\) by orthogonally projecting \(\mathbf{w}\) onto the subspace \(\mathcal{U}\). Knowing that any projection \(\Pi_{\mathcal{U}}(\mathbf{w})\) onto \(\mathcal{U}\) is necessarily an element of \(\mathcal{U}\), i.e., \(\Pi_{\mathcal{U}}(\mathbf{w})\in\mathcal{U}\), the result of the orthogonal projection can be represented as a weighted combination of all remaining node features, \(\mathbf{w}_{p}=\Pi_{\mathcal{U}}(\mathbf{w})=\sum_{v_{i}\in\mathcal{V}_{\text{remain}}}\alpha_{i}\mathbf{x}_{i}\), where the coefficients of the orthogonal projection \(\mathbf{\alpha}\) are derived in Proposition 1. An
illustration of the projection-based unlearning is shown in Figure 2 and the proof is provided in Appendix F.
**Proposition 1**: _The coefficients of the orthogonal projection is computed as \(\mathbf{\alpha}=\mathbf{X}_{\text{remain}}(\mathbf{X}_{\text{remain}}^{\top}\mathbf{X }_{\text{remain}})^{\dagger}\mathbf{w}\), where \(\mathbf{X}_{\text{remain}}=\{\mathbf{x}_{j}\mid v_{j}\in\mathcal{V}_{\text{remain}}\}\) is the remaining node features and \(\dagger\) is the pseudo-inverse operator._
The significant computation required in Proposition 1 includes computing \(\mathbf{X}_{\text{remain}}^{\top}\mathbf{X}_{\text{remain}}\in\mathbb{R}^{d\times d}\) and its inverse, with \(\mathcal{O}(rd^{2})\) and \(\mathcal{O}(d^{3})\) computation complexity respectively, where \(r=|\mathcal{V}_{\text{remain}}|\) is the number of remaining nodes and \(d\) is the node feature dimension. However, if we pre-compute \(\mathbf{X}^{\top}\mathbf{X}\) before the unlearning requests arrive, then we can efficiently compute \(\mathbf{X}_{\text{remain}}^{\top}\mathbf{X}_{\text{remain}}=\mathbf{X}^{\top}\mathbf{X}-\mathbf{X}_{\text{delete}}^{\top}\mathbf{X}_{\text{delete}}\) and compute \((\mathbf{X}_{\text{remain}}^{\top}\mathbf{X}_{\text{remain}})^{\dagger}\) by applying the Woodbury identity Golub and Van Loan (2013) on \(\mathbf{X}_{\text{delete}}^{\top}\mathbf{X}_{\text{delete}}\), \(\mathbf{X}^{\top}\mathbf{X}\), which leads to a lower computation complexity of \(\mathcal{O}(\max\{m^{3},md^{2}\})\) if \(m<\max\{r,d\}\). After obtaining \(\boldsymbol{\alpha}\), Projector computes the unlearned weight parameters by \(\mathbf{w}_{p}=\mathbf{X}_{\text{remain}}^{\top}\boldsymbol{\alpha}\). Intuitively, the projection step in Projector can be thought of as a re-weighting of the remaining nodes, which allows our model to behave as closely as possible to the model before unlearning, but without carrying any information about the deleted node features. Therefore, the output of Projector can be interpreted as re-training on the remaining graph under some unknown importance sampling distribution.
To this end, we summarize Projector in Algorithm 1, where two different types of input options are available that lead to identical results. More specifically, we can use _option 1_ if only remaining node features are available, otherwise we can use _option 2_ if only the features of deleted nodes are available but pre-computing is feasible. Besides, due to the similarity between logistic regression and SVM, Projector could also be used in primal-based SVM unlearning Chu et al. (2015) to alleviate the high computation cost of the dual-based SVM unlearning approach Cauwenberghs and Poggio (2000). Readers could refer to Appendix J for more details on its application to SVM.
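For concreteness, the projection step of Proposition 1 (corresponding to _option 1_ of Algorithm 1, where the remaining node features are available) can be sketched in a few lines of NumPy. This is our illustration, with the Woodbury-based speed-up of _option 2_ omitted.

```python
import numpy as np

def projector_unlearn(w, X_remain):
    """Project w onto span{x_i : v_i in V_remain} (Proposition 1).

    w        : (d,)  weight parameters of the pre-trained linear-GNN
    X_remain : (r, d) remaining node features, one row per node
    Returns the unlearned weights w_p and the projection coefficients alpha.
    """
    gram = X_remain.T @ X_remain                   # X_remain^T X_remain, (d, d)
    alpha = X_remain @ np.linalg.pinv(gram) @ w    # coefficients of the projection, (r,)
    w_p = X_remain.T @ alpha                       # w_p = X_remain^T alpha, (d,)
    return w_p, alpha
```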
_Why is node dependency bypassed?_ From Eq. 2 (a), we can tell that the node dependencies in \(\mathbf{P}\) are absorbed into the finite-sum weight \(\nu_{j}\), which is a constant multiplying the feature \(\mathbf{x}_{j}\). Projector can bypass the node dependency because our projection step is applied directly to the input node features, instead of the final outputs of GNNs. This is not the case for most approximate unlearning methods, e.g., Guo et al. (2020); Golatkar et al. (2020); Wu et al. (2020), because their unlearning requires computing the gradient or Hessian on the final layer outputs.
_Extension to multi-class classification._ Please notice that Projector also works with cross-entropy loss for multi-class classification. To see this, let us consider \(C\) categories and \(N\) data but without considering the node dependency for simplicity, i.e., optimizing \(\mathbf{W}=[\mathbf{w}_{1},\dots,\mathbf{w}_{C}]\in\mathbb{R}^{C\times d}\) on \(\{\mathbf{x}_{1},\dots,\mathbf{x}_{N}\}\) where \(\mathbf{w}_{c}\) is the \(c\)-th row of \(\mathbf{W}\). Then, the softmax's \(c\)-th class probability computed on \(\mathbf{x}_{n}\) is
\[p_{n,c}=\frac{\exp(a_{n,c})}{\sum_{i=1}^{C}\exp(a_{n,i})},\;a_{n,c}=\mathbf{w} _{c}^{\top}\mathbf{x}_{n}.\]
We define the objective function as
\[L_{\mathbf{W}}=-\sum_{n=1}^{N}\sum_{c=1}^{C}y_{n,c}\log(p_{n,c}),\]
then its gradient is
\[\frac{\partial L_{\mathbf{W}}}{\partial\mathbf{w}_{c}}=\sum_{n=1}^{N}\sum_{i=1 }^{C}\frac{\partial L_{\mathbf{W}}}{\partial a_{n,i}}\frac{\partial a_{n,i}}{ \partial\mathbf{w}_{c}}=\sum_{n=1}^{N}(p_{n,c}-y_{n,c})\mathbf{x}_{n}\]
because
\[\frac{\partial L_{\mathbf{W}}}{\partial a_{n,i}}=p_{n,i}-y_{n,i}\;\text{and} \;\frac{\partial a_{n,i}}{\partial\mathbf{w}_{c}}=\begin{cases}\mathbf{x}_{n}& \text{if }i=c\\ \mathbf{0}&\text{if }i\neq c.\end{cases}\]
As a result, for any \(j\in[C]\) we have
\[\frac{\partial L_{\mathbf{W}}}{\partial\mathbf{w}_{j}}\in\text{span}\{\mathbf{ x}_{1},\dots,\mathbf{x}_{N}\},\]
which means each row of the \(\mathbf{W}\) is in the span of all node features, and we can apply Projector on each row of \(\mathbf{W}\) independently to unlearn.
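In other words, the projection can simply be applied row-wise; a minimal NumPy sketch of this multi-class variant (ours) is given below.

```python
import numpy as np

def projector_unlearn_multiclass(W, X_remain):
    """Apply the projection of Proposition 1 to every row of the softmax weights.

    W        : (C, d) weight matrix, one row per class
    X_remain : (r, d) remaining node features
    """
    gram = X_remain.T @ X_remain
    proj = gram @ np.linalg.pinv(gram)   # orthogonal projector onto span of remaining features
    return W @ proj                      # proj is symmetric, so each row of W is projected
```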
### On the effectiveness of Projector
In this section, we study the effectiveness of Projector by measuring the \(\ell_{2}\)-norm on the difference between Projector's unlearned solution \(\mathbf{w}_{p}\) to the solution obtained by re-training from scratch \(\mathbf{w}_{u}\) on the dataset without the deleted nodes, and we are expecting \(\|\mathbf{w}_{p}-\mathbf{w}_{u}\|_{2}\) to be small for good unlearning methods. For unlearning, we suppose a random subset of nodes \(\mathcal{V}_{\text{delete}}\subset\mathcal{V}_{\text{train}}\) are selected and the remaining nodes are denoted as \(\mathcal{V}_{\text{remain}}=\mathcal{V}_{\text{train}}\setminus\mathcal{V}_{ \text{delete}}\). Since removing the nodes \(\mathcal{V}_{\text{delete}}\) is the same as updating the propagation matrix from \(\mathbf{P}\) to \(\mathbf{P}_{u}\), where all edges that are connected to node \(v_{i}\in\mathcal{V}_{\text{delete}}\) are removed in \(\mathbf{P}_{u}\), we can write down the objective after data deletion \(F^{u}(\mathbf{w}_{u})\) as
\[\begin{split} F^{u}(\mathbf{w}_{u})&=\frac{1}{| \mathcal{V}_{\text{remain}}|}\sum_{v_{i}\in\mathcal{V}_{\text{remain}}}f_{i}^{ u}(\mathbf{w}_{u}),\\ f_{i}^{u}(\mathbf{w}_{u})&=\log\left(1+\exp(-y_{i }\mathbf{w}_{u}^{\top}\mathbf{h}_{i}^{u})\right)+\lambda\|\mathbf{w}_{u}\|_{ 2},\end{split} \tag{3}\]
where \(\mathbf{h}_{i}^{u}=[\mathbf{P}_{u}^{L}\mathbf{X}]_{i}\).
Before proceeding to our result, we make the following customary assumptions on graph propagation matrices, node features, and weight parameters in Assumption 1, on the variance of stochastic gradients in Assumption 2, and on the correlation between node features in Assumption 3. Please notice that Assumptions 1 and 2 are standard in the theoretical analysis of GNNs Cong et al. (2021); Ramezani et al. (2022), and Assumption 3 is a mild assumption that can be empirically verified in Table 3 on real-world datasets, where \(\delta\) can be thought of as a measure of the closeness between the subspaces defined with and without the deleted node features. In practice, \(\delta\) is small if only a small number of nodes is removed from the original graph.
**Assumption 1**: _We assume each row of the propagation matrices before and after node deletion is bounded by \(P_{s}\geq 0\), i.e., \(\max_{j}\left\|\left[\mathbf{P}^{L}\right]_{j}\right\|_{2}\leq P_{s},\ \max_{j}\left\|\left[\mathbf{P}^{L}_{u}\right]_{j}\right\|_{2}\leq P_{s}\). Besides, we assume each row of the difference of the propagation matrices before and after data deletion is bounded by \(P_{d}\geq 0\), i.e., \(\max_{j}\left\|\left[\mathbf{P}^{L}_{u}-\mathbf{P}^{L}\right]_{j}\right\|_{2} \leq P_{d}\). Furthermore, we assume the norm of any node features \(\mathbf{x}_{i},\ v_{i}\in\mathcal{V}\) and weight parameters \(\mathbf{w}\) are bounded by \(B_{x},B_{w}\geq 0\), i.e., \(\|\mathbf{x}_{i}\|_{2}\leq B_{x},\ \|\mathbf{w}\|_{2}\leq B_{w}\)._
**Assumption 2**: _For any deleted nodes \(\mathcal{V}_{\text{delete}}\), the gradient variance computed on the remaining nodes \(\mathcal{V}_{\text{remain}}=\mathcal{V}\setminus\mathcal{V}_{\text{delete}}\) can be upper bounded by \(G\geq 0\), i.e., we have \(\mathbb{E}_{\mathcal{V}_{\text{delete}}}[\|\mathbf{g}-\tilde{\mathbf{g}}\|_{2}]\leq G\), where \(\mathbf{g}=\frac{1}{|\mathcal{V}_{\text{remain}}|}\sum_{v_{i}\in\mathcal{V}_{ \text{min}}}\nabla f_{i}^{u}(\mathbf{w})\) and \(\tilde{\mathbf{g}}=\frac{1}{|\mathcal{V}_{\text{remain}}|}\sum_{v_{i}\in \mathcal{V}_{\text{remain}}}\nabla f_{i}^{u}(\mathbf{w})\) for any \(\mathbf{w}\)._
**Assumption 3**: _For any node \(v_{j}\in\mathcal{V}_{\text{delete}}\), its node feature \(\mathbf{x}_{j}\) can be approximated by the linear combination of all node features in the remaining node set \(\{\mathbf{x}_{i}\mid v_{i}\in\mathcal{V}_{\text{remain}}\}\) up to an error \(\delta\geq 0\). Formally, we have \(\max_{v_{j}\in\mathcal{V}_{\text{delete}}}\min_{\alpha}\left\|\sum_{i\in \mathcal{V}_{\text{minmin}}}\alpha_{i}\mathbf{x}_{i}-\mathbf{x}_{j}\right\|_{2} \leq\delta\)._
To this end, let us introduce our main results. From Theorem 1, we know that \(\|\mathbf{w}_{p}-\mathbf{w}_{u}\|_{2}\) is mainly controlled by three key factors: (1) the difference between the propagation matrices before and after data deletion, which is captured by \(P_{d}\) in Assumption 1; (2) the variance of the stochastic gradient computed on the remaining nodes, which is captured by \(G\) in Assumption 2; (3) how closely any deleted node feature can be approximated by a weighted combination of the node features in the remaining node set, which is captured by \(\delta\) in Assumption 3. By reducing the number of nodes in \(\mathcal{V}_{\text{delete}}\), all of \(P_{d},\delta,G\) are expected to decrease. In the extreme case with \(|\mathcal{V}_{\text{delete}}|=0\), we have \(P_{d}=\delta=G=0\) and \(\mathbf{w}_{p}=\mathbf{w}=\mathbf{w}_{u}\). The proof is deferred to Appendix G.
**Theorem 1**: _Let us suppose Assumptions 1,2,3 hold. Let us define \(\mathbf{w}_{p}\) as the solution obtained by Projector, \(\mathbf{w}_{u}\) is the solution obtained by re-training from scratch with objective function \(F^{u}(\mathbf{w})\), and we assume \(\mathbf{w}_{u}\) is well trained such that \(\mathbf{w}_{u}\approx\arg\min_{\mathbf{w}}F^{u}(\mathbf{w})\). Then, the closeness of \(\mathbf{w}_{p}\) to the weight parameters \(\mathbf{w}_{u}\) can be bounded by_
\[\begin{split}&\mathbb{E}_{\mathcal{V}_{\text{delete}}}[\|\mathbf{w}_{ u}-\mathbf{w}_{p}\|_{2}]\leq\Delta=\\ & Q\sum\nolimits_{t=1}^{T}\left(1+\eta(\lambda+B_{x}^{2}P_{s}^{2}) \right)^{t-1}+\delta\eta T\times|\mathcal{V}_{\text{delete}}|,\end{split} \tag{4}\]
_where \(Q=\eta\big{(}(1+B_{x}B_{w}P_{s})B_{x}P_{d}+G\big{)}\) and \(\eta\) is the learning rate used to pre-train the weight \(\mathbf{w}\) for \(T\) steps of gradient descent updates. After projection, we can fine-tune \(\mathbf{w}_{p}\) for \(K\) iterations with learning rate \((\lambda+B_{x}^{2}P_{s}^{2})^{-1}\) to obtain \(\tilde{\mathbf{w}}_{p}\) that has an error \(F^{u}(\tilde{\mathbf{w}}_{p})-\min_{\mathbf{w}}F^{u}(\mathbf{w})\leq\mathcal{O}(( \lambda+B_{x}^{2}P_{s}^{2})\Delta/K)\)._
Besides, we know the solution of Projector is provably closer to the model retrained from scratch compared to Guo et al. (2020); Golatkar et al. (2020) if \(\delta\) satisfies the condition in Proposition 2. In practice, the condition is very likely to be satisfied because the learning rate \(\eta\), the regularization term \(\lambda\), and the ratio of deleted nodes \(|\mathcal{V}_{\text{delete}}|/|\mathcal{V}|\) are usually very small. For example, a common choice of learning rate and regularization is \(\eta=0.01,\lambda=10^{-6}\) for most model training. Moreover, we empirically evaluate the difference between the weights before and after unlearning in the experiment section to support our theoretical results. The proof of Proposition 2 is deferred to Appendix H.
**Proposition 2**: _If the approximation error in Assumption 3 satisfies \(\delta<\left((\lambda\eta T)^{-1}+1\right)B_{x}\times\frac{|\mathcal{V}|}{|\mathcal{V}_{\text{delete}}|}\), then Projector's output is provably closer to re-training from scratch than the approximate unlearning methods Influence Guo et al. (2020) and Fisher Golatkar et al. (2020)._
### Toward a more powerful structure
To boost the model performance of Projector, we first introduce an unlearning-favorable non-linearity extension to help Projector better leverage node feature information, and then we introduce an unlearning-favorable adaptive diffusion graph convolution to help Projector better leverage the graph structure information.
**An extension from linear to non-linear.** Recall that the geometric view of solving logistic regression is finding a hyperplane to linearly separate the node representations \(\mathbf{H}\) computed by the linear-GNN. However, node representations computed by linear-GNNs might not be linearly separable. To overcome this issue, we propose to first apply an _MLP_ to all node features, then apply the _linear-GNN_ to the output of the MLP before classification, i.e., \(\mathbf{Z}=\sigma(\sigma(\mathbf{X}\mathbf{W}_{\text{mlp}}^{(1)})\mathbf{W}_{\text{mlp}}^{(2)})\), \(\mathbf{H}=\mathbf{P}^{L}\mathbf{Z}\mathbf{W}_{\text{gnn}}\). The above extension can be interpreted as finding a non-linear separation in the input space. During training, we could first pre-train on a public dataset with training samples that do not need to be forgotten; then we only need to unlearn the linear-GNN model by applying Projector to the output of the MLP. By doing so, Projector enjoys both the separation power brought by the non-linearity of the MLP and the efficiency brought by the projection-based unlearning.
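A minimal PyTorch-style sketch of this two-stage design is given below (our illustration; the hidden sizes, the choice of ReLU for \(\sigma\), and the module names are assumptions). Only the linear head on top of the frozen MLP output \(\mathbf{Z}\) needs to be unlearned, by applying Proposition 1 with \(\mathbf{Z}_{\text{remain}}\) in place of \(\mathbf{X}_{\text{remain}}\).

```python
import torch
import torch.nn as nn

class MLPLinearGNN(nn.Module):
    """Non-linear extension: frozen MLP feature extractor + linear-GNN head."""
    def __init__(self, in_dim, hid_dim, num_classes, L=2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim), nn.ReLU())
        self.head = nn.Linear(hid_dim, num_classes, bias=False)  # the only part to unlearn
        self.L = L

    def forward(self, P, X):
        Z = self.mlp(X)            # Z = sigma(sigma(X W_mlp^(1)) W_mlp^(2))
        H = Z
        for _ in range(self.L):    # H = P^L Z
            H = P @ H
        return self.head(H)        # logits = P^L Z W_gnn
```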
**Adaptive diffusion graph convolution.** To help the linear-GNN take full advantage of the graph structure, we propose an unlearning-favorable adaptive diffusion graph convolution operation that takes both node feature similarity and node label category information into consideration. To achieve this, let us first initialize the node features as \(\mathbf{h}_{i}^{(0)}=\mathbf{x}_{i}\) and the node labels as \(\mathbf{z}_{i}^{(0)}=\mathbf{y}_{i}\) if \(i\not\in\mathcal{V}_{\text{test}}\) and \(\mathbf{z}_{i}^{(0)}=\mathbf{0}\) if \(i\in\mathcal{V}_{\text{test}}\). Then, the forward propagation of the adaptive diffusion graph convolution operation is computed as
\[[\mathbf{H}^{(\ell+1)},\mathbf{Z}^{(\ell+1)}]=\big{(}(1-\gamma)\mathbf{I}+ \gamma\mathbf{D}_{\mathcal{G}}^{(\ell)}\big{)}[\mathbf{H}^{(\ell)},\mathbf{Z} ^{(\ell)}],\]
where we denote \([\cdot,\cdot]\) as the feature channel concatenation operation and the \(i\)-th row \(j\)-th column of the \(\ell\)-th diffusion operator is defined by
\[[\mathbf{D}^{(\ell)}(\mathcal{G})]_{i,j}=\frac{1}{Z}\exp(-\sigma_{h}^{2}\| \mathbf{h}_{i}^{(\ell)}-\mathbf{h}_{j}^{(\ell)}\|_{2}^{2}-\sigma_{z}^{2}\| \mathbf{z}_{i}^{(\ell)}-\mathbf{z}_{j}^{(\ell)}\|_{2}^{2}),\]
where \(\sigma_{h},\sigma_{z}\in\mathbb{R}\) are learned during training. Intuitively, our diffusion operator assigns a higher aggregation weight to a neighboring node if it has similar node feature and label information. Then, we set \(\mathbf{H}=[\mathbf{H}^{(1)},\mathbf{Z}^{(1)},\ldots,\mathbf{H}^{(L)},\mathbf{Z}^{(L)}]\) as the final node representation for prediction. During unlearning, we do not have to modify \(\sigma_{h},\sigma_{z}\) since these scalars will not leak the node feature information.
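A minimal NumPy sketch of one forward step of this operator is shown below (our illustration; the restriction of the weights to graph neighbors via the adjacency matrix, the row-wise normalization used for the constant \(Z\), and the interpolation weight \(\gamma\) as a hyperparameter are assumptions where the text leaves the details open).

```python
import numpy as np

def adaptive_diffusion_step(H, Z, A, sigma_h, sigma_z, gamma=0.5):
    """One step: [H', Z'] = ((1 - gamma) I + gamma D) [H, Z].

    H, Z : (n, d_h) feature channels and (n, d_z) label channels
    A    : (n, n) adjacency matrix, used to keep only neighbor weights (assumption)
    """
    dh = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)   # ||h_i - h_j||^2
    dz = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # ||z_i - z_j||^2
    W = np.exp(-sigma_h ** 2 * dh - sigma_z ** 2 * dz) * A
    D = W / (W.sum(axis=1, keepdims=True) + 1e-12)        # 1/Z as row normalization (assumption)
    M = (1.0 - gamma) * np.eye(H.shape[0]) + gamma * D
    return M @ H, M @ Z
```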
To this end, we conclude this section by showing in Proposition 3 that under mild conditions on \(\mathbf{X}\) and \(\mathbf{P}\), the linear-GNN used in Projector could approximate any function defined on the graph. Since non-linearity extension and adaptive diffusion graph convolution could potentially alleviate the conditions on \(\mathbf{X}\) and \(\mathbf{P}\), these extensions could improve the expressive power of linear-GNN.
**Proposition 3**: _Let us define \(\mathbf{U},\boldsymbol{\lambda}\) as the eigenvectors and eigenvalues of the graph propagation matrix \(\mathbf{P}\), \(g_{\mathbf{w}}(\mathbf{P},\mathbf{X})=\sum_{\ell=1}^{n}(\mathbf{P}^{\ell-1}\mathbf{X})\mathbf{w}_{\ell}\) as the linear-GNN, and \(f(\mathbf{P},\mathbf{X})\in\mathbb{R}^{n\times 1}\) as the target function we want to approximate by the linear-GNN. If no two elements of \(\boldsymbol{\lambda}\) are identical and no rows of \(\tilde{\mathbf{X}}=\mathbf{U}\mathbf{X}\) are zero vectors, then there always exists a set of \(\mathbf{w}_{\ell}^{*}\in\mathbb{R}^{d}\) such that \(g_{\mathbf{w}^{*}}(\mathbf{P},\mathbf{X})=f(\mathbf{P},\mathbf{X})\). Replacing \(\mathbf{P}\) with the adaptive diffusion graph convolution and replacing \(\mathbf{X}\) with the output of the MLP could potentially relax the requirements on \(\boldsymbol{\lambda}\) and \(\tilde{\mathbf{X}}\), since their values are learned during training, thereby improving expressiveness._
The intuition behind the above proposition is that the expressive power of the linear-GNN \(g_{\mathbf{w}}(\mathbf{P},\mathbf{X})\) mainly comes from its graph convolution. Given a dataset with \(n\) nodes, using graph convolutions of polynomial order \(0\) to \(n-1\) allows us to map each node's features to any desired value using \(n\) different weight parameters; therefore, the model can approximate any function defined on the graph. The proof is deferred to Appendix I.
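The interpolation argument can be checked numerically in the scalar-feature case (\(d=1\)): the map \(\mathbf{w}\mapsto\sum_{\ell=1}^{n}(\mathbf{P}^{\ell-1}\mathbf{x})w_{\ell}\) is linear with Krylov matrix \([\mathbf{x},\mathbf{P}\mathbf{x},\ldots,\mathbf{P}^{n-1}\mathbf{x}]\), which is generically invertible, so any target vector can be matched exactly. The NumPy snippet below is an illustrative sanity check of this fact, not part of the original code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
P = (A + A.T) / 2                  # symmetric, generically distinct eigenvalues
x = rng.standard_normal(n)         # node features with d = 1
f = rng.standard_normal(n)         # arbitrary target values on the nodes

# Krylov matrix [x, Px, P^2 x, ..., P^{n-1} x]
K = np.column_stack([np.linalg.matrix_power(P, l) @ x for l in range(n)])
w = np.linalg.solve(K, f)          # weights of the linear-GNN
assert np.allclose(K @ w, f)       # g_w(P, x) reproduces the target exactly
```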
## 4 Experiments
We consider GraphEraser as our exact graph unlearning baseline. For approximate graph unlearning baselines, we extend Influence and Fisher to graph-structured data by taking node dependency into consideration, and rename them Influence+ and Fisher+. Details on the baselines are given in Appendix B.1. Moreover, since each experiment is designed to evaluate a different aspect of unlearning, the setup of each experiment can differ slightly (e.g., linear or non-linear models, different numbers of deleted nodes, different datasets). Therefore, we provide a brief introduction to the experiment design and setup at the beginning of each experiment paragraph and defer the detailed descriptions to Appendix B.2.
### Experiment results
**Feature injection test.** This experiment is designed to verify whether Projector and the baselines can perfectly unlearn the trace of the deleted node features from the weight parameters. To achieve this goal, we append an extra binary feature to all nodes and set it to \(1\) for the deleted nodes and \(0\) for all other nodes. To make sure this extra binary feature is important and heavily used during training, we add an extra category, change all deleted nodes to this extra category, and then pre-train on the modified dataset. We measure the effectiveness of unlearning by checking (1) whether an unlearning method can _fully unlearn_, by comparing the weight norm of the injected channel before and after unlearning; (2) whether an unlearning method _hurts the model performance_, by comparing the accuracy before and after unlearning; and (3) the _computation cost_, by comparing the time required for unlearning. We randomly select \(5\%\) and \(10\%\) of the nodes from the training set as deleted nodes. We have the following observations from Table 1:
Footnote 3: Comparison of the weight difference and model prediction (after the final-layer activation function) before and after the unlearning process.
We compare the performance of non-linear GNNs (introduced in Section 3.4) and linear GNNs, where the MLP extractor in non-linear Projector is pre-trained by supervised learning on the features of all training-set nodes except the deleted ones. We have the following observations from Table 2: (1) comparing the results in block 1, using an MLP as a feature extractor improves the average F1-score but also increases the variance of the model performance; (2) comparing the results in blocks 1 and 2, the linear-GNN achieves better performance than ordinary GNNs; (3) comparing the results in blocks 2 and 3, employing GraphEraser with non-linear GNNs significantly hurts the performance of the original GNN models, which is due to data heterogeneity and the lack of training data for each subgraph model.
**Robustness of Projector.** We study the change in test accuracy as we progressively increase the unlearning ratio from \(1\%\) to \(20\%\); a more stable model performance is preferred in real-world scenarios. As shown in Figure 4, the change in test accuracy for Projector is smaller (e.g., on the OGB-Arxiv dataset the test accuracy of Projector changes by around \(0.5\%\) while that of the GNNs changes by around \(0.8\%\sim 1\%\)), more stable (i.e., the test accuracy fluctuates less when the fraction of unlearned nodes increases), and the accuracy is even better than re-training ordinary GNNs.
**Evaluation on the \(\delta\) term in Assumption 3.**
The performance of Projector's unlearned solution is highly dependent on the correlation between node features, which is captured by the \(\delta\) term in Assumption 3. Therefore, we report the \(\delta\) by computing
\[\delta=\max_{v_{i}\in\mathcal{V}}\left\|\mathbf{x}_{i}-\mathbf{X}_{\text{ remain}}^{\top}\mathbf{X}_{\text{remain}}(\mathbf{X}_{\text{remain}}^{\top}\mathbf{X}_{ \text{remain}})^{\dagger}\mathbf{x}_{i}\right\|_{2}, \tag{5}\]
where \(\mathbf{X}_{\text{remain}}=[\mathbf{x}_{1},\dots,\mathbf{x}_{i-1},\mathbf{x}_{i+1},\dots,\mathbf{x}_{n}]\) is the stack of all remaining node features. As shown in Table 3, the \(\delta\) value is relatively small compared to the norm of the average node feature, which indicates that our assumption is realistic and supports the performance of Projector's unlearned solution (even without fine-tuning). Besides, we observe that the \(\delta\) value on the Cora dataset is larger than on the other datasets; this is because the Cora node features are binary-valued vectors of size \(1433\), which is close to the total number of nodes in the graph (\(2708\) nodes). When the node feature dimension is large and all values are either \(0\) or \(1\), representing any one vector in terms of the others becomes difficult, resulting in a larger \(\delta\).
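For reference, Eq. (5) amounts to a leave-one-out projection of each node feature onto the row space of the remaining features. The NumPy sketch below favours clarity over efficiency (it recomputes a pseudo-inverse for every node); the helper name is ours.

```python
import numpy as np

def max_projection_residual(X):
    """delta from Eq. (5): worst-case residual after projecting a node's
    feature onto the row space of the remaining nodes' features."""
    deltas = []
    for i in range(X.shape[0]):
        X_rem = np.delete(X, i, axis=0)
        G = X_rem.T @ X_rem
        proj = G @ np.linalg.pinv(G)       # projector onto range(X_rem^T)
        deltas.append(np.linalg.norm(X[i] - proj @ X[i]))
    return max(deltas)
```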
**More experiment results.** Additional experiment results are deferred to the appendix. We compare Projector with re-training non-linear GNNs under different node deletion schemes in Appendix A.1. We evaluate unlearning with a membership inference attack in Appendix A.2. We present an ablation study on the effectiveness of fine-tuning in Projector in Appendix A.3.
## 5 Conclusion
In this paper, we study graph representation unlearning by proposing a projection-based unlearning approach, Projector. Projector unlearns the deleted node features by projecting the weight parameters of a pre-trained model onto a subspace that is irrelevant to the deleted node features. Empirical results on real-world datasets illustrate its effectiveness, efficiency, and robustness.
## Acknowledgements
This work was supported in part by NSF grant 2008398.
| Dataset | Block | Method | Accuracy |
| --- | --- | --- | --- |
| OGB-Arxiv | 1 | Linear GNN + Adap diff | \(73.35\pm 0.12\) |
| OGB-Arxiv | 1 | Linear GNN + Adap diff + MLP | \(73.41\pm 0.31\) |
| OGB-Arxiv | 2 | GCN | \(71.74\pm 0.29\) |
| OGB-Arxiv | 2 | GraphSAGE | \(71.49\pm 0.27\) |
| OGB-Arxiv | 3 | GCN + GraphEraser | \(66.52\pm 0.31\) |
| OGB-Arxiv | 3 | GraphSAGE + GraphEraser | \(62.96\pm 0.26\) |
| OGB-Product | 1 | Linear GNN + Adap diff | \(80.25\pm 0.09\) |
| OGB-Product | 1 | Linear GNN + Adap diff + MLP | \(80.30\pm 0.40\) |
| OGB-Product | 2 | GAT | \(79.45\pm 0.59\) |
| OGB-Product | 2 | GraphSAGE | \(78.70\pm 0.36\) |
| OGB-Product | 2 | GraphSaint | \(79.08\pm 0.24\) |
| OGB-Product | 3 | GAT + GraphEraser | \(60.23\pm 0.71\) |
| OGB-Product | 3 | GraphSAGE + GraphEraser | \(58.99\pm 0.40\) |
| OGB-Product | 3 | GraphSaint + GraphEraser | \(59.54\pm 0.41\) |

Table 2: Comparison of the performance of the linear GNN and its non-linear extension with ordinary GNNs.
Figure 4: Comparison of the test performance with different numbers of nodes to unlearn.
| | OGB-Arxiv | OGB-Product | Cora | Pubmed |
| --- | --- | --- | --- | --- |
| \(\delta\) | 0.3815 | 0.0915 | 0.2984 | 0.0049 |
| \(\lVert\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\rVert_{2}\) | 9.6369 | 161.4997 | 0.6923 | 0.5546 |

Table 3: Evaluation of \(\delta\) on real-world datasets. |
2310.19454 | MMM and MMMSynth: Clustering of heterogeneous tabular data, and
synthetic data generation | We provide new algorithms for two tasks relating to heterogeneous tabular
datasets: clustering, and synthetic data generation. Tabular datasets typically
consist of heterogeneous data types (numerical, ordinal, categorical) in
columns, but may also have hidden cluster structure in their rows: for example,
they may be drawn from heterogeneous (geographical, socioeconomic,
methodological) sources, such that the outcome variable they describe (such as
the presence of a disease) may depend not only on the other variables but on
the cluster context. Moreover, sharing of biomedical data is often hindered by
patient confidentiality laws, and there is current interest in algorithms to
generate synthetic tabular data from real data, for example via deep learning.
We demonstrate a novel EM-based clustering algorithm, MMM (``Madras Mixture
Model''), that outperforms standard algorithms in determining clusters in
synthetic heterogeneous data, and recovers structure in real data. Based on
this, we demonstrate a synthetic tabular data generation algorithm, MMMsynth,
that pre-clusters the input data, and generates cluster-wise synthetic data
assuming cluster-specific data distributions for the input columns. We
benchmark this algorithm by testing the performance of standard ML algorithms
when they are trained on synthetic data and tested on real published datasets.
Our synthetic data generation algorithm outperforms other literature
tabular-data generators, and approaches the performance of training purely with
real data. | Chandrani Kumari, Rahul Siddharthan | 2023-10-30T11:26:01Z | http://arxiv.org/abs/2310.19454v2 | # MMM and MMMSynth: Clustering of heterogeneous tabular data, and synthetic data generation
###### Abstract
We provide new algorithms for two tasks relating to heterogeneous tabular datasets: clustering, and synthetic data generation. Tabular datasets typically consist of heterogeneous data types (numerical, ordinal, categorical) in columns, but may also have hidden cluster structure in their rows: for example, they may be drawn from heterogeneous (geographical, socioeconomic, methodological) sources, such that the outcome variable they describe (such as the presence of a disease) may depend not only on the other variables but on the cluster context. Moreover, sharing of biomedical data is often hindered by patient confidentiality laws, and there is current interest in algorithms to generate synthetic tabular data from real data, for example via deep learning.
We demonstrate a novel EM-based clustering algorithm, MMM ("Madras Mixture Model"), that outperforms standard algorithms in determining clusters in synthetic heterogeneous data, and recovers structure in real data. Based on this, we demonstrate a synthetic tabular data generation algorithm, MMMsynth, that pre-clusters the input data, and generates cluster-wise synthetic data assuming cluster- specific data distributions for the input columns. We benchmark this algorithm by testing the performance of standard ML algorithms when they are trained on synthetic data and tested on real published datasets. Our synthetic data generation algorithm outperforms other literature tabular-data generators, and approaches the performance of training purely with real data.
## 1 Introduction
Tabular datasets, consisting of heterogeneous variable types (categorical, ordinal, numeric), are ubiquitous in data science and, in particular, in biomedical research. Such datasets may additionally be heterogeneous in rows: they may consist of a mix of different subtypes or categories, corresponding to hidden structure that was not originally measured or included as part of the dataset (such as geographical location, socioeconomic class, genotype, etc.).
For machine learning applications, often one variable (often called a "response" or "output" variable) is of clinical interest (such as presence or absence of a disease or disorder such as diabetes, birth weight of a fetus) and the goal is to train a model to predict it from the other measurable variables (often called "predictor " or "input" variables). Patient confidentiality often restricts the ability to share such datasets freely, and several algorithms have been developed [1, 2, 3] to generate synthetic datasets that closely resemble real datasets and can be used to train ML models and shared freely.
Here we address both tasks, of clustering heterogeneous data, and of generating realistic synthetic datasets as measured by their performance in training models on them for ML prediction on the real data.
Several standard clustering algorithms exist for multidimensional data and are implemented in machine learning packages such as scikit-learn[4], ClusterR[5] and Clustering.jl[6]. These include agglomerative (hierarchical) clustering methods such as UPGMA, partitioning algorithms such as \(K\)-means that seek to optimize a particular distance metric across \(K\) clusters, and mixture models such as Gaussian Mixture Models (GMMs) that assume that the data is drawn from a mixture of distributions and simultaneously learn the parameters of the distributions and the assignment of data points to distributions.
These methods assume an underlying appropriate distance metric (such as Euclidean distance) (agglomerative clustering, or \(K\)-means), or assume an underlying probability distribution for the data (GMMs) which is to be learned.
Many datasets consist of a mixture of binary (eg, gender) categorical (eg, ethnicity), ordinal (eg, number of children), and numerical data (eg, height, weight). Columns in such a tabular dataset may be correlated or interdependent.
Here we propose an algorithm, which we call the Madras Mixture Model (MMM), to cluster tabular data where each column is assumed to be independently drawn from either a categorical or a real-valued distribution. For each clustering, we optimize the likelihood of the total data being drawn from that clustering:
\[P(D|K)=\prod_{j=1}^{K}P(d:d\in j) \tag{1}\]
where \(K\) is the number of clusters, and the right hand side is a product over clusters for the likelihood that the subset of rows \(d\) that are assigned to cluster \(j\) would be co-clustered (this is stated more precisely in Methods).
We assume that categorical variables are drawn from an unknown categorical distribution with a Dirichlet prior, and numeric variables are drawn from an unknown normal distribution with a normal-Gamma prior. Unlike with standard GMM algorithms, we do not attempt to estimate the parameters of the distributions, but integrate over them to obtain the likelihoods (see Methods). Our clustering approach is a variation of expectation maximization (EM) where, essentially, the M-step in the usual GMM algorithm is replaced with this integration.
To determine the true number of clusters, we use the marginal likelihood (ML) (sometimes called the Bayesian Occam's razor) [7]. While the Bayesian information criterion [8] is widely used as an approximation to the ML, its performance on our synthetic dataset benchmarks was inferior.
The most accurate numerical calculation of the ML is using thermodynamic integration (TI) [9, 10], reviewed in Methods, and we provide an implementation using TI. This is computationally expensive since it involves sampling at multiple different "temperatures" and integrating. As a faster alternative, the ML is frequently estimated using the harmonic mean (HM) of samples[11], but this is known to give a biased estimate in practice[12]. We give an improved approximation, which we call HM\(\beta\), involving a fictitious inverse temperature \(\beta\), which, for suitable \(\beta\), converges to the true value much faster than the HM on small datasets where the exact answer is calculable, and
also converges rapidly to a fixed value on larger datasets. We demonstrate that HM\(\beta\) with \(\beta\approx 0.5\) produces results comparable to TI on our synthetic datasets.
Finally, we use MMM as a basis for a synthetic tabular data generation algorithm, MMMsynth. Each cluster is replaced with a synthetic cluster which column-wise has the same statistical properties for the input variable, and whose output variable is estimated with a noisy linear function learned from the corresponding cluster in the true data. All these synthetic clusters are then pooled to generate a synthetic dataset.
We assess quality of synthetic data by the performance, in predicting on real data, of ML models trained on synthetic data. We demonstrate that this rather simple approach significantly outperforms other published methods CT-GAN and CGAN, and performs comparably with or better than Gaussian Copula and TVAE. Our performance in many cases approaches the quality of prediction from training on real data.
## 2 Methods
### MMM: Clustering of heterogeneous data
Consider a tabular data set consisting of \(L\) heterogeneous columns and \(N\) rows. Each row of the set then consists of variables \(x_{i}\), \(i=1,2,\ldots,L\). Each \(x_{i}\) can be binary, categorical, ordinal, integer, or real. We consider only categorical (including binary) or numeric data; ordinal or numeric integer data can be treated as either categorical or numeric depending on context. If there are missing data, they should first be interpolated or imputed via a suitable method.
### Discrete data, Dirichlet prior
For a categorical variable with \(k\) values, the Dirichlet prior is \(P(\vec{p})\propto\prod_{i=1}^{k}p_{i}^{c_{i}-1}\). For binary variables (\(k=2\)) this is called the beta prior. If we have already observed data \(D\) consisting of \(N\) observations, with each outcome \(j\) occurring \(N_{j}\) times, the posterior predictive for outcome \(x=i\) (\(1\leq i,j\leq k\)) is
\[P(x=i|D)=\frac{N_{i}+c_{i}}{N+C} \tag{2}\]
where \(C=\sum_{i=1}^{k}c_{i}\).
### Continuous data, normal-gamma prior
For a continuous normally-distributed variable, we use a normal-gamma prior, as described in [13], with four hyperparameters, which we call \(\mu_{0},\beta_{0},a_{0},b_{0}\):
\[p(\mu,\lambda) =\mathcal{N}\left(\mu|\mu_{0},(\beta_{0}\lambda)^{-1}\right) \mathrm{Gam}(\lambda|a_{0},b_{0}) \tag{3}\] \[=\frac{(\beta_{0}\lambda)^{1/2}}{\sqrt{2\pi}}e^{-\frac{\beta_{0} \lambda}{2}(\mu-\mu_{0})^{2}}\frac{1}{\Gamma(a_{0})}b_{0}^{a_{0}}\lambda^{a_{0 }-1}e^{-b_{0}\lambda}\] \[=\left(\frac{\beta_{0}}{2\pi}\right)^{1/2}\frac{b_{0}^{a_{0}}}{ \Gamma(a_{0})}\lambda^{a_{0}-1/2}\exp\left(-\frac{\lambda}{2}\left[\beta_{0}( \mu-\mu_{0})^{2}+2b_{0}\right]\right). \tag{4}\]
Here \(\lambda\) is the inverse of the variance, \(\lambda=\frac{1}{\sigma^{2}}\).
Given data \(D\) consisting of \(n\) items \(x_{i}\), \(i=1\ldots n\), the posterior is
\[p(\mu,\lambda|D) =NG(\mu,\lambda|\mu_{n},\beta_{n},a_{n},b_{n})=\left(\frac{\beta_{n}}{2\pi}\right)^{1/2}\frac{b_{n}^{a_{n}}}{\Gamma(a_{n})}\lambda^{a_{n}-1/2}\exp\left(-\frac{\lambda}{2}\left[\beta_{n}(\mu-\mu_{n})^{2}+2b_{n}\right]\right) \tag{5}\]
where
\[\mu_{n} =\frac{\beta_{0}\mu_{0}+n\bar{x}}{\beta_{0}+n} \tag{6}\] \[\beta_{n} =\beta_{0}+n\] (7) \[a_{n} =a_{0}+\frac{n}{2}\] (8) \[b_{n} =b_{0}+\frac{1}{2}\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}+\frac{\beta_{ 0}n(\bar{x}-\mu_{0})^{2}}{2(\beta_{0}+n)} \tag{9}\]
The posterior predictive, for seeing a single new data item \(x\) given the previous data \(D\), is [13]
\[p(x|D)=\pi^{-1/2}\frac{\Gamma(a_{n}+\frac{1}{2})}{\Gamma(a_{n})}\left(\frac{ \Lambda}{2a_{n}}\right)^{\frac{1}{2}}\left(1+\frac{\Lambda(x-\mu_{n})^{2}}{2a _{n}}\right)^{-\left(a_{n}+\frac{1}{2}\right)} \tag{10}\]
where
\[\Lambda=\frac{a_{n}\beta_{n}}{b_{n}(\beta_{n}+1)} \tag{11}\]
In log space:
\[\log p(x|D)=-0.5\log\pi+\log\Gamma(a_{n}+\frac{1}{2})-\log\Gamma(a_{n})+0.5( \log\Lambda-\log(2a_{n}))-\left(a_{n}+\frac{1}{2}\right)\log\left(1+\frac{ \Lambda(x-\mu_{n})^{2}}{2a_{n}}\right) \tag{12}\]
The marginal likelihood is
\[p(D)=\frac{\Gamma(a_{n})}{\Gamma(a_{0})}\frac{b_{0}^{a_{0}}}{b_{n}^{a_{n}}} \left(\frac{\beta_{0}}{\beta_{n}}\right)^{1/2}(2\pi)^{-n/2} \tag{13}\]
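The updates of Eqs. (6)-(9), the log posterior predictive of Eq. (12), and the log marginal likelihood of Eq. (13) translate directly into code. The following Python sketch is a plain transcription of these formulas (the hyperparameter defaults are placeholders, and the released implementation is in Julia; see Code availability):

```python
import numpy as np
from scipy.special import gammaln

def ng_posterior(x, mu0=0.0, beta0=1.0, a0=1.0, b0=1.0):
    """Posterior hyperparameters of Eqs. (6)-(9) for a data vector x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean() if n > 0 else 0.0
    mu_n = (beta0 * mu0 + n * xbar) / (beta0 + n)
    beta_n = beta0 + n
    a_n = a0 + n / 2.0
    b_n = (b0 + 0.5 * ((x - xbar) ** 2).sum()
           + beta0 * n * (xbar - mu0) ** 2 / (2.0 * (beta0 + n)))
    return mu_n, beta_n, a_n, b_n

def ng_log_predictive(x_new, mu_n, beta_n, a_n, b_n):
    """Log posterior predictive of a single new point, Eq. (12)."""
    lam = a_n * beta_n / (b_n * (beta_n + 1.0))
    return (-0.5 * np.log(np.pi) + gammaln(a_n + 0.5) - gammaln(a_n)
            + 0.5 * (np.log(lam) - np.log(2.0 * a_n))
            - (a_n + 0.5) * np.log1p(lam * (x_new - mu_n) ** 2 / (2.0 * a_n)))

def ng_log_marginal(x, mu0=0.0, beta0=1.0, a0=1.0, b0=1.0):
    """Log marginal likelihood of a data vector, Eq. (13)."""
    n = len(x)
    _, beta_n, a_n, b_n = ng_posterior(x, mu0, beta0, a0, b0)
    return (gammaln(a_n) - gammaln(a0) + a0 * np.log(b0) - a_n * np.log(b_n)
            + 0.5 * (np.log(beta0) - np.log(beta_n))
            - 0.5 * n * np.log(2.0 * np.pi))
```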
### Optimizing likelihood of a clustering by expectation maximization
Let the data \(D\) consist of \(N\) rows, so that \(D_{i}\) is the \(i\)'th row. Let the model be denoted by \(M_{K}\) where \(K\) is the number of clusters. Each cluster has its own parameters of the categorical or normal distribution for each column which we call \(\vec{\theta}\), with \(\vec{\theta}_{\ell j}\) being the vector of parameters for column \(\ell\) in cluster \(j\). The vector has \(k-1\) independent components for a categorical distribution of \(k\) categories, and 2 components for a normal distribution, which are all continuous. Another, discrete parameter for the model is the detailed cluster assignment of each row \(D_{i}\) to each cluster \(C_{j}\). This can be described by a vector \(\vec{A}\) of length \(N\), whose elements \(A_{i}\) take values from 1 to \(K\). A specific clustering is described by \(\vec{\Theta}=\left\{\vec{\theta},\vec{A}\right\}\).
Given \(K\), we seek an optimal clustering (i.e, optimal choice of \(\vec{A}\)) by maximum likelihood. The likelihood being maximized is
\[\vec{A}^{*} =\operatorname*{argmax}_{\vec{A}}\prod_{j=1}^{K}P(d_{j})\] \[=\operatorname*{argmax}_{\vec{A}}\prod_{j=1}^{K}\int P(d_{j}| \vec{\theta}_{j})P(\vec{\theta}_{j})d\vec{\theta}_{j}. \tag{14}\]
Here \(d_{j}\) is shorthand for \(\{D_{i}|A_{i}=j\}\), that is, it is the set of rows from \(D\) that belong to cluster \(j\), and \(\vec{\theta}_{j}\) is the set of parameters \(\vec{\theta}\) specific to cluster \(j\). The unknown parameters \(\vec{\theta_{j}}\) are integrated over, separately for each cluster, and \(P(\vec{\theta_{j}})\) is a Dirichlet or normal-Gamma prior as appropriate. That is, we seek to find that assignment of individual rows to clusters, \(\vec{A}\), such that the product of the likelihoods that the set of rows \(D_{j}\) that have been assigned to the cluster \(j\) are described by the same probabilistic model is maximized. There is an implicit product over columns, which are assumed independent.
In contrast to EM implementations of GMMs, we only seek to learn \(\vec{A}\); we do not seek the parameters of the probabilistic model \(\vec{\theta}\), but integrate over them.
An algorithm for clustering into \(K\) clusters could be
1. Initialize with a random assignment of rows to clusters.
2. Calculate a score matrix \(L_{ij}\). For each row \(i\), this is the likelihood that \(D_{i}\) belongs to cluster \(j\) for all \(j\) (excluding \(i\) from its current cluster).
3. Assign each row \(i\) to the cluster corresponding to that value of \(j\) which maximises \(L_{ij}\). (This is similar to the E-step in EM.)
4. Repeat from step 2 until no reassignments are made.
Instead of the M-step in EM, step 2 calculates likelihoods according to the posterior predictives for the categorical (2) and normal (10) distributions. The likelihood for a row is the product of the likelihoods over columns. We work with log likelihoods.
In our implementation, we iterate starting from one cluster. After optimizing each \(K\) clustering, starting at \(K=1\), we use a heuristic to initialize \(K+1\) clusters: we pick the poorest-fitting \(\frac{N}{K+1}\) rows (measured by their posterior predictive for the cluster that they are currently in) and move them to a new cluster, and then run the algorithm as above.
We can choose to either stop at a pre-defined \(K\), or identify the optimal \(K\) via marginal likelihood, as described below (the optimal \(K\) could be 1).
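A skeleton of this assignment loop is sketched below in Python (the released implementation is in Julia). The posterior-predictive scoring is passed in as a callable so that the same loop covers categorical (Eq. 2) and numeric (Eq. 12) columns; names and defaults are illustrative.

```python
import numpy as np

def mmm_hard_assign(X, K, log_predictive, n_iter=100, seed=0):
    """Hard-assignment loop of steps 1-4 above.

    log_predictive(row, members) must return the summed log posterior
    predictive of `row` given the rows currently in a cluster; it must also
    handle an empty cluster (where it reduces to the prior predictive).
    """
    rng = np.random.default_rng(seed)
    assign = rng.integers(K, size=len(X))                        # step 1
    for _ in range(n_iter):
        scores = np.empty((len(X), K))
        for i in range(len(X)):
            for j in range(K):
                members = np.flatnonzero((assign == j) & (np.arange(len(X)) != i))
                scores[i, j] = log_predictive(X[i], X[members])  # step 2
        new_assign = scores.argmax(axis=1)                       # step 3
        if np.array_equal(new_assign, assign):                   # step 4
            break
        assign = new_assign
    return assign
```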
### Identifying the correct \(K\): Marginal likelihood
Equation 14 maximizes the likelihood of a clustering over \(\vec{A}\), while marginalizing over \(\vec{\theta}\). The correct \(K\) in the Bayesian approach is the \(K\) that maximises the marginal likelihood (ML) marginalized over _all_ parameters including \(\vec{A}\). In other words, while one can increase the likelihood in eq 14 by splitting into more and more clusters, beyond a point this will lead to overfitting; the full ML penalizes this (an approach also called the Bayesian Occam razor [7]).
Unfortunately, exact calculation of the ML (marginalizing over \(\vec{A}\)) is impossible. There are several approaches using sampling; we review two below (harmonic mean [HM], and thermodynamic integration [TI]), before introducing our own, a variant of HM which we call HM\(\beta\), which we show is more accurate than HM, and on our data, comparably accurate to TI while being faster.
#### 2.5.1 Arithmetic Mean and Harmonic Mean
A straightforward estimation of the marginal likelihood for \(K\) clusters (\(ML_{K}\)) would be to sample uniformly from the parameter space for \(\vec{A}\), and calculate the average likelihood over \(M\) samples (there is an implicit marginalization over \(\vec{\theta}\) throughout):
\[ML_{K}\equiv P(D|K)\approx\frac{1}{M}\sum_{m=1}^{M}P(D|K,\vec{A}_{m}) \tag{15}\]
This is the arithmetic mean (AM) estimate, and tends to be biased to lower likelihoods because the region of high-likelihood parameters is very small.
An alternative is to start from Bayes' theorem:
\[P(\vec{A}|D,K)=\frac{P(D|\vec{A},K)P(\vec{A}|K)}{\sum_{\vec{A}^{\prime}}P(D|\vec{A}^{\prime},K)P(\vec{A}^{\prime}|K)} \tag{16}\]
The denominator on the right is the marginal likelihood for \(K\) clusters. Rearranging,
\[\frac{P(\vec{A}|K)}{ML_{K}}=\frac{P(\vec{A}|D,K)}{P(D|\vec{A},K)} \tag{17}\]
and summing over \(\vec{A}\) with \(\sum_{\vec{A}}P(\vec{A}|K)=1\),
\[\frac{1}{ML_{K}}=\sum_{\vec{A}}\frac{P(\vec{A}|D,K)}{P(D|\vec{A},K)} \tag{18}\]
If we sample \(\vec{A}\) from the distribution \(P(\vec{A}|D,K)\), then for \(M\) samples we have
\[ML_{K}\approx\left(\frac{1}{M}\sum_{m=1}^{M}\frac{1}{P(D|\vec{A}_{m},K)} \right)^{-1}. \tag{19}\]
This is the "harmonic mean" (HM) approximation. Both the HM and the AM can be derived via different choices of an importance sampling distribution in a Metropolis-Hastings scheme [10]. The HM is known to be biased towards higher likelihoods in practice, oppositely to the AM.
#### 2.5.2 Thermodynamic integration
Thermodynamic integration (TI), a technique borrowed from physics, was described in the statistical inference context by Gelman and Meng [9]. The following quick summary is adapted to our notation from Lartillot and Philippe [10].
Suppose one has an un-normalized density in parameter space, parametrized by \(\beta\), \(q_{\beta}(\vec{A})\). We can define a "partition function"
\[Z_{\beta}=\int q_{\beta}(\vec{A})d\vec{A}. \tag{20}\]
In our case \(\vec{A}\) is discrete, so the integral, here and below, should be interpreted as a sum. From this we get a normalized density
\[p_{\beta}(\vec{A})=\frac{1}{Z_{\beta}}q_{\beta}(\vec{A}). \tag{21}\]
We then have
\[\frac{\partial}{\partial\beta}\log Z_{\beta} =\frac{1}{Z_{\beta}}\frac{\partial Z_{\beta}}{\partial\beta}\] \[=\frac{1}{Z_{\beta}}\frac{\partial}{\partial\beta}\int q_{\beta}( \vec{A})d\vec{A}\] \[=\int\frac{1}{q_{\beta}(\vec{A})}\frac{\partial q_{\beta}(\vec{A} )}{\partial\beta}\frac{q_{\beta}(\vec{A})}{Z_{\beta}}dA\] \[=E_{\beta}\left[\frac{\partial\log q_{\beta}(\vec{A})}{\partial \beta}\right]\]
Defining \(U(\vec{A})=\frac{\partial}{\partial\beta}\log q_{\beta}(\vec{A})\), and integrating from \(\beta=0\) to \(1\),
\[\log Z_{1}-\log Z_{0}=\int_{0}^{1}E_{\beta}[U]d\beta. \tag{22}\]
Consider the particular choice
\[q_{\beta}(\vec{A})=P(D|\vec{A},K)^{\beta}P(\vec{A}|K) \tag{23}\]
where, as above, \(K\) is the number of clusters. Then \(q_{0}\) is the prior for \(\vec{A}\), and \(q_{1}\) is proportional to the posterior. Therefore \(Z_{0}\) is \(1\) (since \(q_{0}\) is normalized) and \(Z_{1}\) is the marginal likelihood. Substituting,
\[\log\mathrm{ML}=\int_{0}^{1}E_{\beta}\left[\log P(D|\vec{A},K)\right]d\beta \tag{24}\]
The expectation \(E_{\beta}\) is calculated by sampling at various \(\beta\) and the integral is found by Simpson's rule.
#### 2.5.3 A faster approximation
We return to the HM approximation (eq 18). The problem is that the distribution \(P(\vec{A}|D,K)\) is strongly peaked around the optimal parameters \(\vec{A}\). To broaden the distribution we can introduce a fictitious inverse temperature \(\beta\) (not the same as in TI), and write
\[\frac{1}{ML_{K}}=\sum_{\vec{A}}\frac{P(\vec{A}|D,K)^{\beta}P(\vec{A}|D,K)^{1- \beta}}{P(D|\vec{A},K)} \tag{25}\]
But
\[P(\vec{A}|D,K)=\frac{P(D|\vec{A},K)P(\vec{A}|K)}{ML_{K}} \tag{26}\]
and this gives
\[\frac{1}{ML_{K}}=\sum_{\vec{A}}P(\vec{A}|D,K)^{\beta}\left(\frac{P(\vec{A}|K) }{ML_{K}}\right)^{1-\beta}P(D|\vec{A},K)^{-\beta} \tag{27}\]
and therefore
\[ML_{K}^{-\beta}=\sum_{\vec{A}}P(\vec{A}|D,K)^{\beta}P(\vec{A}|K)^{1-\beta}P(D| \vec{A},K)^{-\beta} \tag{28}\]
We can evaluate \(ML_{K}\) from this by sampling from
\[\tilde{P}(\vec{A}|D,K)\equiv P(\vec{A}|D,K)^{\beta}P(\vec{A}|K)^{1-\beta} \tag{29}\]
(in practice from \(P(D|\vec{A},K)^{\beta}\) which is proportional to this) rather than \(P(\vec{A}|D,K)\). However, this distribution is not normalized. Let \(\sum_{\vec{A}}\tilde{P}(\vec{A}|D,K)=\sum_{\vec{A}}P(\vec{A}|D,K)^{\beta}P(\vec {A}|K)^{1-\beta}=Z\neq 1\). If we take \(M\) samples from \(\tilde{P}(\vec{A}|D,K)\), then
\[ML_{K}^{-\beta} =\frac{1}{M}\sum_{\stackrel{{ m=1}}{{\text{samples from }}}}^{M}P(D|\vec{A},K)^{-\beta}\times Z \tag{30}\] \[\equiv\left\langle P(D|\vec{A},K)^{-\beta}\right\rangle_{\tilde{P }}Z. \tag{31}\]
To estimate \(Z\) we are forced to sample from the normalized distribution \(P(\vec{A}|D,K)\):
\[Z =\sum_{\vec{A}}\tilde{P}(\vec{A}|D,K)=\sum_{\vec{A}}\frac{P(\vec{ A}|D,K)^{\beta}P(\vec{A}|K)^{1-\beta}}{P(\vec{A}|D,K)}P(\vec{A}|D,K) \tag{32}\] \[\equiv\left\langle\frac{P(\vec{A}|D,K)^{\beta}P(\vec{A}|K)^{1- \beta}}{P(\vec{A}|D,K)}\right\rangle_{P}\] (33) \[=\left\langle\frac{P(D|\vec{A},K)^{\beta-1}P(\vec{A}|K)^{\beta-1 }}{ML_{K}^{\beta-1}}P(\vec{A}|K)^{1-\beta}\right\rangle_{P} \tag{34}\]
Substituting,
\[ML_{K}^{-\beta}=ML_{K}^{1-\beta}\left\langle P(D|\vec{A},K)^{-\beta}\right\rangle _{\tilde{P}}\left\langle P(D|\vec{A},K)^{\beta-1}\right\rangle_{P}\]
\[ML_{K}^{-1}=\left\langle P(D|\vec{A},K)^{-\beta}\right\rangle_{\tilde{P}} \left\langle P(D|\vec{A},K)^{\beta-1}\right\rangle_{P}\]
and finally
\[\log ML_{K}=-\log\left\langle P(D|\vec{A},K)^{-\beta}\right\rangle_{\tilde{P} }-\log\left\langle P(D|\vec{A},K)^{\beta-1}\right\rangle_{P} \tag{35}\]
where, again, \(\langle\ldots\rangle_{\tilde{P}}\) means an average over \(M\) samples from \(P(D|\vec{A},K)^{\beta}\), and \(\langle\ldots\rangle_{P}\) means an average over \(M\) samples from \(P(\vec{A}|D,K)\). This expression reduces to the arithmetic mean if \(\beta=0\) and to the harmonic mean if \(\beta=1\). We refer to it as HM\(\beta\).
We assess an optimal choice of \(\beta\) using a small dataset of 20 rows where the marginalization can be carried out explicitly, by summing over all 1,048,574 possible cluster assignments to two clusters (figure 1). The TI method converges quite quickly to the exact answer, and the HM\(\beta\) for \(\beta=0.5\) also gives good results. On larger datasets too we found \(\beta=0.5\) an optimal choice (as in next subsection), though the exact value of the ML is not known.
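Numerically, Eq. (35) is best evaluated in log space. A small sketch is given below (sample generation from the tempered and posterior distributions is assumed to come from the two sampling runs described above; the function name is ours):

```python
import numpy as np
from scipy.special import logsumexp

def log_ml_hm_beta(loglik_tempered, loglik_posterior, beta=0.5):
    """HM-beta estimate of log ML_K, Eq. (35).

    loglik_tempered:  log P(D | A_m, K) for A_m sampled from the tempered
                      distribution proportional to P(D | A, K)^beta
    loglik_posterior: log P(D | A_m, K) for A_m sampled from P(A | D, K)
    """
    lt = np.asarray(loglik_tempered, dtype=float)
    lp = np.asarray(loglik_posterior, dtype=float)
    term1 = logsumexp(-beta * lt) - np.log(len(lt))         # log <P^(-beta)>
    term2 = logsumexp((beta - 1.0) * lp) - np.log(len(lp))  # log <P^(beta-1)>
    return -term1 - term2
```

With \(\beta=1\) the second average is over constants and the expression collapses to the harmonic mean, as noted above.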
#### 2.5.4 Synthetic data for benchmarking clustering
We generated multiple datasets, each with 10 columns, 5000 rows, with five clusters each, in a 5:4:3:2:1 ratio, with varying parameters, as follows. For categorical data, columns in each cluster \(j\) were sampled from vectors \(\mathbf{v}_{0}+\Delta\mathbf{v}_{j}\) where \(\mathbf{v}_{0}\) was common to all clusters and \(\mathbf{v}_{j}\) was specific to the \(j\)'th cluster, and \(\Delta\) was varied from 0.5 to 4.5, with smaller values indicating greater similarity among clusters. Here, five columns were binary and five were 4-valued. For numeric data, the means
Figure 1: For a dataset of 20 rows, the marginal likelihood can be calculated exactly (solid line); this is compared with TI and with HM\(\beta\) at various \(\beta\). \(\beta=0.5\) gives results close to the exact value.
in different clusters were separated by 1.0 and the standard deviations were separated by \(\delta\sigma\), which varied from 0.5 to 4.5, with smaller \(\delta\sigma\) indicating more distinct clusters. We also simulated numeric data where the means in all clusters were the same and only the standard deviations differed by \(\delta\sigma\). Finally, we generated mixed datasets, which included five numeric and five categorical (4-valued) columns, varying the parameter \(\Delta\) as above and fixing \(\delta\sigma=5.0-\Delta\), so that increasing \(\Delta\) makes the clusters more distinct (less similar) in both the categorical and the numeric columns.
### Assessment of benchmarking
We used the adjusted Rand index (ARI) [14] to compare true and predicted clusters.
We also used normalized clustering accuracy, as provided by the GenieClust [15] suite, and defined by Gagolewski in [16], where applicable; the results were similar, but this formula applies only when the true and predicted numbers of clusters are the same.
### MMMsynth: generating synthetic data with MMM
#### 2.7.1 Synthetic data generation algorithm
MMMSynth uses MMM to pre-cluster an input dataset, excluding the output column. Each cluster is assumed, as in MMM, to consist of independent columns that are either categorical or numeric. The parameters of the corresponding multinomial or Gaussian distribution are fitted to each column in each cluster, and a new cluster of the same size is generated by sampling from these distributions. A linear model is fitted to the output column in each real cluster and used to generate the output column in the synthetic clusters. The synthetic clusters are finally combined to produce a full dataset of the same size as the original dataset.
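A simplified sketch of the per-cluster generation step is shown below. It assumes the numeric and integer-coded categorical columns of one cluster have already been separated; the helper name is ours and the released (Julia) implementation may differ in details, e.g., in how binary outputs are handled.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def synth_cluster(X_num, X_cat, y, rng):
    """Generate one synthetic cluster of the same size as the real one."""
    m = len(y)
    # Column-wise Gaussian fits for the numeric columns.
    Xn = rng.normal(X_num.mean(axis=0), X_num.std(axis=0) + 1e-12,
                    size=X_num.shape)
    # Column-wise categorical fits.
    Xc = np.empty_like(X_cat)
    for j in range(X_cat.shape[1]):
        vals, counts = np.unique(X_cat[:, j], return_counts=True)
        Xc[:, j] = rng.choice(vals, size=m, p=counts / counts.sum())
    # Noisy linear model for the output column, fitted on the real cluster.
    X_real = np.hstack([X_num, X_cat])
    lin = LinearRegression().fit(X_real, y)
    resid_sd = float(np.std(y - lin.predict(X_real)))
    X_syn = np.hstack([Xn, Xc])
    y_syn = lin.predict(X_syn) + rng.normal(0.0, resid_sd, size=m)
    return X_syn, y_syn
```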
For comparison, we generated synthetic data using methods available in the literature. We used the Synthetic Data Vault [17] libraries in Python to generate synthetic data using TVAE, Gaussian Copula, CTGAN and CGAN.
#### 2.7.2 Benchmarking MMMsynth-generated synthetic datasets
To evaluate the similarity of the generated data to the real data, we trained machine learning models (logistic regression, random forest) on the synthetic data and evaluated their predictive performance on the real dataset. We also compared with the performance of a model trained on the real dataset and evaluated on the same dataset. We used six datasets from the UCI machine learning repository.
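The evaluation itself is a standard "train on synthetic, test on real" loop; a minimal sketch for a binary output column (names illustrative) is:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def synthetic_utility(X_syn, y_syn, X_real, y_real, seed=0):
    """Train on synthetic data, report AUC on the real dataset."""
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(X_syn, y_syn)
    return roc_auc_score(y_real, clf.predict_proba(X_real)[:, 1])
```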
### Benchmarks: Real datasets used
We used the following datasets from the UCI Machine Learning Repository[18], some of which were obtained via the Kaggle platform:
* Abalone: predicting age of abalone from physical measurements, 8 predictors (input variables), 1323 rows, from UCI [https://archive.ics.uci.edu/ml/datasets/abalone/](https://archive.ics.uci.edu/ml/datasets/abalone/)
* Heart failure prediction dataset: compiled from UCI Machine learning Repository, 11 predictors, 918 rows, from [https://www.kaggle.com/fedsoriaon/heart-failure/prediction](https://www.kaggle.com/fedsoriaon/heart-failure/prediction)
* Pima Indians diabetes: 8 predictors, 768 rows, from [https://www.kaggle.com/datasets/uciml/pima-indians-diabetes-database](https://www.kaggle.com/datasets/uciml/pima-indians-diabetes-database), source UCI
* Breast cancer Wisconsin (Diagnostic) dataset: 30 predictors, 569 rows, from [https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29)
* Maternal health risk data: 7 predictors, 676 rows, from [https://www.kaggle.com/datasets/csafrit2/maternal-health-risk-data](https://www.kaggle.com/datasets/csafrit2/maternal-health-risk-data)
* Stroke prediction dataset: 10 predictors, 4909 rows, of which 209 positive and 4700 negative, from [https://www.kaggle.com/datasets/zzettrkalpakbal/full-filled-brain-stroke-dataset](https://www.kaggle.com/datasets/zzettrkalpakbal/full-filled-brain-stroke-dataset)
* Connectionist Bench (Sonar, Mines vs. Rocks) dataset: 60 predictors, 207 rows, from [https://archive.ics.uci.edu/dataset/151/connectionist+bench+sonar+mines+vs+rocks](https://archive.ics.uci.edu/dataset/151/connectionist+bench+sonar+mines+vs+rocks)
All of these have binary output variables. For clustering benchmarking, all were used (sonar was used as part of the ClustBench set, below). For synthetic data, the stroke dataset was omitted since it was highly unbalanced.
In addition, datasets from the UCI subdirectory of ClustBench [19] were used, many of which have multi-valued outputs. These consist of sonar (described above), as well as
* ecoli: 7 predictors, 335 rows, 8 categories
* glass: 9 predictors, 213 rows, 6 categories
* ionosphere: 34 predictors, 350 rows, 2 categories
* statlog: 19 predictors, 2309 rows, 7 categories
* wdbc: 30 predictors, 568 rows, 2 categories
* wine: 13 predictors, 177 rows, 3 categories
* yeast: 8 predictors, 1483 rows, 10 categories
## 3 Results
### Clustering algorithm performance
#### 3.1.1 Synthetic data: purely categorical, purely normal, mixed
We generate three kinds of synthetic datasets: purely categorical, purely numeric (normally distributed), and mixed, as described in Methods. Each dataset has five clusters. We vary a parameter (\(\delta\sigma\) for numeric, \(\Delta\) for categorical), as described in Methods, to tune how similar the clusters are to each other: large \(\delta\sigma\) or small \(\Delta\) indicates similar clusters and a harder problem. Figure 2 shows the results for normalized accuracy: for purely normally distributed data, MMM performs comparably with Gaussian mixture models; for purely categorical data, we greatly outperform all other methods, with only \(K\)-means with one-hot encoding coming close; and for mixed normal+categorical data, too, our performance is superior to the other methods. We run MMM in three modes: telling it the true number of clusters, or selecting \(K\) with the TI or HM\(\beta\) (\(\beta=0.5\)) criteria. All other methods are told the correct number of clusters.
Figure 2: Clustering of four kinds of synthetic datasets: (A) purely numeric (normally distributed, differing means and variances), (B) purely numeric (normally distributed, same mean but differing variances), (C) purely categorical, and (D) mixed.
#### 3.1.2 Real data
We consider the UCI abalone, breast, diabetes, heart, MHRD and stroke datasets described in Methods, which have binary outcome variables, and also eight datasets from the UCI machine learning database which are included in ClustBench [19], which have output labels of varying number from 2 to 10. These are intended as tests of classification, not clustering, tasks. Nevertheless, clustering these datasets on input variables (excluding the outcome variable) often shows significant overlap with the clustering according to the output variable, as shown in figure 3. This suggests that we are recovering real underlying structure in the data. We compare various other methods, all of which were run with the correct known number of clusters, while MMM was run with ("fixed") and without telling it the true number of clusters.
#### 3.1.3 MMMSynth: ML performance on synthetic data
We use the six Kaggle/UCI datasets (Abalone, Breast Cancer, Diabetes, Heart, MHRD and Sonar) benchmarked above to generate synthetic data, as described in Methods, train ML algorithms (logistic regression, random forest) on these, and measure the predictive performance on the real data. Figure 4 shows the results. Also tested are the Triplet-based Variational Autoencoder (TVAE), Gaussian Copula (GC), Conditional Generative Adversarial Network (CTGAN), and Copula Generative Adversarial Network (CGAN). On all datasets, we show performance comparable to training on real data. TVAE and GC also perform well on most datasets, while CGAN and CTGAN show poorer performance. All programs were run with default parameters.
## 4 Discussion
We present a clustering algorithm, MMM, that clusters heterogeneous data consisting of categorical and numeric columns. We demonstrate good performance on a variety of publicly available datasets
Figure 3: Performance of MMM, using the TI criterion, and “MMM (fixed)” where it is told the true number of clusters versus other programs, on eight datasets from ClustBench (A) and six datasets from UCI (B). Performance is reported using Adjusted Rand Index.
and on our synthetic data benchmarks. Speed optimizations will be explored in future work.
Currently the columns are assumed to be independent, but it will be a straightforward exercise to use a multivariate Gaussian to describe the numeric columns. This too will be explored in future work. Despite this, our performance is comparable to, and sometimes better than, scikit-learn's GMM implementation on the real-world UCI and ClustBench data presented here.
We further use MMM as a basis for a synthetic data generation algorithm, MMMSynth. We demonstrate that MMMSynth generates synthetic data that closely resembles real datasets. Our method performs better than current synthetic data generation algorithms in the literature (TVAE, Gaussian Copula, CGAN and CTGAN). Notably, all these methods explicitly assume and model correlations between input columns: CGAN and Gaussian Copula use copula functions to capture correlations between variables, while TVAE and CTGAN employ deep learning using all columns. In contrast, we first cluster the data and then assume that, within each cluster, columns are uncorrelated.
Our approach indirectly accounts for some correlations: for example, if two binary columns are correlated (1 tends to occur with 1, and 0 with 0), we would be likely to cluster the 1's together and 0's together. It will also account for multimodal numeric columns since these would be better represented as a sum of Gaussians. In tests on synthetically-generated non-normally-distributed numeric data (for example, Gamma-distributed with long tails), MMM breaks the data into multiple clusters, suggesting an attempt to approximate the Gamma distribution as a sum of Gaussians. This will be explored in a future work. Nevertheless, we do not see such a proliferation of clusters when running on real datasets.
## 5 Acknowledgements
This work grew from a project on personalized clinical predictions, for which we acknowledge discussion and collaboration with Gautam Menon, Uma Ram, Ponnusamy Saravanan, and particularly Leelavati Narlikar with whom we extensively discussed this work and whose insights were invaluable. We also acknowledge useful discussions with Durga Parkhi on synthetic data generation.
Figure 4: Random forest (A) and logistic regression (B) models were trained on the real data and on synthetic data generated using MMMSynth, TVAE, GC, CGAN and CTGAN and their predictive performance evaluated on the real datasets. The AUC (area under ROC curve) is shown for each method and each dataset.
## 6 Funding
We acknowledge funding from BIRAC grant bt/ki-data0404/06/18 (RS), and the IMSc Centre for Disease Modelling (ICDM) funded via an apex project at IMSc by the Department of Atomic Energy, Government of India (CK, RS). The funders had no role in the data collection, research, analysis, writing or submission of the manuscript.
## 7 Author contributions
**Conceptualization:** CK, RS
**Data Curation:** CK, RS
**Formal Analysis:** CK, RS
**Funding Acquisition:** RS
**Investigation:** CK, RS
**Methodology:** CK, RS
**Project Administration:** RS
**Resources:** RS
**Software:** CK, RS
**Supervision:** RS
**Validation:** CK, RS
**Visualization:** CK, RS
**Writing - Original Draft Preparation:** CK, RS
**Writing - Review & Editing:** CK, RS
## 8 Code availability
MMM and MMMsynth are available on [https://github.com/rsidd120/MadrasMixtureModel](https://github.com/rsidd120/MadrasMixtureModel) under the MIT licence. They are implemented in Julia.
## 9 Declaration of interests
The authors declare no competing interests.
|
2302.09456 | Distributional Offline Policy Evaluation with Predictive Error
Guarantees | We study the problem of estimating the distribution of the return of a policy
using an offline dataset that is not generated from the policy, i.e.,
distributional offline policy evaluation (OPE). We propose an algorithm called
Fitted Likelihood Estimation (FLE), which conducts a sequence of Maximum
Likelihood Estimation (MLE) and has the flexibility of integrating any
state-of-the-art probabilistic generative models as long as it can be trained
via MLE. FLE can be used for both finite-horizon and infinite-horizon
discounted settings where rewards can be multi-dimensional vectors. Our
theoretical results show that for both finite-horizon and infinite-horizon
discounted settings, FLE can learn distributions that are close to the ground
truth under total variation distance and Wasserstein distance, respectively.
Our theoretical results hold under the conditions that the offline data covers
the test policy's traces and that the supervised learning MLE procedures
succeed. Experimentally, we demonstrate the performance of FLE with two
generative models, Gaussian mixture models and diffusion models. For the
multi-dimensional reward setting, FLE with diffusion models is capable of
estimating the complicated distribution of the return of a test policy. | Runzhe Wu, Masatoshi Uehara, Wen Sun | 2023-02-19T02:11:22Z | http://arxiv.org/abs/2302.09456v3 | # Distributional Offline Policy Evaluation with Predictive Error Guarantees
###### Abstract
We study the problem of estimating the distribution of the return of a policy using an offline dataset that is not generated from the policy, i.e., distributional offline policy evaluation (OPE). We propose an algorithm called Fitted Likelihood Estimation (FLE), which conducts a sequence of Maximum Likelihood Estimation (MLE) problems and has the flexibility of integrating any state-of-art probabilistic generative models as long as it can be trained via MLE. FLE can be used for both finite horizon and infinite horizon discounted settings where rewards can be multi-dimensional vectors. In our theoretical results, we show that for both finite and infinite horizon discounted settings, FLE can learn distributions that are close to the ground truth under total variation distance and Wasserstein distance, respectively. Our theoretical results hold under the conditions that the offline data covers the test policy's traces and the supervised learning MLE procedures succeed. Experimentally, we demonstrate the performance of FLE with two generative models, Gaussian mixture models and diffusion models. For the multi-dimensional reward setting, FLE with diffusion models is capable of estimating the complicated distribution of the return of a test policy.
## 1 Introduction
Traditional Reinforcement Learning (RL) focuses on studying the expected behaviors of a learning agent. However, modeling the expected behavior is not enough for many interesting applications. For instance, when estimating the value of a new medical treatment, instead of just predicting its expected value, we may be interested in estimating the variance of the value as well. For a self-driving car whose goal is to reach a destination as soon as possible, in addition to predicting the expected traveling time, we may be interested in estimating the tails of the distribution of traveling time so that customers can prepare for worst-case situations. Other risk-sensitive applications in finance and control often require one to model beyond the expectation as well.
In this work, we study the question of how to estimate the distribution of the return of a policy in Markov Decision Processes (MDPs) using only an offline dataset that is not necessarily generated from the test policy (i.e., distributional offline policy evaluation). Estimating distributions of returns has been studied in the setting called distributional RL (Bellemare et al., 2017), where most of the existing works focus on solving the regular RL problem, i.e., finding a policy that maximizes the expected return by treating the task of predicting additional information beyond the mean as an auxiliary task. Empirically, it is believed that this auxiliary task helps representation learning which in turn leads to better empirical performance. Instead of focusing on this auxiliary loss perspective, we aim to design distributional OPE algorithms, which can accurately estimate the distribution of returns with provable guarantees. We are also interested in the setting where the one-step reward could be _multi-dimensional_ (i.e.,
multi-objective RL), and the state/action spaces could be large or even continuous. This requires us to design new algorithms that can leverage rich function approximation (e.g., state-of-art probabilistic generative models).
Our algorithm, _Fitted Likelihood Estimation_ (FLE), is inspired by the classic OPE algorithm Fitted Q Evaluation (FQE) (Munos and Szepesvari, 2008). Given a test policy and an offline dataset, FLE iteratively calls a supervised learning oracle -- Maximum Likelihood Estimation (MLE) in this case, to fit a conditional distribution to approximate a target distribution which is constructed using the distribution learned from the previous iteration. At the end of the training procedure, it outputs an estimator which aims to approximate the true distribution of the return of the test policy. Our algorithm is simple: like FQE, it decomposes the distributional OPE problem into a sequence of supervised learning problems (in this case, MLE). Thus it has great flexibility to leverage any state-of-art probabilistic generative models as long as it can be trained via MLE. Such flexibility is important, especially when we have large state/action space, and reward vectors coming from complicated high-dimensional distributions. FLE naturally works for both finite horizon setting and infinite horizon discounted setting.
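For intuition, the finite-horizon recursion can be sketched as follows; `make_model`, the `fit`/`sample` interface, and the target construction are an illustrative reading of the procedure described above rather than the exact algorithm specification.

```python
def fitted_likelihood_estimation(splits, policy, H, make_model):
    """splits[h]: offline tuples (x, a, r, x') used at step h; policy(x) -> action."""
    f_next = None                          # plays the role of Z^pi_{H+1} (point mass at 0)
    for h in reversed(range(H)):
        inputs, targets = [], []
        for (x, a, r, x_next) in splits[h]:
            z_next = 0.0 if f_next is None else f_next.sample(x_next, policy(x_next))
            inputs.append((x, a))          # condition on the current state-action pair
            targets.append(r + z_next)     # sampled return-to-go is the MLE target
        f_next = make_model().fit(inputs, targets)   # conditional MLE of (x, a) -> z
    return f_next                          # estimate of Z^pi_1
```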
Theoretically, we prove that our algorithm, FLE, can learn an accurate estimator of the return distribution for both finite horizon MDPs and infinite horizon discounted MDPs, under the assumptions that (1) _MLE can achieve good in-distribution generalization bounds (i.e., supervised learning succeeds)_, and (2) _the offline state-action distribution covers the test policy's state-action distribution_. The first condition is well studied in statistical learning theory, and in practice, the state-of-the-art probabilistic generative models trained via MLE (e.g., FLOW models (Dinh et al., 2014) and Diffusion models (Sohl-Dickstein et al., 2015)) indeed also exhibit amazing generalization ability. The second condition is necessary for offline RL and is widely used in the regular offline RL literature (e.g., Munos and Szepesvari (2008)). In other words, our analysis is modular: it simply transfers the supervised learning MLE in-distribution generalization bounds to a bound of distributional OPE. The accuracy of the estimator computed by FLE is measured under total variation distance and p-Wasserstein distance, for finite horizon setting and infinite horizon discounted setting, respectively. To complete the picture, we further provide concrete examples showing that MLE can provably have small in-distribution generalization errors in the analysis section. To the best of our knowledge, this is the first PAC (Probably Approximately Correct) learning algorithm for distributional OPE with general function approximation.
Finally, we demonstrate our approach on a rich observation combination lock MDP where it has a latent structure with the observations being high-dimensional and continuous (Misra et al., 2020; Agarwal et al., 2020; Zhang et al., 2022). We consider the setting where the reward comes from complicated multi-dimensional continuous distributions (thus existing algorithms such as quantile-regression TD (Dabney et al., 2018) do not directly apply here). We demonstrate the flexibility of our approach by using two generative models in FLE: the classic Gaussian mixture model and state-of-the-art diffusion model (Ho et al., 2020).
### Related Works
Distributional RL.Quantile regression TD (Dabney et al., 2018) is one of the common approaches for distributional OPE. A very recent work (Rowland et al., 2023) demonstrates that quantile regression TD can converge to the TD fixed point solution of which the existence is proved under an \(\ell_{\infty}\)-style norm (i.e., \(\sup\) over all states). Rowland et al. (2023) do not consider the sample complexity of OPE and the impact of learning from off-policy samples, and their convergence analysis is asymptotic. Also, quantile regression TD only works for scalar rewards. Another popular approach is categorical TD (Bellemare et al., 2017), where one explicitly discretizes the return space. However, for high-dimensional rewards, explicitly discretizing the return space evenly can suffer the curse of dimensionality and fail to capture some low-dimensional structures in the data distribution. Also, there is no convergence or sample complexity analysis of the categorical algorithm for OPE.
Ma et al. (2021) studied distributional offline policy optimization. In their analysis, they focused on tabular MDPs with scalar rewards, and their algorithm can learn a pessimistic estimate of the true inverse CDF of the return. Keramati et al. (2020) also uses the distributional RL framework to derive an approach that can optimistically estimate the CVaR value of a policy's return. Their analysis also only applies to tabular MDPs with scalar rewards.
In contrast, we focus on distributional OPE with general function approximation beyond tabular or linear formats and MDPs with multi-dimensional rewards.
Zhang et al. (2021) also consider learning from vector-valued rewards. They propose a practical algorithm that minimizes the Maximum Mean Discrepancy (MMD) without providing a sample complexity analysis. In contrast, we use MLE to minimize total variation distance, and our error bound is based on total variation distance. Note that a small total variation distance implies a small MMD but not vice versa, which implies that our results are stronger.
Offline policy evaluation.Fitted Q evaluation (FQE) (Munos and Szepesvari, 2008; Ernst et al., 2005) is one of the most classic OPE algorithms. Many alternative approaches have been recently proposed, such as minimax algorithms (Yang et al., 2020; Feng et al., 2019; Uehara et al., 2020). Somewhat surprisingly, algorithms based on FQE are often robust and achieve stronger empirical performance in various benchmark tasks (Fu et al., 2021; Chang et al., 2022). Our proposed algorithm can be understood as a direct generalization of FQE to the distributional setting. Note sequential importance sampling approaches (Jiang and Li, 2016; Precup et al., 2000) in regular RL have been applied to estimate distributions (Chandak et al., 2021). However, these methods suffer from the curse of the horizon, i.e., the variance necessarily grows exponentially in the horizon.
## 2 Preliminaries
### Finite-Horizon MDPs
We consider finite-horizon MDP with a vector-valued reward function, which is a tuple \(M(\mathcal{X},\mathcal{A},r,P,H,\mu)\) where \(\mathcal{X}\) and \(\mathcal{A}\) are the state and action spaces, respectively, \(P\) is the transition kernel, \(r\) is the reward function, i.e., \(r(x,a)\in\Delta([0,1]^{d})\) where \(d\in\mathbb{Z}^{+}\), \(H\) is the length of each episode, and \(\mu\in\Delta(\mathcal{X})\) is the initial state distribution. A policy is a mapping \(\pi:\mathcal{X}\rightarrow\Delta(\mathcal{A})\). We denote \(z\in\mathbb{R}^{d}\) as the accumulative reward vector across \(H\) steps, i.e., \(z=\sum_{h=1}^{H}r_{h}\in[0,H]^{d}\). Note that \(z\) is a random vector whose distribution is determined by a policy \(\pi\) and the MDP. We denote \(Z^{\pi}\in\Delta([0,H]^{d})\) as the distribution1 of the random variable \(z\) under policy \(\pi\). In this paper, we are interested in estimating \(Z^{\pi}\) using offline data. We also define conditional distributions \(Z^{\pi}_{h}(x,a)\in\Delta([0,H]^{d})\) which is the distribution of the return under policy \(\pi\) starting with state action \((x_{h},a_{h}):=(x,a)\) at time step \(h\). It is easy to see that \(Z^{\pi}=\mathbb{E}_{x\sim\mu,a\sim\pi(x)}\left[Z^{\pi}_{1}(x,a)\right]\).
Footnote 1: Formally, they are called probability density functions in the continuous setting and probability mass functions in discrete settings, which are different from cumulative distribution functions.
Given \(\pi\), we also define \(d^{\pi}_{h}\) as the state-action distribution induced by policy \(\pi\) at time step \(h\), and \(d^{\pi}=\sum_{h=1}^{H}d^{\pi}_{h}/H\) as the average state-action distribution induced by \(\pi\).
We denote distributional Bellman operator (Morimura et al., 2012) associated with \(\pi\) as \(\mathcal{T}^{\pi}\), which maps a conditional distribution to another conditional distribution: given a state-action conditional distribution \(f\in\mathcal{X}\times\mathcal{A}\mapsto\Delta([0,H]^{d})\), we have \(\mathcal{T}^{\pi}f\in\mathcal{X}\times\mathcal{A}\mapsto\Delta([0,H]^{d})\), such that for any \((x,a,z)\):
\[[\mathcal{T}^{\pi}f](z\,|\,x,a)=\mathbb{E}_{r\sim r(x,a),\,x^{\prime}\sim P(x,a),\,a^{\prime}\sim\pi(x^{\prime})}\left[f\left(z-r\,|\,x^{\prime},a^{\prime}\right)\right].\]
It is easy to see that for \(\{Z^{\pi}_{h}\}_{h=1}^{H}\), we have \(\mathcal{T}^{\pi}Z^{\pi}_{h+1}=Z^{\pi}_{h}\) for all \(h\).
### 2.2 Discounted Infinite-Horizon MDPs
The discounted infinite-horizon MDP is a tuple \(M(\mathcal{X},\mathcal{A},r,P,\gamma,\mu)\). The return vector is defined as \(z=\sum_{h=1}^{\infty}\gamma^{h-1}r_{h}\). We call \(\gamma\in(0,1)\) the discount factor. The distribution of return \(z\) is thus \(Z^{\pi}\in\Delta([0,(1-\gamma)^{-1}]^{d})\). We also define the conditional distributions \(\bar{Z}^{\pi}(x,a)\in\Delta([0,(1-\gamma)^{-1}]^{d})\) which is the distribution of the return under policy \(\pi\) starting with state action \((x,a)\). It is easy to see that \(Z^{\pi}=\mathbb{E}_{x\sim\mu,a\sim\pi(x)}\left[\bar{Z}^{\pi}(x,a)\right]\).
The state-action distribution of a given policy \(\pi\) is also defined in a discounted way: \(d^{\pi}=(1-\gamma)^{-1}\sum_{h=1}^{\infty}\gamma^{h-1}d_{h}^{\pi}\) where \(d_{h}^{\pi}\) is the state-action distribution induced by \(\pi\) at time step \(h\).
The distributional Bellman operator maps a state-action conditional distribution \(f\in\mathcal{X}\times\mathcal{A}\mapsto\Delta([0,(1-\gamma)^{-1}]^{d})\) to \(\mathcal{T}^{\pi}f\in\mathcal{X}\times\mathcal{A}\mapsto\Delta([0,(1-\gamma)^{-1}]^{d})\), such that for any \((x,a,z)\):
\[[\mathcal{T}^{\pi}f](z\,|\,x,a)=\mathbb{E}_{r\sim r(x,a),\,x^{\prime}\sim P(x,a),\,a^{\prime}\sim\pi(x^{\prime})}\left[f\left(\frac{z-r}{\gamma}\,\middle|\,x^{\prime},a^{\prime}\right)\right].\]
We can verify that \(\bar{Z}^{\pi}\) is a fixed point of the distributional Bellman operator, i.e., \(\mathcal{T}^{\pi}\bar{Z}^{\pi}=\bar{Z}^{\pi}\).
### 2.3 Offline Policy Evaluation Setup
We consider estimating the distribution \(Z^{\pi}\) using offline data which does not come from \(\pi\) (i.e., off-policy setting). We assume we have a dataset \(\mathcal{D}=\{x_{i},a_{i},r_{i},x_{i}^{\prime}\}_{i=1}^{n}\) that contains i.i.d. tuples, such that \(x,a\sim\rho\in\Delta(\mathcal{X}\times\mathcal{A})\), \(x^{\prime}\sim P(\cdot\,|\,x,a)\), and \(r\sim r(x,a)\). For finite-horizon MDPs, we randomly and evenly split \(\mathcal{D}\) into \(H\) subsets, \(\mathcal{D}_{1},\ldots,\mathcal{D}_{H}\), for the convenience of analysis. Each subset contains \(n/H\) samples. For infinite-horizon MDPs, we split it into \(T\) subsets in the same way. Here \(T\) is the number of iterations, which we will define later.
We consider learning distribution \(Z^{\pi}\) via general function approximation. For finite-horizon MDPs, we denote \(\mathcal{F}_{h}\) as a function class that contains state-action conditional distributions, i.e., \(\mathcal{F}_{h}\subset\mathcal{X}\times\mathcal{A}\mapsto\Delta([0,H]^{d})\), which will be used to learn \(Z_{h}^{\pi}\). For infinite-horizon MDPs, we assume a function class \(\mathcal{F}\subset\mathcal{X}\times\mathcal{A}\mapsto\Delta([0,(1-\gamma)^{-1 }]^{d})\).
Notations.Given distributions \(P_{1}\) and \(P_{2}\) on a set \(\mathcal{S}\), we denote \(d_{tv}\) as the total variation distance between the two distributions, i.e., \(d_{tv}(P_{1},P_{2})=\|P_{1}-P_{2}\|_{1}/2\). We denote \(d_{w,p}\) as the \(p\)-Wasserstein distance for which the metric is induced by \(\ell_{2}\) norm, i.e., \(d_{w,p}(P_{1},P_{2})=(\inf_{c\in\mathcal{C}}\mathbb{E}_{x,y\sim c}\,\|x-y\|^{p })^{1/p}\) where \(\mathcal{C}\) denotes the set of all couplings of \(P_{1}\) and \(P_{2}\). We note that \(d_{tv}\) dominates \(d_{w,p}\) when the support is bounded (see Lemma C.6 for details):
\[d_{w,p}^{p}(P_{1},P_{2})\leq\text{diam}^{p}(\mathcal{S})\cdot d_{tv}(P_{1},P_ {2}). \tag{1}\]
where \(\text{diam}(\mathcal{S})=\sup_{x,y\in\mathcal{S}}\|x-y\|\) is the diameter of \(\mathcal{S}\).
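For intuition, the inequality above is easy to check numerically in one dimension, where both quantities can be computed exactly for discrete distributions. The following short sketch (an illustration added here, not code from the paper) verifies \(d_{w,1}\leq\text{diam}(\mathcal{S})\cdot d_{tv}\) for two random distributions supported on a grid in \([0,1]\).

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 50)                 # bounded support, diam(S) = 1
P1 = rng.random(grid.size); P1 /= P1.sum()       # two arbitrary discrete distributions
P2 = rng.random(grid.size); P2 /= P2.sum()

d_tv = 0.5 * np.abs(P1 - P2).sum()               # total variation distance
# 1-Wasserstein distance in 1-d via the CDF formula: W1 = \int |F1(x) - F2(x)| dx
d_w1 = np.sum(np.abs(np.cumsum(P1) - np.cumsum(P2))) * (grid[1] - grid[0])

assert d_w1 <= 1.0 * d_tv + 1e-12                # inequality (1) with p = 1 and diam(S) = 1
```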
## 3 Fitted Likelihood Estimation
In this section, we present our algorithm -- _Fitted Likelihood Estimation_ (FLE) for distributional OPE. Algorithm 1 is for finite-horizon MDPs and Algorithm 2 is for infinite-horizon MDPs. We first introduce the former.
```
Input: dataset \(\mathcal{D}=\{\mathcal{D}_{h}\}_{h=1}^{H}\) and function classes \(\{\mathcal{F}_{h}\}_{h=1}^{H}\)
Initialize \(\hat{f}_{H+1}\) as the point mass at zero
for \(h=H,H-1,\ldots,1\) do
  \(\mathcal{D}^{\prime}_{h}=\emptyset\)
  for \((x,a,r,x^{\prime})\in\mathcal{D}_{h}\) do
    \(a^{\prime}\sim\pi(x^{\prime})\),  \(y\sim\hat{f}_{h+1}(\cdot\,|\,x^{\prime},a^{\prime})\),  \(z=r+y\)
    \(\mathcal{D}^{\prime}_{h}=\mathcal{D}^{\prime}_{h}\cup\{(x,a,z)\}\)
  end for
  \(\hat{f}_{h}=\arg\max_{f\in\mathcal{F}_{h}}\sum_{(x,a,z)\in\mathcal{D}^{\prime}_{h}}\log f(z\,|\,x,a)\)
end for
Output: \(\{\hat{f}_{h}\}_{h=1}^{H}\)
```
**Algorithm 1** Fitted Likelihood Estimation (FLE) for finite-horizon MDPs
The only requirement on the function classes \(\mathcal{F}_{h}\) is that each \(f\) admits an explicit conditional likelihood, i.e., we can compute \(f(z|x,a)\). Such function approximation \(f\) is widely available in practice, including discrete histogram-based models, Gaussian mixture models, flow models (Dinh et al., 2014), and diffusion models (Sohl-Dickstein et al., 2015). Indeed, in our experiments, we implement FLE with Gaussian mixture models and diffusion models (Ho et al., 2020), both of which are optimized via MLE.
Regarding computation, the main bottleneck is the MLE step (the \(\arg\max\) step in Algorithms 1 and 2). While we present it with an \(\arg\max\) oracle, in both practice and theory, an approximate optimization oracle is enough. In theory, as we will demonstrate, as long as we can find some \(\hat{f}_{h}\) that exhibits a good in-distribution generalization bound (i.e., \(\mathbb{E}_{x,a\sim\rho}d_{tv}(\hat{f}_{h}(x,a),[\mathcal{T}^{\pi}\hat{f}_{h+1}](x,a))\) or \(\mathbb{E}_{x,a\sim\rho}d_{tv}(\hat{f}_{t}(x,a),[\mathcal{T}^{\pi}\hat{f}_{t-1}](x,a))\) is small), then we are guaranteed an accurate estimator for \(Z^{\pi}\). Note that here \(\rho\) is the training distribution for MLE, so we only require in-distribution generalization. Thus our approach is truly a reduction to supervised learning: as long as the supervised learning procedure (in this case, MLE) learns a model with good in-distribution generalization performance, we can guarantee good prediction performance for FLE. Any advancement in training generative models via MLE (e.g., better training heuristics and better models) can thus immediately lead to improvements in distributional OPE.
_Remark 3.1_ (Comparison to prior models).: The categorical algorithm (Bellemare et al., 2017) works by minimizing the cross-entropy loss between the (projected) target distribution and the parametric distribution, which is equivalent to maximizing the likelihood of the parametric model.
_Remark 3.2_ (FQE as a special instance).: When reward is only a scalar, and we use fixed-variance Gaussian distribution \(f(\cdot|x,a):=\mathcal{N}(g(x,a),\sigma^{2})\) where \(g:\mathcal{X}\times\mathcal{A}\mapsto[0,H]\), and \(\sigma>0\) is a fixed (not learnable) parameter, MLE becomes a least square oracle, and FLE reduces to FQE -- the classic offline policy evaluation algorithm.
```
1:Input: dataset \(\{\mathcal{D}_{t}\}_{t=1}^{T}\) and function classes \(\mathcal{F}\)
2:for\(t=1,2,\ldots,T\)do
3:\(\mathcal{D}^{\prime}_{t}=\emptyset\)
4:for\(x,a,r,x^{\prime}\in\mathcal{D}_{t}\)do
5:\(a^{\prime}\sim\pi(x^{\prime})\)
6:\(y\sim\hat{f}_{t-1}(\cdot\,|\,x^{\prime},a^{\prime})\)
7:\(z=r+\gamma y\)
8:\(\mathcal{D}^{\prime}_{t}=\mathcal{D}^{\prime}_{t}\cup\{(x,a,z)\}\)
9:endfor
10:\(\hat{f}_{t}=\arg\max_{f\in\mathcal{F}}\sum_{(x,a,z)\in\mathcal{D}^{\prime}_{t }}\log f(z\,|\,x,a)\)
11:endfor
```
**Algorithm 2** Fitted Likelihood Estimation (FLE) for infinite-horizon MDPs
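To make the reduction to MLE concrete, the following is a minimal sketch of Algorithm 2 instantiated with a conditional Gaussian mixture model (the GMM-FLE variant used in Section 5). It is an illustrative re-implementation written for this exposition, not the authors' released code (linked in Section 5); the network sizes, optimizer settings, and helper names are assumptions.

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Conditional Gaussian mixture f(z | x, a) with learnable weights, means, and scales."""
    def __init__(self, in_dim, z_dim, n_comp=10, hidden=64):
        super().__init__()
        self.z_dim, self.n_comp = z_dim, n_comp
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_comp * (1 + 2 * z_dim)))

    def _mixture(self, xa):
        out = self.net(xa)
        logits = out[..., :self.n_comp]
        mu, log_std = out[..., self.n_comp:].reshape(-1, self.n_comp, 2 * self.z_dim).chunk(2, dim=-1)
        comp = torch.distributions.Independent(torch.distributions.Normal(mu, log_std.exp()), 1)
        return torch.distributions.MixtureSameFamily(
            torch.distributions.Categorical(logits=logits), comp)

    def log_prob(self, xa, z):
        return self._mixture(xa).log_prob(z)

    def sample(self, xa):
        return self._mixture(xa).sample()

def fle_infinite_horizon(data_splits, policy, gamma, in_dim, z_dim, epochs=200, lr=1e-3):
    """data_splits: list of T tuples (x, a, r, x_next) of float tensors, with r of shape
    (batch, z_dim); policy(x) -> actions. Returns \hat f_T, which approximates \bar Z^pi."""
    f_prev = None
    for x, a, r, x_next in data_splits:
        with torch.no_grad():
            a_next = policy(x_next)
            if f_prev is None:                       # \hat f_0: point mass at zero return
                y = torch.zeros(r.shape[0], z_dim)
            else:                                    # y ~ \hat f_{t-1}(. | x', a')
                y = f_prev.sample(torch.cat([x_next, a_next], dim=-1))
        z = r + gamma * y                            # regression target z = r + gamma * y
        f_t, xa = MDN(in_dim, z_dim), torch.cat([x, a], dim=-1)
        opt = torch.optim.Adam(f_t.parameters(), lr=lr)
        for _ in range(epochs):                      # MLE: maximize sum_i log f(z_i | x_i, a_i)
            opt.zero_grad()
            loss = -f_t.log_prob(xa, z).mean()
            loss.backward()
            opt.step()
        f_prev = f_t
    return f_prev
```

Swapping the mixture density network for any other generative model trained by maximizing \(\log f(z\,|\,x,a)\) (e.g., a conditional diffusion model) leaves the outer loop unchanged, which is exactly the flexibility the reduction to MLE provides.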
## 4 Theoretical Analysis
In this section, we present the theoretical guarantees of FLE. As a warmup, we start by analyzing the performance of FLE for the finite horizon setting (Section 4.1), where we bound the prediction error using total variation distance. Then we study the guarantees of FLE for the infinite-horizon discounted scenario in Section 4.2 where the prediction error is measured under \(p\)-Wasserstein distance. Note that from Equation (1), the total variation distance dominates the \(p\)-Wasserstein
distance which indicates that our guarantee for the finite horizon setting is stronger. This shows an interesting difference between the two settings. Finally, we present some concrete examples in Section 4.3. All proofs can be found in Appendix A.
### 4.1 Finite Horizon
We start by stating the key assumption for OPE, which concerns the overlap between \(\pi\)'s distribution and the offline distribution \(\rho\).
**Assumption 4.1** (Coverage).: We assume there exists a constant \(C\) such that for all \(h\in[H]\) the following holds
\[\sup_{\begin{subarray}{c}f_{h}\in\mathcal{F}_{h}\\ f_{h+1}\in\mathcal{F}_{h+1}\end{subarray}}\frac{\mathbb{E}_{x,a\sim d_{h}^{\pi}}\,d_{tv}^{2}\,\left(f_{h}(x,a),[\mathcal{T}^{\pi}f_{h+1}](x,a)\right)}{\mathbb{E}_{x,a\sim\rho}\,d_{tv}^{2}\,\left(f_{h}(x,a),[\mathcal{T}^{\pi}f_{h+1}](x,a)\right)}\leq C.\]
The data coverage assumption is necessary for off-policy learning. Assumption 4.1 incorporates the function class into the definition of data coverage and is always no larger than the usual density ratio-based coverage definition, i.e., \(\sup_{h,x,a}d_{h}^{\pi}(x,a)/\rho(x,a)\) which is a classic coverage measure in offline RL literature (e.g., Munos and Szepesvari (2008)). This type of refined coverage is used in the regular RL setting (Xie et al., 2021; Uehara and Sun, 2021).
Next, we present the theoretical guarantee of our approach under the _assumption that MLE can achieve good supervised learning style in-distribution generalization bound_. Recall that in each iteration of our algorithm, we perform MLE to learn a function \(\hat{f}_{h}\) to approximate the target \(\mathcal{T}^{\pi}\hat{f}_{h+1}\) under the training data from \(\rho\). By supervised learning style in-distribution generalization error, we mean the divergence \(d_{tv}\) between \(\hat{f}_{h}\) and the target \(\mathcal{T}^{\pi}\hat{f}_{h+1}\) under the _training distribution_\(\rho\). Such an in-distribution generalization bound for MLE is widely studied in statistical learning theory literature (Van de Geer, 2000; Zhang, 2006), and used in RL literature (e.g., Agarwal et al. (2020); Uehara et al. (2021); Zhan et al. (2022)). The following theorem demonstrates a reduction framework: as long as supervised learning MLE works, our estimator of \(Z^{\pi}\) is accurate.
**Theorem 4.2**.: _Under Assumption 4.1, suppose we have a sequence of functions \(\hat{f}_{1},\ldots,\hat{f}_{H}:\mathcal{X}\times\mathcal{A}\mapsto\Delta([0,H] ^{d})\) and a sequence of values \(\zeta_{1},\ldots,\zeta_{H}\in\mathbb{R}\) such that_
\[\left(\,\mathbb{E}_{x,a\sim\rho}\,\,d_{tv}^{2}\,\Big{(}\hat{f}_{h}(x,a),[ \mathcal{T}^{\pi}\hat{f}_{h+1}](x,a)\Big{)}\,\right)^{1/2}\leq\zeta_{h}\]
_holds for all \(h\in[H]\). Let our estimator \(\hat{f}\coloneqq\mathbb{E}_{x\sim\mu,a\sim\pi(x)}\,\hat{f}_{1}(x,a)\). Then we have_
\[d_{tv}\left(\hat{f},Z^{\pi}\right)\leq\sqrt{C}\sum_{h=1}^{H}\zeta_{h}.\]
Here recall that \(C\) is the coverage definition. Thus the above theorem demonstrates that when \(\rho\) covers \(d^{\pi}\) (i.e., \(C<\infty\)), small supervised learning errors (i.e., \(\zeta_{h}\)) imply small prediction error for distributional OPE.
Now to complete the picture, we provide some sufficient conditions where MLE can achieve small in-distribution generalization errors. The first condition is stated below.
**Assumption 4.3** (Bellman completeness).: We assume the following holds:
\[\max_{h\in[H],f\in\mathcal{F}_{h+1}}\min_{g\in\mathcal{F}_{h}}\mathbb{E}_{x,a \sim\rho}\,d_{tv}\big{(}g(x,a),[\mathcal{T}^{\pi}f](x,a)\big{)}=0.\]
We call the LHS of the above inequality _inherent (distributional) Bellman error_.
This condition ensures that in each call of MLE in our algorithm, our function class \(\mathcal{F}_{h}\) contains the target \(\mathcal{T}^{\pi}\hat{f}_{h+1}\). It is possible to relax the above condition to a setting where the inherent Bellman error is bounded by some small number \(\delta\) (i.e., for MLE, this corresponds to agnostic learning where the hypothesis class may not contain the target, which is also a well-studied problem in statistical learning theory (Van de Geer, 2000)). Here we mainly focus on the \(\delta=0\) case.
The Bellman completeness assumption (or, more generally, inherent Bellman error being small) is standard in offline RL literature (Munos and Szepesvari, 2008). Indeed, in the regular RL setting, when learning with off-policy data, without such a Bellman completeness condition, algorithms such as TD learning or value iteration-based approaches (e.g., FQE) can diverge (Tsitsiklis and Van Roy, 1996), and the TD fixed-point solution can be arbitrarily bad in terms of approximating the true value (e.g., Munos (2003); Scherrer (2010)). Since distributional RL generalizes regular RL, to prove convergence and provide an explicit sample complexity, we also need such a Bellman completeness condition.
The second condition is the bounded complexity of \(\mathcal{F}_{h}\). A simple case is when \(\mathcal{F}\) is discrete where the standard statistical complexity of \(\mathcal{F}\) is \(\ln(|\mathcal{F}_{h}|)\). We show the following result for MLE's in-distribution generalization error.
**Lemma 4.4**.: _Assume \(|\mathcal{F}_{h}|<\infty\). For FLE (Algorithm 1), under Assumption 4.3, MLEs have the following guarantee:_
\[\mathop{\mathbb{E}}_{x,a\sim\rho}d_{tv}^{2}\Big{(}\hat{f}_{h}(x,a),[\mathcal{ T}^{\pi}\hat{f}_{h+1}](x,a)\Big{)}\leq\frac{4H}{n}\log(|\mathcal{F}_{h}|H/\delta)\]
_for all \(h\in[H]\) with probability at least \(1-\delta\)._
For infinite hypothesis classes, we use bracketing number (Van de Geer, 2000) to quantify the statistical complexities.
**Definition 4.5** (Bracketing number).: Consider a function class \(\mathcal{F}\) that maps \(\mathcal{X}\) to \(\mathbb{R}\). Given two functions \(l\) and \(u\), the bracket \([l,u]\) is the set of all functions \(f\in\mathcal{F}\) with \(l(x)\leq f(x)\leq u(x)\) for all \(x\in\mathcal{X}\). An \(\epsilon\)-bracket is a bracket \([l,u]\) with \(\|l-u\|\leq\epsilon\). The bracketing number of \(\mathcal{F}\) w.r.t. the metric \(\|\cdot\|\), denoted by \(N_{[]}(\epsilon,\mathcal{F},\|\cdot\|)\), is the minimum number of \(\epsilon\)-brackets needed to cover \(\mathcal{F}\).

**Lemma 4.6**.: _For FLE (Algorithm 1), under Assumption 4.3, MLEs have the following guarantee:_

\[\mathop{\mathbb{E}}_{x,a\sim\rho}d_{tv}^{2}\Big{(}\hat{f}_{h}(x,a),[\mathcal{T}^{\pi}\hat{f}_{h+1}](x,a)\Big{)}\leq\frac{10H}{n}\log\Big{(}N_{[]}\big{(}(nH^{d})^{-1},\mathcal{F}_{h},\|\cdot\|_{\infty}\big{)}H/\delta\Big{)}\]
_for all \(h\in[H]\) with probability at least \(1-\delta\)._
With the generalization bounds of MLE, via Theorem 4.2, we can derive the following specific error bound for FLE.
**Corollary 4.7**.: _Under Assumption 4.1 and 4.3, for FLE (Algorithm 1), with probability at least \(1-\delta\), we have_
\[d_{tv}\left(\hat{f},Z^{\pi}\right)\leq\sqrt{C}\sum_{h=1}^{H}\sqrt{\frac{4H}{n }\log(|\mathcal{F}_{h}|H/\delta)}\]
_when \(|\mathcal{F}_{h}|<\infty\) for all \(h\in[H]\), and_
\[d_{tv}\Big{(}\hat{f},Z^{\pi}\Big{)}\leq\sqrt{C}\sum_{h=1}^{H}\sqrt{\frac{10H}{n}\log\Big{(}N_{[]}\big{(}(nH^{d})^{-1},\mathcal{F}_{h},\|\cdot\|_{\infty}\big{)}H/\delta\Big{)}}.\]
_for infinite function class \(\mathcal{F}_{h}\)._
Overall, our theory indicates that if we can train accurate distributions (e.g., generative models) via supervised learning (i.e., MLE here), we automatically have good predictive performance on estimating \(Z^{\pi}\). This provides great flexibility for designing special algorithms.
_Remark 4.8_ (Offline CVaR Estimation).: As a simple application, FLE can derive an estimator for the CVaR of the return under the test policy \(\pi\). This is doable because CVaR is Lipschitz with respect to distributions in total variation distance, and thus our results can be directly transferred. See Appendix B for details. Essentially, any quantity that is Lipschitz with respect to distributions in total variation distance can be estimated using our method and the error bound of FLE directly applies.
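For instance, once samples can be drawn from the learned return distribution \(\hat{f}\), the lower-tail CVaR can be estimated directly from those samples. The helper below is a small illustration added here (not code from the paper) for a scalar return.

```python
import numpy as np

def cvar_from_samples(returns, alpha=0.05):
    """Empirical CVaR at level alpha: average of the worst alpha-fraction of returns."""
    returns = np.sort(np.asarray(returns))
    k = max(1, int(np.ceil(alpha * returns.size)))
    return returns[:k].mean()

# usage: draw z_1, ..., z_m ~ \hat f from the learned generative model, then
# cvar_from_samples(z_samples, alpha=0.05)
```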
### 4.2 Infinite Horizon
Next we introduce the theoretical guarantees of FLE for infinite horizon MDPs. Although the idea is similar, there is an obstacle: we can no longer obtain guarantees in terms of the total variation distance. This is perhaps not surprising considering that the distributional Bellman operator for discounted setting is _not_ contractive in total variation distance (Bellemare et al., 2017). Fortunately, we found the Bellman operator is contractive under the Wasserstein distance measure. Note that the contractive result we established under Wasserstein distance is different from previous works (Bellemare et al., 2017, 2023; Zhang et al., 2021) in that these previous works consider the _supremum_ Wasserstein distance: \(\sup_{x,a}d_{w,p}\), while our contractive property is measured under an _average_ Wasserstein distance: \((\mathbb{E}_{x,a\sim d^{x}}\ d_{w,p}^{2p})^{1/(2p)}\) which is critical to get a sample complexity bound for distributional OPE. More formally, the following lemma summarizes the contractive property.
**Lemma 4.9**.: _The distributional Bellman operator is \(\gamma^{1-1/(2p)}\)-contractive under the metric \((\mathbb{E}_{x,a\sim d^{x}}\ d_{w,p}^{2p})^{1/(2p)}\), i.e., for any \(f,f^{\prime}\in\mathcal{X}\times\mathcal{A}\mapsto[0,(1-\gamma)^{-1}]^{d}\), it holds that_
\[\left(\mathop{\mathbb{E}}_{x,a\sim d^{x}}\ d_{w,p}^{2p}\left([\mathcal{T}^{\pi} f](x,a),[\mathcal{T}^{\pi}f^{\prime}](x,a)\right)\right)^{\frac{1}{2p}}\leq\gamma^{1- \frac{1}{2p}}\cdot\left(\mathop{\mathbb{E}}_{x,a\sim d^{x}}\ d_{w,p}^{2p} \left(f(x,a),f^{\prime}(x,a)\right)\right)^{\frac{1}{2p}}.\]
We note that the contractive result in \(\sup_{s,a}d_{w,p}\) does not imply the result in the above lemma, thus cannot be directly applied to the OPE setting.
Additionally, since the total variation distance dominates the Wasserstein distance on bounded sets (see (1)), the estimation error of MLE measured under total variation distance can be used to bound the Wasserstein distance. This allows us to derive theoretical guarantees for FLE on Wasserstein distance. To this end, we start again with the coverage assumption, which is similar to Assumption 4.1. Note that we have replaced the total variation distance with the Wasserstein distance.
**Assumption 4.10** (Coverage).: We assume there exists a constant \(C\) such that the following holds
\[\sup_{f,f^{\prime}\in\mathcal{F}}\frac{\mathbb{E}_{x,a\sim d^{\pi}}\ d_{w,p}^{2p}\left(f(x,a),[\mathcal{T}^{\pi}f^{\prime}](x,a)\right)}{\mathbb{E}_{x,a\sim\rho}\ d_{w,p}^{2p}\left(f(x,a),[\mathcal{T}^{\pi}f^{\prime}](x,a)\right)}\leq C.\]
Similarly to Theorem 4.2, the following theorem states that as long as the supervised learning is accurate, our estimator of \(Z^{\pi}\) will be accurate as well, but this time under the \(p\)-Wasserstein distance.
**Theorem 4.11**.: _Under Assumption 4.10, suppose we have a sequence of functions \(\hat{f}_{1},\ldots,\hat{f}_{T}:\mathcal{X}\times\mathcal{A}\mapsto\Delta([0,(1 -\gamma)^{-1}]^{d})\) and an upper bound \(\zeta\in\mathbb{R}\) such that_
\[\left(\ \mathbb{E}_{x,a\sim\rho}\ d_{w,p}^{2p}\left(\hat{f}_{t}(x,a),[\mathcal{T}^ {\pi}\hat{f}_{t-1}](x,a)\right)\right)^{\frac{1}{2p}}\leq\zeta\]
_holds for all \(t\in[T]\). Let our estimator \(\hat{f}\coloneqq\mathbb{E}_{x\sim\mu,a\sim\pi(x)}\,\hat{f}_{T}(x,a)\). Then we have, for all \(p\geq 1\),_
\[d_{w,p}\left(\hat{f},Z^{\pi}\right)\leq\frac{2C^{\frac{1}{2p}}}{(1-\gamma)^{ \frac{3}{2}}}\cdot\zeta+\frac{\sqrt{d}\cdot\gamma^{\frac{T}{2}}}{(1-\gamma)^{ \frac{3}{2}}}. \tag{2}\]
The upper bound in (2) is actually a simplified version as we aim to present a cleaner result. For a more refined upper bound that has more \(p\)-dependent terms, please refer to Theorem A.2 in the appendix.
For the first additive term in (2), we will later demonstrate that the \(\zeta\) derived from MLE depends on \(p^{-1}\) at an exponential rate. The second term is relatively insignificant as it converges to zero at the rate of \(\gamma^{T/2}\).
To proceed, we introduce the Bellman completeness assumption for infinite-horizon MDPs, which is one of the sufficient conditions for MLE to achieve small in-distribution generalization errors.
**Assumption 4.12** (Bellman completeness).: We assume the following holds:
\[\max_{f\in\mathcal{F}}\min_{g\in\mathcal{F}}\mathbb{E}_{x,a\sim\rho}\,d_{w,p} \big{(}g(x,a),[\mathcal{T}^{\pi}f](x,a)\big{)}=0.\]
As in the previous section, when Bellman completeness holds and the function class has bounded complexity, MLE attains a small generalization error, as stated in the following lemma.
**Lemma 4.13**.: _For FLE (Algorithm 2), under Assumption 4.12, by applying MLEs we have, for all \(t\in[T]\),_
\[\mathop{\mathbb{E}}_{x,a\sim\rho}d_{w,p}^{2p}\left(\hat{f}_{t}(x,a),[ \mathcal{T}^{\pi}\hat{f}_{t-1}](x,a)\right)\leq\left(\frac{\sqrt{d}}{1-\gamma} \right)^{2p}\frac{4T}{n}\log(|\mathcal{F}|T/\delta)\]
_when \(|\mathcal{F}|<\infty\), and_
\[\mathop{\mathbb{E}}_{x,a\sim\rho}d_{w,p}^{2p}\left(\hat{f}_{t}(x,a),[\mathcal{T}^{\pi}\hat{f}_{t-1}](x,a)\right)\leq\left(\frac{\sqrt{d}}{1-\gamma}\right)^{2p}\frac{10T}{n}\log\left(N_{[]}\left(\frac{(1-\gamma)^{d}}{n},\mathcal{F},\|\cdot\|_{\infty}\right)T\Big{/}\delta\right)\]
_when \(|\mathcal{F}|=\infty\), with probability at least \(1-\delta\)._
The multiplicative term \(T\) in the upper bounds above comes from the data splitting (recall that we have split the dataset \(\mathcal{D}\) into \(T\) subsets: \(\mathcal{D}_{1},\ldots,\mathcal{D}_{T}\)). We think that a more careful analysis may be able to remove the need of data splitting and thus eliminate the term \(T\) here, leading to a slightly better polynomial dependence on the effective horizon \(1/(1-\gamma)\) in the final sample complexity bound. We leave this for future work.
In view of the above, to derive the specific error bound of FLE, we need to choose \(T\) appropriately to balance the two terms. The \(T\) we choose is of logarithmic order. The result is shown in the corollary below.
**Corollary 4.14**.: _We define_
\[\iota=\begin{cases}\log(|\mathcal{F}|/\delta),&\text{if}\quad|\mathcal{F}|<\infty;\\ \log\left(N_{[]}\left(\frac{(1-\gamma)^{d}}{n},\mathcal{F},\|\cdot\|_{\infty}\right)\Big{/}\delta\right),&\text{if}\quad|\mathcal{F}|=\infty.\end{cases}\]
_Then under Assumption 4.10 and 4.12, for FLE (Algorithm 2), if we pick_
\[T=\log\left(C^{\frac{1}{2p}}\cdot\iota^{\frac{1}{2p}}\cdot\left(1-\gamma^{ \frac{1}{2}}\right)^{-1}\cdot n^{-\frac{1}{2p}}\right)\Big{/}\log\left(\gamma^ {1-\frac{1}{2p}}\right)\]
_then with probability at least \(1-\delta\), we have_
\[d_{w,p}\left(\hat{f},Z^{\pi}\right)\leq\widetilde{O}\left(\frac{C^{\frac{1}{2p }}\cdot\iota^{\frac{1}{2p}}\cdot\sqrt{d}}{(1-\gamma)^{\frac{5}{2}}}\cdot n^{- \frac{1}{2p}}\right)\]
_where \(\hat{f}\coloneqq\mathbb{E}_{x\sim\mu,a\sim\pi(x)}\,\hat{f}_{T}(x,a)\)._
The above upper bound depends on \(n^{-1/(2p)}\), which seems unsatisfactory, especially when \(p\) is large. However, we believe that it is actually tight since the previous study has shown that the minimax rate of estimating \(d_{w,p}\) using i.i.d samples from the given distribution is around \(O(n^{-1/(2p)})\)(Singh and Poczos, 2018). More formally, given a distribution \(Q\) and \(n\) i.i.d samples from \(Q\), any algorithm that maps the \(n\) i.i.d samples to a distribution \(\hat{Q}\), must have \(d_{w,p}(\hat{Q},Q)=\widetilde{\Omega}(n^{-1/(2p)})\) in the worst case. Note that distributional OPE is strictly harder than this problem.
### 4.3 Examples
In this section, we discuss two examples: one is tabular MDPs, and the other one is Linear Quadratic Regulators. For simplicity of presentation, we focus on scalar rewards and finite horizon.
#### 4.3.1 Tabular MDPs
The first example we consider is a tabular MDP (i.e., \(|\mathcal{X}|\) and \(|\mathcal{A}|\) are finite) with known continuous reward distributions. Specifically, we consider the sparse-reward case where we only have a reward at the last time step \(H\) and zero rewards at time steps \(h<H\). For each \((x,a)\), denote the reward distribution at the last step by \(r_{H}(x,a)\in\Delta([0,1])\).
Note that in this setup, via induction, it is easy to verify that for any \(h,x,a\), \(Z_{h}^{\pi}(\cdot|x,a)\) is a mixture of the distributions \(\{r_{H}(x,a):x\in\mathcal{X},a\in\mathcal{A}\}\), i.e., for any \(h,x,a\), there exists a probability weight vector \(w\in\Delta(|\mathcal{X}||\mathcal{A}|)\), such that \(Z_{h}^{\pi}(\cdot|x,a)=\sum_{x^{\prime},a^{\prime}\in\mathcal{X}\times \mathcal{A}}w(x^{\prime},a^{\prime})r_{H}(\cdot|x^{\prime},a^{\prime})\). Note that the parameters \(w(x,a)\) are unknown due to the unknown transition operator \(P\), and need to be learned. Thus, in this case, we can design function class \(\mathcal{F}_{h}\) as follows:
\[\mathcal{F}_{h}=\bigg{\{}f(\cdot|x,a)=\sum_{x^{\prime},a^{\prime}\in \mathcal{X}\times\mathcal{A}}w_{x,a}(x^{\prime},a^{\prime})r_{H}(\cdot|x^{ \prime},a^{\prime}):\big{\{}w_{x,a}\in\Delta(|\mathcal{X}||\mathcal{A}|)\big{\}} _{x,a\in\mathcal{X}\times\mathcal{A}}\bigg{\}}.\]
It is not hard to verify that \(\{\mathcal{F}_{h}\}_{h=1}^{H}\) satisfies the Bellman completeness condition. The log of the bracketing number of \(\mathcal{F}_{h}\) is polynomial with respect to \(|\mathcal{X}||\mathcal{A}|\).
**Lemma 4.15**.: _In the above example, the complexity of \(\mathcal{F}_{h}\) is bounded: \(\log N_{[]}(\epsilon,\mathcal{F}_{h},\|\cdot\|_{\infty})\leq O(|\mathcal{X}|^{2}|\mathcal{A}|^{2}\log(r_{\infty}|\mathcal{X}||\mathcal{A}|/\epsilon))\), where \(r_{\infty}\coloneqq\|r_{H}\|_{\infty}\)._
Thus Algorithm 1 is capable of finding an accurate estimator of \(Z^{\pi}\) with sample complexity scaling polynomially with respect to the size of the state and action spaces and horizon.
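The mixture structure above is also easy to verify numerically: the weight placed on \(r_{H}(\cdot|x^{\prime},a^{\prime})\) is the probability of occupying \((x^{\prime},a^{\prime})\) at step \(H\) when starting from \((x,a)\) at step \(h\) and following \(\pi\) afterwards. The snippet below (an illustration added here, not code from the paper) computes these weights by forward propagation.

```python
import numpy as np

def mixture_weights(P, pi, H, h, x, a):
    """
    P:  transition tensor, P[x, a, x'] = Pr(x' | x, a), shape (S, A, S)
    pi: policy matrix, pi[x, a] = Pr(a | x), shape (S, A)
    Returns w of shape (S, A) with w[x', a'] = Pr((x_H, a_H) = (x', a') | x_h = x, a_h = a),
    i.e., the mixture weights over the terminal reward distributions {r_H(.|x', a')}.
    """
    S, A = pi.shape
    d = np.zeros((S, A))
    d[x, a] = 1.0                                    # start from (x, a) at step h
    for _ in range(h, H):                            # propagate forward to step H
        next_state = np.einsum('sa,sat->t', d, P)    # state occupancy at the next step
        d = next_state[:, None] * pi                 # apply pi to get the (x', a') occupancy
    return d
```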
#### 4.3.2 Linear Quadratic Regulator
The second example is LQR. We have \(\mathcal{X}\subset\mathbb{R}^{d_{x}},\mathcal{A}\subset\mathbb{R}^{d_{a}}\).
\[x_{h+1}=Ax_{h}+Ba_{h},\quad r(x_{h},a_{h})=-(x_{h}^{\top}Qx_{h}+a_{h}^{\top} Ra_{h})+\varepsilon\]
where \(\varepsilon\sim\mathcal{N}(0,\sigma^{2})\). Since the optimal policy for LQR is a linear policy, we consider evaluating a linear policy \(\pi(x):=Kx\) where \(K\in\mathbb{R}^{d_{a}\times d_{x}}\). For this linear policy, \(Z_{h}^{\pi}(\cdot|x,a)\) is a Gaussian distribution, i.e., \(Z_{h}^{\pi}(\cdot|x,a)=\mathcal{N}(\mu_{h}(x,a),\sigma_{h}(x,a))\), where \(\mu_{h}(x,a)\) and \(\sigma_{h}(x,a)\) have closed-form expressions.
**Lemma 4.16**.: _For the LQR defined above, \(\mu_{h}(x,a)\) and \(\sigma_{h}(x,a)\) have the following closed-form expressions_
\[\mu_{h}(x,a)=-(Ax+Ba)^{\top}U_{h+1}(Ax+Ba)-x^{\top}Qx-a^{\top}Ra,\quad\sigma_ {h}^{2}(x,a)=(H-h+1)\sigma^{2}\]
_where we denote \(U_{h}=\sum_{i=h}^{H}((A+BK)^{i-h-1})^{\top}(Q+K^{\top}RK)(A+BK)^{i-h-1}\)._
Thus our function class \(\mathcal{F}_{h}\) can be designed as follows:
\[\mathcal{F}_{h}=\Big{\{}f(\cdot|x,a)=\mathcal{N}\big{(}\cdot\,\,\big{|}\,x^{\top} M_{1}x+a^{\top}M_{2}x+a^{\top}M_{3}a,(H-h+1)\sigma^{2}\big{)},\,\forall M_{1},M_{2},M_ {3}\Big{\}}\]
We can show that this function class satisfies Bellman completeness. Furthermore, here, we can refine \(C\) in Assumption 4.1 to a relative condition number following the derivation in Uehara and Sun (2021). More specifically, \(C\) is \(\sup_{w\neq 0,h}\frac{w^{\top}\mathbb{E}_{d_{h}^{\pi}}[\phi(x,a)\phi^{\top}(x,a)]w}{w^{\top}\mathbb{E}_{\rho}[\phi(x,a)\phi^{\top}(x,a)]w}\) where \(\phi(x,a)=(x^{\top},a^{\top})^{\top}\otimes(x^{\top},a^{\top})^{\top}\) is a quadratic feature and \(\otimes\) is the Kronecker product. Under some regularity assumption (i.e., the norms of \(M_{1},M_{2},M_{3}\) are bounded, which is the case when the dynamical system induced by the linear policy is stable), this function class has bounded statistical complexity.
**Lemma 4.17**.: _We assume there exist parameters \(m_{x},m_{a},m_{1},m_{2},m_{3}\) for which \(\|x\|_{2}\leq m_{x}\) for all \(x\in\mathcal{X}\) and \(\|a\|_{2}\leq m_{a}\) for all \(a\in\mathcal{A}\), and \(\|M_{i}\|_{\mathrm{F}}\leq m_{i}\) for \(i=1,2,3\). Then we have_
\[\log N_{\llbracket}(\epsilon,\mathcal{F}_{h},\|\cdot\|_{\infty})\leq\mathrm{ Poly}\left(d_{x},d_{a},\log\frac{m_{x}m_{a}m_{1}m_{2}m_{3}}{\epsilon\sigma} \right).\]
It is unclear if quantile regression TD or categorical TD can achieve meaningful guarantees on LQR since the function classes used by them do not satisfy Bellman completeness (i.e., given any conditional density \(f(\cdot|x,a)\), \(\mathcal{T}^{\pi}f\) will not be discrete since here \(r(x,a)\) is continuous). Even for regular RL, without Bellman completeness, in the off-policy setting, it is possible that TD-based algorithms may diverge, and TD fixed point solutions can be arbitrarily bad.
## 5 Simulation
In this section, we show the empirical performance of two instances of the FLE algorithm: _GMM-FLE_, which uses Gaussian mixture models (GMM) for \(\mathcal{F}\), and _Diff-FLE_, which uses diffusion probabilistic models. We elaborate on each component of the experiments below. See Appendix D.3 for a full list of experiment results. We release our code at [https://github.com/ziqian2000/Fitted-Likelihood-Estimation](https://github.com/ziqian2000/Fitted-Likelihood-Estimation).
Gmm-Fle.The GMM-FLE uses conditional Gaussian mixture models, where the weights and the mean and covariance of Gaussians are all learnable.
Figure 1: Visualization of the combination lock. The dotted lines denote transiting from good states (white) to bad states (gray). Once the agent transits to a bad state, it stays there forever. The observation is composed of three parts: one-hot encoding of the latent state \(w_{h}\), one-hot encoding of the time step \(h\), and random noise.
Diff-FLE.For Diff-FLE, we simply model the distribution \(f(\cdot\,|\,x,a)\) as a conditional diffusion probabilistic model (Sohl-Dickstein et al., 2015). Our implementation is based on DDPM (Ho et al., 2020).
The combination lock environment.There are two chains in the combination lock. Intuitively, the first chain is a "good" chain, while the second one is bad. The agent wants to stay on the first chain, for which the only approach is to take the unique optimal action at each time step. See Figure 1 for an illustration. Mathematically, the combination lock is a finite horizon MDP of horizon \(H\). There are two latent states \(w_{h}\in\{0,1\}\). At any time step \(h\in[H]\), there is only one optimal action \(a_{h}^{\star}\) among \(A\) actions. If the agent is in the latent state \(w_{h}=0\) and takes \(a_{h}^{\star}\), it transits to \(w_{h+1}=0\), otherwise transits to \(w_{h+1}=1\). If it is already in \(w_{h}=1\), no matter what action it takes, the latent state transits to \(w_{h+1}=1\). When \(h=H\), it receives a random reward \(r^{+}\) if \(w_{H}=0\); otherwise, it gets \(r^{-}\). The agent cannot observe the latent state \(w_{h}\) directly. Instead, the observation it receives, \(\psi(w_{h},h)\), is the concatenation of one-hot coding of the latent state \(w_{h}\) and the current time step \(h\), appended with Gaussian noise. This environment has been used in multiple prior works [15, 16] where it was shown that standard deep RL methods struggle due to the challenges from exploration and high-dimensional observation.
Test policy.The test policy is stochastic: it takes a random action with probability \(\epsilon\) and takes the optimal policy otherwise. In all experiments, we set \(\epsilon=1/7\).
Offline data generation.The offline dataset is generated uniformly. Specifically, for each time step \(h\in[H]\) and each latent state \(w_{h}\in\{0,1\}\), we first randomly sample 10000 observable states \(\psi(w_{h},h)\). Then, for each of them, we uniformly randomly sample an action and perform a one-step simulation. It is clear that the offline data distribution here satisfies the coverage assumption (Assumption 4.1).
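For concreteness, the following is a minimal sketch of the combination-lock environment, the \(\epsilon\)-greedy test policy, and the uniform offline data generation described above (1-d reward version). The noise dimension, the random choice of optimal actions, and other unspecified details are illustrative assumptions rather than the exact settings used in the experiments.

```python
import numpy as np

class CombinationLock:
    def __init__(self, H=20, n_actions=10, noise_dim=5, seed=0):
        self.H, self.A, self.noise_dim = H, n_actions, noise_dim
        self.rng = np.random.default_rng(seed)
        self.a_star = self.rng.integers(0, n_actions, size=H)   # optimal action per step

    def obs(self, w, h):
        """psi(w_h, h): one-hot latent state + one-hot time step + Gaussian noise."""
        return np.concatenate([np.eye(2)[w], np.eye(self.H)[h],
                               self.rng.normal(size=self.noise_dim)])

    def step(self, w, h, a):
        """One transition from latent state w at (0-indexed) time step h under action a."""
        w_next = 0 if (w == 0 and a == self.a_star[h]) else 1
        if h + 1 == self.H:      # reward only at the end, depending on the final latent state
            r = self.rng.normal(1.0, 0.1) if w_next == 0 else self.rng.normal(-1.0, 0.1)
        else:
            r = 0.0
        return w_next, r

def test_policy(env, h, eps=1 / 7):
    """The evaluated policy: optimal action with prob. 1 - eps, uniformly random otherwise."""
    if env.rng.random() < eps:
        return int(env.rng.integers(env.A))
    return int(env.a_star[h])

def offline_dataset(env, n_per_cell=10000):
    """Uniform offline data: for each (h, w), sample observations and act uniformly at random."""
    data = []
    for h in range(env.H):
        for w in (0, 1):
            for _ in range(n_per_cell):
                x = env.obs(w, h)
                a = int(env.rng.integers(env.A))
                w_next, r = env.step(w, h, a)
                x_next = env.obs(w_next, min(h + 1, env.H - 1))
                data.append((h, x, a, r, x_next))
    return data
```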
### 5.1 One-Dimensional Reward
To compare to classic methods such as the categorical algorithm (Bellemare et al., 2017) and quantile regression TD (Dabney et al., 2018), we first run experiments with a 1-d reward. Specifically, we set \(r^{+}\sim\mathcal{N}(1,0.1^{2})\) and \(r^{-}\sim\mathcal{N}(-1,0.1^{2})\). The horizon is \(H=20\).
The categorical algorithm discretizes the range \([-1.5,1.5]\) using 100 atoms. For quantile TD, we set the number of quantiles to 100 as well. The GMM-FLE uses 10 atomic Gaussian distributions, although eventually, only two are significant. See Appendix D for a detailed description of implementations. We plot the pdfs \(\mathbb{E}_{x\sim\psi(0,h)}\,\hat{f}_{h}(x,a_{h}^{\star})\) (here \(0\) denotes the good latent state at step \(h\)) learned by different methods in Figure 2, at three different time steps. As we can see, GMM-FLE in general fits the ground truth the best.
We also compute the approximated \(d_{tv}\) between the learned distribution and the true one. Ideally, we want to compute \(d_{tv}(\mathbb{E}_{x\sim\psi(0,h)}\,\hat{f}_{h}(x,a_{h}^{\star}),\mathbb{E}_{ x\sim\psi(0,h)}\,Z_{h}^{\pi}(x,a_{h}^{\star}))\). However, since obtaining the density of certain models is impossible (e.g., Diff-FLE) and certain other models have only discrete supports, we use an approximated version: we sample \(20k\) points from each distribution, construct two histograms, and calculate \(d_{tv}\) between the two histograms. The results are shown in Table 1. Again, GMM-FLE achieves the smallest total variation distance. This intuitively makes sense since the ground truth return is a mixture of Gaussians. Moreover, we notice that GMM-FLE, Diff-FLE, and categorical algorithms achieve significantly better performance than the quantile regression TD algorithm. This perhaps is not surprising because our theory has provided performance guarantees for those three algorithms under \(d_{tv}\) (recall that the categorical algorithm can be roughly considered a specification of FLE, see Remark 3.1), while it is unclear if quantile regression TD can achieve similar guarantees in the off-policy learning setting.
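The approximation described above amounts to the following short computation (a sketch added here; the bin count and function name are assumptions). For the 2-d experiments in the next subsection, the same idea applies with `np.histogramdd` on a common two-dimensional grid.

```python
import numpy as np

def approx_tv(samples_p, samples_q, n_bins=100):
    """Approximate d_tv from samples: histograms on common bins, half the L1 distance."""
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(samples_p, bins=bins)
    q, _ = np.histogram(samples_q, bins=bins)
    return 0.5 * np.abs(p / p.sum() - q / q.sum()).sum()

# usage: approx_tv(samples_from_learned_model, samples_from_ground_truth), 20k samples each
```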
### 5.2 Two-Dimensional Reward
We conducted experiments on a two-dimensional reward. Here, \(r^{+}\) is sampled from a ring in \(\mathbb{R}^{2}\) of radius 2, and \(r^{-}\) is a Gaussian centered at the origin. The horizon is \(H=10\). The categorical algorithm discretizes the range \([-4,4]^{2}\) using \(30\) atoms for each dimension (thus using \(900\) atoms in total). Note that the two-dimensional version of the categorical algorithm is not introduced in the original paper (Bellemare et al., 2017), but the extension is intuitive. The GMM-FLE uses 30 atomic Gaussian distributions, although eventually, at most six of them are significant. We note that it is not straightforward to extend quantile regression TD to multi-dimensional rewards. More implementation details can be found in Appendix D.2.
We plot the 2-d visualization of the learned distribution in Figure 3 and also compute the approximated TV distance using the same method as in the 1-d case. The numerical results are shown in Table 2. Note that Diff-FLE performs the best (Table 2), and as a powerful model, it captures the correlation among dimensions (i.e., see Figure 3 where Diff-FLE does capture the ring structures in all steps). However, the GMM-FLE does not perform well here, since it is hard for a vanilla GMM with a finite number of mixtures to capture a ring-like data distribution. The two-dimensional categorical algorithm performed badly as well, even though it uses a larger number of atoms (recall that for the 1-d case it only uses 100 atoms and already achieves excellent performance), implying that it suffers from the curse of dimensionality statistically, i.e., explicitly discretizing the 2-d return space evenly can fail to capture the underlying data structure (e.g., in our ring example, data actually approximately lives in a submanifold). Moreover, the training is also significantly slower. In our implementation, we found that running the 2-d categorical algorithm with \(100^{2}\) atoms is about 100 times slower than running the 1-d algorithm with \(100\) atoms, while the training time of Diff-FLE and GMM-FLE does not change too much.
## 6 Conclusion
In this paper, we proposed a simple algorithm, Fitted Likelihood Estimation (FLE), for distributional OPE where rewards could be multi-dimensional vectors. FLE consists of a sequence of MLEs and has the flexibility to integrate any state-of-art generative models as long as it can be trained via MLE. Thus, FLE is scalable to the setting where reward vectors are high-dimensional. Theoretically, we showed that the distribution learned by FLE is accurate under
total variation distance and \(p\)-Wasserstein distance, for the finite-horizon setting and the infinite-horizon discounted setting, respectively. In practice, we show that FLE is flexible enough to use generative models such as GMMs and diffusion models.
|
2308.04129 | Rising and settling 2D cylinders with centre-of-mass offset | Rotational effects are commonly neglected when considering the dynamics of
freely rising or settling isotropic particles. Here, we demonstrate that
particle rotations play an important role for rising as well as for settling
cylinders in situations when mass eccentricity, and thereby a new pendulum
timescale, is introduced to the system. We employ two-dimensional simulations
to study the motion of a single cylinder in a quiescent unbounded
incompressible Newtonian fluid. This allows us to vary the Galileo number,
density ratio, relative moment of inertia, and Centre-Of-Mass offset (COM)
systematically and beyond what is feasible experimentally. For certain buoyant
density ratios, the particle dynamics exhibit a resonance mode, during which
the coupling via the Magnus lift force causes a positive feedback between
translational and rotational motions. This mode results in vastly different
trajectories with significantly larger rotational and translational amplitudes
and an increase of the drag coefficient easily exceeding a factor two. We
propose a simple model that captures how the occurrence of the COM offset
induced resonance regime varies, depending on the other input parameters,
specifically the density ratio, the Galileo number, and the relative moment of
inertia. Remarkably, depending on the input parameters, resonance can be
observed for centre-of-mass offsets as small as a few percent of the particle
diameter, showing that the particle dynamics can be highly sensitive to this
parameter. | Martin P. A. Assen, Jelle B. Will, Chong Shen Ng, Detlef Lohse, Roberto Verzicco, Dominik Krug | 2023-08-08T08:36:50Z | http://arxiv.org/abs/2308.04129v2 | [
###### Abstract
Rotational effects are commonly neglected when considering the dynamics of freely rising or settling isotropic particles. Here, we demonstrate that particle rotations play an important role for rising as well as for settling cylinders in situations when mass eccentricity, and thereby a new pendulum timescale, is introduced to the system. We employ two-dimensional simulations to study the motion of a single cylinder in a quiescent unbounded incompressible Newtonian fluid. This allows us to vary the Galileo number, density ratio, relative moment of inertia, and Centre-Of-Mass offset (COM) systematically and beyond what is feasible experimentally. For certain buoyant density ratios, the particle dynamics exhibit a resonance mode, during which the coupling via the Magnus lift force causes a positive feedback between translational and rotational motions. This mode results in vastly different trajectories with significantly larger rotational and translational amplitudes and an increase of the drag coefficient easily exceeding a factor two. We propose a simple model that captures how the occurrence of the COM offset induced resonance regime varies, depending on the other input parameters, specifically the density ratio, the Galileo number, and the relative moment of inertia. Remarkably, depending on the input parameters, resonance can be observed for centre-of-mass offsets as small as a few percent of the particle diameter, showing that the particle dynamics can be highly sensitive to this parameter.
_Keywords:_ flow-structure interactions, vortex-shedding

Martin P. A. Assen\({}^{1}\)†, Jelle B. Will\({}^{1,2}\)†, Chong Shen Ng\({}^{1}\), Detlef Lohse\({}^{1}\), Roberto Verzicco\({}^{1,3,4}\), Dominik Krug\({}^{1}\)†
Footnote †: Email address for correspondence: [email protected]
## 1 Introduction
One of the striking characteristics of freely rising and settling particles at sufficiently large Reynolds numbers is the existence of non-vertical paths. This type of motion is a common characteristic for the dynamics of blunt bodies in general (Ern _et al._, 2012) and is related to the presence of a (periodically) oscillating wake with vortex shedding, akin
to that forming behind a fixed geometry (Gerrard, 1966; Perry _et al._, 1982; Williamson, 1996). The phase regimes of freely rising or settling bodies are more complicated than their fixed counterparts, however, due to the inherent coupling between the motion of the body-fluid interface and the surrounding flow morphology. This results in a complex, often only quasi-periodic motion which generally is difficult to predict a priori. Therefore, a complete solution or model of these systems remains elusive in many cases such that new insights rely on empirical results and parameter studies based on experiments or numerical simulations. Understanding and modelling of particle dynamics is important in many fields, for instance to predict the spread of (plastic) pollutants in the ocean (Sutherland _et al._, 2023).
Properties of the paths and dynamics observed for buoyancy/gravity driven motion are determined by the strength of the coupling between particle and fluid. When this coupling is weak (Horowitz & Williamson, 2006) or when the degrees of freedom of motion are limited (Williamson & Govardhan, 2004), the fluid response will be similar to that of a fixed geometry. Conversely, regimes exist where the particle kinematics are strongly affected by the fluid motion and vice versa, leading to alterations in the flow morphology as well as in the particle trajectory and dynamics, e.g. in the shedding frequency and the drag, and possibly to larger path amplitudes.
Previous studies have often focused solely on the translational dynamics in relation to the particle-to-fluid density ratio, often disregarding body rotation, as noted by Mathai _et al._ (2017). In the present work, we primarily focus on the effects of rotational coupling. To this end, we vary the internal mass distribution of freely rising and settling two-dimensional (2D) cylinders by introducing a Centre-Of-Mass (COM) offset. This approach is motivated by recent work of Will & Krug (2021_b_), where the COM of freely rising and settling spheres was varied experimentally. This study found that the rotational dynamics of the spheres are strongly affected by the internal mass distribution, which in turn strongly affects all other aspects of particle motion. The observed phenomena could be explained via an analogy to a simple driven harmonic oscillator, expressing the results in terms of the timescale ratio between the 'pendulum' frequency, induced by the offset, and the driving frequency, which is determined by the vortex-shedding. This model captured several key features of the particle dynamics when COM offset was present. It further predicted additional dependencies on the particle-to-fluid density ratio \(\Gamma\), the dimensionless Moment Of Inertia (MOI) of the particle \(I^{*}\), and implicitly on the Galileo number _Ga_. The Galileo number is similar to the Reynolds number _Re_ in that it measures the ratio of inertial to viscous forces, but it uses an a priori defined buoyancy velocity scale instead of a dynamical one; it governs the onset of, and transitions between, the various wake-structure topologies. However, due to experimental and physical constraints, the parameter space available in Will & Krug (2021_b_) is insufficient to test these conclusively. Therefore, we aim to systematically investigate these dependencies by means of numerical simulations of 2D cylinders with COM offset, to show that the underlying physics are similar between the 2D and the three-dimensional (3D) case, and that the results presented here can shed light on the remaining open questions.
The problem of freely rising or settling cylinders is an extension of the classical case of vortex-induced vibrations (Bearman, 1984; Parkinson, 1989; Govardhan & Williamson, 2000; Williamson & Govardhan, 2004), where a cylinder is placed in a free stream with only limited degrees of freedom. The applied restrictions indicated that the degrees of freedom and, therefore, the amount of particle motion (tuned by body inertia, spring stiffness, and damping of the supports) strongly influence the wake structure and coupled dynamics. Williamson & Roshko (1988) presented qualitatively similar results for a cylinder that was forced periodically in a free stream and classified the resulting wake
patterns as a function of the driving amplitude and frequency. Building on this, the work by Jeon & Gharib (2004) showed that the type of vortex-shedding depends on transverse and streamwise oscillations as well as on their relative phase. Similarly, for elastically mounted cylinders at subcritical Reynolds numbers (_Re_\(\leqslant\) 30) the effects of forced rotations were examined in recent work by Bourguet (2023), uncovering significant alterations in flow structure and amplitude of oscillation depending on _Re_ and rotational magnitude and frequency. For the case of freely rising and settling 2D cylinders, a critical mass density ratio \(\Gamma\) (the ratio of particle to fluid mass density) was identified: above this threshold the coupling is reduced and the particle dynamics and its wake barely influence each other, whereas below it particles show large path amplitudes and substantial alterations in the wake vortex shedding frequency (Horowitz & Williamson, 2006, 2010_b_). Similar density-ratio-related transitions in the regime of motion have also been observed for spheres (Horowitz & Williamson, 2010\(a\); Auguste & Magnaudet, 2018; Will & Krug, 2021_a_). Following the same train of thought, the rotational moment of inertia was also investigated as a relevant parameter, governing the dynamics of rising and settling 2D cylinders in the numerical work by Mathai _et al._ (2017), as well as experimentally for spheres (Mathai _et al._, 2018; Will & Krug, 2021_a_). Due to these previous observations, we investigate the effects of \(\Gamma\) and MOI separately and in combination with effects induced by a COM offset.
Before proceeding with the problem definition, caution is warranted when interpreting the results as the 2D assumption in this work effectively corresponds to the limiting case of particle motion for very long cylinders settling or rising in a three-dimensional (3D) environment. Beyond a certain Reynolds number, the flow will become inherently 3D, even for a cylinder of infinite length; for a fixed cylinder, this is found to occur around \(\mbox{{Re}}\approx 190\)(Henderson, 1997; Williamson & Brown, 1998; Aleksyuk & Heil, 2023). Moreover, the cylinder length and associated end-effects play an important role (Inoue & Sakuragi, 2008), such that the motion of these cylinders becomes inherently 3D, showing both horizontal (azimuthal) cylinder rotation (around the vertical axis) and fluttering motion (around a horizontal axis) (Toupoint _et al._, 2019).
### 1.1 Problem definition and equations of motion
In this work, we concern ourselves with the dynamics and kinematics of freely rising and settling 2D cylinders in an otherwise quiescent, infinite fluid. The fluid phase has a constant mass density \(\rho_{f}\) and a kinematic viscosity \(\nu\). The motion of the cylinder is confined to move within the \(xy\)-plane. Here, the \(y\)-axis is anti-parallel to the gravity vector (which has magnitude \(g\)). Subscripts \(x\) and \(y\) are assigned to vector components in this plane. We denote particle position, velocity, and acceleration by \(\mbox{{\boldmath$x$}}_{p}\), \(\mbox{{\boldmath$v$}}\), and \(\mbox{{\boldmath$a$}}\), respectively.
The particle (see the schematic in figure 1(_a_)) has a circular cross-section of diameter \(D\) and, per unit length, an effective mass \(m_{p}\), a mass density \(\rho_{p}\), and a volume \(\mathcal{V}\). The COM of the cylinder (designated with point \(G\)) is displaced by a distance \(\ell\) from the geometric centre \(C\). Subscripts \(C\) and \(G\) throughout will refer to these points. A unit pointing vector \(p\) is defined between these points from \(G\) to \(C\). From here, we define the angles \(\theta\) between \(p\) and \(\mbox{{\boldmath$e$}}_{y}\) (the vertical unit vector), and \(\theta_{v}\) between \(p\) and \(\mbox{{\boldmath$v$}}_{C}\) (the instantaneous particle velocity at the centre). The buoyancy force acts upwards through point \(C\), while the gravitational force acts downwards through point \(G\). The relevant velocity scale characterising this system is the buoyancy velocity, i.e. \(V_{b}=\sqrt{|1-\Gamma|gD}\).
The dynamics of freely rising and settling buoyancy-driven particles are governed by the linear and angular momentum balances. In the present work, all particle dynamics are for unconstrained motion, implying that the only two forces acting on the geometry
are a body force due to gravity (\(\mathbf{F}_{g}=-\rho_{p}\mathcal{V}g\mathbf{e}_{y}\)) and the force exerted by the fluid on the particle \(\mathbf{F}_{F}\). For convenience we split this total fluid force into a contribution due to buoyancy and a time-varying part \(\mathbf{F}_{F}=\mathbf{F}_{b}+\mathbf{F}_{f}\), where \(\mathbf{F}_{b}=\rho_{f}\mathcal{V}g\mathbf{e}_{y}\). Therefore, the conservation of linear momentum for the cylinder is given by
\[\Gamma\frac{\mathrm{d}\mathbf{v}_{G}}{\mathrm{d}t}=\frac{\mathbf{F}_{f}}{m_{f}}+(1- \Gamma)g\mathbf{e}_{y}, \tag{1}\]
with \(\Gamma=\rho_{p}/\rho_{f}\) the mass density ratio, and \(m_{f}\) the mass of the fluid displaced by the particle. It is important to note that (1) is independent of the COM offset. The angular momentum balance for a 2D cylinder with COM offset reads
\[I_{C}\frac{\mathrm{d}^{2}\theta}{\mathrm{d}t^{2}}=T_{f}-\frac{1}{2}m_{p} \gamma D(\mathbf{a}_{C}+g\mathbf{e}_{y})\times\mathbf{p}, \tag{2}\]
where the balance is constructed with respect to the _geometric centre_ of the particle. Here, \(T_{f}\) is the fluid torque due to viscous stresses on the particle and the additional terms \((\mathbf{a}_{C}+g\mathbf{e}_{y})\times\mathbf{p}\) on the right-hand side results from the COM offset. The moment of inertia of the particle around point \(C\) (\(I_{C}\)) is related to that around point \(G\) by the parallel axis theorem \(I_{C}=I_{G}+m_{p}\ell^{2}\). Further, \(\gamma\) denotes the ratio \(\gamma=2\ell/D\). The rotational dynamics described in (2) resemble those of a forced pendulum (Will & Krug, 2021_b_), for which, when linearised, the natural frequency is given by
\[f_{p}=\tau_{p}^{-1}\ =\frac{1}{\pi}\sqrt{\frac{\gamma g}{DI^{*}}}, \tag{3}\]
with \(I^{*}=I_{C}/I_{\Gamma}\), and \(I_{\Gamma}=m_{p}D^{2}/8\) a reference value of a homogeneous cylinder with identical mass \(m_{p}\). We can further write (2) in dimensionless form by introducing a dimensionless timescale \(\tilde{t}=t/\tau_{v}\), where \(\tau_{v}=D/V_{b}\) is the vortex shedding timescale and therefore represents the typical timescale of the forcing in (2). We further non-dimensionalise the viscous torque term as \(T_{f}=\mu DLV_{b}\,T_{f}^{*}\)(Jordan & Fromm, 1972;
Figure 1: (_a_) Schematic of the cylinder with the centre-of-mass (\(G\)) displaced by a distance \(\ell\) from the volumetric centre (\(C\)). The pointing vector \(\mathbf{p}\) is a unit vector in the direction from \(G\) to \(C\) and \(\theta\) is the angle between \(\mathbf{p}\) and the vertical (\(y\)-direction). The forces acting on the body are buoyancy (\(\mathbf{F}_{b}\)) and the remaining fluid forces \(\mathbf{F}_{f}\) (in \(C\)) and gravity \(\mathbf{F}_{g}\) (in \(G\)). (_b_) Schematic depicting the direction of the Magnus lift force the horizontal component of which is used together with the horizontal particle acceleration to calculate the phase lag \(\Delta\phi\). (_c_) Time signals of the horizontal component of the Magnus force (\(\mathbf{F}_{m}\sim-\omega_{z}v_{y}\), blue) and horizontal particle acceleration (\(a_{x}/g\), red) for three different offset cases, showing different phase lags. Note that, since \(\langle v_{y}\rangle=\mathcal{O}(1)\), the value on the left \(y\)-axis is indicative of body rotation.
Bouchet _et al._, 2006) and write the particle acceleration as \(\mathbf{a}_{C}\sim\mathbf{F}_{f}/m_{p}\). Finally, using \(||\mathbf{F}_{f}||\sim\rho_{f}LDV_{b}^{2}\) to obtain \(\mathbf{a}_{C}=V_{b}^{2}/(D\Gamma)\,\mathbf{a}_{C}^{*}\) yields
\[\frac{\mathrm{d}^{2}\theta}{\mathrm{d}\hat{t}^{2}}=\frac{1}{Ga\Gamma I^{*}}\,T _{f}^{*}-\frac{\gamma}{I^{*}}\left(\frac{1}{|1-\Gamma|}\mathbf{e}_{y}+\frac{1}{ \Gamma}\mathbf{a}_{C}^{*}\right)\times\mathbf{p}, \tag{4}\]
The dimensionless prefactor \(\gamma/(|1-\Gamma|I^{*})\) in front of the pendulum term is proportional to the square of the ratio \(\mathcal{T}\) of the vortex shedding to the pendulum timescale as defined in Will & Krug (2021_b_). For a 2D cylinder, this timescale ratio is equal to
\[\mathcal{T}=\frac{\tau_{v}}{\tau_{p}}=\frac{1}{\pi}\sqrt{\frac{\gamma}{|1- \Gamma|I^{*}}}. \tag{5}\]
Note that the control parameter \(\mathcal{T}\) is solely dependent on the prescribed particle and fluid properties. It was shown by Will & Krug (2021_b_) that this control parameter governs the rotation dynamics of spheres with a COM offset. We will validate and expand upon this finding in the present work.
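Since both \(f_{p}\) in (3) and \(\mathcal{T}\) in (5) depend only on prescribed particle and fluid properties, they can be evaluated a priori. The small helper below (added here for illustration, not part of the simulation code) makes the resonance condition \(\mathcal{T}=\mathcal{O}(1)\) easy to explore.

```python
import numpy as np

def pendulum_frequency(gamma_offset, I_star, D, g=9.81):
    """f_p = (1/pi) * sqrt(gamma * g / (D * I*)), cf. equation (3); gamma_offset = 2*l/D."""
    return np.sqrt(gamma_offset * g / (D * I_star)) / np.pi

def timescale_ratio(gamma_offset, Gamma, I_star):
    """T = tau_v / tau_p = (1/pi) * sqrt(gamma / (|1 - Gamma| * I*)), cf. equation (5)."""
    return np.sqrt(gamma_offset / (abs(1.0 - Gamma) * I_star)) / np.pi

# example: a COM offset of 2% of the diameter (gamma = 0.04) for a buoyant cylinder with
# Gamma = 0.9 and I* = 1 gives timescale_ratio(0.04, 0.9, 1.0) ~ 0.20
```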
In § 2 we will first outline the numerical approach used to obtain the results as well as the data processing applied to the data set. Then, in § 3 - § 6, our results are presented and discussed. This is split into four sections, where § 3 contains a general discussion on the effects of COM offset, the resonance mechanism, and the effect of fluid inertia for freely rising cylinders. Next, in § 4 the role of the particle-to-fluid density ratio and how it affects COM-induced phenomena is discussed along with the differences between rising and settling, followed by a discussion on MOI effects in § 5 and Galileo number effects in § 6. Finally, in § 7 the primary results and findings are summarised.
## 2 Numerical framework
### 2.1 Fluid phase
The incompressible Navier-Stokes equations describe the flow of an unbounded Newtonian fluid around the particle and satisfy the boundary conditions at the body surface and infinity. An approximate computational strategy to model this configuration is achieved by solving the Navier-Stokes equations on a finite size domain in a moving reference frame attached to the body. For a perfectly circular particle, this reference frame does not need to rotate due to the body's inherent symmetry. A co-moving frame also allows for a configuration where the gravity vector is directed towards the outlet such that the wake can gently leave the domain without disturbing the particle dynamics. The incompressible Navier-Stokes equations are non-dimensionalised with \(V_{b}\) and \(D\). For the translating frame, the momentum and continuity equations are given by (see e.g. Mougin & Magnaudet, 2002; Jenny & Dusek, 2004)
\[\frac{d\mathbf{u}}{dt}+\nabla\cdot[\mathbf{u}(\mathbf{u}-\mathbf{v}_{C})]=-\nabla p+\frac{1}{Ga}\nabla^{2}\mathbf{u}+\mathbf{f}, \tag{6a}\] \[\nabla\cdot\mathbf{u}=0, \tag{6b}\]
with \(\mathbf{u}\) the fluid velocity vector, \(p\) the kinematic pressure and \(\mathbf{f}\) the boundary forcing from the immersed boundary method (IBM) that enforces the no-slip condition (described in detail in § 2.2). Note that the hydrostatic component is absent from \(p\) as it is explicitly added to the forces acting on the cylinder.
The velocity at the inflow is set to zero to simulate an asymptotically quiescent fluid. At the outflow, a convective boundary is imposed (see e.g. Kim & Choi, 2006). The side
walls of the two-dimensional domain are periodic. The domain size is set to \(60D\) in the gravity direction and \(20D\) in the transverse direction, which was found to be sufficiently large to avoid box size effects. The grid spacing was kept constant within a square of size \(2D\) adjacent to the cylinder. Outside this domain, the mesh spacing expands linearly towards the edges of the domain. A heat equation is solved to smoothen the mesh, and the final grids employed for the various cases are presented in table 1. These grids were chosen such that the particle boundary layer was resolved by three to four grid points. Tests confirmed that halving the respective grid spacing did not alter the overall statistics.
### Numerical method
The numerical scheme closely follows the immersed boundary projection method as formulated by Lacis _et al._ (2016), of which a brief overview is presented here. We solve (2.1a) and (2.1b) on a staggered grid, where the spatial gradients are computed using a conservative second-order central finite difference scheme. The non-linear term of (2.1a) is advanced in time via the explicit second-order Adams-Bashforth scheme and the viscous terms via the second-order implicit Crank-Nicolson scheme:
\[\begin{split}\frac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{\Delta t}& +\frac{3}{2}\hat{N}(\mathbf{u}^{n},\mathbf{v}_{C}^{n})-\frac{1}{2}\hat{N}( \mathbf{u}^{n-1},\mathbf{v}_{C}^{n-1})=-\hat{G}\mathbf{\varphi}^{n+1/2}\\ &-\hat{G}\mathbf{p}^{n}+\frac{1}{2Ga}\hat{L}(\mathbf{u}^{n+1}+\mathbf{u}^{n}) +\hat{H}\mathbf{f}^{n+1/2},\end{split} \tag{2.2a}\] \[\text{where}\quad\hat{D}\mathbf{u}^{n+1}=0\quad \text{and}\quad\hat{E}\mathbf{u}^{n+1}=\mathbf{v}^{n+1}+\mathbf{\omega}^{n+1}\times\mathbf{\mathcal{L}}. \tag{2.2b}\]
Here, \(\hat{N}(\mathbf{u},\mathbf{v}_{C})\) denotes the non-linear operator, \(\hat{G}\) the gradient operator, \(\hat{L}\) the Laplace operator, \(\hat{D}\) the divergence operator, \(\hat{H}\) the spreading (regularisation) operator, \(\hat{E}\) the interpolation operator, \(\mathbf{\varphi}^{n+1/2}\) the discrete incremental pressure, \(\mathbf{p}^{n}\) the discrete pressure, \(\mathbf{\mathcal{L}}\) the Lagrangian marker coordinates with respect to the geometric centre and \(\mathbf{f}^{n+1/2}\) the discrete analogue of \(\mathbf{f}\) in (2.1a). The interpolation and regularisation matrices \(\hat{E}\) and \(\hat{H}\), respectively, make use of a discrete three-point \(\delta\)-function (Roma _et al._, 1999).
The Newton-Euler equations are advanced in time via
\[I_{B}(\mathbf{v}_{C}^{n+1}-\mathbf{v}_{C}^{n})=N_{B}\mathbf{\tilde{f}}^{n+1/2}+\Delta\mathbf{q}_{B}+\mathbf{g}^{n}-\mathbf{\zeta}^{n}, \tag{2.3}\]
with
\[I_{B}=\frac{1}{\Delta t}\begin{bmatrix}m_{p}&0&0\\ 0&m_{p}&0\\ 0&0&I_{C}\end{bmatrix},\quad N_{B}=-\begin{bmatrix}1&\ldots&1&0&\ldots&0\\ 0&\ldots&0&1&\ldots&1\\ -\mathcal{L}_{y_{1}}&\ldots&-\mathcal{L}_{y_{n}}&\mathcal{L}_{x_{1}}&\ldots& \mathcal{L}_{x_{n}}\end{bmatrix}, \tag{2.4}\]
\(\Delta\mathbf{q}_{B}\equiv Q(\mathbf{v}^{n}-\mathbf{v}^{n-1})/\Delta t\) and \(Q\) the matrix that interpolates the velocity inside the cylinder (cf. Kempe & Frohlich, 2012). The time step is limited with \(\text{CFL}=0.4\).
\begin{table}
\begin{tabular}{r r l} \(\quad\)_Ga_ & \(D/\Delta x\) & \(N_{x}\times N_{y}\) \\
50-200 & 50 & \(420\times 960\) \\
500 & 84 & \(520\times 1104\) \\
700 & 95 & \(560\times 1152\) \\
2000 & 161 & \(700\times 1440\) \\ \end{tabular}
\end{table}
Table 1: Overview of the grids. The first column denotes the Galileo number _Ga_. The second column represents the number of grid points per diameter of the cylinder. The third column is the grid resolution for the fluid phase.
Vector \(\mathbf{g}^{n}\) contains the buoyancy force and the torque induced by the particle weight. The Newton equation in (1) is solved with respect to the geometric centre and we make use of \(\mathbf{v}_{C}=\mathbf{v}_{G}+\mathbf{\omega}\times(\gamma D/2)\,\mathbf{p}\) for the transformation of (1). Additionally, the equation of angular conservation is solved with respect to the geometric centre (see SS 1) yielding an additional \(\mathbf{a}_{C}\times\mathbf{p}\) term. The terms from the latter contributions are collected in \(\mathbf{\zeta}^{n}\). Finally, we have for \(\mathbf{g}^{n}\) and \(\mathbf{\zeta}^{n}\):
\[\mathbf{g}^{n}\equiv\begin{Bmatrix}0\\ (\varGamma-1)g\\ -m_{p}gD\gamma/2\sin\theta^{n}\end{Bmatrix},\quad\mathbf{\zeta}^{n}\equiv\frac{\gamma m_{p}D}{2\Delta t}\begin{Bmatrix}\omega^{n}\cos\theta^{n}-\omega^{n-1}\cos\theta^{n-1}\\ \omega^{n}\sin\theta^{n}-\omega^{n-1}\sin\theta^{n-1}\\ a_{x}^{n}\cos\theta^{n}+a_{y}^{n}\sin\theta^{n}\end{Bmatrix}, \tag{2.5}\]
with \(a_{x}^{n}\equiv v_{x}^{n}-v_{x}^{n-1}\) and \(a_{y}^{n}\equiv v_{y}^{n}-v_{y}^{n-1}\), discretised components relating to \(\mathbf{a}_{C}\), respectively.
Equation (2.2a) together with constraints (2.2b), and (2.3) can be rewritten as
\[\begin{bmatrix}A&0&G&E^{T}\\ 0&I_{B}&0&N_{B}\\ G^{T}&0&0&0\\ E&N_{B}^{T}&0&0\end{bmatrix}\begin{Bmatrix}\mathbf{q}^{n+1}\\ \mathbf{v}_{C}^{n+1}\\ \mathbf{\varphi}^{n+1/2}\\ \mathbf{\tilde{f}}^{n+1/2}\end{Bmatrix}=\begin{Bmatrix}\mathbf{r}^{n}\\ \mathbf{r}^{n}_{B}\\ \mathbf{0}\\ \mathbf{0}\end{Bmatrix}, \tag{2.6}\]
with \(\mathbf{q}^{n+1}=R\mathbf{u}^{n+1}\), \(E=\hat{E}R^{-1}\), diagonal matrix \(R\equiv[\varDelta y_{j},\,0;0,\,\varDelta x_{i}]\), \(A=\hat{M}\left[I/\varDelta t-\hat{L}/(2\mathit{Ga})\right]\), and \(\mathbf{r}^{n}\), \(\mathbf{r}^{n}_{B}\) containing the explicit terms. We approximate the inverse of \(A\) as \(A^{-1}\approx B=\varDelta tM^{-1}\) (Lacis _et al._, 2016, cf. SS 3), with \(M=\hat{M}R^{-1}\), \(\hat{M}\equiv[(\varDelta x_{i}+\varDelta x_{i-1})/2,\,0;0,\,(\varDelta y_{j}+\varDelta y_{j-1})/2]\) a diagonal matrix. The solution procedure of (2.6) is performed via a block-LU decomposition following a three-step procedure
\[\begin{bmatrix}A&0\\ 0&I_{B}\end{bmatrix}\begin{Bmatrix}\mathbf{q}^{*}\\ \mathbf{v}_{C}^{*}\end{Bmatrix}=\begin{Bmatrix}\mathbf{r}^{n}\\ \mathbf{r}^{n}_{B}\end{Bmatrix}, \tag{2.7a}\] \[\begin{bmatrix}G^{T}BG&G^{T}BE^{T}\\ EBG&EBE^{T}+N_{B}^{T}I_{B}^{-1}N_{B}\end{bmatrix}\begin{Bmatrix}\mathbf{\varphi}^{n+1/2}\\ \mathbf{\tilde{f}}^{n+1/2}\end{bmatrix}=\begin{Bmatrix}G^{T}\mathbf{q}^{*}\\ E\mathbf{q}^{*}+N_{B}^{T}\mathbf{v}_{C}^{*}\end{Bmatrix}, \tag{2.7b}\] \[\begin{Bmatrix}\mathbf{q}^{n+1}\\ \mathbf{v}_{C}^{n+1}\end{Bmatrix}=\begin{Bmatrix}\mathbf{q}^{*}\\ \mathbf{v}_{C}^{*}\end{Bmatrix}-\begin{Bmatrix}BG\mathbf{\varphi}^{n+1/2}+BE^{T}\mathbf{\tilde{f}}^{n+1/2}\\ I_{B}^{-1}N_{B}\mathbf{\tilde{f}}^{n+1/2}\end{Bmatrix}. \tag{2.7c}\]
Here, \(\mathbf{q}^{*}\) in (2.7a) is solved for via a well-tested factorisation procedure (see e.g. Verzicco & Orlandi, 1996). The solutions for \(\mathbf{\varphi}^{n+1/2}\) and \(\mathbf{\tilde{f}}^{n+1/2}\) are obtained via the PETSc library (Balay _et al._, 1997, 2019) using the algebraic multigrid method BoomerAMG as the preconditioner and the generalised minimal residual method (GMRES) to solve (2.7b). This combination of solvers was found to be robust and to converge within 12 to 17 iterations depending on the selected grid size and time step. The relative tolerance was set to \(10^{-13}\) to obtain solutions that satisfy the divergence-free and no-slip conditions up to the limit of double-precision calculations. Once the solution vector is found, we update the pressure field (cf. Verzicco & Orlandi, 1996)
\[\mathbf{p}^{n+1}=\mathbf{p}^{n}+\mathbf{\varphi}^{n+1/2}-\frac{\Delta t}{2\mathit{Ga}}\hat{L}\mathbf{\varphi}^{n+1/2}. \tag{2.8}\]
The additional solving routines for equations (2.7b) and (2.7c) were tested to ensure that they yield machine-precision solutions satisfying the divergence-free and no-slip conditions (defined in (2.2b)). The overall solution procedure was found to provide a first-order convergence rate in the \(L_{2}\) norm for the velocity field and first-order accuracy in time (owing to the approximation of \(A^{-1}\approx\Delta tM^{-1}\)). Multiple validations for fixed and freely rising cylinders showed good agreement with previous numerical and experimental studies. For further details on the immersed boundary projection approach, we refer the reader to Taira & Colonius (2007) and Lacis _et al._ (2016).
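To make the structure of the solution procedure more tangible, the following dense-matrix sketch (purely didactic: the actual solver operates on sparse operators and uses the PETSc solvers mentioned above; the operator names follow the notation of this section, the function name is our own) spells out the three steps:

```python
import numpy as np

def block_lu_step(A, B, G, E, NB, IB, r, rB):
    """One time step of the approximate block-LU solution procedure,
    written with dense arrays purely for illustration."""
    # Step 1: provisional fluid fluxes and body velocity (momentum predictors)
    q_star = np.linalg.solve(A, r)
    v_star = np.linalg.solve(IB, rB)
    # Step 2: coupled problem for the incremental pressure phi and the IBM forces f
    S = np.block([[G.T @ B @ G,              G.T @ B @ E.T],
                  [E   @ B @ G, E @ B @ E.T + NB.T @ np.linalg.inv(IB) @ NB]])
    rhs = np.concatenate([G.T @ q_star, E @ q_star + NB.T @ v_star])
    sol = np.linalg.solve(S, rhs)
    phi, f = sol[:G.shape[1]], sol[G.shape[1]:]
    # Step 3: projection / correction of fluid and body velocities
    q_new = q_star - B @ (G @ phi + E.T @ f)
    v_new = v_star - np.linalg.inv(IB) @ (NB @ f)
    return q_new, v_new, phi, f
```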
### Data set and processing
In this work, a total of 938 cases have been simulated for different combinations of the four control parameters: \(\mathcal{T}\), _Ga_, \(\Gamma\) and \(I^{*}\). The main goal is to investigate the effect of the COM offset in combination with the other parameters. For this, we varied \(\mathcal{T}\) between 0 and a maximum of 0.6, _Ga_ in the range from 50 to 2000, \(\Gamma\) between 0.001 and 5, and \(I^{*}\) from 0.5 to 16. Compiled input and output parameters for all cases in our data set are included in the supplementary materials.
All results presented in this study are obtained after a statistically steady state has been reached. To ensure this, first, a moving average is computed of the time trace of the vertical velocity with an averaging window much larger than a single period of the typical fluctuations. This processed signal is compared to the terminal velocity of that case (determined by the average of the last 10% of the time signal). The initial transient is considered to have ended once the filtered time signal deviates less than 5% from this terminal velocity. For \(\mbox{{Ga}}=200\), this typically is the case after a transient time of \(60D/V_{b}\), which is short compared to the total average simulation time of \((1.9\times 10^{3}\,D/V_{b})\).
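A minimal sketch of this criterion (our own rendering of the post-processing logic described above, not the original script; the window length and tolerances are parameters) could read:

```python
import numpy as np

def transient_end_time(t, vy, window, tol=0.05, tail=0.1):
    """End of the initial transient: moving-average filter the vertical
    velocity, compare with the terminal velocity (mean of the last `tail`
    fraction of the signal) and return the first time at which the filtered
    signal deviates by less than `tol` from that value."""
    vy_filt = np.convolve(vy, np.ones(window) / window, mode="same")
    v_term = vy[int((1.0 - tail) * len(vy)):].mean()
    settled = np.abs(vy_filt - v_term) < tol * np.abs(v_term)
    return t[np.argmax(settled)]
```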
A number of different properties are derived from the simulations to characterise kinematics and dynamics of the particle path and the surrounding flow field. In the following, we describe the procedures used to extract these in detail.
The frequency \(f\) of the horizontal path oscillations is determined by the peak of the power spectrum of \(v_{x}(t)/V_{b}\), for which we applied local peak fitting in order to increase the accuracy of the estimated \(f\). In the case of multiple peaks, the most prominent one is used in subsequent analysis and data visualisation. Some specific cases featuring multiple peaks are discussed in SS 4.2. The obtained values of \(f\) were cross-checked with an autocorrelation analysis of \(v_{x}(t)/V_{b}\), which was found to yield almost identical results in all cases.
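The frequency-extraction step can be sketched as follows (our own illustration; the exact local peak-fitting used in the study may differ in detail):

```python
import numpy as np

def path_frequency(vx, dt):
    """Dominant path-oscillation frequency from the one-sided power spectrum
    of vx(t), refined by three-point parabolic interpolation around the peak."""
    vx = vx - vx.mean()
    spec = np.abs(np.fft.rfft(vx))**2
    freqs = np.fft.rfftfreq(len(vx), d=dt)
    k = np.argmax(spec[1:]) + 1                   # skip the zero-frequency bin
    k = min(max(k, 1), len(spec) - 2)             # keep the 3-point stencil in range
    y0, y1, y2 = np.log(spec[k - 1:k + 2])        # local (parabolic) peak fit
    shift = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return freqs[k] + shift * (freqs[1] - freqs[0])
```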
Some additional processing is required to obtain oscillation amplitudes of the particle rotation and translation due to drift present in the time signals of \(\mbox{{x}}_{p}(t)\) and \(\theta\). The reference \(\theta=0\) is either defined by the direction of the offset or by the initial orientation in the case of zero offset without loss of generality. To correct for the slow drift present for some of the cases, we employ a moving averaging filter on the signal with a window size of approximately \(1/f(Ga,\Gamma,\mathcal{T},I^{*})\), or one full oscillation time. Thus, we obtain a 'centre-line' (\(\mbox{{x}}_{cl}(t)\), with horizontal mean drift velocity \(v_{d}=\langle|\mbox{{d}}\mbox{{x}}_{cl,\,x}/\mbox{{d}}t|\rangle\), documented in the supplementary data) which is subtracted from the actual position and orientation time signal to remove any low frequency effects. The absolute value of the signal processed this way is used to determine a list of the individual peak amplitudes (_A_) for the path and (\(\theta\)) for the rotational oscillations, the mean of which is denoted by \(\hat{A}\) and \(\hat{\theta}\), respectively. Note that, as a consequence, this can mean that the particle does not exhibit rotational oscillations around \(\theta=0\) (where \(p\) is pointing upwards). Instead, especially for small offsets we observed a behaviour where \(\theta\) might drift away from \(\theta=0\) followed by a large rotation back to the reference state when the rotational amplitude becomes large.
The phase lag \(\Delta\phi\) between the horizontal component of the Magnus lift force \(\mbox{{F}}_{m}\) and the horizontal body acceleration \(\mbox{{a}}_{x}\) is calculated via cross-correlation of these quantities.
Similarly, \(\Delta\psi\) denotes the phase lag between the angular acceleration \(\alpha\) and the fluid torque \(T_{f}\). The lag obtained from the cross-correlation is divided by the length of an oscillation period \(1/f\) and then expressed as a phase angle ranging from -180\({}^{\circ}\) to 180\({}^{\circ}\). The respective components that define \(\Delta\phi\) are illustrated in figure 1(_b_). Figure 1(_c_) provides three examples of signals with varying \(\mathcal{T}\) showing a negative, zero and positive value of \(\Delta\phi\).
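The phase-lag computation described here can be sketched as follows (assuming uniformly sampled, statistically steady signals; the sign convention of the lag is a choice and the function name is ours):

```python
import numpy as np

def phase_lag_deg(a, b, dt, f):
    """Phase lag between two periodic signals a(t) and b(t), e.g. the horizontal
    Magnus force and the horizontal acceleration, from their cross-correlation,
    expressed as an angle in (-180, 180] degrees for oscillation frequency f."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(a, b, mode="full")
    lag = (np.argmax(corr) - (len(a) - 1)) * dt    # time shift of best alignment
    phase = 360.0 * f * lag                        # fraction of a period, in degrees
    return (phase + 180.0) % 360.0 - 180.0         # wrap into (-180, 180]
```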
## 3 General effect of the COM offset on dynamics and kinematics
### Particle kinematics and wake structures
We present figure 2 to give an impression of how the wake patterns and particle kinematics change in the presence of COM offset. These snapshots display the non-dimensionalised fluid vorticity field (\(\omega_{z}=\partial_{x}u_{y}-\partial_{y}u_{x}\)) along with particle tracks (black lines) for six different COM offsets in the range \(\gamma\in[0,\,1.23]\) (increasing from left to right). All cases here are for \(\mathit{Ga}=200\), \(\Gamma=0.5\), and \(I^{*}=1\).
The cylinder with zero COM offset in figure 2\(a\) is seen to rise almost straight, with regular vortex shedding occurring at the same frequency as the path oscillations. This vortex pattern, where two single vortices of opposite vorticity are shed during a single oscillation cycle, is the so-called "2S" mode (Williamson & Roshko, 1988). No visible effect of the COM offset is observed for cases up to \(\mathcal{T}=0.174\) (figure 2_b_), but beyond this value, e.g. \(\mathcal{T}=0.201\) shown in panel figure 2\(c\), remarkably different kinematics are encountered. Both amplitude and wavelength of the path oscillations are significantly
Figure 2: Snapshots of particle trajectories and wake structures of rising cylinders with Galileo number \(\mathit{Ga}=200\) and density ratio \(\Gamma=0.5\) for six different centre-of-mass offsets \(\gamma\) (and \(\mathcal{T}\)) (_a–f_). The offset increases from left to right as indicated by the listed parameters in the top left of each subfigure. Particle trajectories are indicated by the black lines, the grid spacing has dimensions of the particle diameter \(D\). Coloured contours represent the normalised vorticity field (\(\omega_{z}D/V_{b}\)).
larger in this case, and the wake now exhibits an irregular vortex shedding pattern as can be seen in supplementary video 1. For this case it is observed that the wake structure intermittently switches between several modes: (i) path oscillations with no significant shedding events, (ii) one single vortex pair as in the 2S-regime per oscillation, or (iii) even two vortex pairs as typically found in the so-called "2P"-mode. Note that these different modes appear to alternate without any noticeable long time scale pattern. This chaotic shedding pattern occurs for cases close to what we will call "resonance", where the rotational forcing induced by the path oscillations occurs at the same frequency as the inherent pendulum time scale. For even higher values of \(\mathcal{T}\) beyond resonance, represented by \(\mathcal{T}=0.285\) in panel figure 2\(d\), we observe that the large amplitude path oscillations persist albeit with a reduced wavelength. Further, the vortex shedding returns again to an unperturbed 2S mode now with staggered vortex cores due to the strong path oscillations. Finally, with even larger offsets, figure 2(_e_,_f_), the amplitude of the path oscillations begins to gradually reduce, returning to a state very much like that for the zero offset case (see supplementary video 1 for \(\mathcal{T}>0.3\) cases). The results shown here are representative of the \(\Gamma\)-range where the resonance phenomenon is present. In the following, we will evaluate how this resonance behaviour depends on all of the other governing parameters.
### On the importance of fluid inertia
In order to investigate the effect of fluid inertia, we first consider the mean rotational amplitude (\(\hat{\theta}\)) for a constant density ratio (\(\Gamma=0.6\)) as a function of the timescale ratio \(\mathcal{T}\) as shown in figure 3(_a_). Focusing initially on the case _Ga_\(=200\) (corresponding to figure 2), we see that at \(\mathcal{T}=0\) the rotational amplitude is small \(\hat{\theta}=0.4^{\circ}\). Introducing a small amount of offset results in a marginal increase in this amplitude up to \(\hat{\theta}=1.4^{\circ}\) at \(\mathcal{T}=0.16\). However, around \(\mathcal{T}=0.2\), there is a strong increase reaching a maximum amplitude of more than \(\hat{\theta}=35^{\circ}\) at \(\mathcal{T}=0.225\). This rapid increase is associated with the resonance phenomenon that was already visible in figure 2(_c_). Beyond this point, the amplitude decreases gradually with increasing \(\mathcal{T}\).
Figure 3: (_a_) Mean rotational amplitude \(\hat{\theta}\) for different Galileo numbers versus the timescale ratio \(\mathcal{T}\), here \(\Gamma=0.6\) and \(I^{*}=1\). (_b_) Schematic showing the parameters of the fluid inertia model. (_c_) \(\hat{\theta}\) for the same cases as in (_a_) plotted against the modified timescale ratio \(\tilde{\mathcal{T}}\), which includes the effects of a Galileo number dependent added fluid inertia as per (1). (_d_) Thickness of the fluid inertia layer \(\delta\) and (_e_) added inertia as a function of _Ga_, based on empirical collapse of the data.
When comparing results across different _Ga_ numbers, the same characteristic behaviour is observed for all cases in figure 3(_a_). However, the value of \(\mathcal{T}\) at which resonance appears (marked by the steep increase in \(\hat{\theta}\)) is consistently shifted towards higher values as _Ga_ decreases. Such a variation with _Ga_ is not surprising since the definition of \(\mathcal{T}\) does not incorporate any viscous effects. However, for low values of _Ga_ one would expect the Stokes layer surrounding the particle to contribute significantly to the total rotational inertia of the body and thereby also to modify the particle pendulum timescale. We can account for this effect by additionally including the rotational inertia \(I_{a}\) resulting from the Stokes layer with thickness \(\delta\) as illustrated in figure 3(_b_) in our analysis. As a result, the modified pendulum frequency and timescale ratio of the system become
\[\tilde{f}_{p}=\frac{1}{\pi}\sqrt{\frac{\gamma g}{D(I^{*}+I_{a}^{*}/\Gamma)}}, \tag{1a}\] and \[\tilde{\mathcal{T}}=\frac{1}{\pi}\sqrt{\frac{\gamma}{\left|1-\Gamma\right|(I^{*}+I_{a}^{*}/\Gamma)}}, \tag{1b}\]
respectively. Here \(I_{a}^{*}\) is the dimensionless fluid inertia defined as \(I_{a}^{*}\equiv 8I_{a}/(m_{f}D^{2})\), the ratio of the Stokes layer's rotational inertia to that of the displaced fluid. The total rotational inertia is thus given by \(I^{*}+I_{a}^{*}/\Gamma\). We assume that the thickness of this Stokes layer scales as \(\delta\sim 1/\sqrt{Ga}\) (Williamson & Brown, 1998; Schlichting & Gersten, 2003; Mathai _et al._, 2018), which for a cylinder leads to
\[I_{a}^{*}(\textit{Ga})=\frac{8c_{1}}{\sqrt{\textit{Ga}}}+\frac{24c_{1}^{2}}{ \textit{Ga}}+\frac{32c_{1}^{3}}{\textit{Ga}^{3/2}}+\frac{16c_{1}^{4}}{\textit {Ga}^{2}}, \tag{2}\]
with \(c_{1}\) as the only free parameter. We find that choosing \(c_{1}=2.3\) results in a reasonable collapse of the resonance regime for different _Ga_ when plotting \(\hat{\theta}\) against \(\tilde{\mathcal{T}}\) as shown in figure 3(_c_). The corresponding thickness of the Stokes layer and magnitude of the added fluid inertia as a function of _Ga_ are provided in figures 3 (_d, e_), respectively. For _Ga_\(=200\), the thickness of the fluid layer is approximately \(0.33\) particle radii and the rotational inertia amounts to about \(2\) times that of the displaced fluid. Beyond _Ga_\(=\mathcal{O}(10^{3})\), the value of \(I_{a}^{*}\) changes much more slowly, explaining the weak _Ga_ dependence at higher _Ga_ observed in figure 3(_a_) as well as in previous work on spheres (Will & Krug, 2021_b_). Note, however, that \(I_{a}^{*}\) is still \(0.72\) times the rotational inertia of the displaced fluid at _Ga_\(=1000\) for cylinders and therefore by no means negligible. We performed an estimate of the history torque to confirm that the obtained values for \(I_{a}^{*}\) are realistic. A complete discussion of this for both cylinders and spheres is provided in Appendix A.
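For reference, the added-inertia correction and the resulting modified timescale ratio are straightforward to evaluate; the sketch below (our own, using the fitted value \(c_{1}=2.3\)) reproduces the numbers quoted above:

```python
import numpy as np

def added_inertia(Ga, c1=2.3):
    """Dimensionless rotational inertia of the Stokes layer, I_a*(Ga), eq. (2),
    for a layer of thickness delta = c1 * D / sqrt(Ga) around a 2D cylinder."""
    x = c1 / np.sqrt(Ga)
    return 8*x + 24*x**2 + 32*x**3 + 16*x**4

def modified_timescale_ratio(gamma, Gamma, I_star, Ga, c1=2.3):
    """Timescale ratio including the fluid-inertia correction, eq. (1b)."""
    I_a = added_inertia(Ga, c1)
    return np.sqrt(gamma / (abs(1 - Gamma) * (I_star + I_a / Gamma))) / np.pi

# Consistency check with the values quoted in the text:
# added_inertia(200)  ~ 2.1  (about twice the displaced-fluid inertia)
# added_inertia(1000) ~ 0.72
```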
### Who's driving?
When considering the right hand side of (2), there are two potential drivers of the rotational motion, the viscous torque \(T_{f}\) and the translational-rotational coupling term \(\mathbf{a}_{C}\times\mathbf{p}\), the latter being a consequence of the COM offset. Here, we will investigate their respective role with respect to the resonance behaviour. We know from the analysis in SS3.2 that the maximum in rotational amplitude is related to resonance between the vortex shedding timescale \(\tau_{v}\) and the pendulum timescale \(\tau_{p}\). However, both the viscous and translational driving will occur at the vortex shedding frequency, such that a distinction of their effects is not possible on this basis only. Answering the question of 'who's driving' also provides insight in the effectiveness of COM offset in specific regimes of motion.
In order to untangle the effects of both contributions, simulations were performed where, after a statistically steady state had been reached, the \((\mathbf{a}_{C}\times\mathbf{p})\)-term was turned
off in the integration of (2). In figure 4 (_a_), the horizontal component of the particle velocity \(v_{x}\) is shown as a function of time for these runs at _Ga_\(=200\), \(\Gamma=0.5\), and \(\mathcal{T}=0\) (grey line) and \(\mathcal{T}=0.285\) (red line). At \(t=0\), the coupling term \(\mathbf{a}_{C}\times\mathbf{p}\) is turned off for the case with offset. Figure 4 (_b_) displays the rotation rate \(\omega\) for the same simulations. These results clearly indicate that as the coupling-term is turned off, the particle dynamics return to those of the case without offset. Note here that the pendulum term ( \(\mathbf{e}_{y}\times\mathbf{p}\)) is still present for \(t\geq 0\), but evidently it has no effect without translational driving of the rotational dynamics. Therefore, we conclude that the rotational resonance phenomenon is linked to the translational coupling. As a consequence, we expect COM offset to have no impact on particle dynamics for cases where no horizontal path oscillations (i.e. no horizontal accelerations) are present, e.g. at low \(Ga\). This also suggests that the resonance behaviour might also be triggered by outside periodic forcing, as would be present in a turbulent flow environment. It would be interesting to study how the settling/rising velocities of low Galileo number bodies with COM offset are affected in turbulence via this mechanism.
On the role of \(T_{f}\), it is further instructive to consider the phase lag \(\Delta\psi\) between \(T_{f}\) and the rotational acceleration \(\alpha\), which is shown in figure 4 (_c_) for the full range for \(\Gamma\) and \(\mathcal{T}\) at _Ga_\(=200\). For zero or very small offsets, \(T_{f}\) is driving the (weak) rotational dynamics as evidenced by \(\alpha\) and \(T_{f}\) being close to in phase. However, as the offset increases towards resonance and beyond, \(\Delta\psi\) switches swiftly to values close to \(180^{\circ}\), such that the viscous torque predominantly acts as damping in these cases. In essence, these trends also hold for higher _Ga_. However, the dynamics become somewhat more chaotic at higher _Ga_, as will be shown in SS6, resulting in slightly lower values of \(\Delta\psi\) on the order of \(120\) to \(150^{\circ}\).
## 4 The effects of density ratio on COM offset
### Frequency of oscillation
In the following sections, we will discuss how the effect of the COM offset varies with the density ratio. In doing so, we focus on the representative case of _Ga_\(=200\) and \(I^{*}=1\). We first consider the frequency of the path oscillations (\(f\)), as this parameter also corresponds to the frequency at which the rotational dynamics are forced. In figure
Figure 4: (_a_, _b_) Results for \(Ga=200\) and \(\Gamma=0.5\) for two cases; one without offset (grey line) and one with offset (red line). (_a_) Dimensionless horizontal velocity of the cylinder (\(v_{x}\)) and (_b_) dimensionless rotation rate (rad.) versus dimensionless time. During these runs at \(t=0\) the \(\mathbf{a}_{C}\times\mathbf{p}\) term for (2) is turned off, showing that in absence of this coupling term the dynamics of particles with offset almost completely revert back to those of particles without offset. (_c_) Phase lag \(\Delta\psi\) between the rotational particle acceleration (\(\alpha\)) and the viscous torque (\(T_{f}\)).
5(_a_), we plot the data in the form of the Strouhal number
\[\mathit{Str}=\frac{fD}{V_{b}}, \tag{1}\]
as a function of \(\mathcal{T}\) and \(\Gamma\). The marker colour in the figure indicates the exact \(\mathit{Str}\) obtained from the simulations as can be read from the legend, the iso-contours and background colours represent a linear interpolation of this data. The transparent white area bordered by the black dashed line indicates the region where \(\gamma>\sqrt{0.5}\). This corresponds to the theoretical state where \(I_{G}\), the MOI of the particle around the centre-of-mass, is zero in accordance with the parallel axis theorem (\(I_{C}=I_{G}+m_{p}\ell^{2}\)) as a consequence of keeping \(I_{C}\equiv m_{p}D^{2}/8=const.\) (i.e. \(I^{*}=1\)). Furthermore, we also add a line where \(\gamma=1\), i.e. when the point \(G\) lies on the particle edge. Results within this marked region are therefore not physically viable, yet still satisfy the governing equations.
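For completeness, the quoted bound follows directly from the parallel axis theorem with \(\ell=\gamma D/2\) and the constraint \(I_{C}=m_{p}D^{2}/8\):
\[I_{G}=I_{C}-m_{p}\ell^{2}=\frac{m_{p}D^{2}}{8}\left(1-2\gamma^{2}\right)=0\quad\Longrightarrow\quad\gamma=\sqrt{0.5}.\]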
Considering the zero offset (\(\mathcal{T}=0\)) cases in figure 5(_a_) first, we find that \(\mathit{Str}\) varies quite significantly from \(\mathit{Str}=0.195\) at high density ratios (\(\Gamma\geq 0.2\)) to \(\mathit{Str}=0.127\) for \(\Gamma\leq 0.1\). This transition appears to be quite sudden, suggesting the existence of critical density ratio as previously observed for rising and settling blunt bodies (Namkoong _et al._
Figure 5: Results for the path oscillation frequency (\(f\)) of rising particles at \(Ga=200\) as a function of \(\mathcal{T}\) and \(\Gamma\). In (_a_), the marker colour indicates the dimensionless frequency (\(\mathit{Str}=fD/V_{b}\)) according to the colour bar provided below. The marker type indicates the different regimes in terms of the resonance behaviour discussed in the following. The isocontours are based on a linear interpolation of the data. Dashed white lines represent isocontours of \(\tilde{\mathcal{T}}\), the timescale ratio including effects of fluid inertia. (_b_) Horizontal particle position over the cylinder diameter (\(x/D\)) as a function of dimensionless time, grouped by three values of \(\mathcal{T}\), for three values of \(\Gamma\) as indicated by the line colours, showing the characteristic behaviour for each. (_c_) Ratio of the frequency (\(f\)) of the path oscillations over the pendulum frequency \(f_{p}\) vs. the timescale ratio \(\tilde{\mathcal{T}}\). Here the marker colour indicates \(\Gamma\) as listed in the legend below the figure. The two dashed black lines show a constant value of \(\mathit{Str}\). Both of these show the collapse of COM and \(\Gamma\) effects in terms of this parameter. We further see that the results also collapse with the results from spheres with COM offset (Will & Krug, 2021_b_) shown as black symbols. The inset of the figure shows the same data as (_a_) plotted as \(\mathit{Str}\) vs. \(\tilde{\mathcal{T}}\).
2008; Horowitz and Williamson 2010\(b\); Mathai et al. 2017; Auguste and Magnaudet 2018; Will and Krug 2021_a_). The change in the path frequency at the lowest \(\Gamma\) is also associated with an increase in the path amplitude as is evident from the trajectories at \(\mathcal{T}=0.15\) (which resemble those at \(\mathcal{T}=0\)) in figure 5(_b_).
Now let us consider the effects of COM offset for varying \(\Gamma\) (i.e. moving vertically in the figure). Depending on \(\Gamma\), three distinct effects of increased offset can be observed. First of all, for \(\Gamma\geq 0.9\) (marked by square symbols throughout), we find that increasing COM offset has almost no effect on the oscillation frequency. There is only a slight decrease for \(Str\) at extreme offset as can best be seen in the inset of figure 5(_c_). The general lack of response to COM offset in this regime can be explained by considering the rotational equation of motion as presented in (4). Remember here that the system is similar to a driven damped harmonic oscillator where the pendulum term is analogous to the spring stiffness, the viscous torque is the damping term, and the accelerated reference frame (\(\mathbf{a}_{C}\times\mathbf{p}\)) provides the driving. The restoring torque is proportional to \(|1-\Gamma|^{-1}\) and therefore goes to infinity for \(\Gamma\to 1\). This is not the case for the driving term which scales according to \(\Gamma^{-1}\). Therefore, when the body becomes close to neutrally buoyant, the pendulum torque goes to infinity and as a result the forcing can not rotate the body significantly enough to induce any circulation. Therefore, there will be no Magnus force and no rotational-translational feedback loop leading to resonance. Thus, for the cases where \(\Gamma\) is close to unity, the oscillation frequency (as well as other output parameters) of the body remain unaffected by the offset.
The second regime is characterised by a sharp transition in particle dynamics where in a narrow range of \(\mathcal{T}\) the dynamics switch between the base state (near identical to \(\gamma=0\)) and the resonance state. This is best shown in the inset of figure 5(_c_) where we see that at low values of \(\tilde{\mathcal{T}}\) for intermediate density ratios (\(0.2\leq\Gamma\leq 0.8\)) \(Str\) stays constant at approximately \(0.195\) (upper branch). However, as the offset increases there is a sharp jump to the lower branch of \(Str\). The upper branch corresponds to a system state with minimal body rotation and translation, and in the lower branch the vortex shedding latches on to body motion and is affected by the pendulum frequency. The cases \(\Gamma=0.2\) and \(0.8\) are edge cases and show characteristics of their neighbouring regimes.
Finally, the third regime is characterised by a gradual transition to the resonance state and is encountered for \(\Gamma\leq 0.1\). Here we find that even at zero offset these cases are already following the lower branch in figure 5(_c_). Since the particle is already exhibiting path oscillations and minute rotational oscillations even at zero offset, no critical threshold of offset needs to be exceeded for the coupling to begin occurring. For these cases, even at \(\tilde{\mathcal{T}}\) below resonance, we already see offset affecting the particle dynamics. The footprint of these three regimes is also evident in the amplitude and spatial path frequency as shown in figure 5(_b_). For high \(\Gamma\), there is no effect of increasing offset, at intermediate density ratios we see a large difference between different \(\mathcal{T}\), and at low \(\Gamma\), we observe large path oscillations even at small/zero offset.
Beside the jump at the onset of resonance, \(Str\) also varies significantly beyond the resonance state. The isocontours of \(Str\) in this region approximately follow the lines of constant \(\tilde{\mathcal{T}}\) (white dashed lines) and in particular the minimum of \(Str\) coincides roughly with \(\tilde{\mathcal{T}}=0.08\). The correlation between \(Str\) and \(\tilde{\mathcal{T}}\) is explicitly shown in the inset of figure 5(_c_). This plot also highlights the existence of two branches of the system state and the fact the COM offset can trigger a transition between the two, indicated for each \(\Gamma\) by the coloured dashed section of the lines. The collapse of the data on these two curves is not trivial and underlines the validity of the Stokes layer argument at the core of the definition of \(\tilde{\mathcal{T}}\). As \(\tilde{\mathcal{T}}\) becomes very large, \(Str\) appears to return to the trend at large \(\Gamma\) where higher density ratios have a slightly higher \(Str\). It is further clear that while \(\tilde{\mathcal{T}}\) is
the relevant parameter to describe the behaviour after the transition from the low \(Str\) to the high \(Str\) state, the transition itself does not coincide with \(\tilde{\cal T}=\mbox{const.}\), but occurs at lower values of \(\tilde{\cal T}\) for lower \(\Gamma\). The case with the lowest density of \(\Gamma=0.001\) spans only a tiny range in terms of \(\tilde{\cal T}\) even for the largest offsets. This could explain why there is no noticeable variation in the particle behaviour for this density ratio even at large offsets. However, since the driving term is proportional to \(1/\Gamma\), it will likely dominate the pendulum torque, which does not diverge for small \(\Gamma\).
As mentioned at the beginning of this section, the frequency of the path-oscillations is important for the driving of the rotational dynamics through equation (2). As with any harmonic oscillator the parameter of prime importance is the ratio of the driving to natural frequency of the system \(f/\tilde{f}_{p}\), which we show in the main panel of figure 5(_c_) as a function of the timescale ratio \(\tilde{\cal T}\). Curves of constant \(Str\) corresponding to the two different states are indicated by the black dashed lines. Importantly, we find that \(f/\tilde{f}_{p}=1\) occurs around \(\tilde{\cal T}=0.11\), which corresponds to the bold white dashed line in figure 5(_a_). We further see in figure 5(_c_) that the path oscillation frequency of the body appears to be drawn towards \(f_{p}\) as it begins to deviate from \(Str=0.127\) to meet \(f/\tilde{f}_{p}=1\), consistent with the so-called lock-in phenomenon (Bishop & Hassan, 1964; Bearman & Obasaju, 1982). The region of (approximate) frequency lock-in, ranging from \(0.09\leq\tilde{\cal T}\leq 0.12\), is indicated by a grey shaded area in the figure background throughout this work. Finally, we included the results for rising and settling spheres with COM offset from the work by Will & Krug (2021_b_) as black circles in figure 5(_c_). The good agreement with the present results suggests that the underlying physics of the resonance mechanism are indeed the same and that results and trends presented here are also relevant for spherical bodies in a 3D flow environment.
### On the transition to resonance
In this section, we will discuss the transition to resonance in the intermediate \(\Gamma\) regime, i.e. for \(0.2\leq\Gamma\leq 0.8\) in more detail. In the range \(0.3\leq\Gamma\leq 0.7\), the transition from the high \(Str\) number mode to the low \(Str\) one occurs within a narrow band of \({\cal T}\). This is also visible from figure 6(_a_), where the power spectra normalised by the maximum amplitude (\({\cal F}\)) of \(v_{x}/V_{b}\) are shown for \({\cal T}=0.159\), \(0.195\), and \(0.225\) in the vicinity of the transition point at \(\Gamma=0.6\) (yellow diamonds, pink pentagons and purple hexagons respectively). For both, \({\cal T}=0.159\) and \({\cal T}=0.225\), the spectra feature singular peaks only at \(Str\approx 0.1\) and \(Str\approx 0.2\), respectively. The former peak also dominates for the intermediate case \({\cal T}=0.195\), however, a weaker secondary peak at \(Str\approx 0.21\) is also seen to emerge at this offset value. Similar trends can be observed across the range \(0.3\leq\Gamma\leq 0.7\) with varying ratios of relative peak height, suggesting that the transition between modes happens in a narrow band of \({\cal T}\), but is not entirely discrete.
In SS4.1, we mentioned that \(\Gamma=0.2\) and \(0.8\) were on the edges of the \(\Gamma\)-range for which a sharp transition to the resonance regime was encountered. We will investigate these cases in more detail here. For \(\Gamma=0.2\), the range of \({\cal T}\) where multiple modes are observed widens significantly as compared to \(0.3\leq\Gamma\leq 0.7\). We observe multiple peaks in the spectra for \(0.138\leq{\cal T}\leq 0.225\) as exemplified by the case shown in figure 6(_a_) (green circles). This extended range is most likely due to the intrinsic rotational and translational oscillations present at \(\gamma=0\) for \(\Gamma\) close to \(0\) and is similar to the cases of \(\Gamma\leq 0.1\). However, looking at figure 5(_c_), we still observe that the cases at \(\Gamma=0.2\) and \(\gamma\approx 0\) follow the upper branch in terms of \(f/f_{p}\) and \(Str\), making the behaviour transitional between the \(\Gamma\)-regimes.
At \(\Gamma=0.8\), we find only a single case (\({\cal T}=0.327\)) for which the system state jumps
to the lower branch in figure 5(_c_). In figure 6(_a_) (blue squares), we observe that this jump is accompanied by a wide range of frequency peaks. The occurrence of multiple peaks originates from a very unique frequency and amplitude-modulation cycle in the fluid-structure interaction, the signature of which is shown in terms of the time-evolution of \(v_{x}/V_{b}\) and \(\omega D/V_{b}\) in figure 6(_b_). For both quantities, a modulation of the amplitude but also of the frequency is evident at timescales much larger than that of the path oscillations. This behaviour stands in stark contrast to the rest of the cases which exhibit very regular periodic motion. Specifically, the parameter combinations that show this characteristic behaviour are: \(\Gamma=0.8\) with \(\mathcal{T}=0.327\) and \(\mathcal{T}=0.365\), and \(\Gamma=0.7\) with \(\mathcal{T}=0.411\). A video showing this behaviour along with the vorticity in the wake can be found in supplementary video 2.
To quantify the frequency variations, we plot the instantaneous _Str_ in figure 6(_c_) based on determining the distance between the maxima and minima in the signal \(v_{x}/V_{b}\). Instances where the frequency is relatively low are indicated by the blue regions in figure 6(_b_,_c_). The corresponding particle kinematics, showing a marked reduction in the lateral amplitude, and the associated vortical structures for the low _Str_ mode are illustrated in figure 6(_d_). In this case, the cylinder rises almost vertically and the length of the attached wake is at its maximum extent. After this, in the transitional period marked in yellow, the path amplitude remains low but the frequency of the oscillations increases. Figure 6(_e_) shows that this is linked to the rapid shedding of the build-up attached wake, quickly
Figure 6: (_a_) Single sided amplitude spectrum (\(\mathcal{F}\)) based on the particle horizontal particle velocity (\(v_{x}/V_{b}\)) normalised by the maximum amplitude. (_b_) Dimensionless horizontal velocity and rotation rate (rad./\(s\)) versus time for \(\Gamma=0.8\) and \(\mathcal{T}=0.327\). We see that the dynamics exhibit a cyclical behaviour on a timescale much greater than that of the vortex-shedding dynamics. This behaviour is split into three parts as indicated by the colours in the background of the figure. (_c_) Instantaneous Strouhal number as a function of time for \(\Gamma=0.8\) and \(\mathcal{T}=0.327\), calculated based on the peak-to-peak times of \(|v_{x}/V_{b}|\). (_d–f_) Vortex shedding and path oscillations correlating to the modes in (_b_, _c_). (_d_) Very low frequency oscillations of minimal amplitude, attached wake is very large. (_e_) The buildup vorticity is rapidly shed in the wake at a high frequency, resulting in small amplitude high frequency path oscillations. (_f_) Slower periodic vortex shedding with larger amplitude path oscillations, the attached vorticity slowly grows throughout this phase until the cycle begins anew.
reducing its length. This state is similar to the dynamics observed at higher \(\Gamma\). Finally, in the period indicated by red shading, the lateral amplitude is relatively large and the oscillation frequency is intermediate. In the corresponding figure 6(_f_), it is seen that over the course of a number of oscillation periods the attached wake slowly grows again until this cycle repeats. Due to the large amplitude and longest duration of this phase, the red region manifests as the strongest peak in the Fourier spectrum and thus determines the result for \(Str\). This behaviour is characteristic for the cases near \(\Gamma=0.8\) and \(\mathcal{T}=0.327\) at this Galileo number and is indicative of the density regime transitions, where the dynamics exhibit signs of both regimes. These observations highlight that in the transition range, multiple states can coexist.
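The instantaneous Strouhal number used in figure 6(_c_) can be reconstructed from the velocity signal along the lines of the following sketch (our own reading of the procedure; successive extrema of \(v_{x}\) are taken to be half an oscillation period apart, and the function name is ours):

```python
import numpy as np
from scipy.signal import find_peaks

def instantaneous_strouhal(t, vx, Vb, D):
    """Instantaneous Strouhal number from peak-to-peak times of |vx/Vb|:
    successive extrema of vx are half a period apart, so the instantaneous
    frequency is 1/(2*dt_pp) between consecutive maxima and minima."""
    peaks, _ = find_peaks(np.abs(vx) / Vb)       # extrema of vx (both signs)
    t_pk = t[peaks]
    dt_pp = np.diff(t_pk)                        # peak-to-peak times (half periods)
    f_inst = 1.0 / (2.0 * dt_pp)                 # instantaneous frequency
    return 0.5 * (t_pk[1:] + t_pk[:-1]), f_inst * D / Vb
```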
### Drag coefficient and Magnus force
In this section, we will concern ourselves with the mean vertical velocity, i.e. the terminal rising/settling velocity, which is of particular practical relevance. We define the drag coefficient \(C_{d}\) obtained from the time-averaged vertical force balance between the buoyancy and the drag force, given that the particle has reached terminal velocity. For a two-dimensional cylinder, this results in
\[C_{d}=\frac{\pi|1-\Gamma|gD}{2\langle v_{y}\rangle^{2}}. \tag{10}\]
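This expression follows from the time-averaged vertical force balance per unit length of the cylinder, using the conventional normalisation of the drag force by \(\tfrac{1}{2}\rho_{f}\langle v_{y}\rangle^{2}D\):
\[|1-\Gamma|\,\rho_{f}\,g\,\frac{\pi D^{2}}{4}=C_{d}\,\tfrac{1}{2}\rho_{f}\langle v_{y}\rangle^{2}D\quad\Longrightarrow\quad C_{d}=\frac{\pi|1-\Gamma|gD}{2\langle v_{y}\rangle^{2}}.\]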
Note that in this definition \(C_{d}\) is essentially a dimensionless rising or settling velocity. When plotted as a function of \(\Gamma\) and \(\mathcal{T}\) (figure 7(_a_)), the drag coefficient exhibits considerable variations almost up to a factor of 4 across the parameter space that are predominantly induced by COM effects. At \(\mathcal{T}=0\), \(C_{d}\) is found to be lowest for \(\Gamma=0.2\). Moving to the right in the figure, i.e. towards increasing \(\gamma\), the previously (SS4.1) defined \(\Gamma\)
Figure 7: (_a_) Particle drag coefficient as a function of the particle-to-fluid density ratio \(\Gamma\) and the timescale ratio \(\mathcal{T}\) for rising particles at \(Ga=200\) and \(I^{*}=1\). (_b_) Correlation between \(C_{d}\) and the phase lag \(\Delta\phi\) between the horizontal Magnus force and the horizontal component of instantaneous acceleration. (_c_) Non-dimensional mean magnitude of the fluctuating velocity component \(v^{*}=\sqrt{\langle\overline{v^{\prime 2}_{C}}\rangle}/V_{b}\) as a function of the timescale ratio \(\mathcal{T}\).
regimes can again be noted. For \(\Gamma\geq 0.8\), no increase in \(C_{d}\) is observed as a consequence of the COM offset. However, for \(\Gamma\leq 0.7\) the resonance behaviour manifests itself in a strong increase in \(C_{d}\). These trends are explicitly plotted in terms of \(\tilde{\mathcal{T}}\) in figure 7 (_b, c_). For \(\Gamma\leq 0.7\), the resonance behaviour reaches a maximum for \(\tilde{\mathcal{T}}\approx 0.11\). This is indicated in figure 7(_a_) by the bold white dashed line, and even more evident from the location of the peak in \(C_{d}\) in figure 7(_b_). The value of \(\tilde{\mathcal{T}}=0.11\) corresponds to \(f/f_{p}^{*}=1\) in figure 5(_c_), i.e. where the driving frequency \(f\) and the pendulum frequency are identical. The magnitude of the peak drag monotonically increases with \(\Gamma\). Beyond \(\tilde{\mathcal{T}}=0.11\), \(C_{d}\) gradually decreases again in all resonance cases. Finally, figure 7(_c_) also shows that for all cases at large offsets, even those that did not exhibit resonance (i.e. \(\Gamma\geq 0.8\)), the drag decreases slightly. It appears that the magnitude of the decrease is inversely correlated with the mass density, resulting in larger reduction for lighter particles (figure 7(_c_)). This phenomenon does not appear to occur at fixed values of \(\mathcal{T}\) or \(\tilde{\mathcal{T}}\). It is noteworthy, though, that a similar drag reduction was encountered around similar values of \(\tilde{\mathcal{T}}\) for settling and rising spheres with COM offset (Will & Krug, 2021_b_).
In previous work by Will & Krug (2021_b_), the connection was made between the drag increase and the maximum in the enhancement of horizontal particle acceleration (\(a_{x}\)) through the rotation-induced Magnus lift force (\(F_{m,\,x}\sim-\omega_{z}v_{y}\)). Besides \(F_{m,\,x}\), lateral accelerations can also be driven by pressure fluctuations induced by the vortex shedding. To study the enhancement of the path oscillations via the Magnus force, we consider the phase lag (\(\Delta\phi\)) between \(F_{m,\,x}\) and \(a_{x}\). Examples of time series with different phase lags for three values of \(\mathcal{T}\) are shown in figure 1(_c_). When \(F_{m,\,x}\) and \(a_{x}\) are in phase (\(\Delta\phi=0\)), the enhancement of the horizontal particle motion by the Magnus force is maximum. The connection between \(\Delta\phi\) and \(C_{d}\) is established for the current data set in the inset of figure 7(_d_). In the main panel of figure 7(_d_), the phase lag is plotted explicitly vs. \(\tilde{\mathcal{T}}\). Again there is excellent collapse of the data onto two branches, representing the oscillating and non-oscillating states, identical to those encountered for \(Str\) in SS4.1. We further see that \(\Delta\phi=0^{\circ}\) around \(\tilde{\mathcal{T}}=0.11\) for all cases where resonance is present, whereas acceleration and Magnus force are significantly out of phase (\(\Delta\phi\approx-60^{\circ}\)) in the same range on the lower branch. This point is emphasised by the inset of figure 7(_d_), where the peaks in \(C_{d}\) are seen to align with \(\Delta\phi=0\). The good agreement with the experimental sphere data of Will & Krug (2021_b_), included in figure 7(_d_), again underlines the fact that the resonance phenomenon in 2D is indeed comparable to that in 3D.
### Drag correlations
The question to what extent the drag of freely rising and settling bodies is correlated to path-oscillations and/or particle rotations is subject to an ongoing discussion. It is indisputable that the presence of horizontal oscillations affects the overall drag coefficient (Horowitz & Williamson, 2010_a_). However, the presence of rotations was also clearly found to play a prominent role (Namkoong _et al._, 2008; Auguste & Magnaudet, 2018; Mathai _et al._, 2018). In the work on spheres with COM offset by Will & Krug (2021_b_), it was shown that the drag correlated better with the mean rotation rate than with the amplitude of the path oscillations or the horizontal velocity fluctuations for cases at or beyond resonance. On the other hand, for zero offset, the drag appeared to correlate equally well with both in the three-dimensional (3D) chaotic regime when varying the MOI of spheres (Will & Krug, 2021_a_) and both of these quantities did not result in an adequate prediction of drag for the spiralling regime.
To add to this, we investigate how variations in \(C_{d}\) relate to the presence and strength of rotational and translational fluctuations in the present data. To this end, we define the dimensionless fluctuating velocity \(v^{*}=\sqrt{\langle\mathbf{v}_{C}^{\prime 2}\rangle}/V_{b}\), where \(\mathbf{v}_{C}^{\prime}=\mathbf{v}_{C}-\langle\mathbf{v}_{C}\rangle\), presented
in figure 8(_a_), and dimensionless root-mean-squared rotation rate \(\omega^{*}=\langle\omega\rangle_{\mathrm{rms}}D/V_{b}\), shown in figure 8(_b_), for all rising cases at \(\mathit{Ga}=200\). Since particle velocity fluctuations are dominated by their horizontal component, they are qualitatively similar to the amplitude of the path oscillations \(\hat{A}/D\). A striking difference between the distributions of \(v^{*}\) and \(\omega^{*}\) concerns the region at low density ratios (\(\Gamma<0.2\)) and small offsets (\(\mathcal{T}<0.2\)), where significant velocity fluctuations, and hence path-oscillations, are present in the almost complete absence of body rotation (a feature that is different from the findings of Mathai _et al._ (2017) as discussed in Appendix B). In figure 8(_c_, _d_), we plot \(C_{d}\) vs. \(v^{*}\) and \(\omega^{*}\), respectively. In all panels of figure 8, the marker edge colour is white if \(\Delta\phi<0\) and black if \(\Delta\phi\geqslant 0\). For a subset of the markers labelled with a white border, an approximately linear scaling of \(C_{d}\) with \(v^{*2}\) can be observed in figure 8(_c_). This linear range corresponds to the region of small offsets and low density ratios featuring path oscillations but importantly no rotations. These cases adhere to the scaling \(C_{d}(v^{*})=2.0036{v^{*}}^{2}+1.1\) as indicated by the dashed black line in figure 8(_c_). However, once rotation begins to become significant, we find that it dominates the drag behaviour. This is demonstrated in figure 8(_d_), from which it is clear that for the markers with black borders \(C_{d}\) is approximately proportional to \(\omega^{*2}\). For cases beyond resonance, results are reasonably well represented by the fit \(C_{d}(\omega^{*2})=0.0024\omega^{*2}+1.15\). We find similar quadratic relationships for the higher Galileo number cases examined in this work, but not in the case of settling particles.
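The quadratic correlations quoted above amount to simple least-squares fits of the form \(C_{d}=a\,x^{2}+b\); a sketch of this fitting step (our own illustration, the underlying data are not reproduced here) is:

```python
import numpy as np

def quadratic_drag_fit(x, Cd):
    """Least-squares fit of the form Cd = a*x**2 + b, as used for the
    correlations of Cd with v* and with omega*."""
    A = np.vstack([x**2, np.ones_like(x)]).T
    (a, b), *_ = np.linalg.lstsq(A, Cd, rcond=None)
    return a, b

# Applied to the non-rotating, low-density-ratio subset this should return
# coefficients close to (2.0, 1.1); for the rotating cases close to (0.0024, 1.15).
```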
### Settling particles
Up until this point, the focus was exclusively on light (rising) 2D cylinders. For heavy particles, it was already demonstrated in Will & Krug (2021_b_) that the feedback between the Magnus lift force and particle acceleration becomes negative, effectively suppressing the resonance mode. This implies that the strong coupling between rotation and translation and the associated drag increase are absent, but not necessarily that the COM offset has no effect at \(\Gamma>1\). To explore this, the density ratio range from \(\Gamma=1.1\) up to 5 was studied with \(\mathcal{T}\) ranging from 0 to the contour \(I_{G}=0\) at \(\mathit{Ga}=200\), 500 and 700 and \(I^{*}=1\). We present only the results for \(\mathit{Ga}=200\) in figure 9(_a-e_) since the trends for the higher Galileo numbers are similar.
Figure 9(_a_) confirms that the drag coefficient does vary as a function of \(\mathcal{T}\) also for settling particles. Yet, the magnitude of the increase in \(C_{d}\) (from around 1.2 to 1.8) is much smaller compared to that observed for rising bodies (from around 1.1 up to 4). Furthermore, the drag increase is more pronounced at larger \(\Gamma\) and contrary to rising cylinders, the contours of constant \(C_{d}\) do not align well with isocontours of either \(\mathcal{T}\) or \(\tilde{\mathcal{T}}\) (dashed white lines), suggesting that the mechanism of drag increase here is not resonance related. This is further evidenced by figure 9(_b_), where the phase lag \(\Delta\phi\) is shown for the same cases, analogous to figure 7(_d_). Unlike for rising particles, there is no monotonic increasing trend between offset and \(\Delta\phi\) and no regime where \(\Delta\phi=0\) can be identified. In fact, the phase lag is strongly positive (between \(90^{\circ}\) and \(135^{\circ}\)) in the regions of elevated \(C_{d}\), which implies that \(F_{m}\) (at least in part) counteracts \(a_{x}\). The drag behaviour for settling particles rather seems correlated with a reduction of the rotational inertia around point \(G\), as the latter tends to zero (black dashed line) for increasing offset due to the fact that we maintain \(I^{*}=1\). In the inset of figure 9(_a_) we show this explicitly by rescaling the horizontal axis according to \(\mathcal{T}/\mathcal{T}|_{I_{G}=0}\). Doing so reveals that
Figure 9: Results for settling (\(\Gamma>1\)) 2D cylinders at \(\mathit{Ga}=200\) and \(I^{*}=1\). With (_a_) showing the drag coefficient, (_b_) the Strouhal number, (_c_) the Phase angle between Magnus force and particle horizontal acceleration, (_d_) the path amplitude and (_e_) the rotational amplitude. White lines in (a) represent isocontours of \(\tilde{\mathcal{T}}\).
the drag is maximum for low, but non-zero, rotational inertia (around \(\mathcal{T}/\mathcal{T}|_{I_{G}=0}=0.9\)). Similarly to rising cylinders, the increase in drag coincides with a decrease in \(Str\) (see figure 9(_c_)) although this effect is much smaller here as compared to the resonance mode encountered for \(\Gamma<1\). Consistent with the trends established for light cylinders at zero offset, the lateral (figure 9(_d_)) and rotational (figure 9(_e_)) amplitudes are also elevated in this parameter region. This behaviour can also be observed in supplementary video 3 showing six values of \(\mathcal{T}\) for \(\Gamma=2.5\). While the magnitudes of the path-oscillations remain significantly lower than those encountered in the resonance regime for \(\Gamma<1\), surprisingly the rotational amplitudes are on a similar level.
The relation between drag and path/rotational oscillations for settling cylinders, shown in figure 8(_c_,_d_), is qualitatively similar to that discussed in SS4.4 for rising bodies. However, the exact scaling of \(C_{d}\) with \(\omega^{*2}\) is not exactly identical. Furthermore, due to the absence of rotational-translational coupling, the observed increase in body rotation is not reflected in a similar increase in the path-oscillations. We suspect that for settling the increase in drag is primarily resulting from the rotational motion given the minute increase in the translational dynamics in this case. Finally, we observe that the magnitudes of \(C_{d}\), \(\hat{A}/D\), and \(\hat{\theta}\) all decrease towards \(\Gamma=1\). This behaviour is consistent with the previous results for \(\Gamma<1\) and the explanation provided in SS4.1, namely the divergence of the pendulum term.
Figure 10: Investigation on the effect of the dimensionless moment of inertia \(I^{*}\) in combination with the timescale ratio (\(\mathcal{T}\)) for \(Ga=200\) and \(\Gamma=0.4\) on the drag coefficient (_a_), Strouhal number (_b_), phase lag (_c_), translational amplitude (_d_), and the rotational amplitude (_e_). In these figures the solid and dashed black lines, respectively, represent contours along which \(\gamma=1\) and \(I_{G}=0\). In (_a_) the inset shows the same data as the main panel, however is plotted against the modified timescale ratio \(\tilde{\mathcal{T}}\) to include effects of the rotational added mass due to the Stokes layer.
## 5 Effects of varying moment of inertia
In this section, we will explore the effects of variations in the particle MOI around \(G\), which thus far has been kept constant. Since particle rotation proved to be a critical aspect in the preceding analysis, it is anticipated that the variations of the MOI will also affect the overall dynamics. We investigate this by varying the dimensionless MOI around the geometric centre (\(I^{*}\)) in the range \(I^{*}\in[0.5,16]\) for the cases of \(\Gamma=0.4\), \(Ga=200\) and \(\mathcal{T}\in[0,0.5]\). The corresponding results for \(C_{d}\), \(\Delta\phi\), \(\mathit{Str}\), \(\hat{A}/D\), and \(\hat{\theta}\) are presented in figure 10. In these figures, the boundaries of the physically feasible region are indicated by the solid black line, where \(\gamma=1\), and the dashed black line, where \(I_{G}=0\). The grey shaded region marks parameter combinations beyond both these two criteria. This region is probed less extensively and therefore no linear interpolation of the data is provided there.
The results for \(C_{d}\) as a function of \(I^{*}\) and \(\mathcal{T}\) in figure 10(_a_) clearly underline the need to include the fluid inertia in the analysis of the problem. This is obvious from the fact that the \(\mathcal{T}\) values at the maximum in \(C_{d}\) show significant variation as a function of \(I^{*}\), while inclusion of the fluid inertia in the definition of \(\tilde{\mathcal{T}}\) resolves this dependence. The latter can be seen from the white dashed lines in figure 10(_a_), but is even more evident from the inset, where the same data is plotted directly vs. \(\tilde{\mathcal{T}}\); the maxima in \(C_{d}\) collapse onto \(\tilde{\mathcal{T}}=0.11\). Besides the dependence on \(\tilde{\mathcal{T}}\), our data further show that higher values of \(I^{*}\) lead to an increased peak drag coefficient at resonance.
In figure 10(_b_), \(\Delta\phi\) is shown for the same data set. Identically to the results described in SS4.3, we find that peak drag occurs for \(\Delta\phi=0\) when the system is in resonance. The phase data is the best way to assess the validity of the inclusion of fluid inertia: as shown in figure 10(_b_), the isocontours of \(\tilde{\mathcal{T}}\) almost exactly match the interpolated \(\Delta\phi\) data, proving the efficacy of this model. Note that this match is obtained with no additional fitting, which also validates our choice for the value of \(I^{*}_{a}\) that was obtained based on the \(Ga\) trends in SS3.2.
In figure 10(_c_-_e_), we present corresponding results for the Strouhal number and for the translational and rotational amplitudes, respectively. Consistent with the observations for \(C_{d}\) and \(\Delta\phi\), isocontours of all these quantities also line up with lines for which \(\tilde{\mathcal{T}}=\mathit{const}\). Also in line with the \(C_{d}\) results, the resonance-induced changes become stronger with increasing \(I^{*}\) in all quantities considered. This behaviour can be understood by considering the system as a driven, damped harmonic oscillator: when the rotational inertia of the system increases, the damping ratio decreases, resulting in larger rotational amplitudes \(\hat{\theta}\). This, in turn, affects the other parameters (\(C_{d}\), \(\mathit{Str}\) and \(\hat{A}/D\)), for which the known effects of rotation become enhanced.
As an aside, we would like to remark on the zero offset case, \(\gamma=\mathcal{T}=0\), which was studied in detail in Mathai _et al._ (2017). Based on their simulations, these authors identified a transition in particle dynamics and vortex shedding mode as a function of \(I^{*}\). We were unable to reproduce such a transition in our simulations. In order to clarify this difference, a direct comparison of these two contradictory results is provided in Appendix B at matching Galileo number (\(Ga=500\)) and overlapping range of \(I^{*}\) and \(\Gamma\). Finally, we would like to point out that the conclusion based on the present data, i.e. that rotation plays a very marginal role in affecting regime transition in absence of a COM offset, is also consistent with the experimental study on the effect of varying MOI for rising spheres by Will & Krug (2021_a_).
## 6 Galileo number effects
In this section we revisit the Galileo number, previously discussed in §3.2; here, however, we take a broader scope and look beyond the effects of fluid inertia. In this work seven Galileo numbers, ranging from 50 up to 2000, were examined for varying COM offset at fixed \(\Gamma=0.6\) and \(I^{*}=1\). These results are presented in figure 11. Furthermore, for \(\mathit{Ga}=500\) and \(700\) we also varied \(\Gamma\) from \(0.001\) up to 5, identical to what was previously presented for \(\mathit{Ga}=200\). The only difference in the results for higher Galileo number settling particles is that for \(0.16<\mathcal{T}<0.19\) a significant horizontal drift is encountered (\(v_{d}/V_{b}>0.1\)), resulting in oblique trajectories. Results for the drift velocity for all cases can be found in the supplementary data.
In figure 11 (_a_) the drag coefficient is shown as a function of \(\mathit{Ga}\) and \(\mathcal{T}\). For all cases depicted here, an increase in drag associated with COM offset is observed. The magnitude of this increase in drag is found to become larger with increasing Galileo number. At \(\gamma=0\), a minimum drag (\(C_{d}=1.2\)) occurs for \(\mathit{Ga}=200\). The increase in \(C_{d}\) towards lower \(\mathit{Ga}\) is related to the viscous dominance in this regime, while for higher \(\mathit{Ga}\) the increase in \(C_{d}\) is associated with increasing path and rotational oscillations. Furthermore, the isocontours of \(\tilde{\mathcal{T}}\), including fluid inertia effects, capture the essential features of the Galileo number dependence of the \(C_{d}\) variation reasonably well. This is highlighted in the inset of figure 11 (_a_), where we rescale the horizontal axis to \(\tilde{\mathcal{T}}\). This also reveals that the onset of resonance occurs at approximately constant \(\tilde{\mathcal{T}}\). Additionally, the range of offsets where \(C_{d}\) is affected extends to larger \(\tilde{\mathcal{T}}\) for increasing \(\mathit{Ga}\), in a similar way as decreasing \(\Gamma\) or \(I^{*}\) would.
Figure 11 (_b_) shows \(\Delta\phi\) for the same parameter range. Here, the \(\tilde{\mathcal{T}}=\mathrm{const.}\) contours match with constant \(\Delta\phi\) predominantly over the range \(-45^{\circ}<\Delta\phi<-15^{\circ}\) (\(0.08<\tilde{\mathcal{T}}<0.11\)). This is a consequence of our choice to base the value of \(I_{a}^{*}\), and more specifically the fitting coefficient \(c_{1}\), on collapsing the rising edge of \(\langle\hat{\theta}\rangle\), figure 3 (_c_). An alternate choice would have been to fit \(c_{1}\) to align \(\Delta\phi=0\) across all \(Ga\). Doing so results in a value of \(c_{1}\) of approximately 0.5 (note that this yields a more than 80% reduction in \(I_{a}^{*}\) for \(\mathit{Ga}<2000\)), resulting in \(\tilde{\mathcal{T}}\) contours overlapping the \(\Delta\phi=0\) isocontour in figure 11 (_b_). This does not alter the conclusion that added rotational mass is responsible for the observed behaviour. Previously discussed results collapse in a similar way for both \(c_{1}=0.5\) and \(c_{1}=2.3\), but the differences highlight the fact that the actual value depends on the particle dynamics and kinematics for a given parameter combination of \(\mathcal{T}\), \(\Gamma\), \(I^{*}\), and \(\mathit{Ga}\) and is, in fact, not even constant in time.
For all Galileo numbers a reduction in the Strouhal number is encountered around \(\tilde{\mathcal{T}}=0.11\), see figure 11 (_d_). This drop is contingent on the presence of rotational oscillations: when rotation is absent the path-oscillation frequency is high, and when it is present it is much lower, identical to the behaviour observed in §4.1. Based on the results here, rotational amplitudes of approximately \(5^{\circ}\) at high Galileo numbers are sufficient to cause a drastic drop in \(\mathit{Str}\).
In figure 11(_e_) the path-amplitude response is depicted. The behaviour here is similar to that of \(C_{d}\) and \(\hat{\theta}\), except for the large increase in path-amplitude at high \(\mathit{Ga}\) and small values of \(\mathcal{T}\). We can see that as \(\mathit{Ga}\) increases, the path amplitude at \(\gamma=0\) also grows for \(\mathit{Ga}\geq 500\), whereas the rotational amplitude remains low in the same parameter region (see figure 11 (_c_)). This behaviour is akin to that observed for spheres as reported in Auguste & Magnaudet (2018); Will & Krug (2021_a_), where a \(\Gamma\) threshold is encountered demarcating the transition between a vertical and chaotic rise mode, notably in the absence of strong rotation. The \(\Gamma\) value of this threshold was shown to increase with \(\mathit{Ga}\), which is what we find here as well, since for \(\mathit{Ga}=200\) this transition was encountered for \(0.1<\Gamma<0.2\), see §4.1. And indeed, for \(\mathit{Ga}\geq 500\) the behaviour of the 2D cylinders is chaotic, with large fluctuations in both path and rotational amplitudes without period-to-period regularity.
To highlight this period-to-period irregularity and to demonstrate the effect of COM offset on these dynamics, we show the path-amplitude (\(\hat{A}/D\)) for \(\mathit{Ga}=700\) in figure 12 (_a_), as well as the standard deviation \(\hat{A}^{\prime}/D\) of this quantity in figure 12 (_b_). For these higher Galileo number cases the magnitude of the fluctuations in the amplitude is comparable to the path-amplitude itself. The irregularity is also much higher than at \(\mathit{Ga}=200\), for which \(\hat{A}^{\prime}/D\leq 0.1\) even for the transitional cases discussed in §4.2. This stresses that the behaviour at higher \(\mathit{Ga}\) is in fact chaotic, and it is therefore quite remarkable that the mean quantities are reasonably well behaved and in line with results for lower Galileo numbers. This can in part be explained by a second observation pertaining to figure 12 (_b_), namely that when the resonance threshold is exceeded, i.e. \(\tilde{\mathcal{T}}>0.11\), the value of \(\hat{A}^{\prime}/D\) drops drastically. The periodicity imposed by the pendulum frequency becomes dominant and stabilises the chaotic motion resulting from the body-fluid interaction. Qualitatively, this chaotic behaviour at low offset and the stabilising effect of large offsets is visible in the supplementary video 4 and supplementary video 5 showing results for \(\mathit{Ga}=500\) and \(700\), respectively.
Finally, we focus on the lowest Galileo number (\(\mathit{Ga}=50\)), where the zero offset case exhibited a vertical (non-oblique) rise mode with superimposed path oscillations. For these cases, the discrete vortex shedding that is characteristic of high \(\mathit{Re}\) flow around a blunt body is no longer observed. Instead, an oscillating wake is encountered as depicted for the \(\gamma=0\) case in figure 12 (_c_), where the particle path and wake pattern (in terms of fluid vorticity) are visualised. Importantly, however, we find that even the small pressure
asymmetry induced by the oscillating wake and the associated small path-oscillations (\(\hat{A}/D=0.06\) at \(\gamma=0\)) combined with COM offset are enough to trigger resonance behaviour similar to that observed for the _Ga_ = 200 case. In figure 12 (_d_), a snapshot of the simulations for the resonance case is shown, featuring larger amplitude path oscillations and a wider wake. Similar to the _Ga_ = 200 case, increasing the offset beyond resonance again leads to a reduction of the amplitude of the oscillations as shown in figure 12 (_e_). Based on this result, we conclude that COM offset will affect the dynamics as long as the base state at \(\gamma=0\) exhibits path-oscillations.
## 7 Summary and conclusions
In this work, we have systematically studied centre-of-mass (COM) offset effects for a freely rising or settling cylinder in a quiescent fluid via direct numerical simulations employing the Immersed Boundary Projection Method. The non-dimensional parameter characterising the COM offset is given by the timescale ratio (5) \(\mathcal{T}\equiv\tau_{v}/\tau_{p}\), with \(\tau_{v}\) defining the vortex shedding frequency timescale (set by the buoyancy velocity and particle diameter), and \(\tau_{p}\) a timescale originating from the "pendulum"-like restoring torque resulting from the offset between the centres of mass and buoyancy (cf. Will & Krug, 2021_b_). The main goal of this work has been to confirm that the behaviour of the COM offset can be predicted in terms of this timescale ratio, which depends on both \(\Gamma\) and \(I^{*}\). Additionally, a dependence on the Galileo number is expected, but this is not reflected in the definition of \(\mathcal{T}\) by Will & Krug (2021_b_). These dependencies could not be adequately confirmed experimentally in previous work due to physical constraints; a numerical study was therefore desirable, since it allows all of the control parameters to be set exactly. Simplification to 2D, i.e. cylinders, allowed us to examine the 4-dimensional
Figure 12: (_a_) Amplitude of the path oscillations for _Ga_ = 700 and \(I^{*}=1\) as a function of \(\mathcal{T}\) and \(\Gamma\). (_b_) Standard deviation of the peak path-amplitude for the same parameter space. (_c_–_e_) Trajectories and wake structure for _Ga_ = 50, \(\Gamma=0.6\), \(I^{*}=1\) and \(\mathcal{T}=0\), \(0.3\), and \(0.6\), respectively. The colour gradient in the wake indicates non-dimensional fluid vorticity (\(\omega_{f}D/V_{b}\)) as indicated by the colour bar.
control parameter space, which is not feasible in 3D. The parameter space studied in this way covers \(0\leq\mathcal{T}\leq 0.6\) for the COM offset, Galileo numbers ranging from 50 up to 2000, \(0.001\leq\Gamma\leq 5\), and \(0.5\leq I^{*}\leq 16\).
First of all, we found that for rising particles the dynamics and response to the offset were qualitatively similar to those of spheres; a resonance mode was encountered at a particular offset for which the particle rotation and drag are significantly enhanced. For increasing offset, this effect slowly reduces towards a case where no rotation is present. This behaviour at larger \(\Gamma\), constant (or large) _Ga_, and constant \(I^{*}\) appeared to be well described by \(\mathcal{T}\) (as was the case for Will & Krug (2021_b_)). However, for lower \(\Gamma\) or _Ga_ we found that an additional effect was playing a role. This was identified as the contribution of rotational fluid inertia (\(I_{a}\)). We modelled this contribution as an annulus surrounding the cylinder, the thickness of which scales according to a boundary layer (\(1/\sqrt{Re}\), which is identical to \(1/\sqrt{Ga}\) when \(C_{d}\) is constant). This hence introduces a Galileo (or Reynolds) number dependence in the definition of the timescale ratio, resulting in (1_b_), which was previously not explored. For rising cylinders, for which the resonance phenomenon is present, this modified timescale ratio (\(\tilde{\mathcal{T}}\)) was seen to capture the trends in the observed resonance behaviour, with resonance occurring at \(\tilde{\mathcal{T}}=0.11\).
Secondly, we find that body rotation is of crucial importance when considering the dynamics and kinematics of a body moving through a fluid. When altering the COM offset, only the rotational equation of motion of the body is affected, and indeed we note a substantial increase in rotation around \(\tilde{\mathcal{T}}=0.11\). But more importantly, this also affects the frequency of oscillation, the path-amplitude, and the drag coefficient (terminal velocity). Altering the COM offset by a couple of percent can induce a more than three-fold increase in \(C_{d}\). This increase in drag can be attributed to an increase in both rotational and path oscillations. However, the effect of rotation is typically far more significant, and instances exhibiting translational oscillations without rotation featured only relatively small drag increases. While \(\tilde{\mathcal{T}}\) describes the behaviour of the output parameters relative to the offset, the magnitude of the variation (such as the drag increase) still depends on the full parameter combination; for instance, the magnitude of the drag increase in resonance still depends on both \(\Gamma\) and \(I^{*}\).
Thirdly, we determined that the driving of the rotational dynamics for bodies with COM offset originates from the torque generated by horizontal path-oscillations, i.e. the \(\mathbf{a}_{C}\times\mathbf{p}\)-term in (1_b_). When switching only this coupling term off in the equations of motion while leaving the pendulum term unchanged, there was almost no difference in the particle behaviour with or without offset for both rising and settling particles. Conversely, the (viscous) fluid torque almost entirely acts as a damping term in the presence of an offset limiting the rotational oscillations. Generally, the magnitude of the viscous torque was found to be too low to drive significant rotations. This applies also at zero offset where the present data did not reproduce the regime transition for varying rotational inertia reported in Mathai _et al._ (2017).
Finally, we confirmed that for \(\Gamma>1\), i.e. settling cylinders, the resonance phenomenon is no longer present. As outlined by Will & Krug (2021_b_), the feedback between the rotation-induced Magnus lift force and the direction of horizontal acceleration becomes negative for heavy particles. Nevertheless, some effects of the offset also exist for \(\Gamma>1\); however, they occur at larger offsets than expected based on their light counterparts, and the effect on \(C_{d}\) is significantly smaller (and, importantly, does not scale according to \(\tilde{\mathcal{T}}\)). Instead of the resonance mechanism, the explanation for this behaviour appears to be related to the reduction of \(I_{G}\) resulting from us enforcing \(I^{*}=1\). Surprisingly, we also uncover that for both rising and settling particles no effect of the COM offset occurs when \(\Gamma\) is around unity. This can be explained by the fact that the ratio of the pendulum torque over the
driving torque, \(\Gamma/|1-\Gamma|\) in (3.1_b_), goes to infinity for \(\Gamma\to 1\). Thus the driving cannot overcome the restoring force of the pendulum, and rotations remain too small to engage the feedback mechanism.
To summarise, we have given a complete overview of how the COM offset effect depends on the four control parameters governing the system for both rising and settling cylinders in a quiescent fluid. The dynamics and kinematics uncovered here in terms of \(\widetilde{\mathcal{T}}\) qualitatively match those reported by Will & Krug (2021_b_) for rising and settling spheres. This suggests that the present findings largely transfer to the behaviour of spherical particles. This especially concerns the behaviour at low Galileo numbers, where fluid inertia will become increasingly important for spheres as well, and can be accounted for analogously via an added mass term.
## Acknowledgements
We acknowledge PRACE for awarding us access to MareNostrum at Barcelona Supercomputing Center (BSC), Spain (Project 2020225335). This project has further received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 950111, BU-PACT), as well as from the Netherlands Organisation for Scientific Research (NWO) under VIDI Grant No. 13477 and through the research programme of the Foundation for Fundamental Research on Matter with project number 16DDS001, which is financially supported by the NWO.
## Declaration of interests.
The authors report no conflicts of interest.
## Appendix A On the value of \(I_{a}^{*}\)
In this appendix we address how the added inertia \(I_{a}^{*}\) may be estimated, with \(I_{a}^{*}\) the added inertia due to the surrounding Stokes layer. The torque induced by this layer is estimated for a body that starts spinning in a quiescent viscous fluid such that squares of the velocity field may be neglected. For a sphere it may be shown that this layer yields an effective torque that can be analytically expressed. For a cylinder the analytical solution takes the form of an infinite series expansion (Basset, 1888). For our analysis we shall assume the torque of the cylinder to take the same form as that of a sphere. The analysis further applies under the condition that the relevant timescale is small enough such that effectively only the history torque contributes (see e.g. Feuillebois & Lasek, 1978; Auguste & Magnaudet, 2018). Here, we introduce a fitting parameter which we estimate through a series of numerical experiments to match the history torque to that of the cylinder.
### History torque of a sphere
Here, we will present the main results for the history torque of a sphere, which is the dominating contribution of the Stokes layer (see e.g. Auguste & Magnaudet (2018)), and then find an approximate analogue history torque for a cylinder. For small time \(t\) the leading contribution of the history torque for a sphere is (see e.g. Feuillebois & Lasek, 1978)
\[T_{h}\approx-\frac{1}{6}\sqrt{\pi}\mu D^{4}\nu^{-1/2}\int_{0}^{t}\frac{d \omega/d\tau}{\sqrt{t-\tau}}d\tau,\] (A.1)
with the rotational velocity \(\omega\) assumed to be continuously differentiable. Discretising the integral in (A.1) for small \(t\), one finds
\[T_{h}\approx-\frac{1}{3}\rho_{f}\sqrt{\pi\nu}D^{4}\Delta\omega/\sqrt{\Delta t}, \tag{14}\]
with \(\Delta\omega\equiv\omega^{n+1}-\omega^{n}\), and the difference between time levels \(n\) and \(n+1\) being equal to \(\Delta t\). If one then introduces
\[\delta=\sqrt{\nu\Delta t/(\pi D^{2})} \tag{15}\]
one can write for the rotational equation (cf. Auguste & Magnaudet, 2018)
\[\frac{1}{10}\Gamma\frac{\Delta\omega}{\Delta t}\sim-2\delta\frac{\Delta\omega }{\Delta t}. \tag{16}\]
### History torque of a cylinder
Our goal is to find the history torque \(T_{h}\) acting on a cylinder. Here, we assume that \(T_{h}\) approximately takes a similar form as that of a sphere described in § A.1. To this end, we consider a rotating cylinder in a viscous fluid that is initially at rest. The flow is assumed to be axisymmetric and squares of the velocity field are neglected, yielding the expression for the azimuthal velocity component \(\hat{u}_{\theta}\) (non-dimensionalised with length scale \(D\) and velocity scale \(\omega D\))
\[\mbox{\it Re}_{\theta}\partial_{\hat{t}}\hat{u}_{\theta}=\partial_{\hat{r}}^ {2}\hat{u}_{\theta}+\hat{r}^{-1}\partial_{\hat{r}}\hat{u}_{\theta}-\hat{u}_{ \theta}/\hat{r}^{2}, \tag{17}\]
with \(\mbox{\it Re}_{\theta}\equiv 0.25\omega D^{2}/\nu\). Here, we solve (17) numerically, using a fourth-order Runge-Kutta scheme for the time discretisation and a second-order central difference scheme for the spatial gradients. In addition, we point out that \(\omega\) was given a fixed value when solving (17). We selected multiple values of \(\mbox{\it Re}_{\theta}\) of \(O(1)\) and integrated up to times such that \(t\ll D^{2}/\nu\) (the time interval used ranged from \(10^{-4}D^{2}/\nu\) down to \(10^{-8}D^{2}/\nu\)). In this time interval we assume the history torque
\[T_{h}\approx-0.25\pi D^{3}\mu c_{2}\;\frac{\omega}{\sqrt{\pi\nu t}}, \tag{18}\]
to be dominating the torque on the cylinder. The torque on the cylinder in (18) has the fitting parameter \(c_{2}\), which we find by calculating the actual torque on the cylinder from our numerical experiment via \(T_{h}=0.25D^{2}\mu\omega\int_{0}^{2\pi}[\partial_{\hat{r}}\hat{u}_{\theta}- \hat{u}_{\theta}/\hat{r}]_{\hat{r}=D/2}\,d\theta\). With this procedure we then find the fitting parameter to be equal to
\[c_{2}\approx 1.00. \tag{19}\]
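For readers who wish to reproduce this estimate, the following is a minimal sketch (not the original solver) of the numerical experiment described above: an impulsively rotated cylinder, equation (17) integrated with an RK4 scheme and second-order central differences, and the wall torque compared with (18) to recover \(c_{2}\). The grid extent, resolution, and the choice \(\mathit{Re}_{\theta}=1\) are illustrative assumptions.

```python
import numpy as np

# Work in units with D = omega = rho = 1; nu follows from Re_theta = 0.25*omega*D^2/nu.
Re_theta = 1.0
D, omega, rho = 1.0, 1.0, 1.0
nu = 0.25*omega*D**2/Re_theta
mu = rho*nu
R = 0.5*D

r = np.linspace(R, R + 0.2*D, 1001)   # radial grid; Stokes layer sqrt(nu*t) << 0.2 D
dr = r[1] - r[0]
u = np.zeros_like(r)
u[0] = omega*R                        # no-slip at the surface, fluid initially at rest

def rhs(u):
    """du/dt = nu*(u'' + u'/r - u/r^2), second-order central differences."""
    du = np.zeros_like(u)
    du[1:-1] = nu*((u[2:] - 2*u[1:-1] + u[:-2])/dr**2
                   + (u[2:] - u[:-2])/(2*dr*r[1:-1])
                   - u[1:-1]/r[1:-1]**2)
    return du                          # endpoints fixed (Dirichlet boundary conditions)

t, t_end = 0.0, 1e-4*D**2/nu           # largest time of the interval quoted above
dt = 0.2*dr**2/nu                      # explicit stability limit
while t < t_end:
    k1 = rhs(u); k2 = rhs(u + 0.5*dt*k1)
    k3 = rhs(u + 0.5*dt*k2); k4 = rhs(u + dt*k3)
    u += (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
    t += dt

# Viscous torque per unit length on the cylinder and the resulting fit constant c2
dudr = (-3*u[0] + 4*u[1] - u[2])/(2*dr)              # one-sided, second order
T = 2*np.pi*R**2*mu*(dudr - u[0]/r[0])
c2 = -T*np.sqrt(np.pi*nu*t)/(0.25*np.pi*D**3*mu*omega)
print(f"c2 ~ {c2:.2f}")                              # close to 1, consistent with (19)
```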
Now that we have an approximate form of the torque on the cylinder, we can find the analogue form of (16) for a cylinder. We then plug the obtained expression for the torque in the angular momentum balance and find
\[\Gamma\frac{\pi}{32}D^{4}\ddot{\theta}\sim-0.25\sqrt{\frac{\pi\nu}{t}}\omega D^{3}. \tag{20}\]
To convert the expression for continuously differentiable \(\omega\) we apply Duhamel's principle and find (by approximating \(\Delta\omega=\omega^{n+1}-\omega^{n}\))
\[\Gamma\frac{1}{8}\frac{\Delta\omega}{\Delta t}\sim-2.00\delta\frac{\Delta \omega}{\Delta t}. \tag{21}\]
The expression in (21) teaches us that the added inertia \(I_{a}^{*}\) takes the form of
\[I_{a}^{*}=16\delta. \tag{22}\]
### Dataset fit and comparison
The moment of inertia for an annulus is obtained via
\[I_{a}=\frac{1}{2}\rho\pi(r_{2}^{4}-r_{1}^{4}), \tag{10}\]
with \(r_{1}=0.5D\) and \(r_{2}=(0.5+c_{1}/\sqrt{Ga})D\), and \(c_{1}\) a constant. Plugging in the latter radii and calculating \(I_{a}^{*}\equiv I_{a}/I_{f}\) yields
\[I_{a}^{*}=\frac{8c_{1}}{\sqrt{Ga}}+\frac{24c_{1}^{2}}{Ga}+\frac{32c_{1}^{3}}{ Ga^{3/2}}+\frac{16c_{1}^{4}}{Ga^{2}}. \tag{11}\]
We fitted \(c_{1}\) such that the rotational data depicted in figure 3(_a_) collapses with respect to \(\tilde{\mathcal{T}}\) (presented in figure 3_b_). For this fit we found \(c_{1}\approx 2.3\) to yield good results for cylinders. To compare with the history torque value from the analysis in § A.2, we follow Mathai _et al._ (2018) and set the timescale to half the oscillation period, yielding \(\Delta t=0.5D/(\mathit{Str}\,V_{b})\). From this it follows that \(\delta=(2\pi\mathit{Str}\,Ga)^{-1/2}\). By plugging this relationship into (10) and assuming \(\mathit{Str}\approx 0.11\) (the value at resonance where rotation is dominant), we find a value \(c_{1}=2.4\), which matches the leading term of our fit (\(c_{1}=2.3\)) in (11) surprisingly well. In previous work by Mathai _et al._ (2018) only the leading-order term was taken into account; however, omitting the additional higher-order terms in (11) results in a large mismatch in the actual value of \(I_{a}^{*}\) (more than a factor of 2 at a Galileo number of 50). We found that inclusion of the higher-order terms resulted in a better collapse of the data presented in the current work; their inclusion is therefore recommended.
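As a quick numerical cross-check of the values quoted above, the short script below evaluates (11) and the history-torque estimate, assuming \(\mathit{Str}=0.11\) and the fitted \(c_{1}=2.3\) as stated in the text.

```python
import numpy as np

def I_a_star(c1, Ga):
    """Added rotational inertia of the annulus, equation (11)."""
    return 8*c1/np.sqrt(Ga) + 24*c1**2/Ga + 32*c1**3/Ga**1.5 + 16*c1**4/Ga**2

# (i) History-torque estimate: delta = (2*pi*Str*Ga)^(-1/2) and I_a* = 16*delta, so
#     matching the leading-order term 8*c1/sqrt(Ga) gives c1 = 2/sqrt(2*pi*Str).
Str = 0.11
print(f"c1 from the history torque: {2.0/np.sqrt(2*np.pi*Str):.2f}")   # ~2.4 (fit: 2.3)

# (ii) Relevance of the higher-order terms at low Galileo number:
c1, Ga = 2.3, 50.0
print(f"full/leading-order ratio at Ga=50: {I_a_star(c1, Ga)/(8*c1/np.sqrt(Ga)):.1f}")  # > 2
```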
## Appendix B Comparison to previous work
For the cases at \(\mathit{Ga}=500\) with zero offset, we examined a parameter space spanning \(\Gamma\) and \(I^{*}\) that was already explored extensively in the work by Mathai _et al._ (2017). Here, we take a closer look into the differences between that work and the present one.
We compare our results for \(Ga=500\) with \(\gamma=0\) and \(\Gamma\) ranging from 0.001 to 0.99 to the results of Mathai _et al._ (2017) at identical parameters. A note on the difference in convention: the parameter \(I^{*}\) in the work by Mathai _et al._ (2017) is equal to \(I^{*}\Gamma\) in the present work. We extracted the results from Mathai _et al._ (2017, figures 2\(a\), _b_), where, due to the difference in the definitions of the nondimensional rotational inertia, our results lie on the diagonal \(m^{*}=I^{*}\), i.e. the line from (A) to (D). One of the main findings in Mathai _et al._ (2017) was a change in the vortex shedding mode from a 2S mode at high \(\Gamma\) and \(I^{*}\) to a 2P mode at low values of these two parameters. We found no such transition, as all cases in the present work exhibited a 2S mode. Furthermore, Mathai _et al._ (2017) reported that this transition was accompanied by large increases in the path and rotational amplitudes, \(\hat{A}/D\) and \(\hat{\theta}\), respectively. A direct comparison between the
Figure 13: (_a_) Path and (_b_) rotational amplitude for the \(Ga=500\) cases without COM offset and \(I^{*}=1\), compared with results from Mathai _et al._ (2017) (extracted from their figures).
results for these two parameters is presented in figure 13 (_a_, _b_). While the general trend of a gradual increase for decreasing \(\Gamma\) (i.e. low \(I^{*}\Gamma\)) is consistent between the works for both amplitudes, there are large deviations in the magnitudes of both the translational and rotational amplitudes for identical cases, especially towards lower density ratios.
These differences for identical parameter combinations raise the question of whether their employed virtual mass approach (Schwarz _et al._, 2015) could explain these variations. The use of the virtual mass approach (VMA) was required to stabilise the explicit scheme in Mathai _et al._ (2017). We tested this hypothesis by modifying equations (1), (2) to include a virtual mass contribution on both sides, scaled by the coefficient \(C_{v}\) (for which we discretised the added time derivatives on the right-hand side with a forward Euler scheme)
\[(\Gamma+C_{v})\frac{\mathrm{d}\mathbf{v}_{G}}{\mathrm{d}t}=\frac{\mathbf{F}_{f}}{m_{f} }+(1-\Gamma)g\mathbf{e}_{z}+C_{v}\frac{\mathrm{d}\mathbf{v}_{G}}{\mathrm{d}t}, \tag{1a}\]
\[\frac{1}{8}(\Gamma+C_{v})D^{2}I^{*}\frac{\mathrm{d}^{2}\theta}{\mathrm{d}t^{2 }}=\frac{1}{m_{f}}T_{f}+\frac{1}{8}C_{v}D^{2}I^{*}\frac{\mathrm{d}^{2}\theta} {\mathrm{d}t^{2}}. \tag{1b}\]
A value of \(C_{v}=0\) corresponds to the present approach, while typical values to stabilise explicit schemes are on the order of the added mass, i.e. \(C_{v}=1\) for a cylinder (Schwarz _et al._, 2015). We did not observe appreciable changes in the particle dynamics when varying \(C_{v}\) in the range \(C_{v}\in[0,5]\) and it therefore appears that the discrepancies between our work and Mathai _et al._ (2017) are not related to the VMA and may be caused by other unknown factors.
|
2310.11008 | Hypernova signatures of the first stars in dwarf galaxies in the Local
Group | Observing the first generation of stars, Population III (Pop III), is still a
challenge even with the James Webb Space Telescope (JWST) due to their
faintness. Instead, searching for fossil records of Pop III stars in nearby
dwarf galaxies provides an alternative method for studying their physical
properties. It is intriguing that a star recently discovered in the Sculptor
dwarf galaxy, named AS0039, is considered to show the unique signature of a
Pop III star. The detailed abundance patterns of AS0039 are well-matched with
those predicted by nucleosynthesis models for Pop III exploding as an energetic
hypernova (HN), confirming its potential to provide insight into the properties
of the first stars. This study aims to explore the environmental conditions
required for the formation of such a unique star using cosmological
hydrodynamic zoom-in simulations on dwarf galaxies with a mass of M_vir~10^8
solar mass at z=0 while varying the fraction of Pop III stars that undergo HNe.
Our simulations identify rapid gas inflow (~0.08 solar mass/yr) as a possible
factor in facilitating the formation of stars similar to AS0039. Alternatively,
the delayed formation of subsequent Pop II stars in the gas-enriched
environment may lead to low-metallicity stars like AS0039. Additionally, using
the A-SLOTH code, we investigate the probability of finding remnants of Pop II
stars with HN signatures in nearby dwarf satellite galaxies. We suggest that
the most likely dwarf galaxies to contain HN signatures are massive satellites
with a probability of 40% in the range of M_peak~10^{10}-10^{11} solar mass and
M_star~10^7-10^8 solar mass, considering observational limitations. | Teayong Lee, Myoungwon Jeon, Volker Bromm | 2023-10-17T05:33:24Z | http://arxiv.org/abs/2310.11008v1 | # Hypernova signatures of the first stars in dwarf galaxies in the Local Group
###### Abstract
Observing the first generation of stars, Population III (Pop III), is still a challenge even with the James Webb Space Telescope (JWST) due to their faintness. Instead, searching for fossil records of Pop III stars in nearby dwarf galaxies provides an alternative method for studying their physical properties. It is intriguing that a star recently discovered in the Sculptor dwarf galaxy, named AS0039, is considered to show the unique signature of a Pop III star. The detailed abundance patterns of AS0039 are well-matched with those predicted by nucleosynthesis models for Pop III exploding as an energetic hypernova (HN), confirming its potential to provide insight into the properties of the first stars. This study aims to explore the environmental conditions required for the formation of such a unique star using cosmological hydrodynamic zoom-in simulations on dwarf galaxies with a mass of \(M_{\rm vir}\approx 10^{8}\,M_{\odot}\) at \(z=0\) while varying the fraction of Pop III stars that undergo HNe. Our simulations identify rapid gas inflow (\(M_{\rm gas}\sim 0.08\,M_{\odot}\) yr\({}^{-1}\)) as a possible factor in facilitating the formation of stars similar to AS0039. Alternatively, the delayed formation of subsequent Pop II stars in the gas-enriched environment may lead to low-metallicity stars like AS0039. Additionally, using the A-SLOTH code, we investigate the probability of finding remnants of Pop II stars with HN signatures in nearby dwarf satellite galaxies. We suggest that the most likely dwarf galaxies to contain HN signatures are massive satellites with a probability of 40% in the range of \(M_{\rm peak}\approx 10^{10}-10^{11}\,M_{\odot}\) and \(M_{*}\approx 10^{7}-10^{8}\,M_{\odot}\), considering observational limitations.
keywords: galaxies: formation - galaxies: dwarf - galaxies: star formation - methods: numerical - cosmology: dark ages, reionization, first stars
## 1 Introduction
Understanding the physical characteristics of the first generation of stars, known as Population III (Pop III) stars, and the first galaxies is crucial for gaining a comprehensive understanding of the entire Universe (reviewed in Bromm et al., 2009; Bromm, 2013; Klessen and Glover, 2023). Recent breakthroughs in observing distant galaxies born at \(z\gtrsim 10\) by the James Webb Space Telescope (JWST) have opened up new previously inaccessible territories of the early Universe (e.g., Atek et al., 2022; Donnan et al., 2022; Whitler et al., 2022; Bouwens et al., 2023; Finkelstein et al., 2023). The discovery of unexpectedly bright high-\(z\) galaxies has prompted the investigation of the possible existence of massive Pop III stars in these galaxies (e.g., Kannan et al., 2023; Boylan-Kolchin, 2023; Inayoshi et al., 2022; Haslbauer et al., 2022). As a result, understanding the characteristics of the first stars has become even more important. While JWST could possibly directly observe the first stars, if Pop III were making up most of the stellar mass in early galaxies or globular clusters (e.g., Mowla et al., 2022), detecting individual Pop III stars remains challenging and requires larger telescopes (e.g., Schauer et al., 2020; Woods et al., 2021; Katz et al., 2023; Larkin et al., 2023; Venditti et al., 2023), except for the cases where the brightness is significantly increased by gravitational lensing effects (e.g., Schauer et al., 2022; Welch et al., 2022).
Although direct detection of the first stars remains elusive, extensive theoretical studies have been conducted to infer their physical properties (see, e.g., Bromm, 2013 for a review). It is widely accepted that Pop III stars were massive stars, with masses larger than a few tens of solar masses, formed from primordial gas in minihaloes with virial masses of \(M_{\rm vir}\approx 10^{5-6}\,M_{\odot}\) at \(z\gtrsim 15\)(e.g., Haiman et al., 1997; Tegmark et al., 1997; Bromm et al., 1999; Abel et al., 2002; Yoshida et al., 2003). Pop III stars, depending primarily on their initial masses, are expected to undergo supernovae (SNe) explosions, ejecting metals synthesized in their cores and polluting the surrounding interstellar medium (ISM) (e.g., Wise and Abel, 2008; Greif et al., 2010; Wise et al., 2012; Jeon et al., 2014). This contaminated medium eventually gives rise to the second generation of stars, Population II (Pop II) stars, which tend to have lower masses and longer lifespans than their Pop III predecessors (e.g., Omukai, 2000; Bromm et al., 2001; Schneider et al., 2002). Consequently, such Pop II stars (\(m_{\rm PopII}\lesssim 0.8\,M_{\odot}\)) may still exist in the Local Universe, displaying the distinctive characteristics of Pop III stars. The search for such traces of ancient stars that existed in the Local Universe is known as "stellar archaeology." Furthermore, if these Pop II stars belong to local dwarf galaxies, it is possible to infer information not only about the stars themselves but also about the environment in which they formed by reconstructing the star formation histories (SFHs)
of these galaxies. This approach is known as "galaxy archaeology" (e.g., Bovill & Ricotti, 2011; Frebel & Norris, 2015).
What unique characteristics of the first generation of stars have been preserved in local dwarf galaxies? One possible answer lies in carbon-enhanced metal-poor (CEMP) stars (e.g., Beers & Christlieb, 2005; Aoki et al., 2007), which have been thought to be associated with Pop III stars due to their prevalence in observed dwarf galaxies: the fraction of CEMP stars increases with decreasing metallicity, reaching \(\sim 92\%\) below \(\mathrm{[Fe/H]}<-4\) (e.g., Lee et al., 2013; Placco et al., 2014, 2021). Specifically, these stars are defined by \(\mathrm{[C/Fe]}\gtrsim 0.7\) and \(\mathrm{[Fe/H]}\lesssim-2\). One possible explanation for this is the weak supernova explosion of Pop III stars, which can expel light elements such as carbon and oxygen from the outer layers of the star while heavy elements like iron fall back onto the core (e.g., Norris et al., 2013; Jeon et al., 2021).
Recently, in the Sculptor dwarf spheroidal (dSph), Skúladóttir et al. (2021) have identified a potential remnant of Pop III stars by discovering AS0039, the most metal-poor star with \(\mathrm{[Fe/H]}=-4.11\), among the observed local dwarf galaxies. According to the nucleosynthesis model for metal-free stars proposed by Heger & Woosley (2002, 2010), the chemical abundance patterns of AS0039 appear to be consistent with those of a Pop III star with a mass of \(m_{\mathrm{PopIII}}\approx 20\pm 2\,M_{\odot}\) that ended its life as a hypernova (HN) explosion with an SN energy of \(E_{\mathrm{SN}}=10^{52}\) erg, confirming its potential to provide insight into the properties of the first stars. Notably, AS0039 is a peculiar carbon-poor star with \(\mathrm{[C/Fe]}_{\mathrm{LTE}}\sim-0.75\) and has distinct \(\alpha\)-element patterns, which set it apart from normal-carbon stars. According to their analysis, there is a 77% chance that the enrichment of AS0039 is due to a single Pop III SN event. Furthermore, Placco et al. (2021) have reported another potential HN signature associated with Pop III stars. They have identified a star in Stripe 82 called SPLUS 12104-0049, whose chemical composition appears to be consistent with that of a Pop II star formed from gas polluted by a Pop III star with a mass of \(30\,M_{\odot}\). This Pop III star would have exploded as an HN with an energy of \(E_{\mathrm{SN}}=10^{52}\) erg as well.
The aim of this study is to offer a theoretical interpretation of the observed stars and gain insights into the era of the first and second generations of stars. In particular, the goal is to reproduce stars such as AS0039, which are rare and could potentially offer clues about Pop III HN events as a component of the stellar population in a nearby dwarf galaxy. To be specific, we investigate how the signatures of Pop III HN events might appear in subsequent Pop II stars. Moreover, we examine the circumstances under which a star like the observed AS0039 can form. To achieve this goal, we conduct a series of cosmological zoom-in simulations on a local dwarf analog, which is comparable to the ultra-faint dwarf galaxy (UFD) with a mass of \(M_{\mathrm{vir}}\lesssim 10^{8}\,\mathrm{M}_{\odot}\) at \(z=0\). UFDs are the smallest and most metal-poor galaxies in the Universe, and they are considered an excellent laboratory for studying the earliest stars since they preserve traces of them. (For a review, see Simon, 2019, and also see Tolstoy et al., 2009; Brown et al., 2014; McConnachie, 2012.)
The exact mechanism for the explosion of highly energetic HN is still unclear. However, there is widespread agreement that it may be connected to massive stars exhibiting high angular momentum (e.g., Nomoto et al., 2006; Burrows et al., 2007). Several studies have been conducted on the level of rotation in metal-free stars (e.g., Stacy et al., 2011; Hirano & Bromm, 2018). One such study by Stacy et al. (2011) estimates the rotational velocity of Pop III stars by tracking the evolution of primordial gas up to densities of \(n_{\mathrm{H}}=10^{12}\mathrm{cm}^{-3}\) in minihaloes. This suggests that the rotational velocity of stars larger than \(30\,M_{\odot}\) may potentially exceed \(1000\,\mathrm{km}\,\mathrm{s}^{-1}\). Consequently, these stars may experience chemically homogeneous evolution (CHE) (Sibony et al., 2022), creating a reservoir of rotational energy sufficient to initiate an HN explosion. Another study by Choplin et al. (2019) investigated the abundance patterns of carbon-enhanced extremely metal-poor (EMP) stars (\(-4<\mathrm{[Fe/H]}<-3\)), comparing them to massive stellar models that consider rotation. The findings suggest a higher fraction of fast rotators at low metallicity, with the velocity distribution of star models reaching equatorial velocities of 550-640 km s\({}^{-1}\). As such, whether or not Pop III stars undergo HN explosions are determined by their initial masses as well as by their degree of rotation (e.g., Marigo et al., 2003; Ekstrom et al., 2008; Heger & Woosley, 2010; Chatzopoulos & Wheeler, 2012; Choplin et al., 2019; Murphy et al., 2021). For instance, Yoon et al. (2012), where they take into account magnetic fields and the rotation of metal-free stars, suggest that zero-age main sequence stars with masses between \(13\,M_{\odot}\) and \(84\,M_{\odot}\) may undergo HN explosions if they experience CHE.
In this work, we have conducted cosmological simulations on a dwarf galaxy analog, where we vary the fraction of Pop III stars that end their lives as HNe as a free parameter, covering the full range of possibilities. To further understand the observed frequency of HN signatures, we have also utilized a semi-analytic model of galaxy formation called A-SLOTH (Hartwig et al., 2023). This model enables us to connect the formation of early Pop III stars with their fossil remnants in nearby galaxy analogs, providing an estimate of the probability of discovering satellite galaxies that contain Pop III HN signatures based on the fraction of Pop III stars exploding as HNe in the early Universe.
The paper is structured as follows: Section 2 outlines the numerical methodology employed in this research. Section 3 presents and analyzes the simulation results, including the growth of simulated galaxies over cosmic time, the prevalence of minihaloes and Pop II stars exhibiting Pop III HN signatures, and the chemical imprints left on the remaining Pop II stars. In Section 4, we discuss the likelihood of finding satellite galaxies that harbor Pop III HN signatures using a semi-analytic galaxy formation model. The key discoveries of this study are summarized in Section 5. Unless otherwise indicated, all distances are provided in physical (proper) units to maintain consistency.
## 2 Numerical Methodology
### Simulation Setup
We have conducted cosmological hydrodynamic zoom-in simulations on dwarf galaxies using a modified version of the parallel N-body/smoothed particle hydrodynamics code, GADGET3 (e.g., Springel et al., 2001; Springel, 2005; Schaye et al., 2010). We utilize cosmological parameters with a fraction of dark energy of \(\Omega_{\Lambda}=0.73\), dark matter of \(\Omega_{\mathrm{m}}=0.26\), baryons of \(\Omega_{\mathrm{b}}=0.04\), and a Hubble constant of \(H_{0}=71\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}\) (Planck Collaboration, 2016). We employ the code MUSIC (Hahn & Abel, 2011) to generate initial conditions for a cosmological simulation box with a linear size of 3.125 comoving Mpc. In the first step, we conducted a dark-matter-only simulation to \(z=0\) using \(128^{3}\) particles to represent this box and then identify a target galaxy with a mass of \(M_{\mathrm{vir}}\approx 10^{8}\,M_{\odot}\) at \(z=0\) using a halo-finder code called ROCKSTAR (Behroozi et al., 2013). As a second step, we performed three consecutive refinements for the Lagrangian volume that encompasses particles within \(R=2.5R_{\mathrm{vir}}\) at \(z=0\). This results in the most refined region where the masses of dark matter and gas particles are \(m_{\mathrm{DM}}\sim 500\,M_{\odot}\) and \(m_{\mathrm{gas}}\sim 63\,M_{\odot}\), respectively.
At each time step, we solve the rate equations for primordial species such as \(\mathrm{H,H^{+},H^{-},H_{2},H_{2}^{+},He,He^{+},He^{++},e^{-},D,D^{+}}\), and
HD, considering all the relevant cooling processes such as H and He collisional ionization, excitation and recombination cooling, bremsstrahlung, inverse Compton cooling, and collisional excitation cooling of H2 and HD. We also account for the cooling of gas by metal species such as carbon, oxygen, silicon, magnesium, neon, nitrogen, and iron, using cooling rates from the photoionization package CLOUDY (Ferland et al., 1998). To mimic the effect of reionization, we introduce a cosmic UV background (Haardt and Madau, 2012) at \(z=7\) and linearly increase its strength until \(z=6\), which is when cosmic reionization is believed to have completed (e.g., Gunn and Peterson, 1965; Fan et al., 2006).
### Star formation
As gas densities increase, stars are allowed to form as collisionless particles when the density surpasses a threshold of \(n_{\rm H}=100\) cm\({}^{-3}\), according to the Schmidt law (Schmidt, 1959). The star formation rate is governed by the equation \(\dot{\rho}_{*}=\rho_{\rm th}/\tau_{\rm s}\), where \(\tau_{\rm s}=\tau_{\rm ff}/\epsilon_{\rm ff}\) is the star formation time scale. Here, \(\tau_{\rm ff}=[3\pi/(32G\rho_{\rm th})]^{1/2}\) is the free-fall time at the threshold \(\rho_{\rm th}\), and \(\epsilon_{\rm ff}\) is the star formation efficiency per free-fall time. We set the star formation efficiency to \(\epsilon_{\rm ff}\sim 0.01\) for both Pop III and Pop II stars in this study, which is a typical value in the local Universe (e.g., Leroy et al., 2008).
The timescale for star formation is determined as follows:
\[\tau_{\rm s}=\frac{\tau_{\rm ff}(n_{\rm H,th})}{\epsilon_{\rm ff}}\sim 400{ \rm Myr}\left(\frac{n_{\rm H,th}}{100\ {\rm cm}^{-3}}\right)^{-1/2}. \tag{1}\]
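As a quick sanity check of equation (1), the following snippet evaluates the free-fall time at the threshold density; the assumed mean molecular weight \(\mu\approx 1.22\) (neutral primordial gas) is an illustrative choice, and the result is of the order of the quoted \(\sim\)400 Myr.

```python
import numpy as np

G = 6.674e-8        # cm^3 g^-1 s^-2
m_H = 1.673e-24     # g
mu = 1.22           # assumed mean molecular weight for neutral primordial gas
n_H_th = 100.0      # cm^-3
eps_ff = 0.01

rho_th = mu*m_H*n_H_th
tau_ff = np.sqrt(3*np.pi/(32*G*rho_th))     # free-fall time in seconds
tau_s = tau_ff/eps_ff
Myr = 3.156e13
print(f"tau_ff ~ {tau_ff/Myr:.1f} Myr, tau_s ~ {tau_s/Myr:.0f} Myr")   # a few Myr, ~400-500 Myr
```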
#### 2.2.1 Pop III stars
While the exact mass of metal-free stars in the early Universe remains uncertain, it is widely accepted that they were likely to be highly massive, as efficient cooling mechanisms were scarce at that time, with molecular hydrogen (H\({}_{2}\)) being the primary coolant available (e.g., Bromm, 2013; Whalen et al., 2013; Hirano et al., 2014; Stacy et al., 2016; Lazar and Bromm, 2022). Therefore, we adopt a top-heavy initial mass function (IMF) over a mass range of \([1-260]\ M_{\odot}\) for Pop III stars to extract an individual star. The functional form of the IMF is expressed as follows:
\[\phi=\frac{dN_{\rm PopIII}}{d\ln m}\propto m^{-1.3}\exp\left[-\left(\frac{m_{ \rm char}}{m}\right)^{1.6}\right]\, \tag{2}\]
where \(m_{\rm char}=60\ M_{\odot}\) is the characteristic mass.
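A minimal sketch of how individual Pop III masses can be drawn from this IMF is given below; the rejection-sampling approach and the random seed are illustrative assumptions, not the scheme used in the simulation code.

```python
import numpy as np

rng = np.random.default_rng(42)
m_char, m_lo, m_hi = 60.0, 1.0, 260.0

def phi(m):
    """dN/dln m (unnormalised), equation (2)."""
    return m**-1.3*np.exp(-(m_char/m)**1.6)

def sample_popIII_masses(n):
    """Rejection sampling of stellar masses, with uniform proposals in ln m."""
    phi_max = phi(np.linspace(m_lo, m_hi, 100000)).max()
    out = []
    while len(out) < n:
        m = np.exp(rng.uniform(np.log(m_lo), np.log(m_hi), size=n))
        out.extend(m[rng.uniform(0.0, phi_max, size=n) < phi(m)])
    return np.array(out[:n])

masses = sample_popIII_masses(10000)
print(f"median Pop III mass: {np.median(masses):.0f} Msun")
```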
#### 2.2.2 Pop II stars
Pop II stars form in gas clouds that have been contaminated by metals expelled by previous Pop III SNe. The threshold density for Pop II star formation is the same as that for Pop III stars, but there is an additional requirement that the gas metallicity should be higher than a critical metallicity value of \(Z_{\rm crit}=10^{-5.5}\ Z_{\odot}\), which is motivated by dust-continuum cooling. Since the resolution of the gas mass is not high enough to directly convert a gas particle into an individual Pop II star with a mass of around \(1\ M_{\odot}\), the assumption is made that Pop II stars form as a cluster with a total mass of \(500\ M_{\odot}\), following the Salpeter initial mass function with a slope of \(\alpha=1.35\) over the mass range of \([0.1-100]\ M_{\odot}\). Once a gas particle meets the two conditions of \(n_{\rm th}\) and \(Z_{\rm crit}\), it is replaced by a sink particle that accretes surrounding gas until it attains a mass of \(M_{\rm PopII}=500\ M_{\odot}\).
### Supernova feedback
The fate of individual Pop III stars depends on their initial masses, with stars in the \(10-40\ M_{\odot}\) range undergoing core-collapse supernovae (CCSNe) with typical energies of \(10^{51}\) erg and stars in the \(140-260\ M_{\odot}\) range dying in pair-instability supernovae (PISNe) with energies of \(10^{52}\) erg. We transfer the resulting SN energy as thermal energy to neighboring SPH particles, increasing their temperature and ejecting metals based on the properties of the SN progenitor. Pop III metal yields provided by Heger and Woosley (2010) and Heger and Woosley (2002) are adopted for CCSNe/HNe, and PISNe, respectively, with seven metal species (C, N, O, Si, Mg, Ne, Fe) tracked for normal CCSNe and PISNe, and 16 metals tracked by adding nine species (Na, Al, Ca, Sc, Cr, Mn, Co, Ni, Zn) for HNe.
The mass range at which Pop III stars produce an HN is not yet fully understood (e.g., Karlsson et al., 2013). In this study, we aim to investigate the environmental conditions required for producing a star resembling AS0039 in a dwarf galaxy. For simplicity, whenever an HN event occurs, we assume a fixed metal yield from a \(21\ M_{\odot}\) Pop III progenitor (Heger and Woosley, 2010), which is the inferred progenitor mass of AS0039. Varying metal yields based on progenitor mass would make it difficult to distinguish unique HN signatures. This, in turn, would hinder our ability to examine the conditions necessary for Pop II stars to display HN signatures comparable to the observed AS0039. We note that each HN event releases \(E_{\rm SN}=10^{52}\) erg of energy. For Pop II stars, possible metal yields associated with various evolutionary phases, such as AGB stars, Type II SNe, and Type Ia SNe, are considered. Jeon et al. (2017) provides a detailed description of the models adopted for each event.
Metals expelled by SNe are spread among approximately 32 neighboring gas particles, \(N_{\rm Ngb}\approx 32\), located around the explosion site. This initial distribution results in the metallicity of \(Z_{\rm i}\), which is determined by the metal mass divided by the number of neighboring gas particles and a spline kernel function \(W(r)\), dependent on the distance from the metal ejection site. Afterward, the metals diffuse from the original gas particle to its surrounding particles through a diffusion process described by a diffusion equation.
\[\frac{dc}{dt}=\frac{1}{\rho}\nabla\cdot(D\nabla c)\, \tag{3}\]
, where \(c\) is the concentration of a contaminant fluid per unit mass, and \(D\) is the diffusion coefficient. To incorporate the diffusion process into SPH simulations, Greif et al. (2009) discretized the equation for a particle \(i\) as follows,
\[\frac{dc_{i}}{dt}=\Sigma_{j}K_{ij}(c_{i}-c_{j})\, \tag{4}\]
where
\[K_{ij}=\frac{m_{j}}{\rho_{i}\rho_{j}}\frac{4D_{i}D_{j}}{(D_{i}+D_{j})}\frac{r_ {ij}\cdot\nabla_{i}W_{ij}}{r_{ij}^{2}}. \tag{5}\]
Here, the index \(j\) indicates surrounding gas particles, \(W_{ij}\) is the kernel, and \(r_{ij}\) is the distance between particles \(i\) and \(j\).
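The discretised update can be illustrated with the following minimal sketch for a handful of particles; the cubic-spline kernel, the explicit Euler step, and the brute-force neighbour loop are simplifying assumptions made for clarity rather than a reproduction of the GADGET3 implementation.

```python
import numpy as np

def grad_W(r_vec, h):
    """Gradient of the 3D cubic-spline kernel with compact support radius h."""
    r = np.linalg.norm(r_vec)
    q = r/h
    sigma = 8.0/(np.pi*h**3)
    if q <= 0.5:
        dWdq = sigma*(-12.0*q + 18.0*q**2)
    elif q <= 1.0:
        dWdq = -sigma*6.0*(1.0 - q)**2
    else:
        return np.zeros(3)
    return (dWdq/h)*(r_vec/max(r, 1e-12))

def diffuse_step(pos, m, rho, D, c, h, dt):
    """One explicit step of dc_i/dt = sum_j K_ij (c_i - c_j), equations (4)-(5).
    All inputs are numpy arrays (pos of shape (N, 3), the rest of length N)."""
    dcdt = np.zeros_like(c)
    for i in range(len(c)):
        for j in range(len(c)):
            if i == j:
                continue
            r_vec = pos[i] - pos[j]
            r2 = np.dot(r_vec, r_vec)
            if r2 == 0.0 or r2 > h[i]**2:
                continue
            K_ij = (m[j]/(rho[i]*rho[j])
                    *4.0*D[i]*D[j]/(D[i] + D[j])
                    *np.dot(r_vec, grad_W(r_vec, h[i]))/r2)
            dcdt[i] += K_ij*(c[i] - c[j])
    return c + dt*dcdt
```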
### Hypernovae
#### 2.4.1 Sets of parameters
To explore the circumstances that allow for the formation of a star similar to AS0039, we have conducted simulations that altered two variables: the threshold metallicity required for the formation of
Pop II stars and the fraction of Pop III stars that explode as HNe. Table 1 presents information about the different sets of simulations in which we vary these two parameters.
* **Critical metallicity:** The abundance of Pop III stars and the occurrence of Pop III HNe are determined by the critical metallicity. If the critical metallicity is low, mildly enriched gas can form Pop II stars, whereas high critical metallicity could lead to the formation of Pop III stars, increasing their abundance. There have been numerous studies aiming at narrowing down the value of critical metallicity (e.g., Bromm & Loeb, 2003; Santoro & Shull, 2006; Frebel et al., 2007; Dopcke et al., 2013; Chon et al., 2021; Sharda & Krumholz, 2022). For instance, Bromm & Loeb (2003) suggest that fine-structure lines of carbon and oxygen dominate the transition from Pop III to Pop II, with a critical metallicity of \(Z_{\rm crit}=10^{-3.5}\,Z_{\odot}\). Alternatively, dust-induced fragmentation can also be a key driver for the formation of low-mass Pop II stars, even at a low metallicity of \(10^{-5.5}\,Z_{\odot}\) (e.g., Tsuribe & Omukai, 2006; Caffau et al., 2011; Schneider et al., 2012). Since AS0039 has an extremely low stellar metallicity of \(\rm[Fe/H]=-4.11\), we have chosen to use a critical metallicity of \(Z_{\rm crit}=10^{-5.5}\,Z_{\odot}\) as a reference value, denoted as Z55 in the simulation names. We compare the results of those simulations with the run that uses a higher critical metallicity of \(Z_{\rm crit}=10^{-3.5}\,Z_{\odot}\), as in the run named Z35-F50.
* **Frequency of hypernovae:** The extent to which Pop III stars undergo HN explosion at the end of their lives remains uncertain (e.g., Umeda & Nomoto, 2005; Kobayashi et al., 2006; Nomoto et al., 2006; Yoon et al., 2012; Karlsson et al., 2013; Placco et al., 2015; Ishigaki et al., 2018). To explore the full range of possibilities, the parameter \(f_{\rm HN}\) is varied in this study. To establish the upper limit, it is assumed that all Pop III stars experience HN explosion, designated as F100 in the simulation name. Conversely, the lower limit is set by allowing for a single HN explosion during the assembly history of the simulated UFD analog. Specifically, two cases are considered: one where the first Pop III star formed undergoes HN explosion (Z55-early), and another where the single HN event happens randomly during the assembly process (Z55-Late).
Based on the findings of Yoon et al. (2012), which propose a possible mass range of \(13\,M_{\odot}\) to \(84\,M_{\odot}\) for HN events, about 50% of Pop III stars are expected to undergo HN events when using the IMF presented in equation (2). This fraction is also consistent with the study by Kobayashi et al. (2006), which utilized simulations of galaxies similar to the Milky Way (MW) to explain observed chemical abundances, suggesting that to match the abundance of metal species such as \(\rm[Zn/Fe]\), about half of the stars independent of mass and metallicity should explode as HNe, especially for low metallicity. In accordance with these findings, we adopt a value of \(f_{\rm HN}\) of 50%, labeled as F50 in the simulation names. We also take into account the work of Karlsson et al. (2013), who propose that Pop III stars in the mass range of \(m_{\rm PopIII}=25-40\)\(M_{\odot}\) are capable of undergoing HN explosions, resulting in \(f_{\rm HN}\approx 11\)%. To incorporate their findings, we run a simulation called Z55-Mass, in which HN explosions are allowed to occur among Pop III stars in this mass range.
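The \(\sim\)50% figure can be verified directly by integrating the IMF of equation (2) over the mass range of Yoon et al. (2012); a minimal check is sketched below.

```python
import numpy as np
from scipy.integrate import quad

m_char = 60.0
dN_dm = lambda m: m**-2.3*np.exp(-(m_char/m)**1.6)   # dN/dm = (dN/dln m)/m, equation (2)

n_HN = quad(dN_dm, 13.0, 84.0)[0]     # Pop III stars ending as HNe (Yoon et al. 2012 range)
n_all = quad(dN_dm, 1.0, 260.0)[0]    # all Pop III stars over the adopted IMF range
print(f"f_HN ~ {n_HN/n_all:.0%}")     # ~50%, motivating the F50 runs
```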
#### 2.4.2 Finding a hypernova signature
We employ a tool called STARFIT to extract the HN signatures from our simulations. This tool provides various metal yields that result from nucleosynthesis models (Heger & Woosley, 2002, 2010; Limongi & Chieffi, 2012; Just et al., 2015; Grimmett et al., 2018; Limongi & Chieffi, 2018). We utilize these yields to determine several progenitor characteristics, such as mass, metallicity, and explosion energy. It is noteworthy that Skúladóttir et al. (2021) also obtain the progenitor properties of AS0039 using STARFIT, which are consistent with those of a primordial star having a mass of \(m_{\rm PopIII}=21\)\(M_{\odot}\) and explosion energy of \(E_{\rm SN}=10^{52}\) erg, based on the metal abundances comparison. In Figure 1, we present a comparison of individual metal abundances between AS0039 (depicted as a red pentagon) and the best-fit model generated from STARFIT (represented as a blue line). We also show the abundances of Pop II stars with HN signatures, obtained from our simulation, as a cyan line, which closely matches the result from STARFIT. Additionally, we illustrate the metal species produced by a Pop III star, which has an initial mass of \(m_{\rm PopIII}=25\)\(M_{\odot}\) and exploded as a faint supernova, as a gray line for comparison.
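Conceptually, this fitting step ranks progenitor yield models by how well their predicted abundance patterns match the measured ones. The toy sketch below is not STARFIT itself; it only illustrates a minimal chi-square comparison over a hypothetical grid of three progenitor models, with placeholder [X/Fe] vectors standing in for the actual nucleosynthesis yields.

```python
# Toy [X/Fe] values for a few elements; these are placeholders, not the
# Heger & Woosley or STARFIT yield tables.
observed = {"C": -0.75, "Mg": 0.10, "Si": 0.15}
obs_err  = {"C": 0.22,  "Mg": 0.20, "Si": 0.20}

models = {
    "hypernova_21Msun_1e52erg": {"C": -0.5, "Mg": 0.2, "Si": 0.2},
    "faint_SN_25Msun":          {"C":  2.0, "Mg": 0.6, "Si": 0.5},
    "core_collapse_20Msun":     {"C":  0.3, "Mg": 0.4, "Si": 0.3},
}

def chi2(model):
    """Error-weighted squared deviation between a model and the observed pattern."""
    return sum(((model[el] - observed[el]) / obs_err[el]) ** 2 for el in observed)

for name, model in models.items():
    print(f"{name:26s} chi2 = {chi2(model):7.2f}")
print("best-fit progenitor:", min(models, key=lambda name: chi2(models[name])))
```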
## 3 Simulation results
This section presents the findings of the simulated galaxy. Section 3.1 illustrates the process of mass assembly for the UFD analog. We
\begin{table}
\begin{tabular}{c c c} \hline \hline Name & \(Z_{\rm crit}[Z_{\odot}]\) & \(f_{\rm HN}\) \\ \hline Z35-F50 & \(10^{-3.5}\) & 50\% \\ Z55-F50 & \(10^{-5.5}\) & 50\% \\ Z55-F100 & \(10^{-5.5}\) & 100\% \\ Z55-Early & \(10^{-5.5}\) & early 1 star \\ Z55-Late & \(10^{-5.5}\) & late 1 star \\ Z55-Mass & \(10^{-5.5}\) & mass range \(25-40\) M\({}_{\odot}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the simulations. Column (1): Run name. Column (2): Critical metallicity for Pop II star formation. Column (3): Fraction of Pop III stars exploding as HNe.
Figure 1: The metal species of individual elements observed (represented by the red pentagon) are compared with the best-fit values provided by STARFIT (shown as the blue line). This comparison suggests that the progenitor of the Pop II star responsible for the observed metal abundances had a mass of \(m_{\rm PopIII}\approx 21\)\(M_{\odot}\), and it exploded as an HN explosion with an energy of \(E_{\rm SN}=10^{52}\) erg. The abundance patterns produced by the HN explosion display unique characteristics that are different from those of a faint SN model, as depicted by the grey line. We also use STARFIT to identify HN signatures in Pop II stars generated in our simulated galaxies. An example of this is illustrated by the cyan line.
then examine the number of progenitor minihaloes that could have Pop II stars with Pop III HN signatures and the fraction of such stars among all Pop II stars in Section 3.2. Finally, we discuss the estimated chemical abundances of Pop II stars from the simulations and compare them with observed values.
### Mass assembly
Figure 2 shows the mass assembly history of the simulated UFD galaxy analog (left panel) and the cumulative star formation history (right panel) by varying the fraction of Pop III stars that explode as HNe and the critical metallicity for Pop II star formation. In the left panel of Figure 2, each color represents the dark matter (black), gas (blue), and stellar (red) masses. The gray-shaded region indicates the epoch of reionization, where the UV background is introduced and gradually increases in strength. Although the stochastic nature of star formation may result in slight differences in the gas and stellar mass during the assembly process, all sets of galaxies with the same initial conditions have grown into similar galaxies with a dark matter mass of \(\rm M_{DM}\approx 3.8\times 10^{8}\) M\({}_{\odot}\), a gas mass of \(\rm M_{gas}\approx 6.9\times 10^{5}\) M\({}_{\odot}\), and a stellar mass of \(\rm M_{*}\approx 1.2\times 10^{4}\) M\({}_{\odot}\) at \(z=0\), which are comparable to those of actual UFD galaxies.
The simulated ratio of baryon mass to DM mass maintains a cosmological value of \(f_{\rm b}=\Omega_{b}/\Omega_{m}\approx 0.17\) up to a redshift of \(z=11.5\), but it drops due to the feedback from Pop III and Pop II stars. Notably, as shown in the left panel of Figure 2, at around \(z\approx 10\), the baryon fractions in four runs (Z35-F50, Z55-F50, Z55-F100, and Z55-Early) decrease to \(f_{\rm b}=0.04\) due to the formation of Pop II stars. These runs produce 2-5 times more stars than the other two runs (Z55-Late and Z55-Mass), leading to a reduction of the gas fraction. Furthermore, the emergence of the UV background causes a further decrease in the baryon fraction to below \(f_{\rm b}\approx 0.01\) at \(z=6\), from \(f_{\rm b}\approx 0.08\) at \(z=7\). Nonetheless, in all six sets of simulations, the target halo ultimately reaches a similar baryon fraction, even in the presence of different HNe frequencies. This outcome can be attributed to the relatively comparable total energy delivered by all SNe to the halo. Despite the significant impact of HNe, their events are surpassed in frequency by the higher occurrence of Pop II SNe. The cumulative energy contribution from both SNe and HNe into the target halo, with respect to cosmic time, is illustrated in Figure 11.
It is important to note that the formation of the UFD analog at \(z=0\) is a result of small minihaloes merging together, indicating that it has multiple progenitor minihaloes. For reference, we offer a merger tree of the target halo in Appendix B1 for the Z55-F50 run. Our simulations reveal that in a small system such as a UFD, the main progenitor does not dominate star formation at high redshifts, but rather, star formation is initiated in minihaloes of comparable mass, which merge later. It should be noted that we define a minihalo as a halo in which at least one Pop III star has formed. The first star formation episode starts at around \(z\approx 12.9\) and ends at \(z=6.5\), resulting in a star formation duration of approximately 550 Myr. Therefore, the increase in stellar mass below \(z=6\), depicted as the red lines in the left panel of Figure 2, is due to the merging of progenitor minihaloes. It should be mentioned that we halt the simulations, except for Z55-F100, when the redshift reaches \(z\approx 3\), where the dark matter mass is \(\rm M_{DM}\approx 1.6\times 10^{8}\) M\({}_{\odot}\), the gas mass is \(\rm M_{gas}\approx 6.2\times 10^{5}\) M\({}_{\odot}\), and the stellar mass is \(\rm M_{*}\approx 1.4\times 10^{4}\) M\({}_{\odot}\). At this point, star formation is completely quenched due to cosmic reionization, causing the maximum gas density inside the primary halo to drop to \(n_{\rm H}\approx 10^{-4}\) cm\({}^{-3}\), and no further halo mergers occur. This was confirmed by running the Z55-F100 simulation down to \(z=0\).
In order to explore the process of stellar assembly in more detail, we have included a plot of the cumulative star formation in the right panel of Figure 2, which shows the star formation for Pop III (green)
Figure 2: The assembly histories of the simulated UFD analog are shown for six sets of simulations. The evolution of the masses for dark matter (black), gas (blue), and stars (red) is illustrated as a function of time in the left panel. The right panel displays the cumulative star formation histories for Pop III (green) and Pop II (yellow) stars, respectively. The formation of stars begins at approximately \(z\approx 12.9\) within minihaloes with masses of \(M_{\rm vir}\approx 10^{5-6}\,M_{\odot}\), which merged over time, eventually forming a galaxy with a mass of \(M_{\rm vir}\approx 3.8\times 10^{8}\,M_{\odot}\) by \(z=0\). We find that the star formation is halted by both cosmic reionization and SN feedback at \(z=6.4\), resulting in a star formation period lasting approximately 550 Myr. It is crucial to emphasize that the emergence of the UFD analog at \(z=0\) arises from the mergers of multiple progenitor haloes, indicating that the stars do not solely originate from the primary halo but also from other progenitor haloes. In the right panel, we show the history of stars originating from all progenitor haloes. In order to enhance comprehension regarding the distinction between stars formed in situ and those arising from external haloes, we explicitly differentiate these contributions in Figure 22.
and Pop II (yellow) stars separately. Pop III star formation takes place simultaneously in progenitor minihaloes of the simulated UFD analogs. For most simulations, Pop III star formation ends around \(z=8.5\). However, for the Z35-F50 simulation (solid line), which has a critical metallicity of \(Z_{\rm crit}=10^{-3.5}\)\(Z_{\odot}\) compared to the other simulations with \(Z_{\rm crit}=10^{-5.5}\)\(Z_{\odot}\), Pop III stars can be formed even in gas with higher metallicity. This causes the galaxy in the Z35-F50 simulation to continue forming Pop III stars until around \(z=7.5\), resulting in a higher stellar mass ratio occupied by Pop III stars by about 10% compared to the other simulations.
Our simulations show that the majority of minihaloes (\(\sim 83\)%) are likely to produce only a single Pop III star before transitioning to Pop II star formation. However, a small fraction of minihaloes is capable of producing multiple Pop III stars, with 11% having two and 3% having three. It is worth noting that high-resolution simulations typically reveal the formation of Pop III stars in small groups (e.g., Stacy et al., 2013, 2016; Susa, 2019; Liu et al., 2021; Jaura et al., 2022; Chiaki and Yoshida, 2022). Therefore, given that we allow Pop III stars to form as individual entities rather than as members of a stellar cluster composed of multiple stars, the feedback effect we present should be considered a lower limit. This is because the possibility of several sequential SN explosions might more effectively suppress subsequent star formation.
To investigate the role of merger events in the assembly of the simulated UFDs, we classify star formation for Pop III and Pop II stars into in-situ and ex-situ star formation (see Table 2). In-situ star formation refers to stars that are formed within the primary halo, which is defined as the halo contributing the most significant mass to the overall halo mass during the subsequent time step, as we trace back from \(z=0\) to higher redshifts in search of the progenitor halo. Meanwhile, ex-situ star formation refers to stars that are formed in other progenitor haloes and then merge with the primary halo at a later time. Across all simulations, the Z35-F50 run has the highest total in-situ star formation for all stars, accounting for 26.7%, while the Z55-F100 run has the lowest value at 15.4%. The average in-situ star formation across all simulations is 19.5%, and approximately 80% of the stars in the galaxies have grown through mergers. When comparing the in-situ ratios of Pop III and Pop II stars, that of Pop II is higher by 8%. This is because the duration of Pop III star formation is shorter due to the transition from Pop III to Pop II, and the halo mass becomes heavier when Pop II star formation begins, leading to a higher in-situ rate compared to that of Pop III stars.
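The in-situ/ex-situ bookkeeping can be made concrete with a small merger-tree walk: the main branch is followed by always stepping to the most massive progenitor, and a star counts as in-situ if its formation halo lies on that branch. The sketch below uses a hypothetical toy tree and star list purely for illustration; it mirrors the definition adopted here rather than the simulation's actual halo catalogues.

```python
# Hypothetical merger tree: halo id -> (mass in Msun, list of progenitor ids)
tree = {
    "halo_z0": (3.8e8, ["A", "B"]),
    "A":       (2.5e8, ["A1", "A2"]),
    "B":       (1.0e8, ["B1"]),
    "A1":      (1.8e8, []),
    "A2":      (0.5e8, []),
    "B1":      (0.8e8, []),
}

# Hypothetical star particles: star id -> halo in which the star formed
stars = {"s1": "A1", "s2": "A2", "s3": "B1", "s4": "A"}

def main_branch(tree, root):
    """Follow the most massive progenitor at every step (the 'primary halo')."""
    branch, halo = [root], root
    while tree[halo][1]:
        halo = max(tree[halo][1], key=lambda h: tree[h][0])
        branch.append(halo)
    return set(branch)

branch = main_branch(tree, "halo_z0")
in_situ = {s for s, h in stars.items() if h in branch}
print("main branch:", branch)
print("in-situ fraction: %.0f%%" % (100 * len(in_situ) / len(stars)))
```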
The formation and merging of stars in multiple distinct minihaloes in the simulated galaxy at \(z=8.3\) are depicted in Figure 3. Each panel shows the projected dark matter overdensity, hydrogen number density, gas temperature, and gas metallicity within 1.75 kpc (4R\({}_{\rm vir}\)) from the galaxy center, from left to right. Solid white circles at the center of each panel mark the virial radius of the halo. Different symbols represent different types of stars, with cyan stellar and yellow circle symbols representing Pop II stars with and without Pop III HN signatures, respectively. The figure illustrates that stars in the early Universe form in multiple progenitor minihaloes and then merge onto the primary halo at a later time. We find that the dispersal of gas observed in the gas density panel, particularly in relatively high-density clouds, is a consequence of an energetic HN explosion.
### Incidence of hypernova pattern
We examine the percentage of progenitor minihaloes containing Pop III stars that experience HN explosions and the number of minihaloes hosting the resulting Pop II stars that inherit the Pop III HN signatures within the same halo for each simulation. The corresponding numbers are summarized in Table 3. For example, in the Z35-F50 simulation, 12 different progenitor minihaloes have formed Pop III stars, denoted as \(N_{\rm mini,Pop\ III}\)=12. Of these, 7 minihaloes experience
\begin{table}
\begin{tabular}{c c c c} \hline Run & Total in-situ & Pop III in-situ & Pop II in-situ \\ \hline Z35-F50 & 26.7\% & 20.0\% & 30.0\% \\ Z55-F50 & 23.1\% & 14.3\% & 25.0\% \\ Z55-F100 & 15.4\% & 11.1\% & 16.7\% \\ Z55-Early & 15.6\% & 10.0\% & 17.1\% \\ Z55-Late & 18.6\% & 12.5\% & 20.0\% \\ Z55-Mass & 17.6\% & 11.1\% & 19.0\% \\ \hline \end{tabular}
\end{table}
Table 2: The percentage of stars formed through in-situ star formation for each simulation set. Column (1) indicates the run name, column (2) represents the total in-situ fraction, column (3) shows the in-situ fraction for Pop III stars, and column (4) indicates the in-situ fraction for Pop II stars.
Figure 3: The morphology of the simulated galaxy at \(z=8.3\) is illustrated in four panels from left to right, displaying the dark matter overdensity, hydrogen number density, gas temperature, and gas metallicity projected along the line of sight within 1.75 kpc (4R\({}_{\rm vir}\)) from the galaxy center. The virial radius of the halo is marked with solid white circles at the center of the panels. Pop II stars with and without Pop III HN signatures are represented by cyan and yellow colors, respectively. Moreover, in-situ stars and ex-situ stars are discerned through the use of star and circle symbols. These panels clearly demonstrate that stars are formed from multiple progenitor minihaloes and then merge into the primary halo at a later time.
Pop III HN explosions, yielding \(N_{\rm mini,HN}=7\). However, only 4 out of 7 progenitor minihaloes with HNe produce subsequent Pop II stars with the Pop III HN signature, noted as \(N_{\rm mini,HNsig}=4\). Similarly, in Z55-F50, Pop III HN events take place in 3 out of 8 progenitor minihaloes, and all of the subsequent Pop II stars with the Pop III HN signatures are found within these three minihaloes. On the other hand, if the frequency of Pop III HNe is reduced, e.g., to a single event, the number of minihaloes that could contain Pop II stars with HN signatures also decreases.
Figure 4 addresses the main question of the conditions that lead to the presence or absence of Pop II stars with Pop III HN signatures. It shows the number density of star-forming gas within the minihaloes where an HN event is triggered as a function of their virial masses. Cyan circles on the figure correspond to minihaloes where Pop II stars with Pop III HN signatures are present. On the other hand, the black symbols represent minihaloes where Pop III HNe occurred, but either no subsequent Pop II star formation took place (indicated by triangles) or the subsequent Pop II stars were already polluted by metals from other supernova events (represented by stellar symbols). The value of the number density, \(n_{\rm H}\), is taken immediately before the formation of a Pop II star. In the other cases, the \(n_{\rm H}\) value used is measured about \(\sim\)70 Myr after the Pop III event, which corresponds to the average time delay for the next star formation episode observed in the simulations.
Figure 4: The virial mass of progenitor minihaloes and the number density of star-forming gas in those haloes for each simulation run. The minihaloes that contain Pop II stars with the Pop III HNe signatures are marked as filled cyan circles. Meanwhile, black symbols indicate minihaloes in which Pop III HNe occurred, but no subsequent Pop II star formation (indicated by triangles) or subsequent Pop II stars are already contaminated by metals from other supernova events (represented by stellar symbols). The gas number density is determined immediately before the formation of Pop II stars, but if the gas is unavailable due to significant evacuation, the gas density at about 70 Myr after the Pop III event is used. This time corresponds to the average time delay for the next star formation observed in the simulations.
Figure 5: The evolution of two different haloes that experienced an HN explosion at similar virial masses. One of the minihaloes produces Pop II stars with the Pop III HN signature (solid line), while the other minihalo does not form any Pop II stars (dashed line). The color of each line represents the DM mass (black), the gas mass within the virial radius, \(R_{\rm vir}\) (blue), and the gas mass within \(2R_{\rm vir}\) (green). The density of star-forming gas in a rapidly growing dark matter halo (solid line) can be replenished by an influx of gas, while in a slowly growing dark matter halo (dashed line), it is difficult to recover gas that has evaporated into the IGM.
\begin{table}
\begin{tabular}{c c c c} \hline Run & \(N_{\rm mini,Pop\ III}\) & \(N_{\rm mini,HN}\) & \(N_{\rm mini,HNsig}\) \\ \hline Z35-F50 & 12 & 7 & 4 \\ Z55-F50 & 8 & 3 & 3 \\ Z55-F100 & 8 & 8 & 5 \\ Z55-Early & 8 & 1 & 1 \\ Z55-Late & 8 & 1 & 0 \\ Z55-Mass & 8 & 1 & 1 \\ \hline \end{tabular}
\end{table}
Table 3: The number of progenitor minihaloes that merge to form a single UFD analog in each simulation set. Column (1) shows the simulation name, column (2) indicates the number of progenitor minihaloes that form at least one Pop III star, column (3) shows the number of minihaloes with Pop III stars that explode as an HN, and column (4) indicates the number of minihaloes that contain Pop II stars with inherited HN signatures.
A total of 7 progenitor minihaloes, 3 from Z35-F50, 3 from Z55-F100, and 1 from Z55-Late runs, are found to lack Pop II stars with Pop III HN signatures. These 7 cases can be divided into two categories. The first category, which we call the "unrecoverable case," is characterized by a significant decrease in the number density of the star-forming region due to the strong feedback of HNe. As a result, the gas is unable to recover the threshold density of \(n_{\rm H}=100\) cm\({}^{-3}\), making further star formation impossible. We refer to the second category as the "contaminated case". To form Pop II stars with HN signatures, it is necessary that the gas is contaminated only by metals from HN events. However, in some scenarios, minihalo mergers occur before the formation of Pop II stars, causing the gas to become polluted by metals from stars in the merged minihaloes. Consequently, Pop II stars are formed out of gas with metals composed of those from Pop III HNe and other SN events. The results of the study show that in the cyan-colored progenitor minihaloes, the gas is able to recover from Pop III HN feedback and reach a gas density of \(n_{\rm H}>100\) cm\({}^{-3}\). However, in the "unrecoverable" cases, the gas density drops below \(n_{\rm H}=100\) cm\({}^{-3}\), with the Z55-F100 case being the most severely disrupted, where the density falls to \(n_{\rm H}=0.001\) cm\({}^{-3}\). In the "contaminated" cases, the lack of Pop II stars with HN signatures is caused not only by HN feedback but also by metal contamination resulting from mergers. Under these circumstances, densities of star-forming gas can be achieved up to \(n_{\rm H}=10-1000\) cm\({}^{-3}\).
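These two failure modes reduce to a simple decision rule over two quantities tracked per minihalo: whether the star-forming gas density recovers above the \(n_{\rm H}=100\) cm\({}^{-3}\) threshold after the HN, and whether that gas carries metals from any non-HN source. A minimal sketch of this classification, using hypothetical per-halo records, is given below.

```python
DENSITY_THRESHOLD = 100.0  # cm^-3, star-formation threshold quoted in the text

# Hypothetical minihalo records after a Pop III HN event
minihaloes = [
    {"name": "mh1", "n_H_peak": 450.0, "external_metals": False},  # recovers, clean
    {"name": "mh2", "n_H_peak": 0.001, "external_metals": False},  # unrecoverable
    {"name": "mh3", "n_H_peak": 300.0, "external_metals": True},   # contaminated
]

def classify(halo):
    if halo["n_H_peak"] < DENSITY_THRESHOLD:
        return "unrecoverable"          # gas never re-collapses to form Pop II stars
    if halo["external_metals"]:
        return "contaminated"           # Pop II forms, but the HN signature is diluted
    return "HN signature preserved"     # Pop II stars inherit the pure HN pattern

for halo in minihaloes:
    print(halo["name"], "->", classify(halo))
```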
Under what conditions can gas within a minihalo form Pop II stars or become unrecoverable? It is likely that the surrounding environment plays a role in the observed difference. A comparison of two minihaloes with similar mass but different environments is presented in Figure 5. One minihalo produces Pop II stars with the Pop III HN signature (solid line), while the other minihalo does not form Pop II stars (dashed line). The color of each line represents DM mass (black), gas mass within the virial radius (blue), and gas mass within two times the virial radius (green). When an HN explosion occurs with \(E_{\rm SN}=10^{52}\) erg, the halo mass of both minihaloes is comparable, but the result differs due to the subsequent dark matter growth rate. In a fast-growing dark matter halo (solid line), the density of the star-forming gas is restored due to an influx of gas, whereas in a barely growing dark matter halo (dashed line), the gas that has evaporated into the IGM is difficult to recover. This is supported by the amount of gas within 2 \(R_{\rm vir}\), which is about four times greater for the fast-growing minihalo than for the slow-growing halo around 100 Myr after the HN explosion event.
The question being asked is what fraction of Pop II stars produced in the simulated UFD galaxy analog, which has a total stellar mass of approximately \(10^{4}\,M_{\odot}\), contain Pop III HN signatures. To address this, we identify the number of Pop II stars with Pop III HN signatures among all Pop II stars within the virial radius of the simulated analog at \(z=3\). Figure 6 compares the fraction of Pop II stars with HN signatures, represented as \(f_{\rm PopII,HNsig}=N_{\rm PopII,HNsig}/N_{\rm PopII}\), in various runs. Except for Z35-F50 and Z55-Late, in general, \(f_{\rm PopII,HNsig}\) tends to decrease as the fraction of Pop III HNe decreases. Specifically, with the same critical metallicity of \(Z_{\rm crit}=10^{-5.5}\)\(Z_{\odot}\), the fraction is \(f_{\rm PopII,HNsig}=16\%\) for Z55-F100 and decreases to \(f_{\rm PopII,HNsig}=9.3\%\) for Z55-F50, eventually dropping to \(f_{\rm PopII,HNsig}=2.3\%\) for Z55-Mass.
Assuming that half of the Pop III stars explode as HNe, setting a higher critical metallicity of \(Z_{\rm crit}=10^{-3.5}\)\(Z_{\odot}\) in the Z35-F50 run results in a higher fraction of Pop II stars with HN signatures, specifically with a value of \(f_{\rm PopII,HNsig}=13\%\). This is higher than the \(f_{\rm PopII,HNsig}=9.3\%\) observed in the Z55-F50 run. The reason for this is that in the Z35-F50 run, there are more Pop III stars, leading to a higher frequency of HN events. In fact, HN events occur 2.6 times more frequently in the Z35-F50 run than in the Z55-F50 run. Out of the 6 sets considered, the runs that have only one HN event (Z55-Early, Z55-Late, Z55-Mass) all exhibit an \(f_{\rm PopII,HNsig}\) that is lower than 3%. It should be noted that the \(f_{\rm PopII,HNsig}\) values predicted above are likely to be lower in more massive satellite galaxies than in UFDs. This is because larger galaxies are expected to have a higher number of Pop II stars. Although the number of Pop III stars exploding as HNe may also increase with halo mass, the duration of Pop III star formation in the early universe is very short (confined to \(z>7.5\)), making Pop II stars the dominant population in more massive galaxies. As a result, the fraction of Pop II stars with HN signatures compared to the total Pop II stars is likely to decrease as the halo mass increases.
### Chemical abundances
Figure 7 shows a comparison between the metallicity distribution of all Pop II stars (yellow) and Pop II stars with HN signatures (cyan) found within the virial radius of the UFD analog at \(z=3\) for each run. In all runs except Z35-F50, where the formation of Pop II stars with metallicity below \(Z_{\rm crit}=10^{-3.5}\)\(Z_{\odot}\) is not possible, the metallicity of the Pop II stars ranges from [Fe/H] \(\approx-5.5\) to [Fe/H] \(\approx-1.5\) (yellow). The vertical line in Figure 7 represents the stellar metallicity of AS0039, which is measured to be \(\rm[Fe/H]=-4.11\), corresponding to the ultra metal-poor (UMP) stars (\(\rm[Fe/H]<-4\)) (e.g., Beers & Christlieb 2005). When all runs are combined, the fraction of UMP stars is around 18% out of a total of 204 Pop II stars, with 37 of these stars being UMP stars. By comparison, the fraction of UMP stars among only the Pop II stars with HN signatures (cyan) is \(\sim\)7%, which is based on a total of 14 Pop II stars with HN signatures, among which only 1 star is a UMP star. The Pop II star with the HN signature that has the most similar metallicity to AS0039 is the one with [Fe/H]=-4.17 in the Z55-F50 run. Most of the other Pop II stars with HN signatures have metallicities in the range of -3.4 to -2.4 for their [Fe/H] values.
The run Z55-Early is the simplest and most suitable run to investigate the conditions for obtaining a metallicity as low as that observed in AS0039, in that only the first Pop III star in the first minihalo explodes as an HN. It still, however, leads to the formation of a Pop II star with an HN signature, albeit with a metallicity, \(\rm[Fe/H]\approx-2.6\), more than one order of magnitude higher than that of AS0039.
Figure 6: The fraction of Pop II stars that exhibit HN signatures among all Pop II stars in the simulated analog, denoted as \(f_{\rm PopII,HNsig}\). As the fraction of Pop III HNe decreases, there is generally a tendency for \(f_{\rm PopII,HNsig}\) to show a declining pattern.
Comparably, when a Pop III HN explodes in a minihalo that collapses relatively late, as in the runs Z55-Late or Z55-Mass, even if it is a single event, the unique HN signatures can easily be diluted by metals from other SN events (e.g. Ji et al., 2015). As a result, none of the Pop II stars with an HN signature can be formed in the Z55-Late run owing to external pollution. However, if subsequent star formation occurs before the gas is contaminated, the unique signature remains, forming Pop II stars with it, as in the Z55-Mass run.
According to our findings, the metallicity of Pop II stars is related
Figure 8: The metallicity of Pop II stars, which formed right after the Pop III star formation, is dependent on the delay time. The delay time is defined as the time gap between the Pop III SN explosion and the formation of subsequent Pop II stars. Note that the absolute amount of iron produced by a normal Pop III SN is lower than that of an HN explosion from a Pop III star with \(m_{\rm PopIII}=21\,M_{\odot}\) (Heger & Woosley, 2010), leading to lower overall [Fe/H] values. Pop II stars that formed after a Pop III HN explosion (cyan color) typically exhibit a longer delay time of up to about 180 Myr owing to the high-energy explosion, compared to those created after a normal SN (yellow). Metallicity tends to decrease with longer delay times, primarily due to increased diffusion, unless there is external enrichment.
Figure 7: The metallicity distribution of Pop II stars (yellow) found in the simulated UFD analog for each simulation run, compared with Pop II stars exhibiting the HN signatures (cyan). A vertical line is drawn to represent the metallicity of AS0039, which is \(\rm[Fe/H]=-4.11\). With the exception of Z35-F50, in which Pop II stars with a metallicity lower than \(10^{-3.5}\,Z_{\odot}\) cannot form by construction, most simulations result in approximately 18% of UMP stars, defined as \(\rm[Fe/H]<-4\).
Figure 9: During the delay time period, the gas evolution within haloes plays a critical role in determining the metallicity of subsequently forming stars. In the case of the halo where UMP Pop II stars are found (black line), there is a rapid gas inflow with a peak value of \(\dot{M}_{\rm gas}\sim 0.08\,\,M_{\odot}yr^{-1}\), which is significantly higher than the averaged gas inflow rate of \(\sim 0.0025\,\,M_{\odot}yr^{-1}\) for other haloes (cyan color). The significant increase in the infall rate observed in this case can be due to a merger event between the halo where the UMP star is formed and a main progenitor halo, taking place \(\sim\)25 Myr before the formation of the UMP star. This rapid influx of gas plays a crucial role in diluting the metals produced by Pop III HN, thereby facilitating the formation of UMP stars.
to the delay time between the SN explosion and subsequent Pop II star formation. Figure 8 illustrates this correlation by plotting stellar metallicity against the delay time, which is the interval between the occurrence of a Pop III SN explosion and the formation of Pop II stars from the gas contaminated by the SN event. In general, Pop II stars with HN signatures (cyan color) tend to have higher [Fe/H] values than those formed after a normal Pop III SN (yellow). This is due to the fact that the absolute yield of iron from a normal Pop III SN with an energy of \(10^{51}\) erg is lower than that of an HN explosion from a progenitor mass of \(m_{\rm PopIII}=21\,M_{\odot}\)(Heger & Woosley, 2010). Additionally, the high energy associated with HN explosions leads to longer delay times, up to \(\sim\)180 Myr, for Pop II stars with HN signatures. Our simulation results clearly show that the metallicity of Pop II stars with HN signatures decreases with longer delay times. For instance, Pop II stars with a metallicity of \(\rm[Fe/H]\approx-2.5\) require a delay time of about \(\sim\)40 Myr, while the formation of EMP stars needs a delay time of about \(\sim\)100 Myr. This is because the metals produced by the HN event would have been diluted during the long delay time period, leading to a decrease in the gas metallicity where the Pop II stars are formed. This effect becomes more pronounced as the delay time increases, and without any external metal pollution during this period, the subsequent Pop II stars will form with low metallicities.
Although Pop II stars with HN signatures exhibit [Fe/H] values above \(\rm[Fe/H]\approx-3.3\), we have only generated one UMP star with \(\rm[Fe/H]\approx-4.17\), similar to that of AS0039. To investigate the conditions that lead to the formation of such UMP stars, we analyze the gas evolution within the halo in which Pop II stars with HN signatures reside during the delay time period, as shown in Figure 9. In contrast to other haloes (cyan color), the halo in which the UMP star forms (black line) experiences a rapid gas inflow of about \(M_{\rm gas}\approx 5\times 10^{5}\,M_{\odot}\) within \(\sim\)7 Myr, about 22 Myr after the HN explosion, increasing the gas mass from \(M_{\rm gas}\approx 2.8\times 10^{5}\,M_{\odot}\) to \(M_{\rm gas}\approx 8.8\times 10^{5}\,M_{\odot}\). The resulting gas accretion rate is \(\dot{M}_{\rm gas}\sim 0.08\,M_{\odot}\,{\rm yr}^{-1}\), which is 4 times higher than the average gas accretion rate onto the halo. This peak value is higher by a factor of 30 compared to the average gas inflow rate of \(\dot{M}_{\rm gas}\sim 0.0025\,M_{\odot}\,{\rm yr}^{-1}\) for other haloes. The observed high infall rate, in this case, can be attributed to a merger event occurring \(\sim\)25 Myr prior to the formation of the UMP star. This merger involves the combination of a halo hosting a UMP star similar to AS0039 with a main progenitor halo, resulting in the influx of \(M_{\rm gas}\sim 5\times 10^{5}\,M_{\odot}\) of gas. The rapid gas influx into the halo where the UMP star forms plays a crucial role in diluting the metals produced by the Pop III HN, ultimately allowing the formation of UMP stars. Consequently, our findings suggest that low-metallicity stars can form via two possible mechanisms: a prolonged delay time or a substantial inflow of gas onto a halo, which enables the diffusion of metals.
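The dilution argument can be checked with a back-of-the-envelope abundance estimate: mixing a fixed iron yield into progressively more metal-free gas lowers [Fe/H] logarithmically. The sketch below assumes a hydrogen mass fraction of 0.75, a solar iron abundance of A(Fe) = 7.50, and an illustrative HN iron yield of 0.1 \(M_{\odot}\); these values are assumptions made for the example, not quantities taken from the simulations, so the output demonstrates only the scaling with gas mass.

```python
import numpy as np

A_FE_SUN = 7.50   # solar iron abundance, log10(n_Fe/n_H) + 12 (assumed value)
X_H      = 0.75   # hydrogen mass fraction of near-primordial gas (assumed value)
M_FE_HN  = 0.1    # illustrative iron yield of a single HN, in Msun (assumed value)

def fe_h(m_gas_msun, m_fe_msun=M_FE_HN):
    """[Fe/H] of gas of total mass m_gas_msun uniformly mixed with m_fe_msun of iron."""
    n_fe_over_n_h = (m_fe_msun / 56.0) / (X_H * m_gas_msun / 1.0)  # number ratio
    return np.log10(n_fe_over_n_h) - (A_FE_SUN - 12.0)

# Mixing the same yield into more gas (e.g. after a rapid inflow) lowers [Fe/H]
for m_gas in (2.8e5, 8.8e5):
    print(f"M_gas = {m_gas:.1e} Msun  ->  [Fe/H] = {fe_h(m_gas):+.2f}")
```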
It should be mentioned that Jeon et al. (2014) suggested a longer recovery timescale of up to \(\sim\)300 Myr when using the energy of \(E_{\rm SN}=10^{52}\) erg for a Pop III SN. Such difference in timescale compared to our work can be attributed to two factors. Firstly, Jeon et al. (2014) examined halo masses of \(M_{\rm vir}\approx 5\times 10^{5}\,M_{\odot}\), which are \(2-10\) times smaller than the masses used in our study to estimate the recovery time. Secondly, we do not account for the pre-explosion photoionization by the Pop III progenitors, which may have caused gas densities to remain high prior to the SN explosion, resulting in a shorter recovery timescale in our simulations. Even when considering the photoionization effect, however, recovery time can vary considerably depending on the halo mass, as illustrated by Jeon et al. (2014). For example, the recovery time for a \(40\,M_{\odot}\) Pop III progenitor star that explodes with \(E_{\rm SN}=10^{51}\) erg was \(\sim\)92 Myr for a halo mass of \(M_{\rm vir}=5\times 10^{5}\,M_{\odot}\), but dramatically dropped to \(\sim\)6 Myr for a halo mass of \(M_{\rm vir}=9\times 10^{5}\,M_{\odot}\). This is due to the H II region becoming more compact, which weakens the pre-heating effect and allows gas to rapidly recollapse in relatively massive haloes.
In Figure 10, we compare the individual abundances of Pop II stars with HN signatures (cyan star) and normal Pop II stars (yellow star) from our simulations to observations of stars, such as AS0039 (red pentagon, Skúladóttir et al., 2021), stars in the Sculptor galaxy (magenta circle, Frebel et al., 2010; Simon et al., 2010; Tafelmeyer et al., 2010; Starkenburg et al., 2013; Jablonka et al., 2015), and stars in the MW (grey circle, Cayrel et al., 2004). The focus of the comparison is on the abundances of carbon, magnesium, and silicon, which are displayed in three panels in Figure 10. Our study reveals that the individual abundances of Pop II stars with HN signatures, depicted
Figure 10: Comparison of individual metal abundances between our simulation of Z55-F50 (stellar symbols) and observations. The observations include AS0039 (red pentagon, Skúladóttir et al., 2021), stars in the Sculptor galaxy (magenta circle, Frebel et al., 2010; Simon et al., 2010; Tafelmeyer et al., 2010; Starkenburg et al., 2013; Jablonka et al., 2015), and stars in the MW (grey circle, Cayrel et al., 2004). Pop II stars with HN signatures (cyan) show [X/Fe] values that are lower than those of normal Pop II stars (yellow) by \(\sim\)1 dex for carbon and by 0.5-0.7 dex for silicon and magnesium. The abundances of Pop II stars with HN signatures are in good agreement with those of AS0039. Our simulations demonstrate that the difference in carbon abundance is due to the type of SN explosion. Specifically, Pop II stars that form out of gas contaminated by a normal Pop III SN with low iron abundance tend to give rise to CEMP stars with \(\rm[C/Fe]\approx 2\). In contrast, the low carbon value of \(\rm[C/Fe]=-0.5\), which matches the value observed in AS0039, is attributed to a Pop III HN explosion. Moreover, we find that Pop II stars with \(\rm[C/Fe]\approx 0.6\) are more affected by Pop II SNe than Pop III SNe.
as a cyan star, are lower than those of normal Pop II stars (yellow star), ranging from -1 dex for Carbon to -0.5 dex for Magnesium and Silicon. The estimated abundances of Pop II stars with HN signatures are consistent with the best-fit values of AS0039, given the uncertainties in the observations. Note that the abundances of AS0039 are measured values, and there could be differences between the observed and the nucleosynthesis results, even with the best-fit model.
It is worth noting that AS0039 has a peculiar feature in its carbon abundance; unlike CEMP stars commonly found in dwarf galaxies, which are also considered signatures of Pop III stars, AS0039 is a carbon-poor star with \([\mathrm{C}/\mathrm{Fe}]_{\mathrm{LTE}}=-0.75\pm 0.22\). Our simulations clearly exhibit that this difference in carbon abundance is attributed to the type of Pop III SN. Specifically, Pop II stars with an exceptionally high [C/Fe] ratio of \(\sim\)2 tend to form when Pop II stars arise immediately after a normal SN explosion with low iron abundance, leading to the formation of CEMP stars. In contrast, the low value of \([\mathrm{C}/\mathrm{Fe}]\approx-0.46\), which matches the observed value of AS0039, is the result of the HN explosion (cyan). Furthermore, Pop II stars with \([\mathrm{C}/\mathrm{Fe}]\approx 0.6\), which form in gas clouds that are more influenced by Pop II than Pop III stars, do not exhibit CEMP features. However, these stars still have distinct features that differ from those with HN signatures, as shown in AS0039.
We observe a narrow scatter in the stellar abundances of normal Pop II stars, which show [X/Fe] values similar to those of stars in Sculptor and the MW. This similarity is likely the result of the short star formation duration of \(\sim\)0.5 Gyr in the simulated UFD analog, while Sculptor has had a star formation duration of \(\Delta t\approx\) 1-3 Gyr, as reported in previous studies (Kirby et al., 2011; Weisz et al., 2014; Bettinelli et al., 2019; de los Reyes et al., 2022), and the MW is undergoing ongoing star formation (Elia et al., 2022).
## 4 Observability
So far, we have discussed the simulation results for a single dwarf galaxy with a halo mass of \(M_{\mathrm{vir}}\approx 10^{8}\,M_{\odot}\) at \(z=0\), by varying the fraction of Pop III stars that undergo HN explosions. However, the important question is to determine the likelihood of observing a Pop III HN signature in a randomly observed dwarf satellite galaxy within the MW's volume and which galaxy masses are most likely to exhibit such signatures. To address these questions, we employ a semi-analytic approach to estimate the number of progenitor minihaloes that are expected to contain Pop III HN remnants within the MW's volume.
### A-SLOTH: semi-analytic model for galaxy formation
We utilize a semi-analytic galaxy formation code called A-SLOTH (Ancient Stars and Local Observables by Tracing haloes, Hartwig et al., 2023; Magg et al., 2022) to track the remnants of early generations of stars down to \(z=0\)(e.g., Magg et al., 2018; Hartwig and Yoshida, 2019; Chen et al., 2022). This approach involves using the Extended Press-Schechter (EPS) formalism (e.g., Press and Schechter, 1974; Lacey and Cole, 1993) to generate halo merger trees for an MW-like galaxy or obtaining them from N-body simulations. Particularly, A-SLOTH adopts the halo merger tree extracted from the Caterpillar project (Griffen et al., 2016), which provides 30 sets of merger trees for MW-sized galaxies using dark matter-only simulations. By traversing through these merger trees, A-SLOTH is able to ascertain the baryonic components of each halo in the tree based on the implemented physics within the code, starting from the first star formation in minihaloes.
One of the key benefits of using a semi-analytic approach is its computational efficiency, which allows for rapid exploration of the optimal parameters that describe baryonic physics. Further information on the specific baryonic physics incorporated in A-SLOTH can be found in Hartwig et al. (2023). Compared to other semi-analytic models, A-SLOTH stands out by including the relevant physics of early star formation, allowing for the formation of individual Pop III and massive Pop II stars, and taking into account the impact of their mechanical and chemical feedback.
The models are calibrated based on six distinct observables to determine the best-fit values of the free parameters that govern baryonic physics. These include the optical depth to Thomson scattering, the stellar mass of the MW, the cosmic star formation rates at high redshift, the distribution of stellar masses among the MW's satellite galaxies, the fraction of extremely metal-poor stars (EMP) (i.e., [Fe/H]\(<\)-3) in the halo, and the ratios of UMP (i.e., [Fe/H]\(<\)-4) to EMP stars. For this study, we adopt not only the proposed values of the free parameters by Hartwig et al. (2023) but also the values by Chen et al. (2022), where they acquire the best-fit values by focusing on the relationship between the stellar mass and halo mass and adjusting the Pop II star formation models to match observed values. Table 4 summarizes the parameters for A-SLOTH that we use in this work.
In Figure 11, we show the cumulative number of satellite galaxies resulting from 30 sets of merger trees of the MW-like galaxy using A-SLOTH, indicated by the black lines, compared to the observational data represented by the red line (McConnachie, 2012; Munoz et al., 2018). It is important to note that the observations of galaxies
\begin{table}
\begin{tabular}{c c c c} \hline \hline Parameter & Description & Best Fit & reference \\ \hline \(v_{\mathrm{SV}}\) & Baryonic streaming velocity & \(0.8\sigma_{\mathrm{SV}}\) & (1) \\ \hline \(M_{\mathrm{min}}\) & Minimum mass of Pop III stars & \(5\,M_{\odot}\) & (1) \\ \hline \(M_{\mathrm{max}}\) & Maximum mass of Pop III stars & \(210\,M_{\odot}\) & (1) \\ \hline \(\eta_{\mathrm{III}}\) & Star formation efficiency for Pop III stars & 0.38 & (1) \\ \hline \(\eta_{\mathrm{II}}\) & Star formation efficiency for Pop II stars & 2 & (2) \\ \hline \(\alpha_{\mathrm{out}}\) & Slope of outflow efficiency & 0.72 & (2) \\ \hline \(M_{\mathrm{out,norm}}\) & Normalization mass of outflow efficiency & \(10^{10.5}\,M_{\odot}\) & (2) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Best-fit parameters for A-SLOTH that we adopt for this work. Column (1): Parameter. Column (2): A short description of the parameter. Column (3): Value. Column (4): Reference for the adopted value: (1) Hartwig et al. (2023), and (2) Chen et al. (2022).
with stellar masses below \(M_{*}\approx 2\times 10^{5}\,M_{\odot}\), which are represented by the blue-shaded region, are still incomplete (e.g., Carlsten et al., 2021). The mean stellar mass of the MW-like galaxy, calculated by averaging over 30 sets of A-SLOTH simulations, is approximately \(M_{*}\approx 5.3\times 10^{10}\,M_{\odot}\), which is consistent with the observed stellar mass of the MW galaxy of \(4.86-6\times 10^{10}\,M_{\odot}\) (e.g., McMillan, 2017). The A-SLOTH simulation, however, is likely to predict a large number of low-mass satellite galaxies (\(M_{*}\lesssim 2\times 10^{4}\,M_{\odot}\)) that exceeds what is actually observed by a factor of 2-10. This discrepancy, commonly known as the "missing satellite problem" (e.g., Moore et al., 1999; Klypin et al., 1999; Bullock and Boylan-Kolchin, 2017), may indicate that there are numerous low-mass galaxies that remain undiscovered due to observational limitations or that the stellar feedback implemented in the simulation for small galaxies is not strong enough to suppress star formation. The following sections discuss the likelihood of detecting Pop III remnants with HN signatures in two cases. One case will account for the difference between observations and A-SLOTH simulations, while the other will not consider this discrepancy.
### The effect of Pop III hypernovae
We have employed A-SLOTH to examine the prevalence of Pop III HN explosion signatures in the MW dwarf satellites. We have experimented with the fraction of Pop III stars that die as HNe, defined as \(f_{\rm HN}\), in five different scenarios: 1) all Pop III stars end up as HNe, \(f_{\rm HN}\)=100%, as in our Z55-F100; 2) 50% of Pop III stars die as HNe, comparable to our Z35-F50 and Z55-F50; 3) a small proportion of Pop III stars (either 5% or 1%) expire as HNe, which we compare to Z55-Early and Z55-Late; 4) Pop III stars having stellar masses from 25 \(M_{\odot}\) to 40 \(M_{\odot}\) end as HNe, in line with Z55-Mass. Note that a Pop III HN event is triggered with an explosion energy of \(10^{52}\) erg. During each simulation, we identify progenitor minihaloes that are considered to preserve Pop III HN signatures within the satellite galaxies of the MW-like galaxy.
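A minimal Monte Carlo version of this bookkeeping is sketched below: every Pop III star formed in a minihalo is flagged as an HN with probability \(f_{\rm HN}\), and minihaloes with at least one flagged star are counted. The star counts per minihalo are hypothetical stand-ins (loosely following the single/multiple split quoted in Section 3.1), not the actual A-SLOTH implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical numbers of Pop III stars per progenitor minihalo; probabilities
# loosely follow the single/multiple split of Section 3.1, adjusted to sum to 1.
n_popIII = rng.choice([1, 2, 3], size=1000, p=[0.86, 0.11, 0.03])

def count_hn_minihaloes(n_popIII_per_minihalo, f_hn):
    """Minihaloes with at least one Pop III star exploding as an HN,
    each star doing so independently with probability f_hn."""
    hits = 0
    for n_stars in n_popIII_per_minihalo:
        if (rng.random(n_stars) < f_hn).any():
            hits += 1
    return hits

for f_hn in (1.0, 0.5, 0.15, 0.01):
    n_hn = count_hn_minihaloes(n_popIII, f_hn)
    print(f"f_HN = {f_hn:4.0%}: {n_hn} of {len(n_popIII)} minihaloes host an HN")
```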
In Figure 12, we show the number of progenitor minihaloes with Pop III HN signatures in the satellite galaxies of the MW-like halo as a function of satellite mass for two scenarios: the first assumes that all Pop III stars explode as HNe, i.e., \(f_{\rm HN}=\)100% (top panels), while the second scenario assumes that Pop III HNe are triggered only within a mass range of 25 \(M_{\odot}\) to 40 \(M_{\odot}\), giving rise to \(f_{\rm HN}=15\%\) (bottom panels). When the results of all 30 runs are considered, the average number of minihaloes with HN signatures, denoted as \(<\)\(N_{\rm mini,HNsig}\)\(>\), is found to be \(<\)\(N_{\rm mini,HNsig}\)\(>=583\) and \(<\)\(N_{\rm mini,HNsig}\)\(>=75\) for the cases with \(f_{\rm HN}=100\%\) and \(f_{\rm HN}=15\%\), respectively. In the runs where \(f_{\rm HN}=1\%\), the average value is notably lower, \(<\)\(N_{\rm mini,HNsig}\)\(>=5\), while the highest and lowest values are \(N_{\rm mini,HNsig}=11\) and \(N_{\rm mini,HNsig}=1\), respectively.
The number of progenitor minihaloes with a Pop III HN signature within a satellite galaxy increases as the galaxy mass increases, as illustrated by the linear fit represented by the dotted line in Figure 12. For example, in the scenario with \(f_{\rm HN}=100\%\), dwarf galaxies with a stellar mass of \(M_{*}\gtrsim 10^{9}\,M_{\odot}\) - similar to the Large Magellanic Cloud (LMC), which has a stellar mass of \(M_{*}\approx 10^{9.1}\,M_{\odot}\) - would contain \(N_{\rm mini,HNsig}\approx 110\) progenitor minihaloes. However, this number decreases to \(N_{\rm mini,HNsig}\approx 8\) for galaxies with a mass similar to that of the Sculptor dwarf galaxy, \(M_{*}\approx 10^{6.2}\,M_{\odot}\) (e.g., Munoz et al., 2018). In UFD galaxies with stellar masses of \(M_{*}\lesssim 10^{5}\,M_{\odot}\), the number of progenitor minihaloes becomes very low, with \(N_{\rm mini,HNsig}\lesssim 3\). The tendency of more massive galaxies to contain a greater number of HN signatures remains consistent even when the frequency of HNe, \(f_{\rm HN}\), is decreased. However, the total number of minihaloes is reduced in scenarios where \(f_{\rm HN}=15\%\). For instance, for dwarf galaxies with stellar masses of \(M_{*}\gtrsim 10^{9}\,M_{\odot}\), \(M_{*}\approx 10^{6}\,M_{\odot}\), and \(M_{*}\lesssim 10^{5}\,M_{\odot}\), the corresponding number of minihaloes, \(N_{\rm mini,HNsig}\), becomes 11, 2, and less than one, respectively.
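The mass trend quoted above (the dotted line in Figure 12) amounts to a straight-line fit in log-log space. The sketch below performs such a fit with numpy.polyfit on a handful of hypothetical points that merely echo the magnitudes quoted for the \(f_{\rm HN}=100\%\) case; the actual fit in Figure 12 uses the full A-SLOTH satellite population.

```python
import numpy as np

# Hypothetical (M_*, N_mini,HNsig) pairs echoing the magnitudes quoted in the
# text for f_HN = 100%; these are illustrative points, not the A-SLOTH data.
m_star  = np.array([1.0e4, 1.0e5, 10**6.2, 10**9.1])   # stellar mass in Msun
n_hnsig = np.array([2.0, 3.0, 8.0, 110.0])              # minihaloes with HN signatures

slope, intercept = np.polyfit(np.log10(m_star), np.log10(n_hnsig), deg=1)
print(f"log10 N = {slope:.2f} * log10 M* + {intercept:.2f}")

# Predicted counts for a Sculptor-like and an LMC-like satellite
for m in (10**6.2, 10**9.1):
    n_pred = 10**(slope * np.log10(m) + intercept)
    print(f"M* = 10^{np.log10(m):.1f} Msun -> N ~ {n_pred:.0f}")
```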
### The likelihood of finding HN signatures
The question then is identifying the satellite galaxy mass at which we are most likely to discover stars with Pop III HN signatures while searching for a galaxy similar to the MW. To address this question, Figure 13 displays the fraction of progenitor minihaloes featuring Pop III HN signatures within the satellite galaxies of the MW-like halo with respect to the halo peak and stellar mass of satellite analogs at \(z=0\). The resulting distribution is based on a series of 30 A-SLOTH runs for each scenario, with each run assuming a different value of \(f_{\rm HN}\). As a reference, we show the result of the run with \(f_{\rm HN}=100\%\) at the upper panels and compare it with other runs where \(f_{\rm HN}\) decreases from 50% to 1% (bottom panels). Given the prevalence of progenitor minihaloes bearing HN signatures, it becomes apparent that the likelihood of detecting these minihaloes increases as the value of \(f_{\rm HN}\) rises.
However, while keeping \(f_{\rm HN}\) constant, the likelihood of discovering minihaloes with Pop III HN signatures is highest in galaxies with low mass, specifically those with halo peak and stellar mass within the range of \(M_{\rm peak}=10^{8}-10^{9}\,M_{\odot}\) and \(M_{*}=10^{3.5}-10^{4.5}\,M_{\odot}\), respectively. For the simulation with \(f_{\rm HN}=100\%\), these low-mass galaxies show a fraction of 40% for containing such minihaloes. As shown in the bottom panels of Figure 13, this trend is also valid for other runs with different \(f_{\rm HN}\). For instance, if about 1% of Pop III stars undergo HNe, then an MW-like galaxy is predicted to contain a total of 5 minihaloes with the HN characteristics, which are still more likely to be found in small, low-mass satellite dwarfs. It should be, however, emphasized that this trend described above, i.e., the highest probability of finding minihaloes with Pop III HN signatures in low-mass galaxies, could be simply due to the fact that the A-SLOTH model predicts a higher number of low-mass satellites than is observed, as depicted in Figure 11. In other words, as demonstrated in Figure 12, a single large galaxy could possess a greater number of minihaloes with Pop III HN signatures than low-mass galaxies, but the total count of low-mass galaxies within the MW volume is higher than that of massive galaxies.
In order to take into account the discrepancy between the observations and the A-SLOTH runs, in Figure 14, we adjust the fraction of minihaloes with HN signatures by incorporating this difference. This is achieved by reducing the number of satellite galaxies in the low-mass galaxy regime to approximate the observed number while increasing the number in the high-mass range. Consequently, the distribution of minihaloes with HN signatures exhibits a peak among larger satellite galaxies. The highest fraction values, approximately around 40%, are found in the region of halo peak masses \(M_{\rm peak}\approx 10^{10}-10^{11}\,M_{\odot}\) and stellar masses \(M_{*}\approx 10^{7}-10^{8}\,M_{\odot}\), respectively. The next highest fraction is approximately 18%, corresponding to galaxies with halo peak masses \(M_{\rm peak}\approx 10^{9}-10^{9.5}\,M_{\odot}\) and stellar masses \(M_{*}\approx 10^{5}-10^{5.5}\,M_{\odot}\), respectively. Also, we should mention that the total number of minihaloes decreases by a factor of \(\sim 2\) due to the incorporation of the inconsistency between observations and the A-SLOTH simulations. This is because A-SLOTH is inclined to predict a higher abundance of low-mass satellite galaxies, which are incomplete in observational data.
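This adjustment can be summarised as re-weighting each stellar-mass bin by the ratio of observed to A-SLOTH-predicted satellite counts before recomputing the fraction of minihaloes with HN signatures per bin. The sketch below applies such a re-weighting to hypothetical binned counts; the bin values are placeholders chosen only to show how the peak of the distribution shifts towards more massive bins, not the numbers behind Figure 14.

```python
import numpy as np

# Hypothetical per-bin quantities (placeholders, not the Figure 14 inputs):
# predicted and observed satellite counts, and minihaloes with HN signatures
# per predicted satellite, in four stellar-mass bins ordered low -> high mass.
n_pred = np.array([400.0, 120.0, 30.0, 6.0])    # A-SLOTH satellites per bin
n_obs  = np.array([ 60.0,  60.0, 25.0, 8.0])    # observed satellites per bin
hn_per_satellite = np.array([0.4, 0.2, 0.5, 2.0])

weight      = n_obs / n_pred                    # down-weight over-predicted bins
hn_raw      = hn_per_satellite * n_pred         # minihaloes before adjustment
hn_adjusted = hn_raw * weight                   # minihaloes after adjustment

for i, (fr, fa) in enumerate(zip(hn_raw / hn_raw.sum(),
                                 hn_adjusted / hn_adjusted.sum())):
    print(f"mass bin {i}: raw fraction {fr:.2f} -> adjusted fraction {fa:.2f}")
```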
Figure 13: The fraction of progenitor minihaloes experiencing a Pop III HN explosion within the satellite galaxies of the MW-like halo, while altering the value of \(f_{\rm HN}\), with respect to the halo peak and stellar masses for the satellites at \(z=0\). The top panels consider a scenario where all Pop III stars undergo HNe, while the bottom panels examine cases where the fraction of Pop III HNe, \(f_{\rm HN}\), varies from 1% to 50%. Considering the abundances of progenitor minihaloes featuring HN signatures, it is evident that the probability of identifying these minihaloes increases as \(f_{\rm HN}\) rises. Meanwhile, when \(f_{\rm HN}\) is held constant, the probability of observing progenitor minihaloes with HN signatures is higher in smaller haloes compared to larger ones. It means that if an observation is conducted randomly, the chance of discovering a progenitor minihalo is greater in low-mass satellites than in massive ones. This trend is consistent across all scenarios with varying \(f_{\rm HN}\). This is due to the fact that A-SLOTH is likely to have a higher number of low-mass satellites than massive galaxies.
In summary, according to the results of the A-SLOTH simulation, the most probable dwarf galaxies to contain signatures of HNe associated with Pop III stars have a stellar mass of \(M_{*}=10^{3.5}-10^{4.5}\,M_{\odot}\), which is similar to the UFD galaxies like Bootes I, Hercules, CVn II, UMa I, Leo IV, Hydra II, Columba I, Leo V, ComBer, Indus II, UMa II, and Pisces II. On the other hand, when accounting for the discrepancy between observations and the A-SLOTH runs, galaxies with \(M_{*}\approx 10^{7}-10^{8}\,M_{\odot}\), similar to Sagittarius and Fornax, or with \(M_{*}\approx 10^{5}-10^{5.5}\,M_{\odot}\), corresponding to CVn I, Sextans, Draco, and Crater II, are the most promising galaxies in which to search for fossil records of the HN signatures of Pop III stars.
We should emphasize that the only evidence of an HN signature associated with Pop III stars in the MW satellite galaxies is AS0039 in the Sculptor galaxy. Although A-SLOTH simulations suggest that there may be many low-mass dwarfs with HN signatures, the lack of observation in the UFD regime may be due to incomplete observations owing to their dimness. Alternatively, if assuming that the missing satellite problem is solved (e.g., Wetzel et al., 2015; Read and Erkal, 2019; Engler et al., 2021) and we use the results that account for the discrepancy between observations and A-SLOTH data, HN signatures are expected to be found in massive satellite galaxies. The discovery of only one HN signature related to Pop III stars in Sculptor implies that the likelihood of Pop III stars ending as HN is probably lower than 1%, according to the A-SLOTH simulation, which predicts only two minihaloes with HN signatures assuming \(f_{\rm HN}=1\%\). Furthermore, this number of two minihaloes can be considered an upper limit, as our hydrodynamic simulations indicate that only 80% of Pop II stars inherit HN signatures when a Pop III HN event occurs. In some cases, the gas may be evaporated or polluted following a Pop III HN, preventing further star formation or the preservation of HN signatures in subsequent Pop II stars.
It is important to mention that A-SLOTH lacks information regarding the metallicity of stars. As stated in Section 4.2, if a Pop III star experiences an HN explosion within a minihalo, it is assumed that subsequent Pop II stars born immediately after the explosion might exhibit an HN signature on their surfaces. We plan to delve deeper into this aspect in our forthcoming research. While it is challenging to quantify how frequently AS0039-like stars, i.e., those displaying HN signatures while maintaining low metallicity, arise among Pop II stars with HN signatures, our hydrodynamic simulations have revealed that generating stars akin to AS0039 requires specific conditions, such as the rapid infall of pristine gas. Such conditions make it difficult to produce these stars. When considering the combined results of both A-SLOTH and our hydrodynamic simulations, our findings suggest that the fraction of HNe in line with observations is likely \(f_{\rm HN}\lesssim 1\%\).
## 5 Summary and Conclusion
This study aims to explore the origin of metal-poor stars observed in nearby dwarf galaxies, which could harbor remnants of the earliest generation of stars. Our focus is particularly on AS0039, a star in the Sculptor galaxy, exhibiting distinct features that suggest its association with hypernova (HN) explosions from Pop III stars. To investigate the conditions under which such stars can form, we have performed six sets of cosmological zoom-in simulations on ultra-faint dwarf (UFD) analogs with a mass of \(M_{\rm vir}\approx 10^{8}\,M_{\odot}\) at \(z=0\). To explore whether Pop II stars in our simulations exhibit Pop III HN signatures, we vary two parameters: the critical metallicity for Pop II star formation and the fraction of Pop III stars that undergo HN explosions, defined as \(f_{\rm HN}\). The fraction is varied from 100% to the lower limit where a single HN event occurs during the assembly of the simulated galaxy to cover the full range of possibilities.
By analyzing the resulting fraction of Pop II stars exhibiting HN signatures and their metallicity, we have identified the potential environmental conditions for forming metal-poor stars such as AS0039. Furthermore, we also investigate the likelihood of discovering Pop II remnants that contain HN signatures in nearby dwarf satellite galaxies. To do this, we use a semi-analytic approach with the A-SLOTH code that allows us to efficiently explore the parameter space and overcome the limitation of this work, which is confined to a single UFD analog.
Our main findings are summarized as follows.
* According to our simulations, the process of star formation in UFD galaxies is complex, involving the merging of multiple minihaloes. Our results suggest that the main progenitor does not dominate star formation at high redshifts. Instead, we observe that star formation is initiated in minihaloes of comparable mass, which then merge later. Across all simulations, the average in-situ star formation is found to be 19.5%, and about \(\sim\)80% of the stars in the simulated galaxies have grown through mergers.
* For each simulation run, the number of progenitor minihaloes that contain Pop II stars exhibiting Pop III HN signatures, \(N_{\rm mini,HNsig}\), varies from zero to 5 depending on the fraction of Pop III stars that die as HNe. However, the presence of a Pop III HN event does not guarantee the formation of Pop II stars with relics of such SN explosions. This can be attributed to the strong feedback from Pop III HNe, preventing further star formation or the contamination of gas by
Figure 14: The same as Figure 13, but it takes into account the actual number of observed galaxies in the MW. As seen in Figure 11, A-SLOTH predicts more low-mass galaxies than are actually observed. To address this, the fraction of progenitor minihaloes that have undergone HNe is recalculated by adjusting for the difference between the actual observations and the predictions of A-SLOTH. This involves decreasing or increasing the number of low and high-mass galaxies by the difference. The result is that the probability of discovering a progenitor minihalo that has experienced HNe with a fixed \(f_{\rm HN}\) is highest at halo peak mass and stellar mass of \(M_{\rm peak}\approx 10^{10.5}\,M_{\odot}\) and \(M_{*}\approx 10^{7.5}\,M_{\odot}\), respectively.
metals from merged minihaloes prior to the formation of Pop II stars, leading to the unique HN signatures being diluted and washed out. Moreover, as the fraction of Pop III HNe decreases, the proportion of Pop II stars with HN signatures, denoted as \(f_{\rm PopII,HNsig}\), tends to decrease as well. Specifically, \(f_{\rm PopII,HNsig}\) declines from 16% for Z55-F100 to 2.3% for Z55-Mass.
* The metallicity of Pop II stars within the simulated galaxies spans the range \(-5.5<\rm[Fe/H]<-1.5\), while Pop II stars with HN signatures are typically found in the range from \(\rm[Fe/H]\approx-3.3\) to \(\rm[Fe/H]\approx-2.4\), with one outlier at \(\rm[Fe/H]=-4.17\), similar to the observed AS0039 star. We find that the halo, where such ultra metal-poor (UMP) stars are found, undergoes a rapid gas inflow with a peak gas accretion rate of \(\dot{M}_{\rm gas}\sim\)0.08 \(M_{\odot}\)yr\({}^{-1}\), which is significantly higher than the averaged gas inflow rate of \(\sim\)0.0025 \(M_{\odot}\)yr\({}^{-1}\) for other haloes. This suggests that the formation of the UMP star AS0039 in the Sculptor dwarf galaxy could have been facilitated by a halo that experienced a rapid gas inflow.
* The metallicity of Pop II stars formed after Pop III star formation is highly dependent on the delay time, which refers to the time interval between the Pop III supernova explosion and the formation of subsequent Pop II stars. This is due to the fact that, unless there is external enrichment, metallicity usually declines with increased diffusion associated with longer delay times. This could offer another explanation for the formation of Pop II stars with low metallicities, such as AS0039.
* The estimated individual metal abundances of Pop II stars exhibiting HN signatures are in agreement with the best-fit values for AS0039. We confirm that, aside from Pop III HN events, other progenitors, such as Pop III stars exploding with a typical SN energy of \(E_{\rm SN}=10^{51}\) ergs or Pop II stars, are unable to produce metal abundances consistent with those found in AS0039. Furthermore, we demonstrate that carbon-enhanced metal-poor stars can be formed via Pop III stars and that the carbon abundances of Pop II stars show characteristics that differ from those displaying HN signatures.
* Utilizing the semi-analytic A-SLOTH code, we find that the average number of minihaloes containing HN signatures in an MW-like galaxy is \(\langle N_{\rm mini}\rangle=583\) and \(\langle N_{\rm mini}\rangle=75\) for cases where \(f_{\rm HN}=100\%\) and \(f_{\rm HN}=15\%\), respectively, based on the results of all 30 sets. The number of progenitor minihaloes with HN signatures within a satellite galaxy increases with the galaxy mass. For example, dwarf galaxies with a stellar mass of \(M_{*}\gtrsim 10^{9}\,M_{\odot}\), similar to the Large Magellanic Cloud, would contain \(N_{\rm mini}\approx 110\) progenitor minihaloes if \(f_{\rm HN}=100\%\). However, this number decreases to \(N_{\rm mini}\lesssim 10\) for galaxies with a mass of \(M_{*}\lesssim 10^{6}\,M_{\odot}\).
* Our analysis suggests that the likelihood of discovering a progenitor minihalo with HN signatures while maintaining a constant value of \(f_{\rm HN}\) is higher in low-mass satellites (\(M_{\rm peak}=10^{8}-10^{9}\,M_{\odot}\) and \(M_{*}=10^{3.5}-10^{4.5}\,M_{\odot}\)) compared to more massive ones. However, we also find that the semi-analytic A-SLOTH code tends to over-predict the number of low-mass satellites compared to what is observed in the local Universe. This discrepancy could be due to observational incompleteness caused by the faintness of low-mass galaxies. Taking this into account, we find that the most likely dwarf galaxies to contain HN signatures are shifted towards more massive satellite galaxies, with a probability of 40% in the range of \(M_{\rm peak}\approx 10^{10}-10^{11}\,M_{\odot}\) and \(M_{*}\approx 10^{7}-10^{8}\,M_{\odot}\). The subsequent highest probability of discovering minihaloes with HN signatures is associated with galaxies in the range of \(M_{\rm peak}\approx 10^{9}-10^{9.5}\,M_{\odot}\) and \(M_{*}\approx 10^{5}-10^{5.5}\,M_{\odot}\).
It is worth noting that the simulated galaxy in this study has less mass, approximately two orders of magnitude lower than the Sculptor galaxy, where AS0039 was discovered. The Sculptor galaxy is classified as a dwarf spheroidal (dSph) galaxy and has a mass of \(M_{*}\approx 1.8\times 10^{6}\,M_{\odot}\). Nevertheless, the results of this study indicate that stars like AS0039 could have formed during the early stages of the evolution of the Sculptor galaxy. This is supported by the fact that the formation of Pop III stars occurs early in the Universe with a cosmic time of \(t_{\rm H}\lesssim 0.75\) Gyr, and the remnants of Pop III stars are only found in Pop II stars that formed shortly after. This suggests that early star formation is a key factor in producing stars with characteristics similar to AS0039, regardless of whether the galaxy is a dSph or not. However, the analysis using A-SLOTH indicates that the probability of discovering HN signatures in dwarf galaxies is dependent on the number of satellite galaxies with a specific mass. Given the current limitations in observations, it is more probable to find stars with relics from the first generation of stars in more massive satellite galaxies.
Considering the current limitations in observations, the discovery of only one star, AS0039, among the observed satellite galaxies suggests that the fraction of Pop III stars that undergo HN explosions is extremely low. According to the A-SLOTH analysis, assuming that only 1% of Pop III stars die as HNe, only two minihaloes are predicted to contain Pop II stars with Pop III HN signatures, implying that the fraction of Pop III stars exploding as HNe is likely less than 1%. According to Yoon et al. (2012), Pop III stars with masses between 13 \(M_{\odot}\) and 83 \(M_{\odot}\) could experience HN explosions depending on their degree of rotation. Certainly, the fraction of Pop III stars that end in HNe is determined by the assumed initial mass function (IMF). Our adopted IMF predicts \(f_{\rm HN}\approx 54\%\) over the possible progenitor masses for HNe under chemically homogeneous evolution. Conversely, our results, when combined with such nucleosynthesis findings, could provide insights into the rotational degree of the first generation of stars.
Despite the JWST being a powerful tool for studying early objects formed in the early Universe, it is anticipated that directly observing individual first-generation stars will be extremely challenging (e.g., Schauer et al., 2020; Woods et al., 2021; Katz et al., 2023; Larkin et al., 2023). Consequently, the role of stellar and galactic archaeology in inferring the nature of the first stars will be crucial, providing a complementary understanding of far-field cosmology. Detailed chemical abundances in metal-poor stars within local dwarf galaxies can provide insight into questions such as the end of life of the first stars, their mass and spin, and their birth environment. Moreover, the near-field cosmology approach will be bolstered by future observations using advanced telescopes like the Giant Magellan Telescope (GMT), the Thirty Meter Telescope (TMT), and the European Extremely Large Telescope (E-ELT), which promise to offer improved spectroscopic sensitivity to uncover the chemical signature left behind by early cosmic history.
## Acknowledgements
We express our gratitude to the anonymous referee for their constructive and insightful feedback, which has significantly enhanced the clarity of our paper. We thank Tilman Hartwig for the valuable discussions and for graciously sharing the A-SLOTH code with us. We are grateful to Volker Springel, Joop Schaye, and Claudio Dalla Vecchia for permission to use their versions of gadget. T. L. and M. J. are supported by the National Research Foundation (NRF) grants No. 2021R1A2C109491713 and No. 2022M3K3A1093827, funded by the Korean government (MSIT).
## Data Availability
The simulation data and results of this paper may be available upon request.
|
2306.14696 | How About Kind of Generating Hedges using End-to-End Neural Models? | Hedging is a strategy for softening the impact of a statement in
conversation. In reducing the strength of an expression, it may help to avoid
embarrassment (more technically, ``face threat'') to one's listener. For this
reason, it is often found in contexts of instruction, such as tutoring. In this
work, we develop a model of hedge generation based on i) fine-tuning
state-of-the-art language models trained on human-human tutoring data, followed
by ii) reranking to select the candidate that best matches the expected hedging
strategy within a candidate pool using a hedge classifier. We apply this method
to a natural peer-tutoring corpus containing a significant number of
disfluencies, repetitions, and repairs. The results show that generation in
this noisy environment is feasible with reranking. By conducting an error
analysis for both approaches, we reveal the challenges faced by systems
attempting to accomplish both social and task-oriented goals in conversation. | Alafate Abulimiti, Chloé Clavel, Justine Cassell | 2023-06-26T13:43:06Z | http://arxiv.org/abs/2306.14696v1 | # How About Kind of Generating Hedges using End-to-End Neural Models?
###### Abstract
Hedging is a strategy for softening the impact of a statement in conversation. In reducing the strength of an expression, it may help to avoid embarrassment (more technically, "face threat") to one's listener. For this reason, it is often found in contexts of instruction, such as tutoring. In this work, we develop a model of hedge generation based on _i)_ fine-tuning state-of-the-art language models trained on human-human tutoring data, followed by _ii)_ reranking to select the candidate that best matches the expected hedging strategy within a candidate pool using a hedge classifier. We apply this method to a natural peer-tutoring corpus containing a significant number of disfluencies, repetitions, and repairs. The results show that generation in this noisy environment is feasible with reranking. By conducting an error analysis for both approaches, we reveal the challenges faced by systems attempting to accomplish both social and task-oriented goals in conversation.
## 1 Introduction
When people interact, they attend not just to the task at hand, but also to their relationship with their interlocutors [14]. One key aspect of the relationship that people attend to, while engaging in contexts as diverse as sales [1, 15], education [18, 19] and healthcare [1, 1], is what is referred to as _rapport_, a sense of harmony and mutual understanding between participants in a conversation [15, 16]. Indeed, higher levels of rapport are correlated with better performance in each of these domains. Zhao et al. (2014) describe rapport as built upon a base of mutual attentiveness, face management, and coordination. This base is built primarily by conversational strategies, or ways of speaking (including nonverbal and paraverbal behaviors) that manage rapport throughout a conversation. Key conversational strategies include self-disclosure, reference to shared experience, praise, and _hedging_ -- giving instructions or conveying information in an indirect manner when it might otherwise sound rude or overly demanding.
End-to-end large language models (LLM), of the kind that are increasingly popular and powerful, do a good job at carrying out the propositional or information-carrying aspects of conversation, and a relatively good job of maintaining the coherence of a conversation, but they are not as good at changing _how_ they say something as a function of a relationship with the human user, while humans are, for the most part, quite good at this. However, since saying things in a specific manner - for example, through a hedge - helps task performance, it is an important topic for dialogue systems.
Linguists define hedges as a way of diminishing face threat (meaning the "positive social value a person effectively claims for himself" [17]) by attenuating the extent or impact of an expression [1, 16]. Figure 1 shows a typical example of hedging in a peer tutoring setting, where the tutor uses two hedges ("I think" and "could" rather than "should") to deliver a hint for the next step of solving an algebra equation.
Tutoring is one context in which hedges are found in abundance and where recognizing them might be important for intelligent tutoring systems, as attested by the number of computational ap
Figure 1: Hedging in peer tutoring
proaches that attempt to do so (see section 2). Interestingly, even unskilled tutors use them. In fact, research on peer tutoring has shown that when rapport between a peer tutor and tutee is low, but the tutor is confident in his/her skills, that tutor tends to use more hedges, and this results in more problems attempted by the student and more problems successfully solved [14].
In this paper, then, we work towards the development of a generation module for a virtual peer tutor that, like real peer tutors, is able to choose the manner of delivering information in such a way. Specifically, we address two research questions:
**RQ1**: How good are end-to-end large language models used alone for generating hedges when fine-tuned on a peer-tutoring dialogue dataset? Are the models able to implicitly learn when and how to generate hedges?
The first question may be answered by comparing the performance of various fine-tuned models. If the end-to-end models cannot learn to hedge implicitly, we might attempt to drive the models to generate the utterances by providing the correct labels. We assume that the correct labels can be provided by another module of the system, so we compare the reranking method with the fine-tuning method, as the former is simple, powerful, and widely used for text generation. Consequently, the second question is:
**RQ2**: Can we improve these models by using a reranking approach? If so, what are the remaining errors and why do they occur?
## 2 Related Work
Considerably more computational methods exist to determine _what_ a dialogue system should say than _how_ to say it. However, more recently, with the increased power of end-to-end models to find information and convey it accurately, we can now turn to ensuring that the end-to-end model simultaneously also meets social goals, to increase the impact and acceptability of what is conveyed.
### Theoretical Approaches to Hedges
As described above, a hedge can soften the impact of an utterance that might otherwise seem rude, such as a demand ("could you pass the salt") or an instruction ("you might want to pour the coffee over the sink"). Madaio et al. (2017)
systems tend to integrate indirect speech (Miehle et al., 2022; Briggs et al., 2017), generating hedges with powerful language models, and particularly as a function of the social context, has not been explored. Our desire to look at the social context leads us to train on spontaneous dialogue that is substantially noisier, owing to natural conversational phenomena such as disfluency. This differs from the majority of prior work, trained on written or acted corpora (Li et al., 2017; Rashkin et al., 2019).
### Generation Techniques
Different techniques have been used in the past to generate responses of a particular kind for dialogue systems. Madaan et al. (2020) used n-gram TF-IDF to identify source style words and generate target politeness style utterances by replacing these words. Niu and Bansal (2018) generated politeness formulations by using reinforcement learning with a trained politeness classifier. Similar to our approach, the explicit knowledge of politeness is only given to the classifier. Liu et al. (2021) constructed an emotional support dataset with eight different dialogue strategies and fine-tuned the pre-trained language models by connecting the label tokens to the beginning of each utterance in order to create a dialogue generator that can produce the target responses without focusing on the social context.
The reranking method is also widely used in text generation tasks. Hossain et al. (2020) used a simple and effective pipeline where they retrieved the original texts from the database, then edited with a Transformer (Vaswani et al., 2017) model, and then reranked the text by generation scores. Soni et al. (2021) first applied reranking to conversational strategy generation by controlling the level of self-disclosure in the outputs of DialoGPT (Zhang et al., 2020). The authors of LaMDA (Thoppilan et al., 2022) used various classifiers to rerank and filter out inappropriate responses. Recently, Chat-GPT (OpenAI, 2022) used reinforcement learning with human feedback, and has shown impressive performance.
In the articles above, most algorithms were trained on written dialogue datasets, which facilitated the task. However, our spontaneous dialogue dataset may lead the way for cutting-edge models trained on a real-world, face-to-face interactional dataset.
## 3 Methodology
### Task Description
Let \(D=\{d_{1},d_{2},d_{3},...d_{n}\}\) be a set of dialogues, where each dialogue \(d=\{u_{1},u_{2},u_{3}...u_{m}\}\) is composed of \(m\) turns, where \(u_{i}\) is a turn. Each tutor turn (and each tutee turn, although we will not examine the tutee turns further here) is labeled as hedge or non-hedge; we call \(l_{i}\) the label of \(u_{i}\). A fixed window size \(\omega\) of the dialogue history is assigned to each utterance: \(h_{i}=\{u_{max(1,i-\omega)},u_{i-\omega+1},...u_{i-1}\}\). The goal of this work is to train a generator (\(G\)) that can produce a tutor's utterance \(u^{\prime}_{i}\) that matches a given hedge strategy (i.e., hedge or non-hedge) \(l_{i}\), according to the dialogue history \(h_{i}\).
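To make the formulation concrete, the following is a minimal sketch of how the (history, target, label) examples described above could be assembled from an annotated dialogue; the data layout and the default window size are illustrative assumptions rather than details taken from the corpus.

```python
from typing import Dict, List, Tuple

# Each turn is assumed to be stored as (speaker, text, label),
# where label is "hedge" or "non-hedge" for tutor turns.
Turn = Tuple[str, str, str]

def build_examples(dialogue: List[Turn], window: int = 4) -> List[Dict]:
    """Pair every tutor turn u_i with its dialogue history h_i of at most `window` turns."""
    examples = []
    for i, (speaker, text, label) in enumerate(dialogue):
        if speaker != "tutor":
            continue                      # only tutor turns are generation targets
        history = [t for (_, t, _) in dialogue[max(0, i - window):i]]
        examples.append({"history": history, "target": text, "label": label})
    return examples
```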
### Corpus
The dataset we used in the current work is the same as that used in our prior work (Raphalen et al., 2022; Goel et al., 2019; Zhao et al., 2014). 24 American teenagers aged 12 to 15, half boys and half girls, were assigned to same-gender pairs. They took turns tutoring each other in linear algebra once a week for five weeks, for a total of 60 hours of face-to-face interaction. Each interaction was composed of two tutoring periods, where the teens took turns being the tutor, with a social period at the beginning and between the two tutoring periods. For the purposes of the earlier work the corpus was annotated for hedges, as well as the subcategories of hedges, at the clause level. For our purposes, since generation happens at the level of the turn, we merge the clauses and their labels into speaker turns and turn-level hedge labels (see Appendix A for the merge strategy).
Our goal is to create a hedge generation module that can produce an appropriate hedge strategy for a tutor giving an instruction, according to what has been said before as indicated by the dialogue history. For this reason we kept all turns in the dialogue history, even though our model is trained to generate only the tutor's turns (and not those of the tutee). There are 6562 turns in these interactions, of which 5626 contain non-hedges and 936 hedges.
Being authentic interaction, there are disfluencies ("so just yeah just um"), repetitions ("that would be then that would be"), repairs ("oh wait, actually the x would go here"), and other spoken phenomena such as one-word clauses. These phenomena make generating hedges challenging since the language models we use are primarily trained
on written dialogues, which do not contain most of these features. However, our work allows us to see how far we can go with authentic spoken data.
### Methods
We combine two techniques for generating the tutor's turn: _Fine-tuning_ an existing generation model and _Re-ranking_ the generated outputs to match the desired hedge strategy.
#### 3.3.1 Fine Tuning Method
First, we want to evaluate how well the model performs when hedge information is implicitly taught through fine-tuning. We fine-tuned the generation model with the training set of the peer-tutoring corpus. Each utterance \(u_{i}=(w_{1},...,w_{n})\) is composed of \(n\) tokens, and the dialogue history \(h_{i}\) serves as input to the generation model. We apply a cross-entropy loss between \(u_{i}\) and \(u^{\prime}_{i}\), where \(u^{\prime}_{i}\in\mathbb{R}^{|V|}\) and \(V\) is the vocabulary.
\[J(u_{i},u^{\prime}_{i})=-\frac{1}{n}\sum_{j=1}^{j=|V|}u_{i,j}\log(u^{\prime}_ {i,j}) \tag{1}\]
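As an illustration of this implicit fine-tuning step, here is a minimal sketch using the Hugging Face transformers library; the choice of checkpoint, learning rate, and the separator used to join history turns are assumptions made for the sake of the example, not settings reported in this paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def training_step(history, target):
    """One gradient step on a (dialogue history, tutor turn) pair, as in Eq. (1)."""
    source = " </s> ".join(history)                       # assumed history separator
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss            # token-level cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```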
#### 3.3.2 Reranking Method
Since a hedge classifier was developed for prior work in our lab (Goel et al., 2019; Raphalen et al., 2022), we can use it to determine whether a generated text is a hedge or not and then inform the generator of the decision in order to regulate the output. This is known as reranking, and is what we use here as our second generation strategy.
1) We first pretrain our generator as in the fine-tuning method. We then apply this generator to the test set to generate 50\({}^{1}\) candidate utterances for each dialogue history (Figure 2). 2) These candidates are first ranked by their sentence scores (i.e., the final outputted token's log probability for each sentence). 3) We then use the hedge classifier described above to filter out the utterances that do not match the selected strategy (i.e., hedge or non-hedge). 4) We keep utterances that match the selected hedge strategy. If more than one candidate matches the strategy, we pick the first one that matches, which means the one with the highest sentence score. 5) If none of the candidates matches the selected hedge strategy, we output the one that has the highest sentence score.
Footnote 1: See Appendix C for the details
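A minimal sketch of this five-step pipeline is given below. The hedge classifier is treated as a black-box callable returning "hedge" or "non-hedge", and the sampling settings are illustrative assumptions; the summed token log-probabilities play the role of the sentence scores mentioned in step 2.

```python
def generate_and_rerank(model, tokenizer, hedge_classifier, history,
                        target_label, pool_size=50):
    """Sample a candidate pool, rank by sentence score, return the first label match."""
    source = " </s> ".join(history)
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    out = model.generate(**inputs, do_sample=True, top_p=0.9,
                         num_return_sequences=pool_size,
                         output_scores=True, return_dict_in_generate=True)
    candidates = tokenizer.batch_decode(out.sequences, skip_special_tokens=True)
    # Sentence score: sum of token log-probabilities of each generated sequence
    # (padding positions after EOS may need masking in practice).
    token_scores = model.compute_transition_scores(out.sequences, out.scores,
                                                   normalize_logits=True)
    sentence_scores = token_scores.sum(dim=1).tolist()
    ranked = [c for _, c in sorted(zip(sentence_scores, candidates), reverse=True)]
    for candidate in ranked:
        if hedge_classifier(candidate) == target_label:   # steps 3-4
            return candidate
    return ranked[0]                                       # step 5: fall back
```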
## 4 Experimental Setting
### Data Processing
We randomly split the final dataset based on a 60:20:20 ratio. Of these, 60% is the training set, 20% is the validation set, and 20% is the test set.
Since our dataset is highly unbalanced, using it as is would bias the results towards non-hedges; the gap between the results of different models would not be clear because non-hedges are so much more frequent. For this reason, we manually balance the test set by randomly selecting 235 non-hedge turns to match its 235 hedge turns, and combine them to form a new balanced test set. On the other hand, in order to have a large enough training set, we retain all tutor turns from the complete dataset, which therefore consists of 701 hedge turns and 4455 non-hedge turns, resulting in a training set that is very skewed but has more turns.
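The balancing step could be implemented along the following lines; the dictionary layout and the random seed are illustrative assumptions.

```python
import random

def balance_test_set(test_turns, seed=42):
    """Down-sample non-hedge turns so the test set has equally many turns per class."""
    hedges = [t for t in test_turns if t["label"] == "hedge"]
    non_hedges = [t for t in test_turns if t["label"] == "non-hedge"]
    rng = random.Random(seed)
    sampled = rng.sample(non_hedges, k=len(hedges))   # e.g. 235 of each class
    balanced = hedges + sampled
    rng.shuffle(balanced)
    return balanced
```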
While the complete dataset contains a relatively small number of hedge turns, we believe that preserving the natural data distribution is crucial for addressing our first research question. Underscoring the wisdom of this approach, the results we obtained on perplexity and the BARTscore (that are indicative of fluency in the generated responses, as described below) demonstrate that the models were able to generate responses with reasonable fluency and quality despite the small number of hedge turns.
### SOTA Pretrained Language Models
We compare the performance of different state-of-the-art (SOTA) free open-source pretrained models as our generators: BART, DialoGPT, and BlenderBot. BART Lewis et al. (2020) uses an encoder-decoder architecture, trained on books and Wikipedia data, and performs well on tasks as varied as Q&A (SQuAD Rajpurkar et al. (2016)), text generation, text classification (MNLI Williams et al. (2018)), and text summarization tasks (ELI5 Fan et al. (2019)). It is pretrained by distorting the format of the input text in various ways, and this pretraining suggests that it may be well suited to noisy, spontaneous spoken dialogues. DialoGPT Zhang et al. (2020) is a dialogue version of GPT-2 Radford et al. (2019), an autoregressive language model with a multi-layer Transformer Vaswani et al. (2017) decoder as its model architecture. It is trained on 140 million conversational exchanges extracted from Reddit comment
threads. BlenderBot (Roller et al., 2021) uses the standard Seq2Seq Transformer architecture, but incorporates a number of dialogue training sets: Empathetic Dialogue (Rashkin et al., 2019), PersonaChat (Zhang et al., 2018), ConvAI2 (Dinan et al., 2020), and other datasets that, while largely handcrafted, focus on personality and emotions, enabling it to potentially develop some version of social skills.
### Evaluation Metrics
To evaluate performance, we used the most widely used set of reference-based metrics for natural language generation tasks (Liu et al., 2021; Ziems et al., 2022). Since these metrics have not been used for conversational strategies, we add an unsupervised reference-free metric, the BART score (Yuan et al., 2021). The BART score formulates the evaluation process as a text generation task using a pre-trained model. The score represents the probability of generating a hypothesis given a source text. A higher BART score indicates better text along several dimensions (e.g., informativeness, factuality). In this paper, we denote the dialogue history as the source text and the generated utterance as the hypothesis. For comparison, we calculate the BART score between the dialogue history and the real response in the test dataset, giving a result of \(-6.44\). We also evaluated the relevance of the generated hedge strategy using an F1 score. The results using these metrics are presented in Table 2. The detailed description of the metrics used is in Appendix B.
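For reference, a BARTScore-style score can be computed as the (length-normalized) log-likelihood of the hypothesis given the source under a pretrained BART model; the sketch below follows that idea, with the checkpoint choice being an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
scorer = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn").eval()

@torch.no_grad()
def bart_score(source: str, hypothesis: str) -> float:
    """Average token log-probability of `hypothesis` given `source` (higher is better)."""
    inputs = tok(source, return_tensors="pt", truncation=True)
    labels = tok(hypothesis, return_tensors="pt", truncation=True).input_ids
    loss = scorer(**inputs, labels=labels).loss   # mean token cross-entropy
    return -loss.item()
```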
### Human Evaluation
While the metrics described above are important for comparison with the performance of other work in the field, they do not obviate the need for human annotation. We therefore asked two annotators to ignore sub-categories and annotate only hedge or non-hedge on each tutor turn of the model's output, with access to 4 prior turns of the dialogue history. During a training phase the annotators reached an inter-rater reliability of over \(.7\) Krippendorff's alpha (Krippendorff, 2004), which indicates substantial agreement. One of the annotators then finished the remainder of the annotation. We computed the F1 scores for the label of the generated utterances with respect to the real tutor turn's label. A higher F1 score indicates that the approach is better suited to generate the correct hedge strategy (see Table 2). We also asked the annotators to pay attention to whether the output was unnatural and to note it if so. The annotators reported no concerns with the naturalness of the generated utterances.
The concept of fluency has recently gained popularity in the dialogue community (Li et al., 2019; See et al., 2019), but the current definition of fluency varies. More fundamentally, evaluations of this kind are more applicable to written text or scripted dialogues (Pang et al., 2020; D'Haro et al., 2019), as they cannot handle disfluencies (e.g., hesitations, repetitions, false starts) of the kind that are common in spontaneous spoken dialogues, and that may serve to give the speaker time to plan the next utterance (Biber et al., 1999; Thornbury and Slade, 2006). We therefore did not assess fluency in this work.
## 5 Results
### RQ1: How well do end-to-end models perform alone for generating hedges?
Table 2 compares the performance of the generation models. BlenderBot outperforms the other 2 models on most metrics, although with similar per
Figure 2: Reranking method
formance to DialoGPT on BLEU and ROUGE-L. The discrepancy between BlenderBot and BART in each score is relatively wide. This discrepancy is most apparent on measures that compute scores based on n-gram-level overlaps (BLEU, ROUGE). To find the reason for this discrepancy, we calculate the average length of the outputs of the 3 models and observe 5.2 words for BART, 11.8 words for BlenderBot, and 14.5 words for DialoGPT, while the average length of the tutor's utterances in test data is 15.2 words. The average length of the output of DialoGPT is therefore close to that of the test set. This further explains DialoGPT's strong performance on the BLEU and ROUGE scores. On the other hand, BART tends to generate shorter turns, and consequently scores lower on metrics that rely on n-gram overlap. Note that in similar tasks, the best model was BlenderBot with a BLEU-2 score of 6.21 in the case of emotional support conversational strategy generation (Liu et al., 2021), while DialoGPT reached 5.52. The best score in the positive text reframing task, meanwhile, was 11.0 for BLEU-1 (Ziems et al., 2022), while BART reached 10.1 and GPT-2 reached 4.2.
Table 1 shows that BART has the lowest perplexity score, indicating that BART is better adapted to our dataset than the other two models. This may be due to its pre-training approaches (see Section 4.2) that corrupt input texts with an arbitrary noising function. These approaches enable more accurate predictions in our noisy real-world dataset.
In response to our first research question, then, the performance of all three models was comparable but very limited. This suggests that the fine-tuning approach does not allow language models to learn hedge knowledge implicitly.
We therefore next turn to an approach that may improve performance by screening utterances with a given label.
### RQ2: Does reranking improve hedge generation?
Table 2 shows the performance of each model for the reranking method. BlenderBot once again performs well on all metrics and has a virtually identical F1 score to BART. Additionally, we find some interesting similarities among models: 1) BlenderBot and DialoGPT outperform BART in both the fine-tuning and the reranking methods (Table 2) with respect to reference-based metrics such as BLEU, ROUGE-L, etc., and 2) DialoGPT still underperforms the other two models in terms of F1 score, and in the reranking condition the gap widens.
This result could suggest that 1) the pretraining of the models (i.e., DialoGPT, BlenderBot) on dialogue datasets may help to generate longer utterances, and therefore to improve the reference-based metrics performance, and 2) the autoregressive model (e.g., DialoGPT) may not be suitable for the generation of social dialogue such as hedges.
### Comparing Fine-tuning and Reranking
To summarize results on the fine-tuning versus reranking approaches we observe that: 1) With the help of a hedge classifier, the reranking approach can do a good job at generating hedges, 2) BlenderBot is better suited to the task of generating long utterances, as described in Section 5.1. This could be because BlenderBot is pretrained with various social dialogue datasets, giving it a certain ability to generate the social aspects of dialogue.
Table 2 shows that models deployed with the reranking method have relatively higher or comparable Bart scores, but greatly improved performance on the F1 score (from \(.54\) to \(.85\)). This result, too, underscores the advantages of the reranking method.
### Error Analysis
While BlenderBot showed strong performance when using reranking, a certain number of generated utterances still did not match the real tutor
| BART | BlenderBot | DialoGPT |
| :---: | :---: | :---: |
| 34.9 | 69.3 | 72.4 |

Table 1: Language model (LM) perplexity (lower is better).
labels. When a matching utterance type cannot be found in a limited pool of candidates, we could have chosen to increase the candidate pool to promote the probability of selecting a match. However, in this early effort to generate hedges, we want to ensure sufficient quality in the generated output but also explore the limitations of current language models for generating socially relevant phenomena on the basis of a spontaneous spoken interaction dataset.
We can learn about the limitations of these models by examining places where the system did not generate the desired strategy (that is, generated a hedge when the real tutor did not or vice versa). We first divide these strategy mismatches into _over-generation errors_, where the generator generates a hedge where it should not, and _under-generation errors_, where it does not generate a hedge but should. Among the 1395 annotated turns outputted by the 3 generators, 13.3% are _over-generation errors_ and 86.7% are _under-generation errors_. These errors are particularly interesting in the context of reranking, as it relied strongly on the hedge classifier. The hedge classifier selected the most suitable utterances, and yet the model still produced the wrong strategy - or at the very least a mismatch with the strategy of the real tutor.
Therefore, we analyze the generated utterances corresponding to these two types of errors and identify two potential causes.
First, there are still some places where the model generates a hedge where it should generate a non-hedge. As we mentioned in Section 4.4, we invited humans to annotate the models' outputs in terms of hedge labels. We compare the human-annotations of the model output (where they labeled the output as hedge or non-hedge) with the output of the BERT-based classifier on the same generated utterances to calculate the F1 score. We find that there is a difference of about 9 points between the F1 score for human annotation (85%) shown in Table 2, and the F1 score for the same BERT-based hedge classifier (94%) reported in Raphalen et al. (2022). We assume that the classifier we used may have misclassified some generated utterances and we therefore label them as **Classification Errors**. This category accounts for 92.5% of _over-generation errors_, and 15.3% of _under-generation errors_.
Second, the basic functionality of an end-to-end language model of this kind is to produce the most coherent next utterance based on the dialogue history. This may result in the language model privileging coherence of content over style of delivery. That is, the model may not be able to find an appropriate strategy match among the coherent candidates, even when the candidate pool size is 50. We label this a **Goal Mismatch**, as the propositional or content coherence goals of the system may be trumping the social goals. This category accounts for 84.7% of _under-generation errors_ and 7.5% of _over-generation errors_; in 18% of these cases, the candidate pool did not include any utterance with the right strategy.
An example of each type of error is given in Figure 3. The first example belongs to the **Classification Error** type, where the classifier misclassified the system response (i.e. "We just found that the answer is two x equals three") as a hedge. In the second example, the tutor is trying to help the tutee to approach the answer step by step, but the tutee cannot come up with a workable idea. Here it is clear that the tutee is flailing and it is therefore probably not advisable to increase the student's stress with a volley of questions that the tutee can clearly not answer. The tutor thus uses a hedge as a response. Conversely, the generator produces a question. The generated utterance is "What do you think we should do, what's the next step". This example corresponds to our **Goal Mismatch Error**. It shows that the generator may not understand the social context, but is looking for a coherent response.
The **Goal Mismatch Error** is perhaps the most interesting of the errors, and thus to verify our hypothesis -- that the coherence goals of the models may impede the social goals -- we looked into the nature of the relationship between rapport (between tutor and tutee) and the generation of hedges. As described above, Madaio et al. (2017) found that hedges are generated when rapport is low. Since our corpus contained rapport annotations for every 30 seconds of the interaction, we looked at the rapport level in play when the model over-generated and under-generated hedges. Since rapport is annotated from 1 to 7 in the dataset, for convenience, we divided it into 3 levels: high (5-7), medium (3-5), and low rapport (1-3), as shown in Table 3.
| Type \ Rapport | **High** | **Medium** | **Low** |
| :--- | :---: | :---: | :---: |
| **Over-generation** | 0 | 3 | 0 |
| **Under-generation** | 13 | 130 | 75 |

Table 3: Goal Mismatch errors distribution across rapport levels.
For the _over-generation errors_, we cannot draw a meaningful conclusion because of the small sample size. However, the generators generate fewer hedges than they should when rapport is low (an _under-generation error_), in contradiction to studies showing that speakers are more careful about threatening the face of (or embarrassing) their interlocutors when the social bond between them is weak (Madaio et al., 2017). We believe this is because more hedges are found in low rapport interaction. Counting the hedge distribution in the test dataset confirms this: 264 hedges occur in low rapport interaction and only 42 in high rapport interaction, so a hedge is most likely to happen in low rapport interactions. Since under-generation errors are cases where a hedge should have been produced but was not, and since most of the hedges that should be produced occur in low rapport interaction, the generators make more errors there simply because of this imbalance in hedge distribution between low and high rapport interaction.
The Goal Mismatch error directly addresses our first research question (RQ1): how effectively do end-to-end models perform when generating hedges on their own? Due to this fundamental discrepancy between competing goals, end-to-end language models are unable to inherently learn and discern when to apply hedges appropriately.
#### 5.4.1 Lexical Diversity of the Generated Output
As we have seen, LLMs can generate a hedge or non-hedge with the help of the reranking method. However, do language models spontaneously use different types of hedges in a human-like way? To investigate this question, we applied the rule-based hedge classifier from (Raphalen et al., 2022) to automatically annotate the utterances generated by models in subcategories of hedges (as defined in Section 2.1), and we compare the models' and humans' distributions of different hedge strategies. The rule-based classifier used linguistic patterns to identify each hedge subcategory. We have preferred here to use the rule-based classifier rather than the machine learning classifiers to avoid the dependence on and bias of probabilistic learning-based classifiers. Indeed, learning-based classifiers may be biased towards predicting the categories that are the most frequent in the dataset. Furthermore, the rule-based classifier reaches a 94.7 F1 score (Raphalen et al., 2022), which is comparable to the best performance (96.7 F1 score) using the Light Gradient-Boosting Machine (LGBM) (Ke et al., 2017) classifier.
The above results show that the model can spontaneously learn to use different types of hedges. Indeed, the models are capable of producing linguistically diverse hedges based on what they learn from real human dialogues.
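To give a sense of what such a pattern-based subcategory classifier looks like, here is a minimal sketch; the category names and regular expressions are placeholders for illustration only and are not the actual rules of Raphalen et al. (2022).

```python
import re

# Placeholder patterns; the real rule set is considerably richer.
HEDGE_PATTERNS = {
    "subcategory_A": r"\b(i think|i guess|maybe|probably|sort of|kind of)\b",
    "subcategory_B": r"\b(could|might|would)\b",
    "subcategory_C": r"\b(sorry|i apologize)\b",
}

def hedge_subcategories(utterance: str):
    """Return every subcategory whose pattern matches the lower-cased utterance."""
    text = utterance.lower()
    return [name for name, pattern in HEDGE_PATTERNS.items()
            if re.search(pattern, text)]

# Example: hedge_subcategories("I think you could subtract five first")
# -> ["subcategory_A", "subcategory_B"]
```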
## 6 Conclusion and Future Work
In this paper, we have shown that the reranking method helps LLMs to generate hedges -- an important social conversational strategy that can avoid face threats towards an interlocutor by attenuating the impact of an expression. We find that an implicit fine-tuning approach (i.e., without any supervision by a hedge label) is not sufficient for generating hedges, but a reranking method significantly improves performance in generating hedges, with a final F1 score of \(.85\) for the BART model and \(.84\) for the BlenderBot model. We also performed an error analysis on the generated results and found that two types of errors occur in the reranking method: **Classification** and **Goal Mismatch**. The vast majority of errors fall into the category of Goal Mismatch, indicating an important conflict between
Figure 3: Strategy Mismatch Errors for Reranking Method
contemporary language models' primary goal of ensuring coherence and the social goal of managing face, which is indispensable for human conversation. While we were able to generate hedges, we were not able to necessarily generate them where they were needed most. That is, conversational strategies are adaptive in the sense that they respond to conversational strategies uttered by the previous speaker Zhao et al. (2014). We conclude that, going forward, we will need a way of adding an underlying representation of the social state of the dialogue to improve dialogue generation.
In this paper we addressed the question of how to generate hedges, but when to generate hedges remains an important and unexplored question. In future work, we may first explore the temporal relationships between the hedge and other conversational information (e.g., other conversational strategies, level of rapport) by sequential rule mining techniques, then apply RL-based methods to investigate in a more detailed manner the optimal way to predict where hedges should occur. In this context, we note that ChatGPT can generate a hedge when requested explicitly to do so, but does not generate hedges of its own volition (so to speak), for example when engaging in face-threatening acts such as giving instructions.
We began this paper by describing the need for hedges in instructional dialogues such as those engaged in by intelligent tutoring systems. The current dataset consists of authentic real-world tutoring sessions, but as carried out by untrained teenagers. We note that peer tutoring is a powerful method of teaching, used in classrooms around the world, and previous work shows that when untrained peer tutors use hedges, their tutees attempt more problems and solve more problems correctly Madajo et al. (2017). However, they are inexperienced and so in future work it will be important to investigate the interaction between trained tutors and tutee as well, for instance, by using the Teacher-Student Chatroom Corpus Caines et al. (2020). We believe that the methods and results from the current work will facilitate the investigation of expert tutors in future research.
## Broader Impact
Since the 1990s, research has shown the importance of intelligent tutoring systems as effective learning environments and supports for classroom learning (Anderson et al., 1995). Peer tutoring plays a powerful role as well, as peer tutors can motivate learners to try harder, as well as helping them to succeed, and it is particularly effective for low-achieving learners (Cassell, 2022). But virtual peer tutors have not yet achieved their potential, in part because of the difficulty of generating the social infrastructure of peer learning as well as the content of the matter being tutored. This paper, whose data comes from a corpus of peer tutoring dialogues, should therefore be seen as a step in the right direction.
## Acknowledgments
We thank the anonymous reviewers for their helpful feedback. We express sincere gratitude to the members of the ArticulLab at Inria Paris for their invaluable assistance in the successful completion of this research, and to the members of the ArticuLab at Carnegie Mellon Pittsburgh for answering our questions about their prior work. This study received support from the French government, administered by the Agence Nationale de la Recherche, as part of the "Investissements d'avenir" program, with reference to ANR-19-P3IA-0001 (PRAIRIE 3IA Institute).
## Limitations
Several limitations apply to the current study. While research shows that multimodal signals play an important role in conversational strategies (Zhao et al., 2016), we did not take them into account. It is an open question as to how to render large language models capable of generating multimodal behaviors. A second limitation concerns the recent arrival on the scene of ChatGPT, which has shown impressive performance. However, the models are not free, and therefore were not included. As noted above, another important limitation is the untrained status of the tutors in our corpus, who are teenagers, and not trained tutors. Their use of hedges, therefore, comes from their knowledge of everyday social interaction, and not from expertise in teaching. In looking at the data, we find a few places where, as instructors ourselves, we believe that a hedge is important, even though the real (teenage) tutor did not use one.
The last limitation is that, while we focused only on generating hedge or non-hedge, there are actually 3 different kinds of hedges, that function differently. We hope to extend this work and take
advantage of a text style transfer technique to generate more kinds of hedges in future work.
## Ethical Statement
The corpus used here comes from earlier work by the last author and her colleagues, and was used in accordance with the original experimenters' Institutional Review Board (IRB) approval. Those experimenters also anonymised the data, removing any identifying information. A pixelated example of the video data is available at github.com/neuromaancer/hedge_generation. To counteract potential gender bias concerning the use of hedges in peer tutoring, the data was collected from an equal number of boys and girls. In text generation tasks, it is important to be aware of the potential risk of generating inappropriate content. We believe that, in fact, hedges used by tutors are perhaps the least likely conversational strategy to be inappropriate, as they are the most polite and "delicate" conversational moves. But, more generally, considerable additional work would be needed to filter out all inappropriate language for safe tutoring systems that engage in social and task interaction.
|
2304.13958 | Learning and Reasoning Multifaceted and Longitudinal Data for Poverty
Estimates and Livelihood Capabilities of Lagged Regions in Rural India | Poverty is a multifaceted phenomenon linked to the lack of capabilities of
households to earn a sustainable livelihood, increasingly being assessed using
multidimensional indicators. Its spatial pattern depends on social, economic,
political, and regional variables. Artificial intelligence has shown immense
scope in analyzing the complexities and nuances of poverty. The proposed
project aims to examine the poverty situation of rural India for the period of
1990-2022 based on the quality of life and livelihood indicators. The districts
will be classified into `advanced', `catching up', `falling behind', and
`lagged' regions. The project proposes to integrate multiple data sources,
including conventional national-level large sample household surveys, census
surveys, and proxy variables like daytime, and nighttime data from satellite
images, and communication networks, to name a few, to provide a comprehensive
view of poverty at the district level. The project also intends to examine
causation and longitudinal analysis to examine the reasons for poverty. Poverty
and inequality could be widening in developing countries due to demographic and
growth-agglomerating policies. Therefore, targeting the lagging regions and the
vulnerable population is essential to eradicate poverty and improve the quality
of life to achieve the goal of `zero poverty'. Thus, the study also focuses on
the districts with a higher share of the marginal section of the population
compared to the national average to trace the performance of development
indicators and their association with poverty in these regions. | Atharva Kulkarni, Raya Das, Ravi S. Srivastava, Tanmoy Chakraborty | 2023-04-27T05:33:08Z | http://arxiv.org/abs/2304.13958v1 | Learning and Reasoning Multifaceted and Longitudinal Data for Poverty Estimates and Lvelihood Capabilities of Lagged Regions in Rural India
###### Abstract
Poverty is a multifaceted phenomenon linked to the lack of capabilities of households to earn a sustainable livelihood, increasingly being assessed using multidimensional indicators. Its spatial pattern depends on social, economic, political, and regional variables. Artificial intelligence has shown immense scope in analyzing the complexities and nuances of poverty. The proposed project aims to examine the poverty situation of rural India for the period of 1990-2022 based on the quality of life and livelihood indicators. The districts will be classified into 'advanced', 'catching up', 'falling behind', and 'lagged' regions. The project proposes to integrate multiple data sources, including conventional national-level large sample household surveys, census surveys, and proxy variables like daytime, and nighttime data from satellite images, and communication networks, to name a few, to provide a comprehensive view of poverty at the district level. The project also intends to examine causation and longitudinal analysis to examine the reasons for poverty. Poverty and inequality could be widening in developing countries due to demographic and growth-agglomerating policies. Therefore, targeting the lagging regions and the vulnerable population is essential to eradicate poverty and improve the quality of life to achieve the goal of 'zero poverty'. Thus, the study also focuses on the districts with a higher share of the marginal section of the population compared to the national average to trace the performance of development indicators and their association with poverty in these regions.
## 1 Introduction
Poverty is a complex situation in which the lack of capabilities translates to low income of the household [21]. From a monetary perspective, poverty can be described as an interlacement of income distribution below a threshold value and the disproportions that exist within that boundary [1]. It is a state of destitution in which individuals lack the basic essential means, such as food, water, shelter, and money, required to sustain their daily livelihood [1]. While poverty has been a perennial socio-economic problem of mankind, the last two decades have witnessed a steady decline in global poverty [1]. However, owing to the COVID-\(19\) pandemic, the compounding effects of climate change, and socio-economic conflicts, the pursuit to end poverty has suffered a significant setback for the first time in a generation. The pandemic alone pushed almost \(100\) million additional people into extreme poverty [1]. Moreover, as the United Nations (UN) lists poverty eradication as one of its primary Sustainable Development Goals (SDGs), global communities are striving hard to develop efficient techniques for poverty tracking, estimation, and eradication.
**India under crisis:** As per the Agriculture Census 2015-2016\({}^{1}\), around 68% of the population resides in rural India. The latest Multidimensional Poverty Index (MPI) scores indicate that the poverty score is \(32.75\%\) among the rural population, compared to \(8.8\%\) in urban India. The BIMARU\({}^{2}\) states continued to have the most deprived districts of the country, with some statistics being comparable to Sub-Saharan African countries, thus calling into question the inclusiveness of policies in India. Around \(50\%\) of the country's population is engaged in the agriculture and allied sector; therefore, the regional development of agriculture has an immense impact on the quality of life of rural households. Indian society has a hierarchy with different levels of mobility across social groups. As per the NFHS-4 data, two minority classes, namely scheduled caste (SC) and scheduled tribe (ST) households, have a higher prevalence of poverty than general social groups.
Footnote 1: [https://agcensus.nic.in/document/agcen1516/T1_ac.2015_16.pdf](https://agcensus.nic.in/document/agcen1516/T1_ac.2015_16.pdf)
Footnote 2: BIMARU is an acronym formed from the first letters of the names of the Indian states of Bihar, Madhya Pradesh, Rajasthan, and Uttar Pradesh.
**Poverty estimation is crucial:** For any nation, accurately measuring poverty statistics and the economic characteristics of the population critically influences its research and national policies [14]. Moreover, economic growth cannot be the exclusive goal of a nation's economic policies; it is equally vital to ensure that the benefits
of economic prosperity reach all segments of society. This underpins the importance of assessing poverty in all its manifestations. Also, poverty measurement is crucial to evaluate how an economy is performing in terms of providing a certain minimum standard of living to all its citizens. In summary, measuring poverty has significant implications for policy drafting and implementation. As a result, it is no surprise that poverty estimation and tracking have garnered the attention of economists, social scientists, and statisticians alike.
Multifaceted measurement of poverty:Poverty, however, is not just based on the monetary distribution of wealth amongst the masses but is a multifaceted idea comprehensive of social, financial, and political components. Thus, to unthread the fabric of poverty and understand its nuances, a deeper and multidimensional study of its different facets is required. Such _multidimensional measurement of poverty_ encompasses two approaches - poverty as capability deprivation, and poverty as a measure of deprivation [1]. The Multidimensional Poverty Index (MPI), jointly developed by the Oxford Poverty and Human Development Initiative (OPHI) and United Nations Development Programme, considers both these factors (incidence and intensity of deprivation) for measuring poverty. The widely adopted Alkire and Foster's methodology [1] considers the three indicators of standard of living, education, and income at the household levels to measure multidimensional poverty. However, in the case of developing countries, such as India, one must look beyond these aspects, as here, poverty estimation is beset by several quantitative and qualitative concerns. This calls for the consideration of other ancillary factors, such as the variance in climate, healthcare facilities, dietary habits, political status, cultural influence, infrastructure development, and geographical benefits and drawbacks along with the financial information [1].
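As a concrete illustration of the Alkire-Foster adjusted headcount ratio underlying the MPI (MPI = incidence H x intensity A), consider the minimal numerical sketch below; the indicator weights, cut-off k, and household deprivation matrix are illustrative assumptions, not real data.

```python
import numpy as np

# Rows are households; columns are deprivation indicators (1 = deprived).
# The three equally weighted indicators stand in for the dimensions named above.
weights = np.array([1/3, 1/3, 1/3])
deprivations = np.array([[1, 1, 0],
                         [0, 0, 1],
                         [1, 1, 1],
                         [0, 0, 0]])

def alkire_foster_mpi(deprivations, weights, k=1/3):
    scores = deprivations @ weights                    # weighted deprivation score c_i
    poor = scores >= k                                 # identification (cut-off k)
    H = poor.mean()                                    # incidence: headcount ratio
    A = scores[poor].mean() if poor.any() else 0.0     # intensity among the poor
    return H * A                                       # adjusted headcount ratio M0

print(alkire_foster_mpi(deprivations, weights))        # H = 0.75, A = 2/3, MPI = 0.5
```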
**Existing studies on India-specific poverty estimation and research gaps:** For India, the traditional narrative on poverty estimation makes use of publicly-available data collected through household surveys, such as the National Family and Health Survey (NFHS) [14, 15, 16, 17, 18], National Sample Survey (NSS) [19, 18], Indian Human Development Survey (HIDS) [12] to name a few, to calculate the multidimensional poverty index. The census survey of the government of India publishes village-level statistics, and ICRISAT publishes the longitudinal village-level data on land relations, crop yield, access to facilities, and resource possession. However, due to the paucity and the lack of comparability, veracity, and precision of the data at different time periods, accurate estimation and tracking of poverty are challenging. Thus, recent studies in the realm of poverty estimation have focused on alternate and proxy data sources, such as satellite images [1, 13, 14, 15], mobile phone usage [16, 17, 18], geospatial information [19], and information on the web [20]. Along with being inexpensive to produce and easier to scale, the use of these data sources also introduces the scope for utilizing modern machine learning (ML) techniques for poverty and quality of life estimation. While the use of artificial intelligence (AI) is a regular feature for achieving other sustainable development goals, such as climate change, education, healthcare, clean energy, and gender equality, its application to poverty tracking has been limited. Moreover, studies on poverty estimation in India have exclusively focused on household survey data, excluding the possibility of using alternative proxy data sources.
**Longitudinal and temporal analysis of poverty estimate:** While India has witnessed a commendable rise in its economic growth post-independence, it has been non-inclusive and exclusionary, exacerbating the rural-urban divide, regional disparities, and social and gender-based inequalities [4]. Studies on analyzing multidimensional poverty in India highlight the disparity at regional levels, where eastern and central states reflect high MPI scores [13]. The states of Jharkhand, Uttar Pradesh, Rajasthan, Odisha, Bihar, Chhattisgarh, and Arunachal Pradesh have a higher poverty head-count ratio, while Kerala, Mizoram, Nagaland, Punjab, Himachal Pradesh, and Haryana have lower poverty rates [15]. Moreover, the temporal distribution of poverty has not been constant, as, over time, poverty has increased in the states of Manipur and Arunachal Pradesh in the rural setting, and Meghalaya, Odisha, and Jharkhand in the urban setting [15]. This underlines the need for temporal and longitudinal analysis of poverty estimation. With the amalgamation of traditional as well as proxy data sources, such in
Figure 1: A bird’s-eye view of our proposal.
vestigations will help unravel the peculiarities of areas diving south of the poverty line, thus, enabling the policymakers to establish suitable and appropriate poverty eradication guidelines. Similarly, on other hand, identifying the regions where the poverty index has increased over time will help diagnose the policies that worked and the ones which did not. Lastly, it will also help differentiate the geographical regions of chronic poverty from transient poverty.
**Major contributions:** The proposed study investigates the spatial mapping of poverty through the quality of life and livelihood capabilities in the lagged vs. advanced regions of rural India at the district level. We also intend to conduct a _longitudinal and temporal analysis_ to focus on the trajectory of growth and the catching-up capabilities of lagged regions of the country. We further herald a new research direction towards estimating poverty in lagged Indian states using data aggregation and integration and expound on its application using ML. Figure 1 shows a bird's-eye view of our overall proposal. Thus, with the poverty-struck Indian states as a case study, we aim to address the following broad research questions:
1. **Multidimensional data integration:** How can different types of proxy data sources be used to estimate poverty in India?
2. **Efficient prediction:** How can these proxy data sources be integrated with the traditional survey data for more accurate, interpretable, and efficient poverty estimation through traditional ML and advanced neural models?
3. **Longitudinal analysis and estimation:** Can a temporal and longitudinal examination of poverty using proxy data sources reveal salient information about the factors that contribute to poverty trajectory?
4. **Correlation, association, and causality analysis:** Can we identify variables that have a direct causal, correlative, and associative effect on poverty estimation?
5. **Policy Implications:** How can causal, correlative, and associative analysis, along with the classification and forecasting study help in policymaking?
## 2 Goals
The primary purpose of this research is to demonstrate the shortcomings of the conventional approach to estimating poverty in India and to illustrate new approaches and methodologies for doing so with the recent advances in AI and ML. Furthermore, this study emphasises the need for a temporal and longitudinal analysis of poverty, rather than a static and time-constrained one, in order to fully comprehend the expression and expansion of poverty in the socio-economically backward states of India. This section expands on these goals.
### Limitations of Traditional Data Sources
The data from the household expenditure and income surveys, such as the National Family and Health Survey (NFHS), different rounds of the National Sample Survey (NSS), and the Indian Human Development Survey (IHDS), form the backbone for identifying and measuring the poverty status of Indian households. Although the census data provides a comprehensive measurement of the material living standard of individuals in a population, it is conducted over long cycles (usually every ten years) and encompasses only a few characteristics of the target population. Household surveys, on the other hand, cover a wide range of variables, but their reliability for local statistical inference is limited. Conducting such surveys is also expensive, time-consuming, imprecise, and at times infeasible for poverty assessment. The majority of these statistics are generated over a reasonably long period and do not accurately reflect the attributes that affect the livelihoods of the residents of a particular area. Moreover, the target population is also not the same for all the datasets. For example, the states of Arunachal Pradesh, Bihar, Jharkhand, and Odisha have a considerable tribal population along with the standard rural and urban distribution. This raises concerns over the credibility of the estimated poverty figures. With a few exceptions, such as information about education and job status, such surveys focus solely on household monetary information [1]. However, as poverty is not a uni-dimensional phenomenon and not just a mere reflection of the economic status of an individual, it encompasses other concomitant aspects, such as data on infrastructural development, agricultural growth, vehicle count, network towers, political climate, caste information, electricity usage, and so forth. Such elements are more relevant in rural settings as they are direct markers of prosperity. While household data comprising the income statistics does have its place in poverty prediction, it has certain limitations, as stated above. Therefore, _we propose a data integration mechanism that takes into account both the traditional datasets along with the new proxy data sources for poverty prediction_.
### Towards the Use of Proxy Data Sources
There have been numerous studies on multidimensional poverty estimation, albeit most of them incorporate just the conditions of education, health, and living standards in addition to the economic dimension [1]. While these strategies are unquestionably superior to the single-source poverty assessment methodologies, they do have some detriments, as discussed in the previous section. To compensate for these deficiencies, the advent of big data, combined with technological breakthroughs in ML, offers great promise for poverty tracking as well as interpreting and predicting socio-economic conditions. This accounts for the data gathered from social media, remote sensing, agricultural growth, vehicular traffic, infrastructural development, mobile phone meta-data, and housing details. Political, cultural, and environmental aspects also factor in while estimating poverty. There are examples aplenty that elaborate on the successful use of these proxy data sources for poverty estimation across geographies. For instance, remotely sensed images, such as Landsat data and night-time light images, serve as the most representative data sources as they provide important information about the region's landscape. Various case studies on China [13, 14], Thailand [21], Philippines [15], Bangladesh [20], and African countries [16, 17] have demonstrated the usability of remote sensing for poverty tracking and estimation. Some other studies have shown that employing data from mobile phones [1], communication networks
[Smith-Clarke _et al._, 2014], and political climate [Van der Berg _et al._, 2006] also yields promising results. While these research works do validate the use of proxy data sources for poverty estimation, they also bring to light that no such study has scrutinized the Indian subcontinent. Moreover, these works use independent proxy data sources for poverty estimation and do not contemplate the idea of combining and integrating different data sources for more efficient, explainable, and robust predictions. Thus, this work aims to address this bottleneck by proposing a data aggregation and integration methodology, combining the traditional as well as proxy data sources, for poverty estimation of Indian states. In addition, we offer a multi-input deep learning-based architecture for aggregating and processing data from many sources. Finally, we provide methods for investigating the correlative, associative, and causative relationships between various input variables and poverty estimation, the identification of which can help in better policymaking.
### Temporal Analysis of Poverty
Poverty is a temporal phenomenon that aggravates or declines over time. Thus, to forecast poverty statistics and its progress in a region, its historical data characterizing the temporal shifts should also be taken into consideration. While such a methodology would, in all likelihood, yield better, more accurate, and robust predictions, it will also help differentiate the prominent factors contributing towards poverty from the weaker ones. It will also help diagnose the schemes, plans, and policies that succeeded during a specific time leading to poverty alleviation or that had an adverse effect, leading to poverty expansion. The contemporary literature on poverty estimation exclusively focuses on static and time-specific data, failing to account for the temporal aspect of poverty. In rapidly developing countries like India, it is paramount to take into account the temporal dimension of poverty to pinpoint its core causes and design policies to tackle it. Thus, this research proposes a temporal data collection, integration, and prediction scheme for more robust poverty forecasting.
## 3 Methodology
In this section, we elaborate on the different proxy data sources, the data integration methodology, the use of ML and deep learning techniques for poverty estimation, and the temporal analysis of poverty. Furthermore, we classify the districts based on the performance (MPI score) between 1993-2015 considering several rounds of NFHS household surveys into 'advanced', 'catching up', 'falling behind', and 'lagged' districts. The study further targets the four most lagged states - Uttar Pradesh, Jharkhand, Bihar, and Odisha, as they rank the highest in the multidimensional poverty index of 2021 [Tripathi and Yennet, 2020].
### Proxy Data Sources and Data Integration
To develop a functional methodology that governments of developing countries can use for accurate poverty estimation and tracking, one requires a dataset that is representative of the country's population, that can be collected and updated automatically and in a timely manner, and that is available at a fine level of geographical granularity. In this section, we explore the different proxy data sources for poverty estimation as well as elaborate on the data integration approach.
**Remote sensing**
Remote surveying with satellite imagery is a low-cost and dependable method of tracking human development at fine spatial and temporal resolution [Jean _et al._, 2016]. Remote sensing involves various types of satellite imaging, conveying different types of information during the day and night. Daytime satellite images provide a wealth of information on the region's geography, infrastructure development, and population growth. Moreover, it can be used to infer other signals of prosperity, such as growth in the road network, building density, forest cover, and infrastructural expansion. The nighttime luminosity information provided by the satellite images provides a lens over a region's nocturnal activity. It serves as an ideal proxy for electricity consumption, degree of electrification and population growth [Ghosh _et al._, 2013]. Studies also showed a positive correlation of nighttime luminosity with carbon dioxide emissions, GDP, GDP per capita, constant price GDP, non-agricultural GDP, and capital stocks [Addison and Stewart, 2015]. Therefore, we aim to collect and utilize both daytime and nighttime satellite image data as each has its own advantages. We propose to use the satellite imagery from the Landsat 7 mission from the years 2001, 2011, 2016, and 2019 to track daytime activity. The nighttime light data can be procured from the United States Air Force Defense Meteorological Satellite Program (DMSP). The satellite's Operational Linescan System (OLS) sensors have a spatial resolution that allows them to make observations ranging from entire continents to less than a square kilometre. To map villages and districts to their satellite-image locations, we also propose to collect the information on each of their _shapefiles_. Once a region has been linked to its satellite images, we can extract its human development attributes from the satellite-image features for that region, as stated above, based on its daytime and nighttime visual look from space.
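To make the satellite-based pipeline concrete, the sketch below shows one way to aggregate nighttime-light radiance over district polygons with `geopandas` and `rasterio`; the file names (`districts.shp`, `dmsp_ols_2011.tif`) and the two summary statistics are illustrative assumptions, not part of a finalized pipeline.

```python
# Sketch: district-level aggregation of nighttime-light radiance.
# Assumptions: "districts.shp" holds district polygons and
# "dmsp_ols_2011.tif" is a DMSP-OLS annual composite (both hypothetical paths).
import geopandas as gpd
import numpy as np
import pandas as pd
import rasterio
from rasterio.mask import mask

districts = gpd.read_file("districts.shp")

with rasterio.open("dmsp_ols_2011.tif") as src:
    districts = districts.to_crs(src.crs)            # align coordinate systems
    rows = []
    for geom in districts.geometry:
        # Clip the raster to the district polygon and keep only valid pixels.
        clipped, _ = mask(src, [geom], crop=True)
        pixels = clipped[0].astype(float)
        if src.nodata is not None:
            pixels = pixels[pixels != src.nodata]
        pixels = pixels.ravel()
        rows.append({
            "mean_luminosity": float(pixels.mean()) if pixels.size else np.nan,
            "lit_fraction": float((pixels > 0).mean()) if pixels.size else np.nan,
        })

features = districts.join(pd.DataFrame(rows))
print(features[["mean_luminosity", "lit_fraction"]].head())
```

Per-district statistics produced this way can then be joined with the survey-based indicators described below during data integration.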
**Communication networks**
Active research in the last decade has shown that information from mobile phone usage and telecommunications networks are a strong indication of a region's socio-economic status [Soto _et al._, 2011; Smith-Clarke _et al._, 2014; Pokhriyal _et al._, 2015]. We can automatically infer proxy indicators of poverty from unobtrusively obtained call network data by a detailed examination of patterns inherent in mobile phone users' collective behaviour. For instance, cellphone top-up behaviour suggests that poorer people are likely to top-up their phone credit regularly in small amounts, whereas wealthier people are more likely to top-up infrequently in larger amounts. Furthermore, an increase in network density of communication reflects an infrastructural development, an increase in population, and socio-economic well-being. In our study, _we propose to collect the Call Detail Records_ (CDR) _data aggregated to the cell tower level_. The mobile phone operators collect such user-specific data primarily for billing purposes. Using such data, we can obtain a wealth of information about each call or text message, including the time, duration, caller and callee IDs, as well as the base station towers routing the
call or text. Such precise data not only reveal the degree of penetration of mobile technology in developing countries but also provide a relatively unbiased picture in terms of demographics. To protect users' privacy, we plan to collect the CDR data aggregated by the cell towers through which the calls are routed rather than using the data at an individual user level. _We plan to extract two types of data from the CDR, the first pertaining to a single tower (measuring the number of incoming/outgoing calls)_, _and the second concerning a pair of towers_ (_measuring the flow of calls between them_). The raw CDR data contains each cell tower's location in latitude and longitude. We intend to work at the spatial granularity of the _Voronoi areas_ associated with cell tower placements. To successfully use such data, telecommunication providers just need to share anonymized, aggregated call detail records in a regulated manner. Early hints of this are already visible (e.g., the D4D Challenge3), and different frameworks are being developed to attract even more providers to join the endeavour [13].
Footnote 3: [http://www.d4d.orange.com/home](http://www.d4d.orange.com/home)
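As a rough illustration of the tower-level aggregation described above, the snippet below groups anonymized call records by originating and terminating towers and builds the Voronoi tessellation of tower locations; the CSV layouts (`cdr_aggregated.csv`, `towers.csv`) and column names are hypothetical placeholders.

```python
# Sketch: tower-level CDR aggregation and Voronoi units (hypothetical columns).
# Each CDR row is one call: caller_tower, callee_tower, duration (minutes).
import pandas as pd
from scipy.spatial import Voronoi

cdr = pd.read_csv("cdr_aggregated.csv")        # hypothetical file
towers = pd.read_csv("towers.csv")             # tower_id, lon, lat (hypothetical)

# Per-tower activity: number of outgoing/incoming calls and total talk time.
outgoing = cdr.groupby("caller_tower").agg(out_calls=("duration", "size"),
                                           out_minutes=("duration", "sum"))
incoming = cdr.groupby("callee_tower").agg(in_calls=("duration", "size"))
tower_stats = towers.set_index("tower_id").join([outgoing, incoming]).fillna(0)

# Pairwise flow between towers (directed edge weights of the call graph).
flows = cdr.groupby(["caller_tower", "callee_tower"]).size().rename("n_calls")

# Voronoi tessellation of tower locations defines the spatial unit of analysis.
vor = Voronoi(towers[["lon", "lat"]].to_numpy())
print(tower_stats.head(), flows.head(), len(vor.regions), sep="\n")
```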
**Other data sources as indicators of poverty**
Data from remote sensing and communication networks form the primary proxy data sources for poverty estimation. Other variables that may contribute towards poverty are the environmental, political, and cultural characteristics of a region. For instance, natural calamities, such as floods, droughts, tornados, extremely high or low rainfall, and famines could result in poverty aggravation. Furthermore, the details about the ruling party of a region, the policies it has implemented, and quantifying its progress could also shed light on the region's economic development. Population-related statistics, such as the number of women, men, children, and senior citizens in an area might as well have some bearing on its economic condition. Information from local news snippets could also shed light on regional developments, crime rates, and other advancements. It also remains an open question whether cultural information, such as the characteristics of different tribes and the different castes and their distribution, plays an active part in poverty tracking, assessment, and expansion. Street-view images of an area over time could also help depict its development or degradation. _We intend to consider these ancillary factors and determine whether any of these positively correlate with poverty and deprivation_.
**Data integration**
The distinctive aspect of this study is that, rather than relying solely on new proxy data sources for poverty estimation, we integrate them with traditional census and household survey data. The survey datasets provide vital individual-level information about age, sex, religion, caste, mother tongue, marital status, education, disability status, land ownership, irrigation infrastructure, and tenancy status, to name a few. They also include details on the availability of bathrooms, drinking water, separate kitchen, electricity, electronic devices, vehicle count, cooking fuel, and roof, wall, and floor materials, among other things4. This results in a diverse dataset containing images, text, numeric data, and categorical labels. While traditional statistical methods cannot accommodate non-numeric data inputs, the current state-of-the-art deep learning architectures are perfectly suited for processing and merging unstructured data from disparate sources. We elaborate on this in the following section.
Footnote 4: [https://censusindia.gov.in/census_and_you/data_item_collected_in_census.aspx](https://censusindia.gov.in/census_and_you/data_item_collected_in_census.aspx)
### Learning and Reasoning Poverty Estimation
**Neural models for poverty estimation**
Deep neural models are being aggressively employed to attain various sustainable development goals [23, 24]. For poverty assessment and tracking as well, there have been numerous successful attempts of using ML methods utilizing traditional [11, 12] as well as proxy data sources [15, 16]. However, none of these methods designs systems that aggregate and combine different data sources. To address this, _we aim to present a multi-input neural architecture that can aggregate and combine information from several disparate data sources in a non-trivial fashion_. Figure 2 presents a schematic diagram of our generalized deep neural model that takes various inputs, such as images, text, as well as numeric and categorical data. The numeric data derived from the census and household surveys can be easily fed to a deep neural model by combining all the features into a vector representation. Categorical attributes, such as gender, education, religion, caste, and marital status can be encoded using a technique, called _entity embeddings_[10]. Unstructured data types of images and text snippets cannot be fed directly into neural models. While deep learning models specialize in processing unstructured data, they require input in the form of numeric representations. However, as we have limited training data, training visual or textual embeddings from scratch will, in all likelihood, not yield desirable results. Thus, _we propose to use a transfer learning approach wherein we use pre-trained models trained on large corpora and fine-tune them to our task-specific dataset_. For the visual inputs of satellite images, we can utilize pre-trained CNN-based models, such as VGG Net [16], or ResNet [17]. For the textual inputs, we prefer the language models, such as BERT [18] or RoBERTa [19]. Each of the representations goes through a non-linear transformation before the data merging operation. The data merging can be done in various ways, such as early fusion, late fusion, naive concatenation, attention mechanism [1], or gating-based fusion. We treat our problem as a classification task wherein we have four ground-truth labels at the district level - 'advanced', 'catching up', 'falling behind', and 'lagged'. This is calculated based on aggregating the individual MDPI scores at the district level.
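The sketch below gives one minimal PyTorch realization of such a multi-input network, assuming a ResNet-18 backbone for satellite images, pre-computed 768-dimensional text embeddings, entity embeddings for categorical attributes, and naive concatenation as the fusion step; all layer sizes are illustrative choices rather than the final architecture.

```python
# Minimal sketch of a multi-input fusion classifier (PyTorch); shapes and
# hyper-parameters are placeholder assumptions.
import torch
import torch.nn as nn
import torchvision.models as tvm

class MultiInputPovertyNet(nn.Module):
    def __init__(self, n_numeric, cat_cardinalities, text_dim=768, n_classes=4):
        super().__init__()
        resnet = tvm.resnet18(weights=None)     # swap in pre-trained weights in practice
        resnet.fc = nn.Identity()               # 512-d satellite-image features
        self.image_encoder = resnet
        # Entity embeddings: one table per categorical attribute.
        self.cat_embeds = nn.ModuleList(
            [nn.Embedding(card, min(16, card // 2 + 1)) for card in cat_cardinalities])
        cat_dim = sum(e.embedding_dim for e in self.cat_embeds)
        self.numeric_mlp = nn.Sequential(nn.Linear(n_numeric, 64), nn.ReLU())
        self.text_mlp = nn.Sequential(nn.Linear(text_dim, 128), nn.ReLU())
        fused = 512 + cat_dim + 64 + 128
        self.head = nn.Sequential(nn.Linear(fused, 256), nn.ReLU(),
                                  nn.Dropout(0.3), nn.Linear(256, n_classes))

    def forward(self, image, numeric, categorical, text_emb):
        img = self.image_encoder(image)                               # (B, 512)
        cats = torch.cat([emb(categorical[:, i])
                          for i, emb in enumerate(self.cat_embeds)], dim=1)
        x = torch.cat([img, cats, self.numeric_mlp(numeric),
                       self.text_mlp(text_emb)], dim=1)               # naive concatenation fusion
        return self.head(x)                                           # logits over the 4 labels

model = MultiInputPovertyNet(n_numeric=20, cat_cardinalities=[5, 8, 3])
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 20),
               torch.randint(0, 3, (2, 3)), torch.randn(2, 768))
```

Other fusion variants mentioned above (late fusion, attention, or gating) can replace the concatenation step without changing the per-modality encoders.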
**Correlation, Association, and Causality**
Analysing the relationship of the input variables with the target output can be a challenging step, but it is important for strategic actions. Such insights are important to determine the driving factors (causative), factors that exhibit linear relationships (correlative), and factors that co-occur (associative). We propose to use the Bayesian Networks for identifying causative factors, Pearson's correlation to determine the correlative factors, and the Hypergeometric test to discover the associative factors. Such correlative, associative, and causal analyses will aid in shaping and designing efficient policymaking.
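As a toy illustration of the correlative and associative checks (the Bayesian-network step is omitted here), the snippet below computes Pearson's correlation between a synthetic luminosity indicator and an MPI-like deprivation score, and a hypergeometric test for the over-representation of low-luminosity districts among lagged districts; all data are randomly generated placeholders.

```python
# Toy sketch of the correlative and associative analyses (synthetic data only).
import numpy as np
from scipy.stats import hypergeom, pearsonr

rng = np.random.default_rng(0)
luminosity = rng.normal(size=200)                       # proxy indicator per district
mpi = -0.5 * luminosity + rng.normal(size=200)          # synthetic deprivation score

# Correlative factor: linear relationship between indicator and deprivation.
r, p_corr = pearsonr(luminosity, mpi)

# Associative factor: do "dark" districts co-occur with "lagged" districts more
# often than chance? One-sided hypergeometric test on the overlap size.
lagged = mpi > np.quantile(mpi, 0.75)
dark = luminosity < np.quantile(luminosity, 0.25)
N, K, n, k = len(mpi), int(dark.sum()), int(lagged.sum()), int((lagged & dark).sum())
p_assoc = hypergeom.sf(k - 1, N, K, n)                  # P(overlap >= k)

print(f"Pearson r = {r:.2f} (p = {p_corr:.3g}); hypergeometric p = {p_assoc:.3g}")
```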
### Temporal Analysis for Poverty Estimation
While the current vogue is to predict poverty using static data properties, this approach does not encapsulate the concept of poverty in its entirety. Poverty is a complex phenomenon with a very strong temporal component. Furthermore, recent studies have demonstrated that ML models trained for poverty prediction have poor time transferability, i.e., models built on data from one year may not make sound predictions on data from another year [1]. Thus, for efficient and robust poverty assessment, we must consider and process data in a temporal fashion. For example, a consistent decline in rainfall in an area over time might actively contribute toward increased chronic poverty. However, outlier events, such as natural calamities or socio-political upheavals are rare phenomena leading to only ephemeral impoverishment and transient poverty as people finally recover. Thus, in this project, _we propose a new research direction and methodology for the temporal assessment and prediction of poverty using longitudinal dissection of data_. To start with, we plan to consider a time frame of \(5\) years for the temporal analysis. State-of-the-art sequence learning models, such as LSTMs and GRUs, have been very successful in capturing temporal dependencies in sequential data. Transformer-based architectures [21] likewise succeed at this using the self-attention mechanism. Using these architectures, we will train a neural network using _teacher-forcing_ in which, during training, the model receives the ground-truth output \(y(t)\) as input at time \(t+1\). Thus, for each time step, the model receives the standard inputs, along with the output from the previous time step.
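A minimal sketch of this temporal setup is shown below: an LSTM receives, at every step of a 5-year window, the district features together with the previous year's ground-truth score (teacher forcing); the tensor shapes and the single-score target are placeholder assumptions.

```python
# Sketch: temporal poverty forecasting with an LSTM trained by teacher forcing
# (all tensors below are synthetic placeholders).
import torch
import torch.nn as nn

class TemporalPovertyLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features + 1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)                  # per-year MPI-like score

    def forward(self, features, prev_targets):
        # features: (B, T, n_features); prev_targets: (B, T, 1), shifted by one step.
        h, _ = self.lstm(torch.cat([features, prev_targets], dim=-1))
        return self.out(h)

B, T, F = 8, 5, 12                                        # 5-year window
x = torch.randn(B, T, F)
y = torch.randn(B, T, 1)                                  # ground-truth yearly scores
prev_y = torch.cat([torch.zeros(B, 1, 1), y[:, :-1]], dim=1)   # teacher-forcing shift

model = TemporalPovertyLSTM(F)
loss = nn.functional.mse_loss(model(x, prev_y), y)
loss.backward()
```

At inference time the ground-truth shift is replaced by the model's own previous prediction, which is the standard way to roll such a model forward.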
## 4 Model Evaluation
When evaluating a deep learning model for multifaceted and longitudinal data for poverty estimates and livelihood capabilities, it is important to consider several criteria. For our case we focus on the following six criteria:
1. **Accuracy:** The model should be able to accurately predict poverty estimates and livelihood capabilities based on the given data.
2. **Bias:** The model should not be biased towards any particular group or demographic. The model should be fair and impartial in its predictions.
3. **Generalizability:** The model should be able to generalize well to new data that it has not seen before. This is important because poverty estimates and livelihood capabilities can vary across different regions and populations. Thus, the model needs to be able to capture these variations.
4. **Interpretability:** The model should be interpretable, meaning that it should be possible to understand how the model arrived at its predictions. This is important for ensuring that the model is not making predictions based on irrelevant or biased factors. Moreover, to design effective policies, the model interpretability can help provide causal relations between the input data features and the poverty prediction outcome.
5. **Robustness:** The model should be robust to changes in the data and any noise or errors that may be present. This is important because poverty estimates and livelihood capabilities can be affected by various factors, and the model should be able to handle these variations.
6. **Scalability:** The model should be scalable, meaning that it should be able to handle large volumes of data efficiently. This is important because poverty estimates and livelihood capabilities data can be vast and complex, and the model should be able to handle this complexity.

Figure 2: Proposed multi-input deep learning model for aggregating and processing data from proxy and traditional data sources.
## 5 Challenges
Though poverty amelioration has been a primary priority of the Indian government, the fact of the matter remains that millions of Indians continue to be poor by national and international standards, despite persistent efforts since independence. The utilization of AI/ML techniques may introduce additional challenges for efforts regarding poverty assessment. They are elaborated as follows.
1. **Lack of labeled data:** Deep learning models require large amounts of labeled data to achieve accurate predictions. The more data you feed them, the better results they are likely to yield. However, in many cases, poverty-related data is not readily available or is difficult to label accurately. Therefore, this requires substantial efforts in data gathering, storing, and processing.
2. **Data quality:** Poverty-related data can be subject to quality issues such as missing values, errors, and inconsistencies, leading to biased results. For example, there may be missing data on income or consumption, or the data may not be representative of the population being studied.
3. **Data bias:** Data bias can occur when the training data used to train deep learning models is not representative of the population being studied. This can lead to inaccurate predictions and exacerbate existing inequalities.
4. **Data privacy:** Poverty-related data may contain sensitive information about individuals or households, making it difficult to collect and share. This can limit the availability of data for training deep learning models.
5. **High carbon footprint:** The AI algorithms processing big data have high energy requirements and carbon footprints, which can have a detrimental impact on SDG 7 (Affordable and Clean Energy) and SDG 13 (Climate Action).
6. **Lack of skilled personnel:** Finally, designing and monitoring AI-based systems necessitates experts in these fields. Moreover, as AI/ML is a rapidly evolving discipline, the recruited employees must regularly update their understanding of this field. However, India, which otherwise produces thousands of IT workers each year, has a severe scarcity of AI skills professionals.
## 6 Risks, Limitations, Ethical Considerations
Poverty is a structural phenomenon, and tracing its causal factors over a time period might not be enough to articulate the causes of deprivation. The macro statistics, including budget allocation by government, public and private investment, and revenue generation of respective states, are not included in the analysis. Qualitative information like life history, challenges, and opportunities at the household level are not incorporated into the analysis. The data integration might reduce the quality and explainability of the individual datasets. The data used in the study are accessible from public platforms. For satellite images, outputs will be published at the district level.
## 7 Expected Results and Long-Term Plans
The holistic results of this study will provide a cogent understanding of the underlying causes of poverty and the capability of lagged regions to come out of it. Socio-economic factors in terms of access to amenities, education, health, connectivity, and living conditions facilitate livelihood capabilities in rural India. Inability to generate sustainable livelihood results in pushing the lagged region behind. The proxy indicators have the potential for a deeper understanding of poverty. The traditional poverty measurement indicators of income level or consumption expenditure also have the probability of reporting error, whereas the analysis of poverty from different facets will assist policymakers in eradicating poverty. Even the advanced states have intrastate differences, and the districts with a concentration of marginalized populations are the most lagged regions of the state. The integrated results can provide a different perspective of poverty based on the quality of life and livelihood capabilities and strengthen the outcome of conventional indicators. Policy suggestions will be based on the causal relationship of socioeconomic determinants of quality of life based on longitudinal data. In this context, the current social welfare and rural development schemes will be evaluated based on the results.
## 8 Conclusion
Regional poverty has been a major concern since the Independence period in India. The polarised growth in advanced states does not have the ripple effect of growth for low-income states. The proposal aims to understand the social, economic, regional, and cultural dimensions of uneven development and its impact on the poverty condition of the lagged region. The application of AI in targeting lagged regions has immense scope to identify the explaining factors of poverty. The analyses also trace the quality of life and development indicators of the advanced region and the catching-up region, which will suggest a path of development for the lagged region. Data integration is a powerful method to measure poverty rather than using only traditional or proxy variables. Poverty and inequality would be widening in developing countries due to an increase in population and growth-centred policies. Therefore, targeting the lagged region and the vulnerable population is essential to eradicate poverty and improve the quality of life to achieve the goal of "zero poverty". |
2310.09647 | Point-DynRF: Point-based Dynamic Radiance Fields from a Monocular Video | Dynamic radiance fields have emerged as a promising approach for generating
novel views from a monocular video. However, previous methods enforce the
geometric consistency to dynamic radiance fields only between adjacent input
frames, making it difficult to represent the global scene geometry and
degenerates at the viewpoint that is spatio-temporally distant from the input
camera trajectory. To solve this problem, we introduce point-based dynamic
radiance fields (\textbf{Point-DynRF}), a novel framework where the global
geometric information and the volume rendering process are trained by neural
point clouds and dynamic radiance fields, respectively. Specifically, we
reconstruct neural point clouds directly from geometric proxies and optimize
both radiance fields and the geometric proxies using our proposed losses,
allowing them to complement each other. We validate the effectiveness of our
method with experiments on the NVIDIA Dynamic Scenes Dataset and several
causally captured monocular video clips. | Byeongjun Park, Changick Kim | 2023-10-14T19:27:46Z | http://arxiv.org/abs/2310.09647v2 | # Point-DynRF: Point-based Dynamic Radiance Fields from a Monocular Video
###### Abstract
Dynamic radiance fields have emerged as a promising approach for generating novel views from a monocular video. However, previous methods enforce geometric consistency on dynamic radiance fields only between adjacent input frames, making it difficult to represent the global scene geometry and causing degeneration at viewpoints that are spatio-temporally distant from the input camera trajectory. To solve this problem, we introduce point-based dynamic radiance fields (**Point-DynRF**), a novel framework where the global geometric information and the volume rendering process are trained by neural point clouds and dynamic radiance fields, respectively. Specifically, we reconstruct neural point clouds directly from geometric proxies and optimize both radiance fields and the geometric proxies using our proposed losses, allowing them to complement each other. We validate the effectiveness of our method with experiments on the NVIDIA Dynamic Scenes Dataset and several casually captured monocular video clips.
## 1 Introduction
Consider a monocular video recording of dynamic objects. While it is challenging to distinguish between static and dynamic areas in a single frame, analyzing the entire video sequence enables us to differentiate the background from the moving objects. Moreover, we can also predict the background outside a captured frame by assuming that the background scene remains constant over time. This scene reasoning ability enables us to identify the moving objects and integrate partially available scene information, which is crucial for understanding in-the-wild videos and scaling the free-viewpoint rendering.
Existing novel view synthesis methods for monocular videos often use separate modules for static and dynamic regions, where view-dependent radiance fields are designed for static regions and time-dependent radiance fields for dynamic regions [13, 22, 23, 25, 31, 39, 44, 47]. In this regard, recent deformable NeRFs [12, 31, 32, 44] learn sufficient view dependencies from small camera trajectories to represent the background, while representing the remaining regions using time-dependent radiance fields. However, in
the real world, there are many cases where the camera does not follow a narrow trajectory, and deformable NeRFs fail to distinguish between the background and dynamic objects due to the lack of learning view dependencies.
On the other hand, flow-based methods [22, 23, 25, 13] use additional supervision from pre-trained depth [33], optical flow [38] and semantic segmentation [16] estimation networks to constrain the radiance field, since identifying moving objects and estimating their motion in monocular videos are challenging. By imposing geometric constraints on the radiance field, flow-based methods can design dynamic radiance fields for large scenes. Despite this scalability, we observe that flow-based methods quickly degenerate for viewpoints spatio-temporally distant from the input camera trajectory, and the generated images are blurry and sometimes contain duplicated objects. This is because time-dependent radiance fields are trained by the optical flow supervision to satisfy geometric consistency between adjacent frames, which fails to incorporate global geometric information of the entire scene from wide-range camera trajectories. Figure 1-(b) shows the problem of a state-of-the-art dynamic view synthesis method [13], where a person is duplicated outside of the input frame and the background is not preserved after the person walks by because of the duplicated person.
Motivated by our observations, we introduce point-based dynamic radiance fields (**Point-DynRF**) to represent the entire scene geometry and produce more realistic long-term novel view synthesis results. Point-DynRF is built upon the Point-NeRF [46] representation, which reconstructs 3D neural point clouds and encodes the localized scene representation from neighboring neural points. While Point-NeRF aims at static scenes, we extend it to consider the time domain, where different subsets of neural point clouds are sampled at each time step to represent time-varying radiance fields. Specifically, we utilize a pre-trained depth estimation network [33] and pre-defined foreground masks [13] to initialize pixel-wise depth and rigidness of our neural point clouds, respectively. Moreover, we propose dynamic ray marching, where we march a ray over a subset of the entire point cloud consisting of all background points and the dynamic points corresponding to the rendering time. As each subset of neural point clouds represents the actual scene surface of the corresponding rendering time, our Point-DynRF can regress dynamic radiance fields only on the scene surface at that rendering time and alleviate the generation of duplicated dynamic objects.
To train Point-DynRF, we simply modify the training objective of DVS [13] and jointly optimize the neural point clouds and dynamic radiance fields, rather than solely supervising the radiance fields using initialized depth and foreground masks. Specifically, we train Point-DynRF to align the initialized learnable depth and foreground masks with the volume rendered depth and dynamicness maps. Through the joint optimization scheme, the global scene geometry and dynamic radiance fields are further refined and complement each other, addressing the degeneration problems of previous methods in long-term dynamic view synthesis. Extensive experiments on the NVIDIA Dynamic Scenes [47] and several monocular video clips show the efficiency and effectiveness of our method.
## 2 Related Works
**Neural representations for novel view synthesis.** Novel view synthesis aims to generate new views of a scene given multiple posed images. To consider arbitrary viewpoints in three dimensions, multi-view geometry is often utilized and combined with image-based rendering methods to synthesize realistic novel views [10, 20, 34, 50, 9]. Moreover, deep neural networks have been explored to improve the visual quality of novel views by using explicit geometric proxies, such as multi-plane images [49, 36, 42], point clouds [43, 1, 40], and voxels [12, 35].
Recently, coordinate-based neural representations [27, 26, 28, 29] have achieved outstanding results in modeling the scene as implicit scene representations. In the context of novel view synthesis, Neural Radiance Fields (NeRF) [27] has been proposed to model the scene as a continuous volumetric field with neural networks. The success of NeRF is attributed to the extension of neural representation design, which facilitates free-viewpoint rendering with various applications, such as relighting [4], appearance editing [48, 24], reflections [14], and generative models [28, 5, 7]. Despite its remarkable scalability, several methods [19, 46] focus on the fact that NeRF samples a large number of unnecessary points for each ray. Specifically, Point-NeRF [46] models a volumetric radiance field with 3D neural point clouds, avoiding ray sampling in the empty space and encoding localized scene representations. Our work extends Point-NeRF, encoding different scene representations for static and dynamic regions by leveraging its capability to encode localized scene representations.
**Dynamic view synthesis for videos.** Dynamic view synthesis focuses on generating novel views with dynamically moving objects at arbitrary viewpoints and time stamps. Several works have been proposed to model time-varying scenes from multiple time-synchronized videos [3, 21, 37, 50], sparse camera views [17, 11], stereo cameras [2], and specific domains [15, 41, 6]. However, modeling a neural scene representation from a monocular video is more challenging since it contains a single viewpoint for each time stamp. This causes an ambiguity: a change in radiance can be explained as either view-dependent or time-varying, or both. To solve this ambiguity, Yoon _et al._ [47] combine an explicit depth estimation module to leverage geometric transformations
(i.e., warping) and blending strategies for synthesizing novel views of a dynamic scene, but their approach requires time-consuming preprocessing to generate manually annotated foreground masks. Recently, flow-based methods [13, 22, 25, 45] directly regress 4D space-time radiance fields by using additional geometric proxies, such as depth [33] and optical flow [38] estimation networks. Geometric proxies are used as additional supervision to learn the deformation module and constrain temporal changes of a dynamic scene. Several methods [39, 30, 31, 12, 44] propose deformable neural radiance fields by modeling a canonical template radiance field and a deformation field for each frame. Our work also uses geometric proxies for point cloud initialization, but we optimize the dynamic radiance fields and geometric proxies together based on the volume rendering process. Moreover, point-based dynamic radiance fields allow us to incorporate the entire scene geometry and regress the radiance fields from the actual scene surface for each rendering time.
## 3 Method
Given a monocular video \(V=\{I_{1},I_{2},\ldots,I_{N}\}\) consisting of \(N\) frames, our goal is to synthesize novel views at arbitrary viewpoints and time steps. To achieve this, we design point-based dynamic radiance fields as shown in Fig. 2. Our model is built on the Point-NeRF [46] representation and extends it to consider time-varying radiance fields. We briefly describe the volume rendering formulation in 3.1 and then explain how to extend Point-NeRF to consider the time domain in 3.2. Finally, we illustrate the optimization scheme of Point-DynRF in 3.3.
### Volume rendering
We construct continuous volumetric fields for modeling dynamic scenes, following the formulation in NeRF [27]. Given the camera center \(\mathbf{o}\in\mathbb{R}^{3}\) and viewing direction \(\mathbf{d}\in\mathbb{R}^{2}\), each pixel's RGB color \(\mathbf{C}\in\mathbb{R}^{3}\) is computed by marching a ray \(\mathbf{r}(s)=\mathbf{o}+s\mathbf{d}\) through the pixel and approximate the integration over radiance and its volume density \(\{(r_{j},\sigma_{j})\in\mathbb{R}^{3}\times\mathbb{R}\mid j=1,\ldots,M\}\) for \(M\) sampling points in the ray as:
\[\mathbf{C}(\mathbf{r})=\sum_{j=1}^{M}T_{j}(\alpha(\sigma_{j}\delta_{j}))r_{j}, \tag{1}\]
\[T_{j}=\text{exp}(-\sum_{k=1}^{j-1}\sigma_{k}\delta_{k}), \tag{2}\]
where \(\alpha(x)=1-\text{exp}(-x)\) outputs the opacity at each sampling point, \(\delta_{j}\) is the distance between two adjacent sampling points and \(T_{j}\) represents a volume transmittance.
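For readers who prefer code, the toy PyTorch snippet below numerically evaluates Eqs. (1)-(2) for a single ray with random per-sample densities and radiances; it is a sketch of the quadrature only, not our full renderer.

```python
# Minimal numerical sketch of Eqs. (1)-(2): compositing per-sample density and
# radiance along one ray into a pixel colour (toy values, PyTorch).
import torch

def composite(sigma, radiance, delta):
    # sigma: (M,), radiance: (M, 3), delta: (M,) distances between samples.
    alpha = 1.0 - torch.exp(-sigma * delta)                          # per-sample opacity
    trans = torch.exp(-torch.cumsum(
        torch.cat([torch.zeros(1), sigma[:-1] * delta[:-1]]), dim=0))  # transmittance T_j
    weights = trans * alpha
    return (weights[:, None] * radiance).sum(dim=0)                  # pixel RGB

sigma = torch.rand(64)
radiance = torch.rand(64, 3)
delta = torch.full((64,), 0.05)
print(composite(sigma, radiance, delta))
```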
### Point-DynRF Representation
Point-NeRF [46] is pre-trained on a multi-view stereo dataset [18] or uses only points located on the actual surface with high confidence from COLMAP. In dynamic scenes, however, it fails to accurately regress the scene geometry since dynamic objects disrupt the estimation of point-to-point correspondences. To solve this ambiguity, we propose Point-DynRF with associated neural point clouds, which are initialized by imprecise depth maps and pre-defined foreground masks, and we jointly optimize the scene geometry and dynamic radiance fields.

Figure 2: **An overview of network architecture.** Our framework consists of three components. First, we initialize per-frame depth maps \(D_{n}\) and foreground masks \(M_{n}\) for the given \(N\) frames. Then, we back-project each pixel of the \(N\) frames to reconstruct our neural 3D point clouds. Each neural point \(i\) contains its spatio-temporal location \((\mathbf{p}_{i},t_{i})\), a point-wise rigidness \(\gamma_{i}\), and a randomly initialized neural feature vector \(f_{i}\) to represent the local scene representation. Then, we select a subset point cloud at a rendering time step \(t\) and assign sampling points where the ray meets the neural points as it marches. Finally, we regress a volume density and a radiance on both view-dependent and time-dependent radiance fields. The volume density and radiance for each sampling point in the ray are integrated via volume rendering to output an RGB color.
**Neural Point Clouds Reconstruction.** Our neural point clouds are reconstructed from depth maps \(\{D_{1},...,D_{N}\}\) and foreground masks \(\{M_{1},...,M_{N}\}\). We first initialize per-frame depths by using disparity maps \(disp_{n}\) obtained from DPT [33] and convert them to depth maps with per-frame scale \(s_{n}\) and shift \(b_{n}\) values as:
\[D_{n}(p)=s_{n}/(disp_{n}(p)+b_{n}). \tag{3}\]
Note that we design a more stable network by optimizing scale, shift, and disparity together rather than optimizing pixel-wise depth values individually. Per-frame foreground masks are obtained in the same manner as in DVS [13], and we directly parameterize our point-wise rigidness \(\gamma\) with \(1\) for the background and \(0\) for moving objects. Thus, we reconstruct neural point clouds as \(\mathbb{P}=\{(\textbf{p}_{i},t_{i},\textbf{f}_{i},\gamma_{i})\mid i=1,...,L\}\), where each point \(i\) is located at \(\textbf{p}_{i}\) and captured at time step \(t_{i}\) with a point-wise rigidness \(\gamma_{i}\). We also use a neural feature vector \(\textbf{f}_{i}\), which is randomly initialized and parameterized to encode local scene representations. Since each neural point is a one-to-one match to a pixel of the input frames, training the \(\textbf{p}_{i}\) and \(\gamma_{i}\) of each neural point optimizes the depth and foreground masks.
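A rough NumPy sketch of this initialization step is given below: disparity is converted to depth with a per-frame scale and shift (Eq. 3), every pixel is back-projected to a world-space point, rigidness is set from the foreground mask, and per-point features are drawn at random. The intrinsics, the camera-to-world pose convention, and the 32-dimensional feature size are illustrative assumptions.

```python
# Sketch of neural point initialization (Eq. 3 plus back-projection); values,
# intrinsics and poses are placeholders.
import numpy as np

def init_points(disp, fg_mask, K, c2w, scale=1.0, shift=0.0):
    H, W = disp.shape
    depth = scale / (disp + shift)                                   # Eq. (3)
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    cam = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)        # camera coords
    world = (c2w[:3, :3] @ cam.T).T + c2w[:3, 3]                     # world coords p_i
    rigidness = 1.0 - fg_mask.reshape(-1).astype(np.float32)         # gamma_i (1 = background)
    features = np.random.randn(H * W, 32).astype(np.float32)         # random f_i
    return world, rigidness, features

K = np.array([[500., 0, 160], [0, 500., 120], [0, 0, 1]])
world, gamma, f = init_points(np.random.rand(240, 320) + 0.1,
                              np.zeros((240, 320)), K, np.eye(4))
```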
**Dynamic Ray Marching.** Our dynamic radiance fields are regressed from a different subset of the entire point cloud set \(\mathbb{P}\) at each time step based on the sampling time and the point-wise rigidness. Specifically, we select neural points whose point-wise rigidness is higher than the threshold \(\lambda=0.5\), or whose temporal location is the same as the sampling time, as:
\[\mathbb{P}_{t}=\{(\textbf{p}_{i},t_{i},\textbf{f}_{i},\gamma_{i})\in\mathbb{P }\mid t_{i}=t\text{ or }\gamma_{i}>\lambda\}, \tag{4}\]
where neural points whose \(\gamma_{i}\) is higher than \(\lambda\) are treated as background points that represent the static region, while the dynamic points change with each subset as the position of the dynamic object changes over time. Moreover, dynamic neural points from other time steps do not represent the scene surface at the rendering time, which avoids unnecessary ray sampling and prevents duplicated objects.
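In code, the per-time subset of Eq. (4) is a simple boolean selection over the stored point attributes, as the toy snippet below illustrates (array sizes and the number of frames are arbitrary).

```python
# Sketch of Eq. (4): selecting the per-time subset of neural points,
# using lambda = 0.5 as in the text (synthetic arrays).
import numpy as np

def select_subset(times, rigidness, t, lam=0.5):
    keep = (times == t) | (rigidness > lam)
    return np.flatnonzero(keep)

times = np.random.randint(0, 24, size=10000)      # t_i of each neural point
rigidness = np.random.rand(10000)                 # gamma_i of each neural point
idx_t = select_subset(times, rigidness, t=7)      # indices forming P_t
```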
**Neural Point Aggregation.** After we select the subset of the neural point cloud, Point-DynRF aggregates neural points to output the density and radiance for each shading point. Specifically, we follow Point-NeRF [46] to query \(K=8\) neighboring neural points for ray sampling, and we encode per-point local scene features with an MLP layer \(F\) for each shading point **x** as:
\[f_{i,\textbf{x}}=F(\textbf{f}_{i},\textbf{x}-\textbf{p}_{i}). \tag{5}\]
**Volume Density Regression.** We use density regression MLP layers \(G_{s}\) and \(G_{d}\) for static and dynamic regions, respectively. We first encode per-point time-invariant volume density \(\sigma^{s}\) and time-varying volume density \(\sigma^{d}\) as:
\[\sigma^{s}_{i,\textbf{x}}=G_{s}(f_{i,\textbf{x}}), \tag{6}\]
\[\sigma^{d}_{i,\textbf{x}}=G_{d}(f_{i,\textbf{x}},t). \tag{7}\]
Then, the time-invariant volume density \(\sigma^{s}_{\textbf{x}}\) and time-variant volume density \(\sigma^{d}_{\textbf{x}}\) at the sampling point **x** are regressed as:
\[\sigma^{s}_{\textbf{x}}=\sum_{i}\sigma^{s}_{i,\textbf{x}}\frac{w_{i}}{\sum w_{ i}}, \tag{8}\]
\[\sigma^{d}_{\textbf{x}}=\sum_{i}\sigma^{d}_{i,\textbf{x}}\frac{w_{i}}{\sum w_{ i}}, \tag{9}\]
where \(w_{i}=\frac{1}{|p_{i}-x|}\) defines a distance-based weighted sum that gives higher weight to neural points closer to **x**.
**Radiance Regression.** We regress a view-dependent radiance \(r^{s}_{\textbf{x}}\) and a time-dependent radiance \(r^{d}_{\textbf{x}}\) by using MLP layers \(R_{s}\) and \(R_{d}\), respectively, as:
\[r^{s}_{\textbf{x}}=R_{s}(\sum_{i}\frac{w_{i}}{\sum w_{i}}f_{i,\textbf{x}},d), \tag{10}\]
\[r^{d}_{\textbf{x}}=R_{d}(\sum_{i}\frac{w_{i}}{\sum w_{i}}f_{i,\textbf{x}},t), \tag{11}\]
where \(d\) and \(t\) are the viewing direction and sampling time, respectively.
**Blending Weight Regression.** We directly regress blending weights from the point-wise rigidness \(\gamma_{i}\) of neighboring points as:
\[b_{\textbf{x}}=\mathbb{1}[\sum_{i}(\frac{w_{i}}{\sum w_{i}}(1-\gamma_{i}))> \lambda], \tag{12}\]
where \(\mathbb{1}\) equals one if the condition is true. We define the blending weight as \(0\) or \(1\) so that either the static or the dynamic radiance field dominates at each shading point. To optimize \(\gamma_{i}\), we apply the gradient clamping used in Point-NeRF to \(\sum_{i}(\frac{w_{i}}{\sum w_{i}}(1-\gamma_{i}))\) if MAX\((\sigma^{s}_{\textbf{x}},\sigma^{d}_{\textbf{x}})\) is larger than a threshold of \(0.7\) and there exists at least one dynamic point.
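The short sketch below mirrors the aggregation of Eqs. (8), (9) and (12) for a single shading point: inverse-distance weights over the \(K\) neighbours, weighted density sums, and the hard blending weight. The per-point densities and rigidness values are random stand-ins for the outputs of \(G_{s}\), \(G_{d}\) and the stored \(\gamma_{i}\).

```python
# Toy sketch of the per-shading-point aggregation (Eqs. 8, 9 and 12);
# per-point densities/rigidness are placeholders for the MLP outputs.
import torch

def aggregate(p_neighbors, x, sigma_s_i, sigma_d_i, gamma_i, lam=0.5):
    w = 1.0 / torch.linalg.norm(p_neighbors - x, dim=-1).clamp_min(1e-8)   # w_i
    w = w / w.sum()
    sigma_s = (w * sigma_s_i).sum()                      # Eq. (8)
    sigma_d = (w * sigma_d_i).sum()                      # Eq. (9)
    b = float((w * (1.0 - gamma_i)).sum() > lam)         # Eq. (12), hard 0/1 weight
    return sigma_s, sigma_d, b

K = 8
out = aggregate(torch.randn(K, 3), torch.zeros(3), torch.rand(K),
                torch.rand(K), torch.randint(0, 2, (K,)).float())
```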
### Training Objectives
In this section, we briefly demonstrate how we jointly optimize dynamic radiance fields and neural 3D point clouds. Specifically, we introduce reconstruction losses to learn combined NeRF, static NeRF, and dynamic NeRF in Sec. 3.3.1, scene geometry losses to reconstruct accurate neural points in Sec. 3.3.2 and joint optimization of Point-DynRF and neural 3D points in Sec. 3.3.3.
#### 3.3.1 Reconstruction Loss
**Combined NeRF.** We apply a reconstruction loss to dynamic radiance fields, which are a blend of view-dependent and time-dependent radiance fields. To this end, we combine two radiance fields with blending weights as:
\[\mathbf{C}(\mathbf{r},t,\mathbb{P}_{t})=\sum_{j=1}^{M}T_{j}\left(\alpha(\sigma_{ j}^{s}\delta_{j})(1-b_{j})r_{j}^{s}+\alpha(\sigma_{j}^{d}\delta_{j})b_{j}r_{j}^{d} \right), \tag{13}\]
\[T_{j}=\text{exp}\left(-\sum_{k=1}^{j-1}\left((\sigma_{k}^{s}(1-b_{k})+\sigma_{ k}^{d}b_{k})\delta_{k}\right)\right), \tag{14}\]
where \(\mathbf{C}(\mathbf{r},t,\mathbb{P}_{t})\) is a volume rendered RGB value from a ray \(\mathbf{r}(s)=\mathbf{o}+s\mathbf{d}\), rendering time \(t\), and a subset point cloud \(\mathbb{P}_{t}\). To ensure that the dynamic radiance fields accurately reconstruct the input video sequence, we jointly train view-/time-dependent radiance fields by applying a reconstruction loss \(L_{rec}^{full}\) as:
\[L_{rec}^{full}=\sum_{i=1}^{N}\sum_{uv}\lVert\mathbf{C}(\mathbf{r}_{uv}^{i},i,\mathbb{P}_{i})-I_{u,v}^{i}\rVert_{2}^{2}, \tag{15}\]
where \(\mathbf{r}_{uv}^{i}\) is a ray for pixel coordinates \((u,v)\) in \(i\)-th frame and \(I_{u,v}^{i}\) is a ground-truth RGB value for pixel coordinates \((u,v)\) in \(i\)-th frame.
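As a numerical sanity check of Eqs. (13)-(14), the toy snippet below composites the static and dynamic branches along one ray using hard per-sample blending weights; densities, radiances and weights are random placeholders.

```python
# Toy sketch of Eqs. (13)-(14): blending static and dynamic branches inside
# the compositing loop with hard per-sample weights b_j.
import torch

def composite_blended(sig_s, sig_d, rad_s, rad_d, b, delta):
    sigma = sig_s * (1 - b) + sig_d * b                                   # blended density
    trans = torch.exp(-torch.cumsum(
        torch.cat([torch.zeros(1), (sigma * delta)[:-1]]), dim=0))        # Eq. (14)
    contrib = (1 - torch.exp(-sig_s * delta))[:, None] * (1 - b)[:, None] * rad_s \
            + (1 - torch.exp(-sig_d * delta))[:, None] * b[:, None] * rad_d
    return (trans[:, None] * contrib).sum(dim=0)                          # Eq. (13)

M = 32
rgb = composite_blended(torch.rand(M), torch.rand(M), torch.rand(M, 3),
                        torch.rand(M, 3), torch.randint(0, 2, (M,)).float(),
                        torch.full((M,), 0.05))
```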
**Static and Dynamic NeRF.** We leverage point-based neural scene representations to learn time-invariant radiance fields (Static NeRF) and time-variant radiance fields (Dynamic NeRF), respectively. If we sample a subset point cloud \(\mathbb{P}_{t,s}\) consisting of only background points as:
\[\mathbb{P}_{t,s}=\{(\mathbf{p}_{i},t_{i},\mathbf{f}_{i},\gamma_{i})\in \mathbb{P}\mid\gamma_{i}>\lambda\}, \tag{16}\]
a volume rendered image contains only the background with no dynamic objects. Likewise, if we sample a subset point cloud \(\mathbb{P}_{t,d}\) captured at a specific time \(t\) as:
\[\mathbb{P}_{t,d}=\{(\mathbf{p}_{i},t_{i},\mathbf{f}_{i},\gamma_{i})\in \mathbb{P}\mid t_{i}=t\}, \tag{17}\]
Point-DynRF can render an image restricted to only the neural points at that moment. Figure 3 shows which subset point clouds are used by combined NeRF, Static NeRF, and Dynamic NeRF. Thus, each radiance field is regressed by using Eq. 1 as:
\[\mathbf{C}^{s}(\mathbf{r},t,\mathbb{P}_{t,s})=\sum_{j=1}^{M}T_{j}^{s}(\alpha( \sigma_{j}^{s}\delta_{j}))r_{j}^{s}, \tag{18}\]
\[\mathbf{C}^{d}(\mathbf{r},t,\mathbb{P}_{t,d})=\sum_{j=1}^{M}T_{j}^{d}(\alpha( \sigma_{j}^{d}\delta_{j}))r_{j}^{d}. \tag{19}\]
Then, we apply reconstruction losses to each radiance field as:
\[L_{rec}^{s}=\sum_{i,u,v}\lVert\mathbf{C}^{s}(\mathbf{r}_{uv}^{i},t,\mathbb{P }_{t,s})-I_{u,v}^{i}\rVert_{2}^{2}*\mathbb{1}[M_{u,v}^{i}>\lambda], \tag{20}\]
\[L_{rec}^{d}=\sum_{i,u,v}\lVert\mathbf{C}^{d}(\mathbf{r}_{uv}^{i},t,\mathbb{P}_{t,d})-I_{u,v}^{i}\rVert_{2}^{2}, \tag{21}\]
where we only apply \(L_{rec}^{s}\) to background regions by using a foreground mask, and \(\mathbb{1}[M_{u,v}^{i}>\lambda]\) indicates whether a rigidness value of pixel coordinates \((u,v)\) in \(i\)-th frame is higher than the threshold \(\lambda\). Finally, our reconstruction loss is formulated as:
\[L_{rec}=\lambda_{rec}^{full}L_{rec}^{full}+\lambda_{rec}^{s}L_{rec}^{s}+ \lambda_{rec}^{d}L_{rec}^{d}. \tag{22}\]
#### 3.3.2 Scene Geometry Loss
Initialized depth maps represent the scene geometry well but contain scale ambiguities relative to other frames. Therefore, we use optical flow maps \(f_{gt}\) from RAFT [38] to supervise the scale \(s_{t}\) and shift \(b_{t}\) by applying a flow loss \(L_{flow}\) only to background pixels as:
\[[u^{\prime},v^{\prime},z^{\prime}]^{T}=T_{t^{\prime}}^{-1}T_{t}D_{t}[u,v,1]^{T}, \tag{23}\]
\[L_{flow}=\sum_{uv}\lVert(\frac{u^{\prime}}{z^{\prime}}-u,\frac{v^{\prime}}{z^{\prime}}-v)-f_{gt}\rVert*\mathbb{1}[M_{u,v}^{i}>\lambda], \tag{24}\]
where \(t^{\prime}\) indicates the time step of an adjacent frame and \(T_{t}\) denotes the known camera parameters at \(t\). Note that we detach the gradient from back-propagating to the disparity so that only the scale and shift values can be trained from the flow loss.
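A toy NumPy version of this reprojection check is shown below: a background pixel is lifted with its depth, mapped into the adjacent frame with the relative pose, and the induced displacement is compared with the RAFT flow. The explicit use of intrinsics \(K\) and camera-to-world matrices is our reading of Eq. (23), and all values are placeholders.

```python
# Toy sketch of Eqs. (23)-(24) for a single background pixel, assuming
# camera-to-world pose matrices and explicit intrinsics K (placeholder values).
import numpy as np

def flow_residual(u, v, depth, K, T_t, T_tp, flow_gt):
    pix = np.array([u, v, 1.0]) * depth
    cam = np.linalg.inv(K) @ pix                          # lift to camera coords at t
    world = T_t[:3, :3] @ cam + T_t[:3, 3]                # to world coords
    T_tp_inv = np.linalg.inv(T_tp)
    cam2 = T_tp_inv[:3, :3] @ world + T_tp_inv[:3, 3]     # into the adjacent camera t'
    proj = K @ cam2
    u2, v2 = proj[0] / proj[2], proj[1] / proj[2]
    return np.linalg.norm(np.array([u2 - u, v2 - v]) - flow_gt)

K = np.array([[500., 0, 160], [0, 500., 120], [0, 0, 1]])
print(flow_residual(100, 80, 2.0, K, np.eye(4), np.eye(4), np.zeros(2)))  # 0 for identity poses
```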
Figure 3: **An overview of point cloud subsets for each NeRF.**

Moreover, we observe two cases in which point-based ray marching can miss the ray, as shown in Fig. 4. While learning the scene geometry, some pixels may have large depth values to satisfy the geometric consistency. As a result, neighboring pixels in the image plane also fall outside the query boundary, so the ray cannot be marched. Therefore, we introduce \(L^{s}_{miss}\), an \(\ell_{2}\)-loss that minimizes the depth value corresponding to the pixel for which the ray is not marched. Also, rays may not be marched for different render times at a fixed viewpoint due to the dynamic ray sampling. To solve this problem, we introduce \(L^{d}_{miss}\), which is also an \(\ell_{2}\)-loss that pushes the rigidness of the green point in Fig. 4-(b) towards one. Note that \(L^{s}_{miss}\) and \(L^{d}_{miss}\) are introduced to deal with outlier cases, since missing rays are rarely present in the entire training process. Consequently, a scene geometry loss \(L_{geo}\) is formulated as:
\[L_{geo}=\lambda_{flow}L_{flow}+\lambda^{s}_{miss}L^{s}_{miss}+\lambda^{d}_{ miss}L^{d}_{miss}. \tag{25}\]
#### 3.3.3 Joint Optimization
We further introduce loss functions that optimize the dynamic radiance field and neural points together. Our joint optimization losses are formulated in the same manner as DVS [13]. However, we make a modification by introducing learnable per-frame depth and foreground masks, in contrast to the supervised learning approach of matching the volume-rendered depth \(\mathbf{\tilde{D}}(\mathbf{r},t,\mathbb{P}_{t})\) and dynamicness map \(\mathbf{\tilde{M}}(\mathbf{r},t,\mathbb{P}_{t})\) to the initialized depth and foreground mask.
**Depth Adjust Loss.** We apply a depth adjust loss \(L_{depth}\) to train the depth map of the \(i\)-th frame \(D_{i}\) to match the expected depth \(\mathbf{\tilde{D}}(\mathbf{r},t,\mathbb{P}_{t})\) as:
\[L_{depth}=\sum_{i=1}^{N}\sum_{uv}\lVert\mathbf{\tilde{D}}(\mathbf{r}^{i}_{uv},i,\mathbb{P}_{i})-D^{i}_{u,v}\rVert_{2}^{2}, \tag{26}\]
where \(D^{i}_{u,v}\) is a depth value of pixel coordinates \((u,v)\) in \(i\)-th frame.
**Mask Adjust Loss.** Similar to expected depth maps, we use volume rendering for the blending weight to get the dynamicness map \(\mathbf{\tilde{M}}(\mathbf{r}^{i}_{uv},i,\mathbb{P}_{i})\) and propose a mask adjust loss \(L_{mask}\) to match the per-frame foreground mask.
\[L_{mask}=\sum_{i=1}^{N}\sum_{uv}\lVert\mathbf{\tilde{M}}(\mathbf{r}^{i}_{uv},i,\mathbb{P}_{i})-M^{i}_{u,v}\rVert_{2}^{2}. \tag{27}\]
## 4 Experiments
### Experimental Settings
**Dataset.** We evaluate our method on the Dynamic Scene Dataset [47]. We also use the same evaluation protocol as DVS [13], which evaluates the quality of the synthesized novel views through PSNR, SSIM and LPIPS metrics with ground truth images. Note that we exclude the Umbrella sequences since COLMAP estimates inaccurate camera poses, failing to regress the scene geometry accurately. Instead, we evaluate our method on several casually captured monocular video clips, which are more realistic videos and have a wide range of camera trajectories. Casually captured videos provide various scene contexts that can happen in real-world scenarios, and COLMAP accurately estimates their camera poses.
### Comparison to Baselines
We now compare our method with the state-of-the-art methods on the NVIDIA Dynamic Scene dataset [47]. Table 1 shows quantitative results, and Point-DynRF demonstrates competitive performance with previous methods across most scenes. Specifically, Point-DynRF outperforms all previous methods on the SSIM metric for all scenes. However, due to the inaccurate camera poses estimated by COLMAP, the construction of neural points in Point-DynRF is not optimal. As a result, the rendered position of the dynamic object by Point-DynRF may differ from the ground truth. In the Playground scene depicted in Fig. 5, Point-DynRF generates a visually pleasing view, but the position of the object is slightly shifted backwards compared to the ground truth. In the Skating scene, Point-DynRF generates realistic images, while flow-based methods like NSFF [22], DVS [13], and RoDynRF [25] produce blurry images, and deformable NeRFs such as HyperNeRF [31] and TiNeuVox [12] struggle to represent the scene.
### Long-Term View Synthesis
Figure 4: **Missing rays.** Assuming a ray is marched when it has three neighbor points for a shading point: (a) if the depth value is too large, the distance between neighboring pixels in 3D world coordinates is larger than the querying radius, so the ray fails to march; moreover, the subset point cloud changes as the render time varies, so rays may be marched (b) or sometimes not (c).

We evaluate our method and the flow-based dynamic view synthesis methods DVS [13], RoDynRF [25] and DynIBaR [23] on real-world scenarios with a wide-range camera trajectory. Point-DynRF can generate realistic novel views for viewpoints far from the input camera trajectory in both space and time because it leverages global scene geometry (i.e., neural 3D points). Figure 6 shows the long-term view synthesis results, where DVS [13] quickly degenerates for unseen viewing directions and produces artifacts in the background regions. Moreover, DynIBaR [23] fails to generate a dynamic object since it is highly over-fitted on the input trajectory. On the other hand, our Point-DynRF effectively captures both the moving object and the background.
We also observe that DVS [13] infinitely duplicates the moving object and RoDynRF [25] also degenerates when the camera keeps moving in a single direction, as shown in Fig. 7. This is due to geometric constraints focused on the input camera trajectory, which fail to represent the global scene geometry. Notably, our Point-DynRF generates more detailed dynamic regions as well as static background regions. These results confirm the superiority of our dynamic ray sampling and joint optimization scheme, which incorporates the entire scene geometry.
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c}{PSNR(\(\uparrow\)) / SSIM(\(\uparrow\)) / LPIPS(\(\downarrow\))} \\ \cline{2-9} & Jumping & Skating & Truck & Balloon1 & Balloon2 & Playground & Avg \\ \hline NeRF [27] + time & 16.6 / 0.42 / 0.48 & 19.1 / 0.46 / 0.54 & 17.1 / 0.39 / 0.40 & 17.5 / 0.40 / 0.29 & 19.8 / 0.54 / 0.22 & 13.7 / 0.18 / 0.44 & 17.3 / 0.40 / 0.40 \\ D-NeRF [32] & 21.0 / 0.68 / 0.21 & 20.8 / 0.62 / 0.35 & 22.9 / 0.71 / 0.15 & 18.0 / 0.44 / 0.28 & 19.8 / 0.52 / 0.30 & 19.4 / 0.65 / 0.17 & 20.4 / 0.59 / 0.24 \\ NR-NeRF [39] & 19.4 / 0.61 / 0.29 & 23.2 / 0.72 / 0.23 & 18.4 / 0.44 / 0.45 & 17.0 / 0.34 / 0.35 & 22.0 / 0.70 / 0.21 & 14.3 / 0.19 / 0.33 & 19.2 / 0.50 / 0.33 \\ HyperNeRF [31] & 17.1 / 0.45 / 0.32 & 20.6 / 0.58 / 0.19 & 19.4 / 0.43 / 0.21 & 12.8 / 0.13 / 0.56 & 15.4 / 0.20 / 0.44 & 12.3 / 0.11 / 0.52 & 16.3 / 0.32 / 0.37 \\ TiNeVox [12] & 19.7 / 0.60 / 0.26 & 21.9 / 0.68 / 0.16 & 22.9 / 0.63 / 0.19 & 16.2 / 0.34 / 0.37 & 18.1 / 0.41 / 0.29 & 12.6 / 0.14 / 0.46 & 18.6 / 0.47 / 0.29 \\ NSFF [22] & 23.9 / 0.80 / 0.15 & 28.8 / 0.88 / 0.13 & 25.4 / 0.76 / 0.17 & 21.5 / 0.69 / 0.22 & 23.8 / 0.73 / 0.23 & 20.8 / 0.70 / 0.22 & 24.1 / 0.76 / 0.18 \\ DVS [13] & 23.4 / 0.83 / 0.10 & **31.9** / 0.94 / **0.04** & 27.9 / 0.86 / 0.09 & 21.6 / 0.75 / **0.11** & **26.6** / 0.85 / **0.05** & 23.7 / 0.85 / 0.08 & **25.9** / 0.85 / 0.08 \\ RoDynRF [25] & **24.3** / 0.84 / **0.08** & 27.5 / 0.93 / 0.06 & 28.3 / 0.89 / **0.07** & 21.4 / 0.26 / **0.11** & 25.6 / 0.84 / 0.06 & **24.3** / 0.82 / **0.05** & 25.2 / 0.86 / **0.07** \\ \hline
**Point-DynRF (Ours)** & 23.6 / **0.90** / 0.14 & 29.6 / **0.96** / **0.04** & **28.5** / **0.94** / 0.08 & **21.7** / **0.88** / 0.12 & 26.2 / **0.92** / 0.06 & 22.2 / **0.91** / 0.09 & 25.3 / **0.92** / 0.08 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Quantitative results on the NVIDIA Dynamic Scene dataset [47].** Image quality is measured by PSNR, SSIM and LPIPS. Furthermore, we show the average performance over all scenes in the last column. Best results in each metric are in **bold**, and second best are underlined.
Figure 5: **Comparison to baselines on NVIDIA Dynamic Scene Dataset [47].**
Figure 6: **Long-Term Novel View Generation.** For a fixed camera viewpoint at time \(t_{0}\), the first column shows the novel view at time \(t_{0}\) and the second column shows the novel view at time \(t_{1}\). DynIBaR is over-fitted on the input camera trajectory (green box).
Figure 7: **Extremely Wide-Range Camera Trajectory.** DVS produces duplicated dynamic objects in distant spatio-temporal locations from the input camera trajectory (green boxes). Moreover, previous methods are quickly degenerated at the spatio-temporally distant viewpoints.
### Effect of Accurate Scene Geometry
To verify the effectiveness of our proposed losses for training the scene geometry, we visualize the initialized and refined point clouds as well as a depth map on the Skating scene, as shown in Figs. 8-9. The results demonstrate that our joint optimization effectively regresses the scene geometry and addresses the scale ambiguity problem in monocular videos, resulting in a dynamic radiance field that accurately reflects this geometry.
### Training and Rendering Time
Table 2 shows the training and rendering time on the NVIDIA Dynamic Scene Dataset [47] for recent dynamic view synthesis methods. Since Point-DynRF avoids unnecessary ray marching in empty space, the training process converges faster, leading to a reduction in overall training time. In the rendering process, however, searching for neighboring neural points for each shading point incurs additional computational cost, and the rendering time is slower than that of DVS [13] and RoDynRF [25].
### Ablation Study for Point-DynRF Design
We conduct an ablation study for each component of Point-DynRF, as shown in Table 3. The quantitative results verify that all components contribute to the design of our Point-DynRF. In particular, the results for \(L_{flow}\) and \(L_{depth}\) confirm that the accuracy of the neural points has a direct impact on the performance of dynamic radiance fields. Also, the results for \(\mathbb{P}_{t}\) confirm that our dynamic ray marching scheme significantly improves the performance. Dynamic ray marching ensures that the dynamicness map for the novel view is well matched to the actual scene, as shown in Fig. 10.
## 5 Conclusion
We propose a novel framework called point-based dynamic radiance fields for long-term dynamic view synthesis from monocular videos. In our approach, we employ neural point clouds to encode geometric information and dynamic radiance fields to handle the volume rendering process. Our framework, Point-DynRF, optimizes the neural point clouds and dynamic radiance fields jointly, leveraging direct regression from neural 3D points. This allows us to effectively utilize the global scene geometry, which sets our method apart from previous approaches relying on correspondences between neighboring frames, limiting their ability to incorporate the overall scene geometry. We believe that our work contributes significantly to the field of dynamic view synthesis, enabling realistic rendering in various real-world scenarios.
\begin{table}
\begin{tabular}{l|c|c} \hline Method & Training (GPU hours) & Rendering (s/img) \\ \hline HyperNeRF [31] & 32 & 15 \\ DVS [13] & 36 & 8 \\ RoDynRF\({}^{\dagger}\)[25] & 28 & 8 \\ DynIBaR\({}^{\dagger}\)[23] & 48 & 20 \\ Ours & 20 & 11 \\ \hline \end{tabular}
\end{table}
Table 2: **Comparison of Training and Rendering Time.** Methods denoted by \(\dagger\) refer to reported performance in the paper.
\begin{table}
\begin{tabular}{l|c c c} \hline & PSNR (\(\uparrow\)) & SSIM (\(\uparrow\)) & LPIPS (\(\downarrow\)) \\ \hline Ours w/o \(\mathbb{P}_{t}\) & 23.62 (**-1.68**) & 0.755 (**-0.161**) & 0.148 (**+0.067**) \\ Ours w/o \(L_{rec}^{s}\) & 24.38 (**-0.92**) & 0.843 (**-0.073**) & 0.121 (**+0.040**) \\ Ours w/o \(L_{rec}^{d}\) & 25.08 (**-0.22**) & 0.901 (**-0.015**) & 0.097 (**+0.016**) \\ Ours w/o \(L_{flow}\) & 24.13 (**-1.07**) & 0.872 (**-0.042**) & 0.099 (**+0.018**) \\ Ours w/o \(L_{depth}\) & 24.45 (**-0.85**) & 0.856 (**-0.070**) & 0.100 (**+0.019**) \\ Ours w/o \(L_{mask}\) & 24.66 (**-0.64**) & 0.884 (**-0.032**) & 0.111 (**+0.030**) \\ Ours & **25.30** & **0.916** & **0.081** \\ \hline \end{tabular}
\end{table}
Table 3: **Ablation Study of our proposed losses.** We report the average PSNR, SSIM and LPIPS over the NVIDIA Dynamic Scene Dataset [47].
Figure 8: **Refinement of scale ambiguity.** The initial point cloud may not capture the complete scene geometry, but after optimization, the refined point cloud is free from scale ambiguity.
Figure 10: **Qualitative Ablation of Dynamic Ray Marching.** Without the dynamic ray marching, dynamic points at other times interfere with dynamic radiance fields at the rendering time.
Figure 9: **Refinement of depth map.** For an input frame (a) and an initialized disparity map (b), the optimized disparity map (c) represents the boundary of the dynamic object well. Also, the expected depth map (d) is well aligned with the disparity map.
## Appendix A Overview
In this supplementary material, we further describe our experimental setup and provide additional results showing that the scene geometry is well regressed. First, we explain the total loss formulation of our training process in Sec. B. Then, we describe implementation details, including the image near-far bound determination from neural points, in Sec. C, and provide additional results on dynamicness maps of novel views in Sec. D. Finally, we demonstrate failure cases in Sec. E.
## Appendix B Losses
Our optimization process involves utilizing the loss functions \(L_{rec}\), \(L_{geo}\), \(L_{depth}\), and \(L_{mask}\). These loss functions are either modifications of those used in DVS [13] or newly introduced in this paper. To train Point-DynRF more stably, we also incorporate a depth order loss \(L_{order}\) introduced in DVS [13] and a sparsity loss \(L_{sparse}\) introduced in Point-NeRF [46].
Depth Order Loss. While the depth adjust loss helps optimize the overall scene geometry, there are inherent challenges in accurately determining the distance between dynamic objects and the background. Therefore, we use a depth order loss \(L_{order}\) that allows the dynamic radiance fields to be regularized via frame-by-frame depth maps. Since regularizing the dynamic radiance fields with per-frame depth maps suffers from scale and shift ambiguities, as mentioned earlier, we leverage the volume rendering process of the dynamic NeRF to define \(L_{order}\) as:
\[L_{order}=\sum_{i=1}^{N}\sum_{uv}\lVert\textbf{\hat{D}}(\textbf{r}_{uv}^{i},i, \mathbb{P}_{i})-\textbf{\hat{D}}^{d}(\textbf{r}_{uv}^{i},i,\mathbb{P}_{i,d}) \rVert_{2}^{2}. \tag{28}\]
Sparsity Loss. Following the point-based representation, we apply a sparsity loss \(L_{sparse}\) on the point-wise rigidness to encourage it to be close to zero or one as:
\[L_{sparse}=\sum_{i}(\log(\gamma_{i})+\log(1-\gamma_{i})). \tag{29}\]
Total Training Loss Formulation. We combine a reconstruction loss \(L_{rec}\), a scene geometry loss \(L_{geo}\), a depth adjust loss \(L_{depth}\), a depth order loss \(L_{order}\), a mask adjust loss \(L_{mask}\) and a sparsity loss \(L_{sparse}\) to train our Point-DynRF and neural points. Specifically, we define \(\lambda_{rec}^{full}=3\), \(\lambda_{rec}^{s}=1\), \(\lambda_{rec}^{d}=1\) for the reconstruction loss. For the scene geometry loss, we define \(\lambda_{flow}=0.1\), \(\lambda_{miss}^{s}=1\), \(\lambda_{miss}^{d}=1\). Finally, we define \(\lambda_{depth}=0.1\), \(\lambda_{order}=0.1\), \(\lambda_{mask}=0.1\), and \(\lambda_{sparse}=0.0002\) to formulate the final loss as:
\[L_{total}=L_{rec}+L_{geo}+\lambda_{depth}L_{depth}+\lambda_{order}L_{order}+\lambda_{mask}L_{mask}+\lambda_{sparse}L_{sparse}.\]
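The weighting scheme can be summarized with the short sketch below. Each argument is assumed to be an already-computed scalar loss term, with the internal weights of \(L_{rec}\) and \(L_{geo}\) applied beforehand; this only illustrates the outer combination, not the full training loop.

```python
# Weights reported above; names are ours, not from the released code.
LAMBDA_DEPTH, LAMBDA_ORDER, LAMBDA_MASK, LAMBDA_SPARSE = 0.1, 0.1, 0.1, 0.0002

def total_loss(l_rec, l_geo, l_depth, l_order, l_mask, l_sparse):
    return (l_rec + l_geo
            + LAMBDA_DEPTH * l_depth
            + LAMBDA_ORDER * l_order
            + LAMBDA_MASK * l_mask
            + LAMBDA_SPARSE * l_sparse)
```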
## Appendix C Implementation Details.
We randomly sampled \(1024\) rays in a batch, and each ray was assigned up to \(32\) sampling points. We used COLMAP to estimate the camera poses and resized all images to a resolution of \(480\times 272\). Also, we initialized our scale and shift parameters using the near and far bounds from COLMAP. We trained Point-DynRF for \(250k\) iterations, and training takes about \(20\) hours on a single NVIDIA GeForce RTX \(3090\) GPU.
Near-Far Boundary Determination. As our Point-DynRF is built on the Point-NeRF [46] representation, dynamic radiance fields are regressed in 3D world coordinates, not in the NDC space used by previous methods. Moreover, we need to render the far background as well, so we set the per-image near-far bounds dynamically according to the neural points. Specifically, we set the image near boundary to the depth of the nearest neural point multiplied by \(0.9\), and the image far boundary to the depth of the farthest neural point multiplied by \(1.1\). Figure 11 shows the convergence of the image near-far boundaries of the scenes in the Dynamic Scene Dataset [47] during training. This result confirms that the scene geometry is trained stably and that the initialized scene geometry is refined well.
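The heuristic above amounts to the following small sketch, assuming `point_depths` holds the depths of the neural points visible from a given camera; the scale factors match the values stated in the text.

```python
import numpy as np

def near_far_bounds(point_depths: np.ndarray, near_scale: float = 0.9, far_scale: float = 1.1):
    # Near bound: 0.9x the depth of the nearest neural point.
    # Far bound: 1.1x the depth of the farthest neural point.
    return near_scale * point_depths.min(), far_scale * point_depths.max()
```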
## Appendix D Additional Results
Additional Qualitative Results. We further provide additional qualitative results on the Dynamic Scene Dataset [47]. Point-DynRF generates more realistic images compared to previous methods, and the human face in the third row of Fig. 12 confirms that Point-DynRF produces much sharper images, while other methods either fail to synthesize it or produce blurry images. We also provide a video result of a casually captured monocular video in which our Point-DynRF generates realistic images while the state-of-the-art method DVS [13] suffers from duplicated dynamic objects when rendering from a fixed viewpoint.
Figure 11: **Image Near-Far Bound Determination.**
Our foreground masks \((M_{1},\dots,M_{N})\) are also optimized during training, so we provide dynamicness maps for novel views, as shown in Fig. 13. For each novel view, our Point-DynRF can render the blending weights by using the volume rendering process. These dynamicness maps confirm that our Point-DynRF represents the dynamic regions in the scene well. The static appearance at the center of the person in the Playground sequence arises because that region is observed as dynamic in every frame of the input video and is therefore represented as background by the miss ray marching scheme.
## Appendix E Failure Cases
While Point-DynRF optimizes the ambiguous initial geometry and foreground masks well, it fails to represent the scene if the neural point clouds are unnaturally initialized. A combination of inaccurate camera poses, depth maps, and foreground masks sometimes initializes neural point clouds unnaturally, with background points closer to the camera than dynamic points, as shown in Fig. 14. In this failure case, Point-DynRF falls short of distinguishing background points in front of the dynamic objects even after addressing the scale ambiguity, and novel views also contain artifacts on these background points.
|
2305.12238 | Low-Entropy Latent Variables Hurt Out-of-Distribution Performance | We study the relationship between the entropy of intermediate representations
and a model's robustness to distributional shift. We train models consisting of
two feed-forward networks end-to-end separated by a discrete $n$-bit channel on
an unsupervised contrastive learning task. Different masking strategies are
applied after training that remove a proportion of low-entropy bits,
high-entropy bits, or randomly selected bits, and the effects on performance
are compared to the baseline accuracy with no mask. We hypothesize that the
entropy of a bit serves as a guide to its usefulness out-of-distribution (OOD).
Through experiment on three OOD datasets we demonstrate that the removal of
low-entropy bits can notably benefit OOD performance. Conversely, we find that
top-entropy masking disproportionately harms performance both in-distribution
(InD) and OOD. | Nandi Schoots, Dylan Cope | 2023-05-20T17:09:44Z | http://arxiv.org/abs/2305.12238v1 | # Low-Entropy Latent Variables Hurt Out-of-Distribution Performance
###### Abstract
We study the relationship between the entropy of intermediate representations and a model's robustness to distributional shift. We train models consisting of two feed-forward networks end-to-end separated by a discrete \(n\)-bit channel on an unsupervised contrastive learning task. Different _masking strategies_ are applied after training that remove a proportion of low-entropy bits, high-entropy bits, or randomly selected bits, and the effects on performance are compared to the baseline accuracy with no mask. We hypothesize that the entropy of a bit serves as a guide to its usefulness out-of-distribution (OOD). Through experiment on three OOD datasets we demonstrate that the removal of low-entropy bits can notably benefit OOD performance. Conversely, we find that top-entropy masking disproportionately harms performance both in-distribution (InD) and OOD.
## 1 Introduction
The key challenge that we seek to address is that of identifying learned features in a model's intermediate representations that are more or less likely to be robust to distributional shift. Our approach starts from the intuition that for high-entropy features in a model's training distribution, the model will have learned a better understanding of when the feature is relevant. More precisely, it will be better at distinguishing the presence or absence of the feature across different situations. Consider a hypothetical data set containing photographs from two safari trips, where each trip contains the same people on the same safari, but driving around in different trucks. Suppose that it is useful for the given task to identify which of the two trips a given image corresponds to; we might expect the model to be particularly good at distinguishing between the trucks. On the other hand, if a rare tree appears in exactly one photograph, the model may have learned to recognise the specific pattern of pixels in that photograph corresponding to the tree, but it might not have the capability to recognise the tree in new situations.
As models have increased in performance within the bounds of the i.i.d. assumption, recent years have seen growing interest in the OOD behaviour of machine learning systems. While many approaches have studied OOD detection or the effects of external changes to a model's training regime on OOD behaviour (e.g. domain randomization or auxiliary loss functions), to the best of our knowledge our proposal of the entropy of an intermediate representation as a guide to its effects OOD is a novel approach. In this paper we demonstrate that the removal of low-entropy representations via the masking of learned discrete bits can notably improve OOD performance.
## 2 Task and Model Description
To learn representations of a domain we train an _encoder_ network to produce a representation \(r\) of a given input \(x^{*}\). This representation is given to a _distinguisher_ network that is tasked with identifying \(x^{*}\) from a set of \(k\) images composed of \(x^{*}\) and \(k-1\) _distractor inputs_ arranged randomly. We use the CIFAR-10 dataset (Krizhevsky, 2009) as the training distribution. The labels from the dataset are discarded and an unsupervised \(k\)_-contrast task_ is constructed by pairing each image with \(k-1\) distractor images, shuffling, and giving the distinguisher \(k\) inputs to choose from. The same preprocessing is later used when out-of-distribution datasets are introduced. See Figure 1 for an example of a contrastive task and Figure 4 in the Supplementary Material for the full architecture.
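As a rough illustration, the sketch below builds a single \(k\)-contrast task instance from an unlabeled image collection; the function and variable names are ours and do not come from the original implementation.

```python
import random

def make_contrast_task(images, k, rng=None):
    rng = rng or random.Random()
    # Pick a target image and k-1 distinct distractors, then shuffle so the
    # distinguisher must identify the target's position among k candidates.
    target_idx = rng.randrange(len(images))
    distractors = rng.sample([i for i in range(len(images)) if i != target_idx], k - 1)
    candidates = [target_idx] + distractors
    rng.shuffle(candidates)
    return [images[i] for i in candidates], candidates.index(target_idx)
```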
It is important to note that we use a'soft-discretization' technique (Foerster et al., 2016) on the intermediate representation \(r\) such that it can be learned with gradient-descent, but each dimension can be mapped to a binary digit at test time with no loss in performance. While the use of a communication channel to discretize representations poses optimization challenges, it also provides a large benefit when it comes to computing the entropy values of each bit in the representation. The computation is reduced from approximating an integral to the simple formula for the entropy of a binary variable, as outlined in Section 3.1. This allows us to run a greater number of experiments with higher precision than if we had used continuous representations.
This unsupervised contrastive learning task was chosen as it can be easily transferred to different data distributions. A task such as image classification limits the available datasets as it requires the out-of-distribution testing data to have the same (or at least overlapping) image labels.
## 3 Entropy-based Masking
### Entropy of Representation Bits
Each representation \(r\) produced by an encoder network consists of a number of bits \(|r|\), referred to as the representation length. By considering each bit at index \(i\) as a random variable \(B_{i}\) we can compute the binary entropy of the bit on a given dataset \(\mathcal{D}\):
\[H(B_{i}\mid\mathcal{D})=-p\log_{2}p-(1-p)\log_{2}(1-p),\quad\text{where }p=P(B_{i}=1\mid\mathcal{D}). \tag{1}\]
Entropy close to 1 means that the bit is 0 or 1 with roughly equal probability of \(p=0.5\). Very low entropy means that the bit is either almost always 0 or almost always 1. We notice that for smaller representation lengths and/or few distractors the distribution tends to skew towards higher entropy bits. In separate experiments where we further varied representation lengths, we find that for smaller \(|r|\) equal to 8, 16 or 32, all bits have entropy higher than 0.8, which makes studying bits based on entropy variation uninteresting for these representation lengths. For a visualization of these entropy values see Figure 5 in the Supplementary Material. Representation lengths of 64, 128, 256 and 512 all lead to a wide range of entropy values. A theoretical analysis of the optimal bit-entropy can be found in Section B of the Supplementary Material.
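The per-bit entropy of Eq. (1) is straightforward to estimate empirically; a minimal sketch is given below, assuming `bits` is a `(num_examples, |r|)` array of 0/1 values produced by the encoder on a dataset.

```python
import numpy as np

def bit_entropies(bits: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    p = bits.mean(axis=0)                  # P(B_i = 1 | D) for each bit
    p = np.clip(p, eps, 1.0 - eps)         # avoid log2(0) for constant bits
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)
```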
### Bit Masking Strategies
In this paper we are interested in the effects of strategically 'removing' parts of the model's intermediate representation, i.e. obscuring bits in \(r\). It is important to note that we are only applying masking at _test time_. The masking is not used to train any of the models. The mask is defined by a set of _masking variables_ \(m_{i}\in\{0,1\}\) for each bit \(r_{i}\) in the representation. The masked bit \(\hat{r}_{i}\) is computed:
\[\hat{r}_{i}=m_{i}r_{i}+(1-m_{i})\frac{1}{2}. \tag{2}\]
In other words, when the masking variable \(m_{i}=0\) then \(\hat{r}_{i}=0.5\), and otherwise \(\hat{r}_{i}=r_{i}\). In this paper we use three _masking strategies_: Random Masking, Top-Entropy Masking, and Bottom-Entropy Masking. In order to construct a mask with any of these strategies, we define a _masking proportion_ \(p_{\text{mask}}\) that represents the percentage of bits in \(r\) that should be masked.
Figure 1: An example of a contrastive task (\(k=3\)). For a given dataset, the distinguisher is shown \(k\) images, among which are \(k-1\) distractor images, and has to predict the correct image.
To construct any mask \(M=\{m_{1},\ldots,m_{|r|}\}\) we will need to choose \(l_{\text{mask}}=\lfloor\,p_{\text{mask}}\cdot|r|\,\rfloor\) bits to remove. For a _random mask_ we draw \(l_{\text{mask}}\) masking variables from \(M\) at random with uniform probability and without replacement, and set them to 0, we set the remaining \(|r|-l_{\text{mask}}\) variables to 1. To construct a _top-entropy mask_ we compute the entropy for each bit \(h_{i}=H(B_{i}\mid\mathcal{D})\) and sort these values in descending order. We then take the bits associated with the first \(l_{\text{mask}}\) entropy values (i.e. highest entropy) and set their corresponding masking variables to zero. Likewise, for the _bottom-entropy mask_ we take the last \(l_{\text{mask}}\) bits and remove those instead.
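The three strategies and the masking rule of Eq. (2) are summarized in the sketch below, assuming `entropies` comes from the bit-entropy computation above; this is an illustrative sketch rather than the exact code used in our experiments.

```python
import numpy as np

def build_mask(entropies: np.ndarray, p_mask: float, strategy: str, rng=None) -> np.ndarray:
    n = len(entropies)
    l_mask = int(np.floor(p_mask * n))
    mask = np.ones(n)
    if strategy == "random":
        idx = (rng or np.random.default_rng()).choice(n, size=l_mask, replace=False)
    elif strategy == "top":      # remove the l_mask highest-entropy bits
        idx = np.argsort(entropies)[::-1][:l_mask]
    elif strategy == "bottom":   # remove the l_mask lowest-entropy bits
        idx = np.argsort(entropies)[:l_mask]
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    mask[idx] = 0
    return mask

def apply_mask(r: np.ndarray, mask: np.ndarray) -> np.ndarray:
    return mask * r + (1 - mask) * 0.5   # Eq. (2): masked bits are set to 0.5
```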
## 4 Experimental Results
We trained 54 encoder-distinguisher pairs1
Footnote 1: A sweep of 3 runs for each pair of \((|r|,k)\) plus 6 initial separate runs.
and evaluated the effects of the different masking strategies for masking proportions between 0.15 and 0.5 at 0.05 intervals. We found that for any masking proportion, removing the top-entropy bits is more damaging to accuracy than masking out bottom-entropy bits. In light of general insights from information theory, this result is not too surprising. The highest entropy bits necessarily convey the most information, and so it follows that their removal should lead to the largest drop in performance.
In general, we did not expect any of the masking strategies to provide a benefit when applied within the training distribution. Yet, with a small masking proportion (around \(p_{\text{mask}}<0.3\)) we see an _increase_ in accuracy for low-entropy and random masks. Our initial hypothesis was that the masking may be 'undoing' overfitting to the training set. But for each of the trained models we have verified that there is no overfitting (see Section D.1 in the Supplementary Material for a visualization).
### Analysis of Masking Effects Out-of-Distribution (OOD)
In order to understand the effects of masking on accuracy in the OOD setting, we measure the _mean change in accuracy_ of a masking strategy under various circumstances. We also report the standard deviations associated with these estimates. As in the case of in-distribution masking, we evaluated the masking strategies for a sweep of masking proportions between 0.15 and 0.5 at 0.05 intervals. We cap the maximum masking proportion at \(p_{\text{mask}}\leq 0.25\) for all further analysis, as beyond that threshold masking has an almost universally negative effect. The overall mean accuracy changes can be seen in Table 2. We see that masking the bottom-entropy or random bits produces the highest increase, albeit with a large variance.
This variance can be understood and disentangled by separating the low-\(k\) models from the high-\(k\) models. What we see is that the benefits of bottom-entropy masking are more prevalent for low-\(k\) models. This is visualized in Figure 3 where we illustrate the _effective robustness_ of each of the masking strategies on the three OOD datasets. In the Supplementary Material Section D.2 we include plots for all values of \(k\) and \(p_{\text{mask}}\) that we tested. Effective robustness is a concept introduced by Taori et al. (2020) as a way to understand the efficacy of a method for increasing robustness to distributional shift. By plotting the baseline regression line for unaltered models with differing in-distribution accuracy values on the diagram we can observe whether a proposed robustness method moves towards the \(y=x\) line (i.e. no degradation). Crucially, with these plots, we are able to account for each model's performance on the training distribution. Hence, despite the large variance in the performance of models trained across various \(k\) and \(|r|\) values2, we are able to discern the effects of the masking interventions.
Footnote 2: Accuracy ranging between 0.65 and 0.95 for even the high-performing low-\(k\) models, as shown in the \(x\)-axes of Figure 3.
In our case, we see that - as is consistent with previous results - for each dataset the top-entropy masking moves below the dashed green line showing the baseline unmasked models. On the other hand, the random masking and bottom-entropy masking lines move closer to \(y=x\) (as compared to the no masking lines). For Plant Village we see that almost all of the in-distribution accuracy is recovered. For MNIST we find the most substantial jump, and the largest benefit of bottom-entropy over random masking.
Figure 3: Effective robustness plots for low-\(k\) models. \(y=x\) shown as black dashed line.
## 5 Related Work
Our work adds to the toolkit of methods to aid in understanding and improving robustness to distributional shift, which for example includes forms of data augmentation Hendrycks et al. (2021) and abstaining from making a prediction in the face of uncertainty Thulasidasan et al. (2021). For a general overview of problems and methods in OOD robustness see Shen et al. (2015).
Below we reference some notable entropy-based methods that have a _different purpose_ than improving OOD robustness. Chatterjee and Mishchenko (2019) use low entropy (or "rare") signals to analyze the extent to which a model is overfitted to the training distribution. Entropy-based methods have also been used widely in the adjacent problem of OOD detection. For example, predictive entropy measures the uncertainty of the prediction of a sample given a training distribution and is used to calculate the extent to which a sample is OOD Kirsch et al. (2021). However, we apply entropy in an entirely different context, namely, we calculate the entropy of _latent variables_ to estimate how robust they will be to distributional shift. Relative entropy (KL-divergence) is a popular measure and is notably used in the Bits-Back method Hinton and van Camp (1993), Flamich et al. (2020) to calculate the optimal compression rate in latent variables. Images that are traditionally compressed by a variational auto-encoder have now been compressed with code-length close to this theoretical optimum Flamich et al. (2020).
Contrastive representation learning takes many forms; in computer vision alone there are many approaches for applying deep learning to multiple inputs and producing representations to distinguish between them; see Jaiswal et al. (2020) for a review. To our knowledge, there are no existing suitable state-of-the-art (SOTA) methods for OOD robustness in contrastive learning to benchmark our proposals against.
## 6 Conclusion
In this paper we have investigated the out-of-distribution effects of using different post-hoc strategies to remove bits from discrete intermediate representations in an unsupervised contrastive learning task. We have studied how the difficulty of the task (more distractors) impacts the entropy distribution of the learned representations and shown that removing low-entropy bits can improve the performance of models out-of-distribution (Section 4.2), notably almost entirely restoring in-distribution performance for one of our datasets (see Figure 3). However, the results also present mysteries that prompt further experiments and analysis. At the time of writing, we do not have a clear understanding of why the removal of bits within the training distribution should increase performance, as we would expect the encoder to learn an optimal protocol.
Next, there is a need for a deeper understanding of the conditions in which our results hold. Within our experimentation, we found that the effect (of harm from low-entropy features OOD) was less pronounced for models trained on the more difficult tasks (higher numbers of distractors). From our data, it is unclear if this relationship represents something fundamental or if it is a side-effect of these models generally performing to a lower standard. One of the most important avenues of further work is in testing if other systems built on top of the learned representations in this paper inherit the same OOD robustness under low-entropy masking.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & CIFAR-10 & Colorectal Histology & MNIST & Plant Village \\ \hline Masked Bottom Entropy & \(1.6\pm 8.0\) & \(-2.0\pm 14.3\) & \(9.4\pm 15.6\) & \(3.0\pm 23.7\) \\ Masked Top Entropy & \(-4.3\pm 21.4\) & \(-7.8\pm 19.0\) & \(-16.6\pm 5.3\) & \(-18.5\pm 21.8\) \\ Random Mask & \(2.5\pm 12.3\) & \(3.4\pm 10.9\) & \(4.2\pm 13.7\) & \(2.1\pm 19.6\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean accuracy shift (in percentage points) after masking with each strategy. After running paired t-tests we find that all of these accuracy shifts are statistically significant (with \(p=0.05\)).
## Acknowledgements
Work done by both authors is thanks to the UKRI Centre for Doctoral Training in Safe and Trusted AI (EPSRC Project EP/S023356/1).
|
2308.05960 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | 2023-08-11T06:37:54Z | http://arxiv.org/abs/2308.05960v1 | # Bolaa: Benchmarking and Orchestrating LLM-augmented Autonomous Agents
###### Abstract
The massive successes of large language models (LLMs) encourage the emerging exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to generate actions with its core LLM and interact with environments, which facilitates the ability to resolve complex tasks by conditioning on past interactions such as observations and actions. Since the investigation of LAA is still very recent, limited explorations are available. Therefore, we provide a comprehensive comparison of LAA in terms of both agent architectures and LLM backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs such that each labor LAA focuses on one type of action, _i.e._ BOLAA, where a controller manages the communication among multiple agents. We conduct simulations on both decision-making and multi-step reasoning environments, which comprehensively justify the capacity of LAAs. Our performance results provide quantitative suggestions for designing LAA architectures and the optimal choice of LLMs, as well as the compatibility of both. We release our implementation code of LAAs to the public at [https://github.com/salesforce/BOLAA](https://github.com/salesforce/BOLAA).
## 1 Introduction
Recent booming successes of large language models (LLMs) [3, 14] motivate emerging exploration of employing LLMs to tackle various complex tasks [11, 12], amongst which LLM-augmented **A**utonomous **A**gents (LAAs) [1, 13, 14, 15, 16, 17, 18] stand in the spotlight. An LAA extends the intelligence of the LLM to sequential action executions, exhibiting superiority in interacting with environments and resolving complex tasks via collecting observations. To name a few, BabyAGI1 proposes an AI-powered task management system, which leverages an OpenAI LLM2 to create, prioritize, and execute tasks. AutoGPT3 is another popular open-source LAA framework that enables the API calling capability of LLMs. ReAct [16] is a recently proposed LAA method that interacts with environments and then consecutively generates the next action. Langchain4 is a recently released open-source framework for developing LAAs.
Footnote 1: [https://github.com/yoehinakajima/babyagi](https://github.com/yoehinakajima/babyagi)
Footnote 2: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference)
Footnote 3: [https://github.com/Significant-Gravitas/Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT)
Footnote 4: [https://github.com/langchain-ai/langchain](https://github.com/langchain-ai/langchain)
Due to the initial investigation, LAA is rather under-explored. Firstly, the optimal agent architecture is undetermined. ReAct [16] prompts the agents with pre-defined examples such that the LLM learns to generate the next action via in-context learning. Moreover, ReAct argues that an agent should have intermediate reasoning steps before action executions. ReWOO [16] introduces additional planning steps for LAA. Langchain generalizes the ReAct agent with
zero-shot tool usage ability. Intrinsically, the optimal architecture of agents should be aligned with both tasks and the associated LLM backbone, which is less explored in the existing works.
Secondly, understanding the efficacy of the existing LLMs in LAAs is far from comprehensive. The existing preliminary works only compare the performances of a few LLM backbones. ReAct adopts PaLM (Chowdhery et al., 2022) as the backbone LLM. ReWOO employs the OpenAI text-davinci-003 model to instruction-tune an Alpaca model (Taori et al., 2023) for agent planning. MIND2Web (Deng et al., 2023) compares Flan-T5 and OpenAI GPT3.5/4 for building a generalist web agent. Nevertheless, few current works comprehensively compare the performance of LAAs with regard to various pre-trained LLMs. A very recent work (Liu et al., 2023) releases a benchmark for evaluating LLMs as agents. Nevertheless, they fail to jointly consider the agent architectures along with their LLM backbones. Selecting the optimal LLMs from both efficacy and efficiency perspectives advances the current exploration of LAAs.
Thirdly, the increasing complexity of tasks may require the orchestration of multiple agents. ReWOO recently identifies that decoupling reasoning from observation improves the efficiency of LAAs. In this paper, we argue that as the task complexity increases, especially in open-domain environments, it is better to coordinate multiple agents to complete one task. For example, regarding the web navigation task, we could employ one _click agent_ to interact with clickable buttons and request another _search agent_ to retrieve additional resources. Nonetheless, there are few works discussing how to orchestrate multiple agents and investigating the impacts of orchestration.
To address these research gaps, this paper proposes to comprehensively compare the performances of LAAs. We dive deep into the agent architecture of LAAs and the LLM backbones. Specifically, we construct agent benchmarks from the existing environments to evaluate the performances of various agent architectures built upon various LLM backbones. The tasks in our agent benchmarks are associated with different task complexity levels, which enables agent performance analyses w.r.t. task complexity. Those agent architectures are designed to extensively verify the existing design choices. Regarding the orchestration of multiple LAAs, we propose a novel LAA architecture BOLAA5, which has a controller module on top of multiple collaborating agents, for enabling the selection of and communication between multiple labor LAAs.
Footnote 5: For easy memorizing, we intentionally name it the same as paper title.
The contributions of this paper are as follows:
* We develop 6 different LAA agent architectures. We combine them with various backbone LLMs to justify the design intuition of LAAs from the perspectives of prompting, self-thinking, and planning. We also develop BOLAA for orchestrating a multi-agent strategy, which enhances the action interaction ability of solo agents.
* We conduct extensive experiments on both a decision-making web navigation environment and a knowledge reasoning task environment. We report the performance in terms of final sparse rewards and intermediate recalls, which provides qualitative indications for the optimal choice of LAAs as well as their compatible LLMs.
* BOLAA on the WebShop environment consistently yields the best performance compared with other LAA architectures. Our results demonstrate the importance of designing specialist agents that collaborate on resolving complex tasks, which should be regarded as equally important as training a large LLM with high generalization ability.
## 2 Related Work
### Augmented Language Agent Architecture
The completion of a complex task typically entails multiple stages. An agent must possess an understanding of these stages and plan accordingly. Chain-of-Thought (CoT) (Wei et al., 2022) is a groundbreaking work that prompts the agent to deconstruct challenging reasoning tasks into smaller, more manageable steps. On the other hand, ReAct (Yao et al., 2023) proposes leveraging this aptitude for reasoning and action within large language models (LLMs) to foster interactive engagement with the environment, such as utilizing the Wikipedia search API, by mapping observations to the generation of reasoning and action traces or API calls in natural language.
This agent architecture has given rise to various applications, including HuggingGPT (Shen et al., 2023), Generative Agents (Park et al., 2023), WebGPT (Nakano et al., 2021), AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023).
However, these approaches neglect to incorporate valuable feedback, such as environment rewards, to enhance the agent's behaviors, resulting in performances that rely solely on the quality of the pre-trained large language model (LLM). Self-refine (Madaan et al., 2023a) tackles this limitation by employing a single LLM as a generator, refiner, and provider of feedback, enabling iterative refinement of outputs. However, it is not specifically tailored for real-world task-based interaction with the environment. On the other hand, REX (Murthy et al., 2023) and RAP (Hao et al., 2023) repurpose the LLM to function as both a comprehensive world model and a reasoning agent. They incorporate Monte Carlo Tree Search for strategic exploration within the vast realm of reasoning with environment rewards. This approach facilitates effective navigation and decision-making in intricate domains. Shinn et al. (2023) presents Reflexion, a framework that equips agents with dynamic memory and self-reflection capabilities, enhancing their reasoning skills. Self-reflection plays a pivotal role, allowing autonomous agents to iteratively refine past actions, make improvements, and prevent repetitive errors. Recently, Yao et al. (2023b) proposes a framework, namely Retroformer, which leverages policy gradient optimization to align the agent's behaviors with environment-specific rewards by learning a plug-in retrospective language model.
### Web Agent
Web navigation is the foundation for humans to collect information and communicate. Before the boom of LLMs, previous endeavours (Liu et al., 2018; Shi et al., 2017) already explored how to train web agents in simulated web environments. Very recently, a series of works have been devoted to developing LAAs to tackle complex web navigation tasks. Though the action space of web navigation is almost infinite due to the numerous available elements online, these actions can be divided into a few operation types, such as _click_, _type_ and _select_. MIND2Web (Deng et al., 2023) collects web browsing data to fine-tune an LLM to generate executable actions, which functions as a Web LAA. WebAgent (Gur et al., 2023) is able to decompose task instructions into sub-tasks and directly generates executable Python programs for web navigation. WebArena (Zhou et al., 2023) supports realistic task simulation for designing Web LAAs. Langchain and ChatGPT both provide convenient web plugins such that the LLM behaves as a Web LAA. We believe that web navigation is the next fundamental task for LAAs to demonstrate their superiority.
### Tool Agent
The evolution of LLMs and their interactions with various tools has been a focal point of recent research. The concept of a "Tool Agent" encapsulates the idea of LLMs leveraging external tools to enhance their capabilities and solve complex tasks. One of the pioneering works in this domain is the introduction of "Gorilla" (Patil et al., 2023). This model is adept at writing API calls and exhibits the ability to adapt to test-time document changes. Another noteworthy work is the "ToolLLM" framework (Qin et al., 2023). This open-source framework incorporates LLMs to efficiently engage with a myriad of tools, particularly APIs, to execute intricate tasks. The framework encompasses ToolBench, an instruction-tuning dataset tailored for tool utilization. More recently, a paradigm shift in teaching LLMs to use new tools has been discussed in (Hsieh et al., 2023), which champions the use of tool documentation. The authors present empirical evidence suggesting that tool documentation offers detailed descriptions of tool usage, which is a more effective and scalable approach. Notably, their research indicates that zero-shot prompts, which are exclusively based on tool documentation, can rival the performance of few-shot prompts.
## 3 Agent Architectures
In this section, we compare various LAA architectures. We first present how to design different solo LAAs based on the intuition of existing work. We then present our orchestration design for multiple LAAs, _i.e._ BOLAA.
### Solo Agents
Hereafter, we present 5 different LAAs. Each type of LAA is able to interact with the environment with its own interaction strategy.
**Zeroshot LAA** (ZS-LAA) directly extends the LLM to be an action executor. Specifically, the prompt for the LLM to function as the action executor consists of detailed descriptions of the available actions. For example, if we prompt the LAA to understand the _click_ action with "_click: using this action to click observed [button], the clickable buttons are in []._", it may behave as a web navigation agent. We present the architecture of ZS-LAA in Figure 1(a). The working flow is as follows:
* _Initial step_: firstly, the ZS-LAA receives the task instruction and constructs the zeroshot prompt. Then, the LLM layer generates a possible response, which is parsed to output a feasible action. After that, the observation from the environment is appended to the agent memory.
* _Working steps_: the agent checks whether the task is finished. If not, ZS-LAA retrieves the previous actions and observations from memory, and constructs the prompt for the LLM to generate the next executable action. ZS-LAA continues the working stage until reaching the maximum number of steps or completing the task.
ZS-LAA is a minimal LAA architecture. It enables the action generation ability of the LLM via a zeroshot prompt layer, which is easy to generalize to new environments and requires no examples.
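For illustration, a simplified sketch of this interaction loop is given below. Here `llm`, `parse_action`, and `env` stand in for the backbone LLM call, the action parser, and the environment interface; the names and the environment API are assumptions for the sketch and are not taken from the released implementation.

```python
def build_prompt(action_doc, instruction, memory, observation):
    # Zeroshot prompt: action descriptions + task + interaction history.
    history = "\n".join(f"Action: {a}\nObservation: {o}" for a, o in memory)
    return f"{action_doc}\nTask: {instruction}\n{history}\nObservation: {observation}\nNext action:"

def run_zeroshot_laa(llm, parse_action, env, instruction, action_doc, max_steps=20):
    memory, reward = [], 0.0
    observation, done = env.reset(instruction), False
    for _ in range(max_steps):
        prompt = build_prompt(action_doc, instruction, memory, observation)
        action = parse_action(llm(prompt))          # e.g. "search[...]" or "click[...]"
        observation, reward, done = env.step(action)
        memory.append((action, observation))
        if done:
            break
    return reward, memory
```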
**ZeroshotThink LAA** (ZST-LAA) is an extended version of ZS-LAA. Different from ZS-LAA, ZST-LAA has an additional self-think flow. The architecture of ZST-LAA is presented in Figure 1(b), where we denote the self-think flow with pink arrow lines. Self-think runs in the intermediate steps of the action generation flow, which enables Chain-of-Thought (CoT) reasoning ability.
* _Self-think Step_: before generating the next action, ZST-LAA collects observations and previous actions to construct the _think_ prompt. Then, the _thought_ is stored in memory.
The self-think step is generally useful for reasoning tasks. Note that the think prompt is also in a zero-shot format, such as _"think: using this action to plan your actions and reasoning"_.
**ReAct LAA** additionally advances ZST-LAA in the prompt layer, where fewshot examples are provided. The architecture of ReAct LAA is illustrated in Figure 1(c). ReAct LAA is able to leverage successful running examples to improve the action generation ability of the LLM and enhance the environment interaction of the LAA, because those fewshot examples endow the LLM with in-context learning ability. However, the drawback of ReAct LAA is that, due to the limited context length, fewer tokens are available for the interaction history after the fewshot examples occupy part of the prompt.
**PlanAct LAA** is designed to facilitate the planning ability of LAA. PlanAct LAA differs from ZS-LAA in two parts: 1) the planning flow and 2) the fewshot prompt. The architecture is depicted
Figure 1: The LAA architectures for Zeroshot-LAA (ZS-LAA), ZeroshotThink LAA (ZST-LAA) and ReAct LAA. ZS-LAA generates actions from LLM with zeroshot prompt. ZST-LAA extends ZS-LAA with self-think. ReAct LAA advances ZST-LAA with fewshot prompt. They all resolve a given task by interacting with environment via actions to collect observations. Better view in colors.
in Figure 2. The planning flow is executed before the initial action generation step, and it uses an additional plan prompt to construct the input for the core LLM.
\(\bullet\)_Planning Step_: PlanAct LAA generates a plan for a given task before interacting with environments. The plan is memorized and will be retrieved to construct prompts.
It is worth noting that the plan prompt in this paper is in a fewshot format, which allows the LAA to generate plans based on previous successful plans.
**PlanReAct LAA** extends PlanAct LAA with an additional self-think flow, which also enables the CoT ability. The architecture of PlanReAct LAA is presented in Figure 2. Intuitively, since the planning flow is executed before the LAA observes the environment, the self-think flow alleviates hallucinations incurred from incorrect plans.
Next, we introduce our multi-agent orchestrating architecture, _i.e._ BOLAA.
### BOLAA: Orchestrating Multiple Agents.
Despite the success of existing LLMs in completing various language understanding tasks, plenty of issues are still under-explored, such as context length constraints and limited in-context learning and generalization ability. Hence, it is challenging to employ a solo LAA to complete all tasks, especially when the tasks are of high complexity. Therefore, we propose a new agent architecture for orchestrating multiple LAAs, which is illustrated in Figure 3. BOLAA has two main modules, the labor agents pool and the controller. The labor agents pool manages multiple LAAs, each of which may only focus on generating one type of action. For example, in the web navigation environment, we could establish a _click_ LAA and a _search_ LAA. In this way, the former only generates the next button to click, while the latter only outputs search queries, which divides a complex task into feasible sub-tasks. The controller is devised to selectively call LAAs from the agents pool. The controller has an agents selection
Figure 3: The BOLAA architecture, which employs a controller to orchestrate multiple LAAs.
Figure 2: The LAA architectures for PlanAct LAA and PlanReAct LAA.
layer for choosing the most relevant LAA to call. Then, the controller constructs the message for the selected LAA and builds the communication. After obtaining the response from the labor LAA, the controller parses it into an executable action and then interacts with the environment. Note that we can also design those labor LAAs to be think/plan agents. In this way, the self-think and plan workflows are also retained.
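A simplified sketch of one controller step is given below: select a labor agent, send it a message, and obtain an executable action. The agent-selection prompt, the `description`/`act` interface of the labor agents, and the dictionary layout are assumptions made for this sketch, not the exact selection layer used in BOLAA.

```python
def bolaa_step(controller_llm, labor_agents, task, memory, observation):
    # Ask the controller LLM to pick the most relevant labor agent by name.
    roster = "\n".join(f"{name}: {agent.description}" for name, agent in labor_agents.items())
    chosen = controller_llm(
        f"Agents:\n{roster}\nTask: {task}\nObservation: {observation}\n"
        "Which agent should act next? Answer with the agent name only."
    ).strip()
    # Fall back to the first agent if the controller's answer is not a known name.
    agent = labor_agents.get(chosen, next(iter(labor_agents.values())))
    message = {"task": task, "observation": observation, "history": memory}
    return agent.act(message)   # the labor agent returns an executable action string
```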
## 4 Experiment
### Environment Benchmark
We construct the evaluation benchmarks from two environments, _i.e.,_ the WebShop (Yao et al., preprint) and HotPotQA (Yang et al., 2018) with Wikipedia API usage (Yao et al., 2023).
WebShop is a recently proposed online shopping website environment with 1.18M real-world products and human instructions. Each instruction is associated with one ground-truth product and contains attribute requirements, _e.g. I'm looking for a travel monopod camera tripod with quick release and easy to carry, and price lower than 130.00 dollars_. This instruction includes 3 attribute requirements, _i.e._ the "quick release", "camera tripod" and "easy carry" attributes. We define the complexity of an instruction using the number of attribute requirements. Thus, the instruction example above is of complexity 3. We sample 150 instructions for each complexity level. Since we have fewer than 150 instructions for complexity larger than 6, we only include instructions with complexity in \(\{1,2,\ldots,6\}\), which sums up to 900 tasks for benchmark evaluation in the WebShop environment. In the WebShop environment, an agent performs either search[query] or click[element] actions to interact with the environment, which evaluates the interactive decision-making ability of LAAs. The observation from WebShop is a simplified web browser view, which includes the clickable buttons and the associated page content. The LAA interacts with the WebShop environment as a web navigation agent.
HotPotQA with Wikipedia API is another environment considered in this paper, which contains multi-hop question answering tasks that require reasoning over two or more Wikipedia passages. This simulation environment serves as a powerful tool for evaluating the multi-step planning and comprehension capabilities and information retrieval skills of AI models, ensuring they are proficient in sourcing reliable information from vast online resources. With its unique blend of real-world internet browsing scenarios and text analysis, HotpotQA is an invaluable asset for the advancement of augmented large language agent systems. In the HotPotQA environment, an agent has three types of actions, _i.e.,_ search[entity], lookup[string] and finish[answer], to interact with the environment. The HotPotQA environment aims at evaluating the knowledge reasoning ability of LAAs. We randomly sample 100 questions from each of the easy, medium and hard levels, which constitutes the final 300 benchmark questions for evaluating LAAs.
### Evaluation Metrics
We mainly use the _reward_ score in each environment to evaluate the performances of LAAs. In the WebShop environment, the reward is defined as the attribute overlapping ratio between the bought item and the ground-truth item. In the HotPotQA environment, the reward is defined as the F1 score between the agent's answer and the ground-truth answer. Additionally, we develop a _Recall_ metric for the WebShop environment, which is defined as 1 if the ground-truth item is retrieved during one task session and 0 otherwise. Recall is reported as the average recall score across all tasks in the WebShop environment.
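To make the metrics concrete, a minimal sketch is given below. The WebShop reward itself is returned by the environment, so only Recall and a token-level F1 are computed here; the exact answer normalization of the official HotPotQA scorer may differ from this simplified version.

```python
from collections import Counter

def webshop_recall(retrieved_item_ids, ground_truth_id) -> int:
    # 1 if the ground-truth item was retrieved at any point in the session, else 0.
    return int(ground_truth_id in retrieved_item_ids)

def answer_f1(prediction: str, ground_truth: str) -> float:
    # Token-level F1 between the agent answer and the ground-truth answer.
    pred, gold = prediction.lower().split(), ground_truth.lower().split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```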
### LLM Utilization
The core component of LAA is the LLM backbone. We compare different LLMs with various choices of model size and context length. We report the results w.r.t. open LLMs such as fastchat-t5-3b, vicuna-7b/13b/33b (Zheng et al., 2023), Llama-2-7b/13b/70b6 (Touvron et al., 2023), MPT-7b/30b (Team, 2023), xgen-8k-7b, longchat-16k-7b/13b, and OpenAI API LLMs, including text-davinci-003, gpt-3.5-turbo and gpt-3.5-turbo-16k.
### Decision-making Simulation
In this section, we present and compare the decision-making performances of LAAs in the WebShop environment. The performance regarding the average reward is reported in Table 1. The agent prompts are constructed based on the maximum context length of the different LLMs. Regarding BOLAA, we devise one search LAA and one click LAA to generate search queries and click actions, respectively. We have the following observations:
* BOLAA performs the best compared with the other LAA architectures, especially when built on the high performing LLMs. BOLAA is able to actively select the appropriate LAA and yield qualitative communication, which stabilizes the action generation. We observe that BOLAA, when paired with a 3b fastchat-t5 LLM, performs comparably to other LAA architectures with more powerful LLMs. The superiority of BOLAA indicates that orchestrating multiple smaller-sized LAAs is a better choice if the computing resources are limited. This further exemplifies the potential for fine-tuning multiple smaller-sized specialised LAAs rather than fine-tuning one large generalized LAA.
* Pairing the LLM with the optimal LAA architecture is crucial. For example, Llama-2-13b performs best under PlanAct LAA arch while Llama-2-70b performs best under the BOLAA arch. Also, Longchat-13b-16k performs best when using PlanAct and PlanReAct, which may indicate the extraordinary planning ability of longchat-13b-16k models.
* Increasing the context length alone may not necessarily improve the LAA performances. For example, when comparing longchat-13b-16k with llama-2-13b models, the latter yields better performances despite a shorter context length. By checking the running logs of those LAAs, we observe more occurrences of hallucinated generation when the LAA runs for more steps, which in the end degrades the benefits of the longer context.
* A powerful LLM is able to generalize under the zeroshot LAA arch. The best performance of the OpenAI API-based models is actually achieved under the ZS and ZST arch. This indicates the great potential of developing a generic LAA with a powerful LLM. Actually, this is currently what open-source projects are working towards, directly calling the OpenAI API and tuning the zeroshot agent prompt instead. Our benchmark results quantitatively justify that using only a ZS LAA can already achieve comparable or even better performances than LAA archs with additional Plan or Self-think flow. However, for other less powerful LLMs, fewshot prompts are necessary for LAAs.
* Plan flow generally improves the performances when the agent is built on open-source LLMs. By comparing the performances of ReAct, PlanAct and PlanReAct, we observe a performance gain
\begin{table}
\begin{tabular}{l|c|c c c c c c} \hline \hline \multirow{2}{*}{LLM} & \multirow{2}{*}{Len.} & \multicolumn{6}{c}{LAA Architecture} \\ \cline{3-8} & & ZS & ZST & ReAct & PlanAct & PlanReAct & BOLAA \\ \hline fastchat-t5-3b & 2k & 0.3971 & 0.2832 & 0.3098 & 0.3837 & 0.1507 & **0.5169** \\ vicuna-7b & 2k & 0.0012 & 0.0002 & **0.1033** & 0.0555 & 0.0674 & 0.0604 \\ vicuna-13b & 2k & 0.0340 & 0.0451 & 0.1509 & 0.3120 & 0.4127 & **0.5350** \\ vicuna-33b & 2k & 0.1356 & 0.2049 & 0.1887 & 0.3692 & 0.3125 & **0.5612** \\ llama-2-7b & 4k & 0.0042 & 0.0068 & 0.1248 & 0.3156 & 0.2761 & **0.4648** \\ llama-2-13b & 4k & 0.0662 & 0.0420 & 0.2568 & **0.4892** & 0.4091 & 0.3716 \\ llama-2-70b & 4k & 0.0122 & 0.0080 & 0.4426 & 0.2979 & 0.3770 & **0.5040** \\ mpt-7b-instruct & 8k & 0.0001 & 0.0001 & 0.0573 & 0.0656 & **0.1574** & 0.0632 \\ mpt-30b-instruct & 8k & 0.1664 & 0.1255 & 0.3119 & 0.3060 & 0.3198 & **0.4381** \\ xgen-8k-7b-instruct & 8k & 0.0001 & 0.0015 & 0.0685 & 0.1574 & 0.1004 & **0.3697** \\ longchat-7b-16k & 16k & 0.0165 & 0.0171 & 0.069 & 0.0917 & 0.1322 & **0.1964** \\ longchat-13b-16k & 16k & 0.0007 & 0.0007 & 0.2373 & 0.3978 & **0.4019** & 0.3205 \\ \hline text-davinci-003 & 4k & 0.5292 & 0.5395 & 0.5474 & 0.4751 & 0.4912 & **0.6341** \\ gpt-3.5-turbo & 4k & 0.5061 & 0.5057 & 0.5383 & 0.4667 & 0.5483 & **0.6567** \\ gpt-3.5-turbo-16k & 16k & 0.5657 & 0.5642 & 0.4898 & 0.4565 & 0.5607 & **0.6541** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average reward in the WebShop environment. Len denotes the maximum context length. **Bold** results denote the best results in one row, _i.e._ best LAA architecture w.r.t. one LLM. _Underline_ results denote the best performance in one column, _i.e._ best LLM regarding one LAA architecture.
for most LLMs when using the plan flow. However, planning and thinking require the LLM to reason in steps, which may be challenging for small LLMs. For example, fastchat-t5-3b performs above average under the ZS LAA architecture, but its performance degrades by a large margin under the PlanReAct architecture.
We also report the intermediate Recall performance of all LAAs in Table 2. Recall is mainly related to the search action: a high recall indicates that the LAA is capable of generating precise search queries. High recall usually leads to better rewards, but the two are not tightly coupled. For example, Llama-2-70b reaches a recall of 0.3344 with the ZS LAA, which is comparable to the best LAA, yet the corresponding reward of ZS LAA Llama-2-70b in Table 1 is only 0.0122. The reason is that generating a good search query requires a different LLM ability than generating the correct click action, and the latter is more challenging. Another observation is that our proposed BOLAA generally performs best across all LLMs, which indicates that separating the search agent from the click agent improves the accuracy of the search action and leads to higher recall.
**LAA performance w.r.t. Complexity**. After comparing the overall performance of the LAAs and LLMs, we investigate performance with respect to task complexity in more detail. Due to space limitations, we only report the performance of text-davinci-003 and llama-2-70b. The reward is illustrated in Figure 4. BOLAA consistently performs best across all complexity levels. We also observe degraded performance as task complexity increases, which matches intuition. Surprisingly, increasing the task complexity beyond level 4 does not degrade performance further. The reason is that recall increases for higher-complexity tasks, as shown in Figure 5: a high-complexity task instruction provides additional context information for the LAA, so the _search_ action can be more specific and accurate at high complexity levels.
### Knowledge Reasoning Simulation
We benchmark on the HotPotQA environment to evaluate the multi-step reasoning ability of LAAs. Since the available search, lookup and finish operations are all related to knowledge reasoning in this environment and hard to separate, we leave the BOLAA architecture for future work and only compare the performance of the other agent architectures. The results are shown in Table 3. In general, the ReAct agent architecture achieves the best performance, which can be interpreted as threefold. Firstly, a few-shot prompt is necessary to enable the action generation and reasoning ability of an LAA, especially when
\begin{table}
\begin{tabular}{l|c|c c c c c c} \hline \hline \multirow{2}{*}{LLM} & \multirow{2}{*}{Len.} & \multicolumn{6}{c}{LAA Architecture} \\ \cline{3-8} & & ZS & ZST & ReAct & PlanAct & PlanReAct & BOLAA \\ \hline fastchat-t5-3b & 2k & 0.3533 & 0.3122 & 0.3800 & 0.3700 & 0.3722 & **0.3867** \\ vicuna-7b & 2k & 0.0833 & 0.0500 & 0.3600 & 0.3233 & 0.3278 & **0.3522** \\ vicuna-13b & 2k & 0.0867 & 0.0644 & 0.3622 & 0.3444 & 0.2367 & **0.3700** \\ vicuna-33b & 2k & 0.3600 & 0.3411 & 0.3822 & 0.3733 & 0.3567 & **0.3956** \\ llama-2-7b & 4k & 0.0678 & 0.0311 & 0.3744 & 0.3400 & 0.3578 & **0.3856** \\ llama-2-13b & 4k & 0.2856 & 0.2211 & 0.3844 & 0.3278 & 0.3500 & **0.4078** \\ llama-2-70b & 4k & 0.3344 & 0.3244 & 0.3789 & 0.3400 & 0.3600 & **0.4011** \\ mpt-7b-instruct & 8k & 0.0144 & 0.0322 & **0.3644** & 0.3200 & 0.3400 & 0.3600 \\ mpt-30b-instruct & 8k & 0.2973 & 0.3372 & 0.3333 & 0.3575 & 0.3412 & **0.3900** \\ xgen-8k-7b-instruct & 8k & 0.0667 & 0.1400 & 0.3711 & 0.3400 & 0.3278 & **0.3800** \\ longchat-7b-16k & 16k & 0.1344 & 0.1856 & 0.3644 & 0.3622 & 0.3622 & **0.3811** \\ longchat-13b-16k & 16k & 0.0756 & 0.0867 & 0.3678 & 0.3467 & 0.3471 & **0.3789** \\ \hline text-davinci-003 & 4k & 0.3800 & 0.3856 & 0.3767 & 0.3711 & 0.3889 & **0.3956** \\ gpt-3.5-turbo & 4k & 0.3889 & 0.3756 & **0.3933** & 0.3789 & 0.3867 & 0.3929 \\ gpt-3.5-turbo-16k-0613 & 16k & 0.3856 & 0.3833 & **0.4011** & 0.3756 & 0.3811 & 0.3933 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average recall in the WebShop environment. Len denotes the maximum context length. **Bold** results denote the best results in one row, _i.e._ best LAA architecture w.r.t. one LLM. _Underline_ results denote the best performance in one column, _i.e._ best LLM regarding one LAA architecture.
experimenting with small language models. Secondly, comparing ReAct, PlanAct, and PlanReAct, we conclude that the planning flow of an LAA hinders performance in this knowledge reasoning environment. The reason is that knowledge reasoning tasks require contextualized information for reasoning, whereas the planning flow is executed ahead of the interactions; the generated plans therefore tend to lead to more hallucination. Thirdly, for this knowledge reasoning task, model size matters much more than context length. Larger models have better reasoning abilities and thus perform better. Additionally, the superior reasoning ability of the OpenAI gpt-3.5 models is verified once more. We also observe that Llama-2-70b performs best among all open-source LLMs, which suggests that future fine-tuning efforts could be applied to the Llama-2 models.
**LAA performance w.r.t. Complexity**. Since the tasks come in easy, medium, and hard levels, we compare the performance of Llama-2-70b and text-davinci-003 across the different levels of complexity, as illustrated in Figure 6. We observe degrading performance as task complexity increases. In HotPotQA, hardness is defined by the number of question-answer hops, so hard questions require more context understanding and reasoning ability from the LAA. Although the OpenAI text-davinci-003 model consistently outperforms Llama-2-70b at all complexity levels, the margin is smaller for hard questions. Since hard questions require more reasoning effort, we conclude that Llama-2-70b possesses reasoning ability comparable to text-davinci-003.
Figure 4: The reward w.r.t. task complexity in WebShop. Each bar represents one LAA.
Figure 5: The recall w.r.t. task complexity in WebShop. Each bar represents one LAA.
## 5 Conclusion and Future Work
In this paper, we systematically investigate the performance of various LAA architectures paired with different LLM backbones. We also provide a novel orchestration method for multiple agents, _i.e._ BOLAA. The benchmarking results provide experimental justification for the LAA investigation and verify the potential benefits of the BOLAA architecture. During the investigation, we also identify the challenge of designing a BOLAA architecture for environments with compounding actions. In the future, we will explore whether LLMs can be harnessed in the controller such that the selection of and communication with labor agents is also fully autonomous. We will continue developing more LAA architectures and include more LLMs and environments in our evaluations.
\begin{table}
\begin{tabular}{l|c|c c c c c} \hline \hline \multirow{2}{*}{LLM} & \multirow{2}{*}{Len.} & \multicolumn{5}{c}{LAA Architecture} \\ \cline{3-7} & & ZS & ZST & ReAct & PlanAct & PlanReAct \\ \hline fastchat-t5-3b & 2k & 0.0252 & 0.0067 & 0.0692 & **0.1155** & 0.0834 \\ vicuna-7b & 2k & **0.1339** & 0.0797 & 0.0318 & 0.0868 & 0.0956 \\ vicuna-13b & 2k & 0.1541 & 0.0910 & **0.2637** & 0.1754 & 0.2075 \\ vicuna-33b & 2k & 0.2180 & 0.2223 & **0.2602** & 0.1333 & 0.2016 \\ llama-2-7b & 4k & 0.0395 & 0.0207 & **0.2624** & 0.1780 & 0.1417 \\ llama-2-13b & 4k & 0.1731 & 0.2313 & **0.2521** & 0.2192 & 0.2177 \\ llama-2-70b & 4k & 0.2809 & 0.3207 & **0.3558** & 0.1424 & 0.1797 \\ mpt-7b-instruct & 8k & 0.0982 & 0.0483 & **0.1707** & 0.1147 & 0.1195 \\ mpt-30b-instruct & 8k & 0.1562 & 0.2141 & **0.3261** & 0.2224 & 0.2315 \\ xgen-8k-7b-instruct & 8k & 0.1502 & 0.1244 & **0.1937** & 0.1116 & 0.1096 \\ longchat-7b-16k & 16k & 0.0791 & 0.0672 & **0.2161** & 0.1296 & 0.0971 \\ longchat-13b-16k & 16k & 0.1083 & 0.0562 & **0.2387** & 0.1623 & 0.1349 \\ \hline text-davinci-003 & 4k & 0.3430 & 0.3304 & **0.4503** & 0.3577 & 0.4101 \\ gpt-3.5-turbo & 4k & **0.3340** & 0.3254 & 0.3226 & 0.2762 & 0.3192 \\ gpt-3.5-turbo-16k-0613 & 16k & **0.3027** & 0.2264 & 0.1859 & 0.2113 & 0.2251 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average reward in the HotPotQA environment. Len denotes the maximum context length. **Bold** results denote the best results in one row, _i.e._ best LAA architecture w.r.t. one LLM. _Underline_ results denote the best performance in one column, _i.e._ best LLM regarding one LAA architecture.
Figure 6: The reward w.r.t. complexity level in HotPotQA. Each bar represents one LAA. |
2302.01585 | SegForestNet: Spatial-Partitioning-Based Aerial Image Segmentation | Aerial image segmentation is the basis for applications such as automatically
creating maps or tracking deforestation. In true orthophotos, which are often
used in these applications, many objects and regions can be approximated well
by polygons. However, this fact is rarely exploited by state-of-the-art
semantic segmentation models. Instead, most models allow unnecessary degrees of
freedom in their predictions by allowing arbitrary region shapes. We therefore
present a refinement of our deep learning model which predicts binary space
partitioning trees, an efficient polygon representation. The refinements
include a new feature decoder architecture and a new differentiable BSP tree
renderer which both avoid vanishing gradients. Additionally, we designed a
novel loss function specifically designed to improve the spatial partitioning
defined by the predicted trees. Furthermore, our expanded model can predict
multiple trees at once and thus can predict class-specific segmentations. As an
additional contribution, we investigate the impact of a non-optimal training
process in comparison to an optimized training process. While model
architectures optimized for aerial images, such as PFNet or our own model, show
an advantage under non-optimal conditions, this advantage disappears under
optimal training conditions. Despite this observation, our model still makes
better predictions for small rectangular objects, e.g., cars. | Daniel Gritzner, Jörn Ostermann | 2023-02-03T07:35:53Z | http://arxiv.org/abs/2302.01585v3 | # SegForestNet: Spatial-Partitioning-Based Aerial Image Segmentation
###### Abstract
Aerial image analysis, specifically the semantic segmentation thereof, is the basis for applications such as automatically creating and updating maps, tracking city growth, or tracking deforestation. In true orthophotos, which are often used in these applications, many objects and regions can be approximated well by polygons. However, this fact is rarely exploited by state-of-the-art semantic segmentation models. Instead, most models allow unnecessary degrees of freedom in their predictions by allowing arbitrary region shapes. We therefore present a refinement of our deep learning model which predicts binary space partitioning trees, an efficient polygon representation. The refinements include a new feature decoder architecture and a new differentiable BSP tree renderer which both avoid vanishing gradients. Additionally, we designed a novel loss function specifically aimed at improving the spatial partitioning defined by the predicted trees. Furthermore, our expanded model can predict multiple trees at once and thus can predict class-specific segmentations. Taking all modifications together, our model achieves state-of-the-art performance while using up to \(60\%\) fewer model parameters when using a small backbone model or up to \(20\%\) fewer model parameters when using a large backbone model.
Computer Vision, Semantic Segmentation, Deep Learning, Spatial Partitioning, Aerial Images.
## 1 Introduction
Computer vision techniques, such as object detection and semantic segmentation, are used for many aerial image analysis applications. As an example, traffic can be analyzed by using object detection to find vehicles in aerial images. Semantic segmentation is the basis for creating and updating maps [11] and can be used for tracking city growth [13] or tracking deforestation [23].
However, any error in the prediction of a computer vision model, i.e., object detection or semantic segmentation, causes errors in applications based on said prediction. Precise predictions are required, which can be achieved by exploiting domain knowledge. One fact rarely exploited in the deep learning-based analysis of aerial images is that many objects and regions can be approximated well by polygons, especially in true orthophotos. By restricting a model's prediction to a limited number of polygons, the degrees of freedom in the prediction are reduced. Arbitrary region shapes, e.g., circle-shaped cars which make no sense, are no longer feasible predictions.
We created BSPSegNet [14], a model for aerial image segmentation which predicts polygons as shown in Fig. 1. The model partitions the input image into \(8\times 8\) blocks. For each block, a binary space partitioning (BSP) tree [10] with depth 2 is predicted, i.e., every block is partitioned into up to four segments with a separate class prediction for each segment. The parameters of the inner nodes of the BSP tree encode parameters of a line partitioning a region into two subregions, while the parameters of the leaf nodes are class logits. The lines encoded by the inner nodes together with the block boundaries form polygons. As we demonstrated in our earlier paper, the ground truth of common aerial image datasets can be encoded extremely accurately this way (\(99\%\) accuracy and \(99\%\) mean Intersection-over-Union (mIoU)).
However, despite reaching state-of-the-art performance in semantic segmentation, BSPSegNet still does not offer proven improvements over existing models. In this article, we aim to improve BSPSegNet such that it performs better than the state-of-the-art for segmentation. We present multiple contributions:
1. We replace BSPSegNet's two feature decoders with an architecture using residual connections and modify the model's differentiable BSP tree renderer to avoid using sigmoid activations. BSPSegNet tries to push the output of these activations close to \(0\) or \(1\). However, in these value regions the derivative of
Fig. 1: Overview of our approach: our model predicts binary space partitioning (BSP) trees from aerial images. The inner nodes of such a tree define the shape of regions, while the leaf nodes define the content of each region. The leaf nodes effectively map shapes to classes. BSP trees can be rendered in a differentiable way into a full segmentation thus enabling end-to-end model training. Our proposed refinements improve gradient computations in the BSP renderer and parts of the model. Additionally, a novel loss function improves the predicted inner node parameters, i.e., the predicted shapes. Furthermore, we extended the approach to predict multiple trees at the same time in order to enable class-specific shape predictions.
the sigmoid function is almost \(0\). Vanishing gradients are avoided due to these modifications.
2. We introduce a novel loss function punishing BSP trees which create partitions containing multiple different classes (according to the ground truth). This loss specifically affects only the inner nodes of the BSP trees, instead of all the nodes as categorical cross entropy does, thus improving the shapes of the predicted partitions.
3. We expand BSPSegNet's prediction s.t. it is able to predict multiple BSP trees for each \(8\times 8\) block, thus enabling class-specific segmentations for more precise shape predictions.
Due to the last modification, we rename our model to SegForestNet. SegForestNet is able to reach state-of-the-art performance while using up to \(60\%\) fewer parameters, i.e., it is less prone to overfitting to a small dataset. Small dataset size is a common problem in aerial image analysis.
With these modifications SegForestNet can be trained end-to-end without the two-phase training previously recommended in [14]. In the first phase, an autoencoder phase, a mapping of the ground truth to BSP trees was learned. In the second phase, this new ground truth representation was used to learn the actual semantic segmentation without using the differentiable BSP renderer, thus avoiding its effects on gradients. Our modifications allow the faster and more efficient training of SegForestNet without the autoencoder phase.
In addition to the above contributions, we also experimented with different signed distance functions (SDFs) and tree structures. SDFs are used by the inner nodes in order to partition a segment into subsegments. By trying different SDFs, we were able to generalize the shape of the partitioning boundary from lines to other shapes such as circles. Also, our approach generalizes to different tree structures such as quadtrees [9] and k-d trees [1]. However, we found that neither of these two changes provides any benefit for the semantic segmentation of aerial images. Since we believe that these ideas may be useful in other domains with more organic shapes, e.g., the segmentation of images captured by cameras of self-driving cars, we still present these experiments here so that they become more easily reproducible.
In this article's next section we will discuss related work, followed by a description of SegForestNet. We then evaluate our contributions by comparing SegForestNet on several datasets to multiple state-of-the-art semantic segmentation models. We finally conclude with a summary.
Our implementation can be found on GitHub1.
Footnote 1: [https://github.com/gritzner/SegForestNet](https://github.com/gritzner/SegForestNet)
## 2 Related Work
Modern semantic segmentation models rely on deep learning and an encoder-decoder structure [3, 19, 30, 43, 49, 31]. An encoder is used to map an input image to a feature map with a reduced spatial resolution but a much higher number of channels/features. While a large number of features is computed, the spatial information is bottlenecked. This implies that downsampling, e.g., through max-pooling or strided convolutions, is performed. Using such a feature map, a decoder predicts the desired semantic segmentation. To get a segmentation of the same resolution as the input image, the decoder uses bilinear sampling and/or transpose convolutions (also called deconvolutions) in order to upsample the spatial resolution of the feature map back to the initial input resolution.
U-Net [32] is a popular model following this paradigm. Though designed for medical image analysis, it is also used for other types of data, including even 3D data [7]. U-Net's encoder and decoder are almost symmetrical and skip connections at each spatial resolution are used to allow predictions with high frequency details. Other models, such as Fully Convolutional Networks for Semantic Segmentation (FCN) [25], have an asymmetrical architecture with a significantly more sophisticated encoder than decoder. They rely on existing classification models, such as VGG [37], ResNet [16], MobileNetv2 [35] or Xception [6], as encoder (also called backbone or feature extractors in these cases) by removing the final classification layers. Instead, semantic segmentation models append a novel decoder to these encoders. The complexity of these decoders varies. While FCN uses a rather simple decoder, DeepLabv3+ [4] uses a more sophisticated decoder and also adds atrous spatial pyramid pooling [2] to the encoder. By doing so, DeepLabv3+ is able to account for context information at different scales. Other modifications to the basic encoder-decoder idea are adding parallel branches to process specific features such as edges/boundaries [39], processing videos instead of images [52], explicitly modelling long-range spatial and channel relations [29], separating the image into fore- and background [50], or point-wise affinity propagation to handle imbalanced class distributions with an unusually high number of background pixels [24].
More recent research also works on more complex problems, e.g., instance segmentation [15] and panoptic segmentation [45, 21, 47, 48]. In instance segmentation, pixel masks of detected objects, such as cars or persons, are predicted. Panoptic segmentation is a combination of semantic segmentation and instance segmentation: a class is predicted for each pixel in the input image while for all pixels of countable objects an additional instance ID is predicted. Models dealing with these more complex tasks still usually require a semantic segmentation as a component to compute the final prediction. AdaptIS [38] is such a model. When performing instance segmentation, it uses U-Net as a backbone while using ResNet and DeepLabv3+ when performing panoptic segmentation.
Our previous model, BSPSegNet [14], predicts BSP trees to perform a semantic segmentation. These trees are usually used in computer graphics [36] rather than computer vision. Each inner node of a BSP tree partitions a given region into two segments while the leaf nodes define each segment's content, i.e., its class in the case of a semantic segmentation. Since BSP trees are resolution-independent, BSPSegNet's decoder does not have to perform any kind of upsampling. Furthermore, BSPSegNet is the only segmentation model which inherently semantically separates features into shape and content features. Similar works using computer graphics techniques for computer vision are PointRend [22] and models reconstructing 3D meshes [12, 28, 40, 41].
PointRend iteratively refines a coarse instance segmentation into a finer representation. BSPnet [5] also uses BSP, but instead of a segmentation, it tries to reconstruct polygonal 3D models. Additionally, rather than predicting a BSP tree hierarchy, it instead predicts a set of BSP planes.
As mentioned before, BSP trees are a data structure more common in computer graphics than in computer vision. Other commonly used data structures in computer graphics and other applications, e.g., data compression, are \(k\)-d trees [1], quadtrees [9], and bounding volume hierarchies [8]. They differ from BSP trees in that they perform a partitioning using an axis-aligned structure which makes slopes expensive to encode compared to BSP trees.
## 3 Semantic Segmentation through Spatial Partitioning
We will first briefly describe BSPSegNet's architecture in this section in order to be able to precisely discuss our contributions. The third subsection presents the first contribution mentioned in the introduction, which reduces the risk of vanishing gradients, while the fourth subsection presents our second contribution, the novel loss function. The fifth subsection then discusses the changes necessary to enable the prediction of multiple partitioning trees simultaneously, our third contribution. In a final subsection we show how SegForestNet's differentiable BSP rendering generalizes to different signed distance functions (SDFs) and tree structures.
### _BSPSegNet_
Fig. 2 (top) shows a common encoder-decoder architecture used by state-of-the-art models. An encoder maps an input image to a feature map with a bottlenecked spatial resolution. This feature map is then decoded into a semantic segmentation at the same spatial resolution as the input image. Skip connections at different spatial resolutions help to maintain high frequency details.
The bottom of Fig. 2 shows BSPSegNet in comparison. As other state-of-the-art models, it uses an encoder-decoder architecture. However, it uses two separate decoders. The feature map computed by the encoder is split along the feature dimension into two separate maps: one for shape features (blue) and one for content features (orange). Each feature map is processed by a different decoder. While the decoders share the same architecture, they both have their own unique weights. The shape features are decoded into the inner nodes of the BSP trees, which represent the shape of the regions in the final segmentation, as shown in Fig. 3. The content features are decoded into the leaf nodes of the BSP trees, which represent each region's class logits.
Like in other semantic segmentation models, BSPSegNet uses the feature extraction part of a classification model like MobileNetv2 or Xception as an encoder. By removing pooling operations or reducing the stride of convolution operations, the encoder is modified such that the input image is downsampled by a factor of \(8\) along each spatial dimension. To keep in line with classification model design, the earlier downsampling operations are kept while the later ones are removed. This enables more efficient inference and training, as the spatial resolution is reduced early. Furthermore, the number of features in the final feature map computed by the encoder is reduced such that there is a bottleneck. The shape feature map has fewer features than the number of parameters required for the inner nodes, while the content feature map has fewer features than the parameters required for the leaf nodes.
BSPSegNet subdivides the input image into \(8\times 8\) blocks (the same as the downsampling factor in the encoder) and predicts a separate BSP tree for each block, i.e., the spatial resolution of the feature map is such that there is exactly one spatial unit per block. The two decoders keep the spatial resolution and only modify the number of features. This is done by stacking three blocks, each consisting of a \(1\times 1\) convolution outputting \(256\) features, batch normalization and a LeakyReLU activation. These blocks are followed by one final \(1\times 1\) convolution to predict the final BSP tree parameters.
### _BSPSegNet's Differentiable BSP Renderer_
The differentiable BSP renderer is based on the idea of a region map **R**. For each pixel \(\textbf{p}=(x,y)\), the entry \(\textbf{R}_{iyx}\) is the probability that **p** is part of the \(i\)-th region (leaf node). This implies \(\sum_{i=1}^{k}\textbf{R}_{iyx}=1\) for all pixels **p** with \(k\) equal to the number of leaf nodes.
Initially, BSPSegNet starts with all \(\textbf{R}_{iyx}\) set to 1 and then updates the region map for every inner node. Each inner node consists of three parameters, two parameters for the normal vector **n** and one parameter \(d\). These parameters together define a line in the two-dimensional image space. BSPSegNet does neither normalize **n**, nor does it enforce
Fig. 3: A partitioning of a square region (right) defined by a BSP tree (center). Shape features are decoded into the parameters of the inner nodes (blue), which define lines (green) creating the partitioning. Content features are decoded into the parameters of the leaf nodes (orange), which are the class logits predicted for each partition.
Fig. 2: A comparison of state-of-the-art models (top) and BSPSegNet/SegForestNet (bottom; ours). All models use an encoder-decoder architecture, however, our models semantically splits the feature map (hexagon) into shape and content features. These are decoded into the inner nodes and leaf nodes of BSP trees respectively (see Fig. 3). Our models also needs a differentiable BSP renderer to enable end-to-end training. The renderer is a fixed function without learnable parameters.
\(|\mathbf{n}|=1\) through any kind of loss. Rather, it lets the network learn to predict an appropriately scaled \(d\). For every inner node and pixel \(\mathbf{p}\), the signed distance function \(f(\mathbf{p})=\mathbf{n}\cdot\mathbf{p}-d\) is computed. The equations
\[g(\mathbf{p})=\sigma(\lambda_{1}\cdot f(\mathbf{p})) \tag{1}\]
\[\mathbf{R}_{\mathit{iyx}}:=\mathbf{R}_{\mathit{iyx}}\cdot\lambda_{2}\cdot g( \mathbf{p}) \tag{2}\]
\[\mathbf{R}_{\mathit{iyx}}:=\mathbf{R}_{\mathit{iyx}}\cdot\lambda_{2}\cdot(1-g( \mathbf{p})) \tag{3}\]
with the sigmoid function \(\sigma\) are then used to update the region map \(\mathbf{R}\). For each inner node, equation 2 is used for all leaf nodes \(i\) reachable from the left child node while equation 3 is used for all \(i\) reachable by the right child node.
Depending on where the pixel \(\mathbf{p}\) lies, the function \(g\) either approaches \(0\) or \(1\). It approaches \(1\) if \(\mathbf{p}\) is reachable via the left child node and it approaches \(0\) if \(\mathbf{p}\) is reachable via the right child node. Therefore, \(\mathbf{R}_{\mathit{iyx}}\) is multiplied with a value close to \(0\) for those regions \(i\) which \(\mathbf{p}\) does not belong to. Thus, eventually, all \(\mathbf{R}_{\mathit{iyx}}\) for \(i\) to which \(\mathbf{p}\) does not belong will be close to \(0\). The one remaining entry \(\mathbf{R}_{\mathit{iyx}}\) will be close to \(\lambda_{2}^{D}\), where \(D\) is the depth of the BSP tree.
After iterating over all inner nodes, a Softmax operation across the region dimension \(i\) is applied. This ensures that each \(\mathbf{R}_{\mathit{iyx}}\) represents the aforementioned probability. The hyperparameter \(\lambda_{2}\) controls how close the respective final entries of \(\mathbf{R}\) will be to \(0\) or \(1\), while \(\lambda_{1}\) controls how large \(|f(\mathbf{p})|\) must be in order for \(\mathbf{p}\) to be assigned to one specific child node (\(g(\mathbf{p})\) close to \(0\) or \(1\)) instead of the model expressing uncertainty about which child region \(\mathbf{p}\) belongs to (\(g(\mathbf{p})\approx 0.5\)).
The final output \(h\) of the BSP renderer is
\[h(\mathbf{p})=\sum_{i=1}^{k}\mathbf{R}_{\mathit{iyx}}\cdot\mathbf{v}_{i} \tag{4}\]
with the vectors \(\mathbf{v}_{i}\) of class logits, one vector for each leaf node. The function \(h\) can be computed in parallel for all pixels \(\mathbf{p}\). BSP trees are resolution-independent, therefore the final output resolution is determined by how densely \(h\) is sampled, i.e., for how many pixels \(\mathbf{p}\) the function is computed. This is also the reason why BSPSegNet, in contrast to other segmentation models, does not have to perform any kind of upsampling in its decoders. By sampling \(h\) at the same resolution as the input image, a pixelwise semantic segmentation can be computed like for any other state-of-the-art model.
### _Refined Gradients_
As mentioned in the introduction, a two-phase training process is recommended for the original BSPSegNet. In the second phase, the differentiable BSP renderer is skipped altogether. We hypothesize that insufficient gradients are responsible: by pushing the computation of \(g\) (Eq. 1) into value ranges where its derivative is almost \(0\) gradients vanish. Additionally, the two feature decoders do not use residual connections, which would help gradients to propagate through the model. Our first contribution aims to improve on these two points in order to avoid the two-phase training process and to enable the use of decoders with more layers which can learn more complex mapping functions from features to BSP tree parameters. The goal is to enable true end-to-end training without the need to learn a mapping to an intermediate representation. This way, training is faster, allowing for more iterations during development, e.g., for hyperparameter optimization, or saving energy and thus reducing costs.
The first part of this contribution is a new feature decoder architecture. Again, we use the same architecture for both decoders, but they still do not share weights. Our new architecture, shown in Fig. 4, starts and ends with a \(1\times 1\) convolution: the first maps the bottlenecked feature dimension coming from the encoder to an intermediate number of features, while the last maps the intermediate features to the final number of tree parameters. In between these two convolutions are several residual blocks. Each block consists of a depthwise \(3\times 3\) convolution with zero padding and a \(1\times 1\) convolution. The number of features stays constant throughout the residual blocks. Each convolution in the decoders, except for the very last one mapping to the tree parameters, is followed by a batch normalization layer [17] and a LeakyReLU activation [46]. This architecture not only uses residual connections for improved gradient propagation, but it also includes spatial context for each block, i.e., for each BSP tree, instead of solely relying on the receptive field of the encoder.
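A minimal PyTorch sketch of such a decoder is shown below. The class and argument names are ours, and details such as bias terms, the exact placement of normalization layers, and the default number of residual blocks are assumptions that may differ from the reference implementation linked above.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # depthwise 3x3 convolution followed by a 1x1 convolution, with a skip connection
    def __init__(self, features):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(features, features, 3, padding=1, groups=features, bias=False),
            nn.BatchNorm2d(features),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(features, features, 1, bias=False),
            nn.BatchNorm2d(features),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.layers(x)

class TreeParameterDecoder(nn.Module):
    # maps the bottlenecked feature map (one spatial unit per 8x8 block) to
    # BSP tree parameters, either inner node (shape) or leaf node (content) parameters
    def __init__(self, f_encoder, f_intermediate, f_tree, num_blocks=8):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(f_encoder, f_intermediate, 1, bias=False),
            nn.BatchNorm2d(f_intermediate),
            nn.LeakyReLU(inplace=True),
        )
        self.blocks = nn.Sequential(*[ResidualBlock(f_intermediate) for _ in range(num_blocks)])
        self.head = nn.Conv2d(f_intermediate, f_tree, 1)  # no BN/activation on the final projection

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

# example: shape decoder for a BSP tree of depth 2 (3 inner nodes x 3 line parameters = 9)
shape_decoder = TreeParameterDecoder(f_encoder=8, f_intermediate=96, f_tree=9)
params = shape_decoder(torch.randn(1, 8, 28, 28))  # 224x224 input -> 28x28 blocks
```

One such decoder would be instantiated for the shape features and a second one, with its own weights and `f_tree` set to the number of leaf node parameters, for the content features.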
The second part of our first contribution is an updated region map \(\mathbf{R}\) computation (Eqs. 1-3). We start by setting all \(\mathbf{R}_{\mathit{iyx}}\) to \(0\) initially instead of \(1\). For each inner node, we use the equations
\[g(\mathbf{p})=\lambda\cdot f(\mathbf{p}) \tag{5}\]
\[\mathbf{R}_{\mathit{iyx}}:=\mathbf{R}_{\mathit{iyx}}+\text{ReLU}(g(\mathbf{p})) \tag{6}\]
\[\mathbf{R}_{\mathit{iyx}}:=\mathbf{R}_{\mathit{iyx}}+\text{ReLU}(-g(\mathbf{p})) \tag{7}\]
for updating \(\mathbf{R}\) where Eq. 6 replaces Eq. 2 and Eq. 7 replaces Eq. 3. After processing all inner nodes, there still is a Softmax operation across the region dimension \(i\). Using our Eqs. 5-7, we avoid vanishing gradients by replacing the sigmoid function with ReLU. This necessitates moving from multiplications in Eqs. 2 and 3 to additions in Eqs. 6 and 7. As a nice side effect, we also have only one hyperparameter \(\lambda\) to optimize instead of two, thus simplifying the model. Our \(\lambda\) combines the meaning of the previous \(\lambda_{1}\) and \(\lambda_{2}\).
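Putting the refined update rules together, a differentiable renderer for one depth-2 BSP tree per \(8\times 8\) block could be sketched as follows. The tensor layout, the assignment of leaf nodes to the left and right children of each inner node, and the default value of \(\lambda\) are our own assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def render_bsp(inner_params, leaf_logits, size=8, lam=10.0):
    """Differentiable rendering of one depth-2 BSP tree per 8x8 block (Eqs. 5-7 and 4).

    inner_params: (B, 3, 3, H, W)  three inner nodes, each with (n_x, n_y, d), one tree per block
    leaf_logits:  (B, 4, C, H, W)  four leaf nodes with class logits
    returns:      (B, C, H*size, W*size) class logits per pixel
    """
    B, _, _, H, W = inner_params.shape
    C = leaf_logits.shape[2]
    # pixel coordinates within a block, shared by all blocks
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    p = torch.stack((xs, ys)).float()                                  # (2, size, size)

    # signed distance f(p) = n . p - d for each inner node and pixel
    n = inner_params[:, :, :2]                                         # (B, 3, 2, H, W)
    d = inner_params[:, :, 2]                                          # (B, 3, H, W)
    f = torch.einsum("bkchw,cij->bkhwij", n, p) - d[..., None, None]   # (B, 3, H, W, size, size)
    g = lam * f
    pos, neg = F.relu(g), F.relu(-g)

    # assumed tree layout: node 0 separates leaves {0,1} from {2,3},
    # node 1 separates leaf 0 from 1, node 2 separates leaf 2 from 3
    region = torch.stack([
        pos[:, 0] + pos[:, 1],   # leaf 0
        pos[:, 0] + neg[:, 1],   # leaf 1
        neg[:, 0] + pos[:, 2],   # leaf 2
        neg[:, 0] + neg[:, 2],   # leaf 3
    ], dim=1)
    region = torch.softmax(region, dim=1)                              # region map R

    # h(p) = sum_i R_i * v_i, then stitch the blocks back into a full-resolution map
    out = torch.einsum("bihwyx,bichw->bchwyx", region, leaf_logits)
    return out.permute(0, 1, 2, 4, 3, 5).reshape(B, C, H * size, W * size)
```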
### _Region Map-Specific Loss_
Cross-entropy, the most common loss function used for semantic segmentation, is applied pixelwise and therefore gives the model to be trained a signal for each mispredicted pixel. However, from the point-of-view of BSPSegNet and SegForestNet, there is no distinction between wrong class
Fig. 4: Our refined decoder architecture using residual blocks. The initial and the last convolution change the number of features \(F\). The number of features are set such that there is a bottleneck with regard to the tree parameters, i.e., \(F_{encoder}<F_{tree}\).
logits in a leaf node and wrong line parameters in an inner node in this loss signal. We therefore present a novel loss function, our second contribution, which specifically punishes the latter, i.e., wrong line parameters in inner nodes. We do so by formulating three additional losses which we combine with cross-entropy into a new SegForestNet-specific loss function. Each new loss is designed to punish a specific undesirable trait of the region map **R** which may occur in the model's predictions.
First, as an intermediate result used later by two of the new losses, we compute
\[\textbf{Y}_{\textbf{B}}^{i}=\sum_{(x,y)\in\textbf{B}}\textbf{Y}_{yx}\cdot \textbf{R}_{iyx} \tag{8}\]
for every \(8\times 8\) pixel block **B** and region/leaf node \(i\) where every \(\textbf{Y}_{yx}\) is a one-hot class vector. \(\textbf{Y}_{\textbf{B}}^{i}\) is therefore a weighted vector containing the number of pixels per class in region \(i\) of block **B**. This vector is weighted by **R**, i.e., when a pixel belongs only partially to a region \(i\) according to **R** it is counted only partially for \(\textbf{Y}_{\textbf{B}}^{i}\). From \(\textbf{Y}_{\textbf{B}}^{i}\) the size \(s_{\textbf{B}}^{i}\) of a region in pixels and the region's class probability distribution \(\textbf{P}_{\textbf{B}}^{i}\) can be easily computed:
\[s_{\textbf{B}}^{i}=\sum_{c=1}^{|C|}\textbf{Y}_{\textbf{B}}^{i}(c) \tag{9}\]
\[\textbf{P}_{\textbf{B}}^{i}=\frac{\textbf{Y}_{\textbf{B}}^{i}}{s_{\textbf{B} }^{i}} \tag{10}\]
where \(\textbf{Y}_{\textbf{B}}^{i}(c)\) is the number of pixels in region \(i\) of block **B** belonging to class \(c\).
Our first new loss specifically punishes region maps **R** which define segmentations with regions \(i\) in blocks **B** which contain more than one class according to the ground truth. We calculate
\[\mathcal{L}_{\textbf{Y}}=\frac{1}{N}\sum_{i,\textbf{B}}H(\textbf{P}_{\textbf{ B}}^{i}) \tag{11}\]
with the Gini impurity \(H(\textbf{P}_{\textbf{B}}^{i})=1-\sum_{c=1}^{|C|}\left(\textbf{P}_{\textbf{B}}^{i}(c) \right)^{2}\) and \(N\) equal to the number of blocks **B** multiplied by the number of leaf nodes \(i\) per BSP tree. \(\mathcal{L}_{\textbf{Y}}\) is the average Gini impurity across all regions \(i\) and blocks **B**. We use \(H\) as a proxy for the entropy. Using the actual entropy instead did not produce better results, however, due to including a logarithm, it was slower and less numerically stable to calculate. \(\mathcal{L}_{\textbf{Y}}\) ensures that every predicted region \(i\) contains only a single class according to the ground truth as this loss becomes smaller the closer \(\textbf{P}_{\textbf{B}}^{i}\) is to a one-hot vector, i.e., only one class is in the partition \(i\) of block **B**.
We also compute
\[\mathcal{L}_{s}=\frac{1}{N}\sum_{i,\textbf{B}}\max\{s_{\text{min}}-s_{\textbf {B}}^{i},0\} \tag{12}\]
with a hyperparameter \(s_{\text{min}}\) specifying a minimum desired region size. A minimum region size ensures that the predicted lines used for partitioning intersect the blocks they belong to. This makes sure that the model quickly learns to utilize all available regions rather than trying to find a solution that utilizes only a subset of the regions, which may result in a larger error long-term but may present itself as an undesirable local minimum during training. We observed that most \(8\times 8\) blocks contain only a single class according to the ground truth. Therefore, using only a single or very few partitions/regions per block may present itself as a local minimum to the model that is hard to avoid or to get out of during training.
While we do not enforce constraints on the normal **n** and the distance \(d\) used to calculate the signed distance function \(f\) in Eq. 5, we still want our model to favor predictions which result in sharp boundaries between regions \(i\), rather than having many pixels which belong to multiple regions partially. The hyperparameter \(\lambda\) can address this issue in theory, however, for any adjustment made to \(\lambda\), the model may learn to adjust its predictions of **n** and \(d\) accordingly, rendering \(\lambda\) useless. We therefore also compute the loss
\[\mathcal{L}_{\textbf{R}}=\frac{1}{|\textbf{I}|}\sum_{(x,y)\in\textbf{I}}H( \textbf{R}_{yx}) \tag{13}\]
where **I** is the set of image pixels and \(\textbf{R}_{yx}\) is the probability vector defining how likely pixel \((x,y)\in\textbf{I}\) belongs to each of the regions \(i\). By minimizing \(H\), we make sure that our model favors predictions of \(\textbf{R}_{yx}\) which are one-hot, thus defining sharp boundaries between regions. Again, the Gini impurity \(H\) serves as a proxy for the entropy and produces results of equal quality.
The full loss function we use for model training consists of cross-entropy \(\mathcal{L}_{\text{CE}}\) and the three additional region map **R** specific components just described:
\[\mathcal{L}_{\text{total}}=\mu_{1}\mathcal{L}_{\text{CE}}+\mu_{2}\mathcal{L}_{ \textbf{Y}}+\mu_{3}\mathcal{L}_{s}+\mu_{4}\mathcal{L}_{\textbf{R}}. \tag{14}\]
We set the loss weights \(\mu_{i}\) s.t. \(\sum_{i=1}^{4}\mu_{i}=1\). This constraint limits the range for hyperparameter optimization as any desired effective \(\mu_{i}\) can be achieved by adjusting the constrained \(\mu_{i}\) and the learning rate accordingly. \(\mathcal{L}_{\text{CE}}\) is the most important component of the loss as it is the only component affecting the leaf nodes and therefore the predicted class logits. All four loss components affect the region map **R** and therefore the inner nodes and the line parameters they contain. As initially described, \(\mathcal{L}_{\textbf{Y}}\), \(\mathcal{L}_{s}\) and \(\mathcal{L}_{\textbf{R}}\) are specifically designed to improve the predicted **R**.
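A compact sketch of Eqs. 8-14 in PyTorch could look as follows; the per-pixel region map layout, the value of \(s_{\text{min}}\), and the example loss weights \(\mu_{i}\) are illustrative assumptions, not the values used in our experiments.

```python
import torch
import torch.nn.functional as F

def gini(p, dim):
    # Gini impurity H(p) = 1 - sum_c p_c^2, used as a cheap proxy for the entropy
    return 1.0 - (p ** 2).sum(dim)

def region_map_losses(region, onehot, block=8, s_min=4.0, eps=1e-6):
    """Region-map-specific losses L_Y, L_s and L_R (Eqs. 8-13).

    region: (B, K, H, W) probability that each pixel belongs to each of the K leaf regions
    onehot: (B, C, H, W) one-hot ground-truth class vectors
    """
    B, K, H, W = region.shape
    C = onehot.shape[1]
    # Eq. 8: soft per-class pixel counts Y_B^i for every 8x8 block and region
    prod = region.unsqueeze(2) * onehot.unsqueeze(1)                     # (B, K, C, H, W)
    y = prod.reshape(B, K, C, H // block, block, W // block, block).sum(dim=(4, 6))
    s = y.sum(dim=2)                                                     # Eq. 9: region sizes
    p = y / (s.unsqueeze(2) + eps)                                       # Eq. 10: class distributions
    loss_y = gini(p, dim=2).mean()                                       # Eq. 11
    loss_s = F.relu(s_min - s).mean()                                    # Eq. 12
    loss_r = gini(region, dim=1).mean()                                  # Eq. 13
    return loss_y, loss_s, loss_r

def total_loss(logits, region, target, mu=(0.7, 0.1, 0.1, 0.1)):
    # Eq. 14: weighted sum of cross-entropy and the three region-map losses
    onehot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    loss_y, loss_s, loss_r = region_map_losses(region, onehot)
    ce = F.cross_entropy(logits, target)
    return mu[0] * ce + mu[1] * loss_y + mu[2] * loss_s + mu[3] * loss_r
```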
### _Class-Specific Trees per Block_
In order to allow our model, SegForestNet, to learn class-specific partitionings of \(8\times 8\) blocks into regions, we allow partitioning the classes \(C\) into subsets \(C_{j}\) and predicting a separate BSP tree for each subset. By creating a single subset with all classes our model can emulate the original BSPSegNet. By creating subsets which contain exactly one class we can create class-specific trees. Our approach generalizes to any partitioning of \(C\) though.
For each subset \(C_{j}\) we create a separate pair of decoders (shape and content). The encoder's output is split among all subsets s.t. there is no overlap in features going into any pair of two distinct decoders. Also, the feature dimensions and the split are specified s.t. \(F_{encoder}<F_{tree}\) for all decoders (see Fig. 4). The output dimension of the content decoder for each \(C_{j}\) is set to \(|C_{j}|\cdot N_{\text{leaves}}\) where \(N_{\text{leaves}}\) is the number of leaf nodes per tree, i.e., each content decoder only predicts class logits for the classes in its respective \(C_{j}\). For each \(C_{j}\) we obtain a semantic segmentation with a reduced number of classes. To obtain the final semantic segmentation with all
classes we simply concatenate the class logits predicted by all the trees.
The computation of \(\mathcal{L}_{\text{total}}\) so far only considered a single predicted tree. However, all loss components but \(\mathcal{L}_{\text{CE}}\) need to account for the fact that there are now multiple predicted trees. First, we calculate a separate \(\mathbf{Y}_{\mathbf{B}}^{i}\) (Eq. 8) for each \(C_{j}\). To do so, we split the one-hot vector \(\mathbf{Y}_{yx}\) into \(\mathbf{Y}_{yx}^{j}\), which consists of all the classes in \(C_{j}\), and \(\mathbf{Y}_{yx}^{\neg j}\), which consists of all the other classes. From these two vectors we compute a new one-hot class vector \(\mathbf{Y}_{yx}^{j^{\prime}}\) by appending \(\sum_{c\in C_{\neg j}}\mathbf{Y}_{yx}^{\neg j}(c)\) to \(\mathbf{Y}_{yx}^{j}\). The one-hot vector \(\mathbf{Y}_{yx}^{j^{\prime}}\) then consists of all classes in \(C_{j}\) with an extra dimension representing all other classes. This vector can be used to compute tree-specific versions of \(\mathbf{Y}_{\mathbf{B}}^{i}\) which can then be used for tree-specific loss components \(\mathcal{L}_{\mathbf{Y}}^{j}\), \(\mathcal{L}_{s}^{j}\) and \(\mathcal{L}_{\mathbf{R}}^{j}\). Finally, we use the equations
\[\mathcal{L}_{\mathbf{Y}}=\sum_{j}\frac{|C_{j}|}{|C|}\cdot\mathcal{L}_{\mathbf{ Y}}^{j} \tag{15}\]
\[\mathcal{L}_{s}=\sum_{j}\frac{|C_{j}|}{|C|}\cdot\mathcal{L}_{s}^{j} \tag{16}\]
\[\mathcal{L}_{\mathbf{R}}=\sum_{j}\frac{|C_{j}|}{|C|}\cdot\mathcal{L}_{\mathbf{ R}}^{j} \tag{17}\]
to calculate the loss components necessary for Eq. 14, i.e., we weight each tree-specific loss by the fraction of classes in its respective subset \(C_{j}\).
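The construction of \(\mathbf{Y}_{yx}^{j^{\prime}}\) and the weighting in Eqs. 15-17 amount to only a few lines of code, sketched below with our own function names and an assumed channel-first one-hot layout.

```python
import torch

def split_onehot(onehot, class_subset):
    """Build the per-subset one-hot tensor Y'^j: the classes in C_j plus one 'all other classes' channel.

    onehot:       (B, C, H, W) one-hot ground truth
    class_subset: list of class indices forming C_j
    """
    C = onehot.shape[1]
    others = [c for c in range(C) if c not in class_subset]
    y_j = onehot[:, class_subset]                        # channels for the classes in C_j
    y_rest = onehot[:, others].sum(dim=1, keepdim=True)  # single channel aggregating all other classes
    return torch.cat([y_j, y_rest], dim=1)               # (B, |C_j| + 1, H, W)

def combine_tree_losses(tree_losses, subsets, num_classes):
    # Eqs. 15-17: weight each tree-specific loss component by |C_j| / |C|
    return sum(len(c) / num_classes * l for c, l in zip(subsets, tree_losses))
```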
### _Generalized Differentiable Rendering of Trees_
The approach of using a region map \(\mathbf{R}\) for differentiable rendering generalizes to other signed distance functions \(f\) and tree structures. In this last subsection we discuss alternatives to \(f\) and how to implement other tree structures than binary space partitioning trees.
Eq. 5 can be used with any signed distance function \(f\), i.e., with any function which satisfies the following conditions:
1. \(f(\mathbf{p})=0\Leftrightarrow\) the point \(\mathbf{p}\) lies exactly on a boundary defined by \(f\)
2. \(\text{sgn}(f(\mathbf{p}))\) specifies on which side of the boundary defined by \(f\) the point \(\mathbf{p}\) lies
3. \(|f(\mathbf{p})|\) is the distance of \(\mathbf{p}\) to the boundary defined by \(f\)
The third condition can even be relaxed for the use in Eq. 5. It is sufficient if \(|f(\mathbf{p})|\) increases monotonically with the distance of \(\mathbf{p}\) to the boundary instead of it being the actual distance. Especially if \(\mathbf{p}\) is far from the boundary an approximate distance suffices.
So far, we only used lines as partition boundaries. Therefore, three values need to be predicted for each inner node: two values for the normal vector \(\mathbf{n}\) and one value for the distance \(d\) to the origin. The signed distance \(f_{1}(\mathbf{p})\) of a point \(\mathbf{p}\) to the line can then be computed as
\[f_{1}(\mathbf{p})=\mathbf{n}\cdot\mathbf{p}-d. \tag{18}\]
Fig. 5 shows partitionings defined by signed distance functions based on other geometric primitives. The corresponding equations are as follows. To compute the approximate signed distance \(f_{2}(\mathbf{p})\) to a square, its center \(\mathbf{x}\) (two values) and its size \(s\) (one value) need to be predicted:
\[f_{2}(\mathbf{p})=\max\{|\mathbf{x}_{1}-\mathbf{p}_{1}|,|\mathbf{x}_{2}- \mathbf{p}_{2}|\}-s. \tag{19}\]
In this equation \(\mathbf{x}_{i}\) and \(\mathbf{p}_{i}\) are the two components of the vectors \(\mathbf{x}\) and \(\mathbf{p}\). As with the square, the signed distance \(f_{3}(\mathbf{p})\) to a circle requires a predicted center \(\mathbf{x}\) and radius \(r\) (one value) as well:
\[f_{3}(\mathbf{p})=\|\mathbf{x}-\mathbf{p}\|_{2}^{2}-r. \tag{20}\]
This formulation is a pseudo signed distance which avoids the need to compute a square root and expects the model to learn to predict the square of the radius. The boundary of an ellipse can be defined by \(d_{1}+d_{2}=c\) where \(d_{i}\) are distances to fixed points and \(c\in\mathbb{R}^{+}\) is a constant. To compute an approximate signed distance \(f_{4}(\mathbf{p})\), two points \(\mathbf{x}\) and \(\mathbf{y}\), each consisting of two values, need to be predicted as well as one value for the constant \(c\). The signed distance then is
\[f_{4}(\mathbf{p})=\|\mathbf{x}-\mathbf{p}\|_{2}+\|\mathbf{y}-\mathbf{p}\|_{2 }-c. \tag{21}\]
Similar to an ellipse, a hyperbola can be defined by \(d_{1}-d_{2}=c\) with \(d_{i}\) and \(c\) as before. The corresponding approximate signed distance \(f_{5}(\mathbf{p})\) is
\[f_{5}(\mathbf{p})=\left|\|\mathbf{x}-\mathbf{p}\|_{2}-\|\mathbf{y}-\mathbf{p}\|_{2}\right|-c. \tag{22}\]
A parabola can be defined as the set of points satisfying \(d_{1}=d_{2}\) where \(d_{1}\) is the distance to a fixed point and \(d_{2}\) is the distance to a line. Therefore, five values need to be predicted, two for a fixed point \(\mathbf{x}\), two for a normal vector \(\mathbf{n}\) and one for a distance \(d\) of the line to the origin. The approximate signed distance \(f_{6}(\mathbf{p})\) is
\[f_{6}(\mathbf{p})=\|\mathbf{x}-\mathbf{p}\|_{2}-f_{1}(\mathbf{p}) \tag{23}\]
All (pseudo) signed distance functions \(f_{i}\) except \(f_{2}\) use formulations s.t. arbitrarily rotated, scaled and translated instances of the underlying geometric primitives used for partitioning can be predicted.
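The signed distance functions \(f_{1}\) to \(f_{6}\) translate directly into a few lines of tensor code. The sketch below assumes that \(\mathbf{p}\), \(\mathbf{x}\), \(\mathbf{y}\) and \(\mathbf{n}\) are tensors whose last dimension holds the two spatial coordinates, while \(d\), \(s\), \(r\) and \(c\) are broadcastable scalars; any of these functions can be plugged into Eq. 5 in place of \(f\).

```python
import torch

def sdf_line(p, n, d):          # Eq. 18
    return (n * p).sum(dim=-1) - d

def sdf_square(p, x, s):        # Eq. 19, approximate distance to an axis-aligned square
    return (x - p).abs().max(dim=-1).values - s

def sdf_circle(p, x, r):        # Eq. 20, pseudo distance using the squared radius
    return ((x - p) ** 2).sum(dim=-1) - r

def sdf_ellipse(p, x, y, c):    # Eq. 21
    return (x - p).norm(dim=-1) + (y - p).norm(dim=-1) - c

def sdf_hyperbola(p, x, y, c):  # Eq. 22
    return ((x - p).norm(dim=-1) - (y - p).norm(dim=-1)).abs() - c

def sdf_parabola(p, x, n, d):   # Eq. 23
    return (x - p).norm(dim=-1) - sdf_line(p, n, d)
```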
While BSPSegNet and SegForestNet only use BSP trees, differentiable rendering of other tree structures is possible by using the same region map \(\mathbf{R}\) approach. A \(k\)-d tree [1], as shown in Fig. 6, is a spatial partitioning data structure similar to a BSP tree that can be used to partition a space with an arbitrary number of dimensions. It is also a binary tree. Instead of using a signed distance function to decide on which side of a boundary a given point lies, each inner node of a \(k\)-d tree uses a threshold and a dimension index to decide whether a point lies in the left or right subset. The threshold \(t\) is a value that needs to be predicted while the dimension index \(i\) is a fixed value chosen when instantiating the model. Eqs. 5-7 can be used for \(k\)-d trees as well with the signed distance function
\[f_{7}(\mathbf{p})=t-\mathbf{p}_{i} \tag{24}\]
where \(\mathbf{p}_{i}\) is the \(i\)-th component of the vector \(\mathbf{p}\). An inner node of a \(k\)-d tree requires only a single parameter to be predicted as opposed to the three to five parameters required for the signed distance functions used by an inner node of a BSP tree. However, generally \(k\)-d trees need to be deeper, e.g., to be able to define slopes, which counteracts this advantage. This depth disadvantage can be mitigated
by making \(i\) a predicted parameter. We call a tree with such inner nodes a dynamic \(k\)-d tree. A downside of this mitigation strategy is that one or two additional parameters (depending on the implementation) need to be predicted per inner node to decide whether the first dimension \(i=1\) or the second dimension \(i=2\) shall be used for partitioning. Even with this extra flexibility a dynamic \(k\)-d tree still has a depth disadvantage in the worst case, e.g., in the aforementioned slope example.
A quadtree [9], shown in Fig. 7, is a spatial partitioning data structure specifically designed to partition two dimensional planes. In each inner node of a quadtree a point **x** is used to partition the space into four quadrants. Compared to a \(k\)-d tree, a quadtree inner node partitions a space along both dimensions instead of only a single fixed dimension \(i\) and it uses a different threshold \(t\) for each dimension by using the components of **x** as thresholds. The equations
\[t_{1}=\textbf{x}_{1}-\textbf{p}_{1},t_{2}=\textbf{x}_{2}- \textbf{p}_{2} \tag{25}\] \[t_{3}=\text{ReLU}(t_{1}),t_{4}=\text{ReLU}(-t_{1})\] (26) \[t_{5}=\text{ReLU}(t_{2}),t_{6}=\text{ReLU}(-t_{2})\] (27) \[\textbf{R}_{{\it{i}yx}}:=\textbf{R}_{{\it{i}yx}}+\lambda\cdot t_ {4}\cdot t_{5}\] (28) \[\textbf{R}_{{\it{i}yx}}:=\textbf{R}_{{\it{i}yx}}+\lambda\cdot t_ {3}\cdot t_{5}\] (29) \[\textbf{R}_{{\it{i}yx}}:=\textbf{R}_{{\it{i}yx}}+\lambda\cdot t_ {4}\cdot t_{6}\] (30) \[\textbf{R}_{{\it{i}yx}}:=\textbf{R}_{{\it{i}yx}}+\lambda\cdot t_ {3}\cdot t_{6} \tag{31}\]
are used to update the region map **R** for a quadtree at a location \(\textbf{p}=(x,y)\). Note: in Eqs. 28-31, \(i\) refers to indices of leaf nodes reachable from one of the child nodes. In Eq. 28, \(i\) needs to be set to the indices of the leaf nodes covering the top-left partition; in Eq. 29, \(i\) refers to the top-right partition; in Eq. 30, \(i\) refers to the bottom-left partition; and in Eq. 31, \(i\) refers to the bottom-right partition.
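For illustration, the quadtree update of Eqs. 25-31 could be implemented as sketched below; the mapping from quadrants to leaf node indices and the tensor shapes are assumptions chosen for readability.

```python
import torch
import torch.nn.functional as F

def quadtree_region_update(region, leaves_per_quadrant, x, p, lam=1.0):
    """Region map update for one quadtree inner node (Eqs. 25-31).

    region:              (K, H, W) accumulated region map values for the K leaf nodes
    leaves_per_quadrant: dict mapping 'tl'/'tr'/'bl'/'br' to lists of leaf node indices
    x:                   (2,) predicted partition point of the inner node
    p:                   (2, H, W) pixel coordinates
    """
    t1, t2 = x[0] - p[0], x[1] - p[1]          # Eq. 25
    t3, t4 = F.relu(t1), F.relu(-t1)           # Eq. 26
    t5, t6 = F.relu(t2), F.relu(-t2)           # Eq. 27
    updates = {"tl": t4 * t5, "tr": t3 * t5,   # Eqs. 28-29
               "bl": t4 * t6, "br": t3 * t6}   # Eqs. 30-31
    out = region.clone()
    for quadrant, leaves in leaves_per_quadrant.items():
        out[leaves] = out[leaves] + lam * updates[quadrant]
    return out
```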
This region map **R** based framework is general enough to even allow for trees in which different inner nodes are instances of different tree types (BSP, \(k\)-d, quadtree) or use different signed distance functions in the case of BSP tree inner nodes. When using class-specific trees, i.e., when partitioning the set of classes \(C\) into subsets \(C_{j}\), separate region maps specific to each of the subsets have to be computed anyway. Consequently, different tree configurations can be used for each of the subsets \(C_{j}\) if so desired.
## 4 Evaluation
In this section we evaluate our contributions. First, we describe the datasets and the test setup we used. We will then present a toy example which proves that our generalizations in section 3.6 may be helpful if the alternate signed distance functions or tree structures suit the data. We then explore good value ranges for the hyperparameters we introduced, followed by an ablation study. We finish the section with a comparison to state-of-the-art models.
### _Datasets_
We used seven different datasets for evaluation. We used aerial images, in particular true orthophotos, from the German cities Hannover, Buxtehude, Nienburg, Vaihingen and Potsdam. The latter two datasets are part of the ISPRS 2D Semantic Labeling Benchmark Challenge [18]. Additionally, we used images from Toulouse [33]. The Toulouse dataset also includes instance labels for buildings for performing panoptic segmentation or instance segmentation. However, we only used the semantic labels. Our last dataset was iSAID [42], which is actually an instance segmentation dataset based on the object detection dataset DOTA [44]. Again, we ignored the instance labels. However, due to its origin, iSAID has very sparse non-background labels, i.e., it cannot be considered a true semantic segmentation dataset, which would have much denser non-background label coverage.
Hannover, Buxtehude and Nienburg consist of 16 patches of \(2500\times 2500\) pixels each. We omitted one such patch from Hannover since it consisted almost entirely of trees. We randomly divided the image patches into subsets
Fig. 5: The decision boundaries created by signed distance functions based on different geometric primitives. From left to right: line, square, circle, ellipse, hyperbola, parabola. The blue area shows points for which the respective signed distance function is non-negative. As an example, in the inside of the circle \(f_{3}\) is negative whereas it is positive on the outside.
Fig. 6: Visualization of a \(k\)-d tree (left) and a region it partitions (right). The parameters of each inner node (blue) are a fixed dimension, indicated by the orientation of the line used for partitioning, and a predicted threshold, indicated by the position of the line along the fixed dimension.
Fig. 7: Visualization of a quadtree (left) and a region it partitions (right). The parameters of each inner node (blue) are the predicted coordinates of a point. This point partitions a given region into four quadrants which are then children of the associated inner node.
s.t. roughly \(70\%\) of pixels were used for training with the remainder split evenly into validation and test sets. Since Toulouse consists of only four images, we used two for training, one for validation and one for testing. iSAID is already pre-split into training and validation. We used the validation images as our test images and split the training set into training (\(95\%\) of pixels) and actual validation. For Vaihingen and Potsdam, we used the images originally released without ground truth as test images and used the remaining images as training (roughly \(90\%\) of pixels) and validation (roughly \(10\%\) of pixels).
The ground truth classes of the German city images are impervious surfaces, buildings, low vegetation, tree, car, and clutter/background, the latter two being rare. The Toulouse dataset also contains sports venues and water as separate classes in addition to the classes used for segmentation of the German cities. In Toulouse the classes car, sports venues, water, and background are rare. Since the background class is so rare and, at least in the case of Potsdam, is also used to express uncertainty about the true class, we ignored this class for all metrics, including training losses. The exception is iSAID: since less than \(3\%\) of its pixels belong to a non-background class, we did not ignore the background class in this case. iSAID has 15 non-background classes including different vehicle and sports venue categories.
For data augmentation, we randomly sampled 8000 input patches from the training images. The position (random translation) of the input patches was chosen s.t. input patches from the entire image could be sampled. The size of the square area being sampled was randomly scaled between \(70\%\) and \(130\%\) of \(224\times 224\) pixels, the actual input size, making objects randomly appear smaller or larger for augmentation. The same scaling was applied to both spatial dimensions. The random shearing was chosen independently for both axes, sampled uniformly from \([-6^{\circ},6^{\circ}]\). Random rotation depended on the dataset and was chosen from \([-50^{\circ},50^{\circ}]\) for Vaihingen and from \([-30^{\circ},30^{\circ}]\) for all other datasets. This dataset-specific augmentation for Vaihingen improved results on this dataset slightly. No flipping (horizontal or vertical) was used. All channels were normalized to zero mean and a standard deviation of 1. After normalization, noise sampled from a normal distribution with standard deviation \(0.1\) was added. The noise was sampled for each pixel and each channel individually. Furthermore, we used bilinear filtering. For the ground truth, we used the bilinear interpolation coefficients as voting weights to determine the discrete class for each pixel. Validation and test images were partitioned into axis-aligned input patches with no overlap and stride equal to the input patch size. We performed this input patch creation once for each dataset. Additionally, we computed the NDVI [34] for all datasets but iSAID, which lacks the necessary infrared channel. Other exceptions made for iSAID were sampling 28000 input patches and ensuring that the entropy of the class distribution of each sample was at least \(0.04\) to ensure that the patches actually contained non-background pixels.
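The geometric and photometric parts of this augmentation can be sketched as follows. The function names are ours, the rotation range is passed per dataset, and, for brevity, the sketch normalizes each patch with its own channel statistics, whereas statistics computed over the training images could be used instead.

```python
import numpy as np

def sample_geometric_params(rotation_range=30.0):
    # random scale, shear and rotation for one training patch (ranges as described above)
    return {
        "scale": np.random.uniform(0.7, 1.3),                             # relative to the 224x224 input size
        "shear_x": np.random.uniform(-6.0, 6.0),                          # degrees
        "shear_y": np.random.uniform(-6.0, 6.0),                          # degrees
        "rotation": np.random.uniform(-rotation_range, rotation_range),   # degrees
    }

def normalize_and_add_noise(patch, noise_std=0.1):
    # per-channel normalization to zero mean and unit standard deviation, then additive Gaussian noise
    mean = patch.mean(axis=(0, 1), keepdims=True)
    std = patch.std(axis=(0, 1), keepdims=True) + 1e-6
    patch = (patch - mean) / std
    return patch + np.random.normal(0.0, noise_std, size=patch.shape)
```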
We used all available input channels and therefore adapted the number of input channels of the very first convolution of every model to the dataset used. All datasets except Vaihingen, which has no green channel, consist of at least RGB images; the few grayscale images in iSAID were converted to RGB. The German cities additionally have an infrared channel and a digital surface model ("depth"). Toulouse consists of multispectral images with eight channels, including RGB and infrared.
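Adapting a backbone to these varying channel counts amounts to replacing its first convolution. One way this could look in PyTorch is sketched below; it assumes the first layer is a plain `nn.Conv2d` and the helper name is ours.

```python
import torch.nn as nn

def adapt_first_conv(conv: nn.Conv2d, in_channels: int) -> nn.Conv2d:
    """Return a copy of `conv` that accepts `in_channels` input channels,
    e.g. RGB + infrared + depth + NDVI instead of plain RGB."""
    return nn.Conv2d(in_channels,
                     conv.out_channels,
                     kernel_size=conv.kernel_size,
                     stride=conv.stride,
                     padding=conv.padding,
                     dilation=conv.dilation,
                     bias=conv.bias is not None)
```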
### _Training and Test Setup_
We used PyTorch v1.10 with CUDA 10.2 to train models on NVIDIA GeForce RTX 2080 Ti GPUs. Depending on the backbone used, we set the mini-batch size to 36 (MobileNetv2 [35]) or 18 (all other backbones). As an optimizer we used AdamW [20, 27] with a cosine annealing learning rate schedule [26]. As the loss function we used either categorical cross-entropy or our novel loss function (Eq. 14). For the cross-entropy part, we set class weights s.t. rarer classes have higher weights based on the class distribution in the training images.
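A minimal sketch of this training setup in PyTorch is given below; the inverse-frequency class weighting shown is only one plausible realization of "rarer classes have higher weights" and may differ from the exact scheme we used.

```python
import torch
import torch.nn as nn

def build_training_components(model, class_pixel_counts, lr, weight_decay, num_epochs):
    # AdamW with a cosine annealing learning rate schedule.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)

    # Rarer classes receive higher weights, derived from the training class distribution.
    counts = torch.as_tensor(class_pixel_counts, dtype=torch.float)
    weights = counts.sum() / (counts + 1.0)
    weights = weights / weights.mean()
    criterion = nn.CrossEntropyLoss(weight=weights)
    return optimizer, scheduler, criterion
```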
We used random search for hyperparameter optimization for all models, in particular to optimize their learning rates. For this, we computed the mean Intersection-over-Union (mIoU) on the validation subset of Buxtehude. Each run used a randomly chosen value for the hyperparameter being optimized, and we evaluated the run, and thus the hyperparameter value, by the maximum validation mIoU across all of its epochs. The final hyperparameters used can be found in our published code2. After hyperparameter optimization, we ran each experiment ten times and computed means and standard deviations from these ten runs. For the final comparison of our contributions to the state of the art, we again used the maximum validation mIoU to identify the epoch after which each model performed best and then evaluated these particular model parameters on the test data.
Footnote 2: core/defaults.yaml and cfgs/semseg.yaml
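The learning-rate search can be sketched as follows; the log-uniform sampling range and trial count are assumptions, and `train_and_validate` stands in for one full training run returning per-epoch validation mIoU values.

```python
import numpy as np

def random_search_learning_rate(train_and_validate, num_trials=50,
                                lr_low=1e-5, lr_high=1e-2):
    """Score each run by the maximum validation mIoU across its epochs."""
    best_lr, best_miou = None, -np.inf
    for _ in range(num_trials):
        # Sample the learning rate log-uniformly from [lr_low, lr_high].
        lr = float(np.exp(np.random.uniform(np.log(lr_low), np.log(lr_high))))
        run_miou = max(train_and_validate(lr))   # max over epochs, see text
        if run_miou > best_miou:
            best_lr, best_miou = lr, run_miou
    return best_lr, best_miou
```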
We also made some small modifications to state-of-the-art models and backbones to either improve their performance slightly or simplify them while keeping their performance the same: we used LeakyReLUs [46] instead of ReLUs and removed all atrous/dilated convolutions. For DeepLabv3+ we modified the strides of the backbone s.t. the spatial bottleneck had a downsampling factor of \(8\). Since we trained all models from scratch, we changed the first convolution in each backbone to accept all input channels offered by the respective dataset.
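The activation swap can be implemented generically, e.g. as in the sketch below (the negative slope value is an assumption, not the value used in our experiments).

```python
import torch.nn as nn

def replace_relu_with_leaky_relu(module: nn.Module, negative_slope: float = 0.01):
    """Recursively replace every nn.ReLU in `module` with an nn.LeakyReLU."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.LeakyReLU(negative_slope, inplace=True))
        else:
            replace_relu_with_leaky_relu(child, negative_slope)
```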
For the decoder of SegForestNet we used eight residual blocks with \(F_{intermediate}=96\) (see Fig. 4). For the shape features, \(F_{encoder}\) and \(F_{tree}\) were set to \(8\) and \(9\) respectively. For the content features, we chose \(F_{encoder}=16\) and \(F_{tree}=4\cdot|C|\) with \(|C|\) being the number of classes in the dataset. The numbers for \(F_{tree}\) are the result of using BSP trees with depth \(=2\), i.e., trees with three inner nodes and four leaf nodes.
### _Toy Experiment_
While the generalization to different signed distance functions and different tree structures does not provide benefits for the semantic segmentation of aerial images, we still performed two experiments using toy datasets to show that
these generalizations may prove useful in other contexts. In street scenes captured by cameras attached to cars, or in industrial applications, rounded shapes such as tires, pipes, or drums may occur frequently; a signed distance function capturing such shapes more accurately, e.g., \(f_{3}\) (Eq. 20), may therefore prove beneficial.
We created the first toy dataset by generating \(128\times 128\) pixel RGB images showing circles as depicted in Fig. 8. We only used the eight corners of the RGB cube as colors and assigned a different class to each corner/color. One color was used as background and circles of random translation, size and color were drawn onto the background. The background color was never used for any of the random circles. We used a model which subdivides the image into blocks of size \(32\times 32\) instead of \(8\times 8\). For each block a shallow BSP tree of depth one was predicted. We heavily limited the prediction capability of the model to make the visual difference more clear. We trained the model ten times using \(f_{1}\) (Eq. 18) in the inner nodes and ten times using \(f_{3}\) (Eq. 20) instead. Using \(f_{1}\), the model achieves an average validation mIoU of \(63.3\%\) (\(\pm 0.9\%\)) across the ten trainings, while the \(f_{3}\)-based model achieves \(64.9\%\) (\(\pm 1.9\%\)). The model with the more data-appropriate signed distance function, \(f_{3}\), performed better.
While a tree depth of one, i.e., only a single inner node in each tree, might suggest that every \(32\times 32\) block may contain only two different classes, there are blocks with three or more classes. An example is visible in the bottom-left of the \(f_{1}\)-based prediction in the last column of Fig. 8. This is due to the linear combination of the class logit predictions in the leaf nodes. To understand this phenomenon, suppose the predicted class logits are represented by the vectors \((v_{1},v_{2},0)\) and \((0,v_{2},v_{3})\) with \(v_{1}>v_{2}>0\) and \(v_{3}>v_{2}>0\) with the five missing dimensions for the other classes all being \(0\) in both vectors. For any pixel in the segmentation the region map consists of a vector of the form \((\alpha,1-\alpha)\), i.e., the predicted class logits are \(\alpha\cdot(v_{1},v_{2},0)+(1-\alpha)\cdot(0,v_{2},v_{3})=(\alpha v_{1},v_{2},(1-\alpha)v_{3})\). Depending on the value of \(\alpha\in(0,1)\), which itself is dependent on the signed distance of the pixel to the decision boundary, the largest class logit may be \(\alpha v_{1}\), \(v_{2}\), or \((1-\alpha)v_{3}\), thus resulting in three regions with different classes despite there being only a single decision boundary. With other class logit distributions even more distinct regions are possible.
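The following small numeric example (illustrative values satisfying the constraints above) reproduces this effect: a single decision boundary, i.e. a region map of the form \((\alpha,1-\alpha)\), can yield three different argmax classes depending on \(\alpha\).

```python
import numpy as np

v1, v2, v3 = 1.0, 0.6, 1.0           # satisfies v1 > v2 > 0 and v3 > v2 > 0
a = np.array([v1, v2, 0.0])           # class logits predicted by leaf node 1
b = np.array([0.0, v2, v3])           # class logits predicted by leaf node 2

for alpha in (0.9, 0.5, 0.1):         # region-map weight of leaf 1 for a pixel
    logits = alpha * a + (1.0 - alpha) * b
    print(alpha, logits, "-> class", int(np.argmax(logits)))
# alpha = 0.9 -> class 0, alpha = 0.5 -> class 1, alpha = 0.1 -> class 2
```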
The toy dataset for the second experiment was created by partitioning \(128\times 128\) pixel RGB images vertically into two segments and then partitioning each segment horizontally, resulting in examples as shown in Fig. 9. As before, we used the corners of the RGB cube as colors and classes. The resulting images can be perfectly segmented by a BSP tree of depth two using \(f_{1}\) as signed distance function as well as by a \(k\)-d tree of depth two. However, the \(k\)-d tree needs fewer parameters to describe the inner nodes and is thus more efficient. For both tree structures, BSP trees and \(k\)-d trees, we trained ten models using this toy dataset. Both models eventually learned to segment the toy images perfectly, with the \(k\)-d trees having the negligible advantage of reaching this state an epoch or two sooner. The experiment still shows that region map-based differentiable rendering generalizes to other tree structures, which may be desirable for types of data other than aerial images.
### _New Hyperparameters_
We introduced several new hyperparameters throughout our contributions: \(\lambda\) in Eq. 5 and \(\mu_{1}\) to \(\mu_{4}\) in Eq. 14. To study them, we used SegForestNet with MobileNetv2 as backbone, BSP trees using \(f_{1}\) (Eq. 18) as signed distance function, and a tree depth of \(2\). As we expect \(\lambda\) to have no real effect anyway, especially since we introduced \(\mu_{4}\), we simply set \(\lambda=1\) and did not study it further. We ran random searches with 100 to 200 iterations to optimize each of the different \(\mu_{i}\). We optimized \(\mu_{3}\) and \(\mu_{4}\) together in the same random search. We set \(s_{\text{min}}=8\) (Eq. 12), i.e., each of the four regions should use at least one eighth of the area of each block. We chose this as a compromise, so that each region in an \(8\times 8\) block contributes a significant part without restricting the model too much by setting \(s_{\text{min}}\) too high.
The results of our random searches are shown in Fig. 10. The impact of cross-entropy, which was controlled by \(\mu_{1}\), was the most significant. It also had a rather large range of \(0.91\) to \(0.95\) in which it produced optimal results. The other \(\mu_{i}\) produced a slight increase in performance
Fig. 8: Toy dataset showing randomly created circles on a solid background. The ground truth visualization is equal to the input images, as there was a one-to-one correspondence of colors to classes. Using a signed distance function more suitable to the data (third row) produces more accurate predictions compared to using the default line-based signed distance function (second row). The restricted models in rows two and three were chosen to make the difference more visually obvious. The last row shows the prediction of a less restricted model to prove that the alternate signed distance functions can produce good results.
Fig. 9: Toy dataset for investigating the impact of different tree structures. Again, the ground truth visualization is equal to the input images. Both tested tree structures can segment the images in the dataset perfectly eventually if the model is trained sufficiently long.
as they were increased, especially when considering the lower bound of the performance, with rather sharp drops in performance beyond a certain threshold. For \(\mu_{2}\) that threshold was \(0.035\), while it was \(0.01\) for \(\mu_{3}\) and \(\mu_{4}\). We therefore chose values for \(\mu_{2}\) to \(\mu_{4}\) which were just below that threshold: \(\mu_{2}=0.034\) and \(\mu_{3}=\mu_{4}=0.0095\). We then computed \(\mu_{1}=1-(\mu_{2}+\mu_{3}+\mu_{4})=0.947\) which is still in the optimal range for \(\mu_{1}\). We therefore picked these values for the hyperparameters in Eq. 14.
In order to verify that the different loss components have the desired effects, we created a visualization of the region map \(\mathbf{R}\) and the Gini impurity \(H\) (Eq. 11). To create these visualizations, we used a variant of Eq. 4 in which we used colors or \(H(\mathbf{P_{B}^{i}})\) in place of the class logits \(\mathbf{v}_{i}\). Using colors (red, green, blue, and cyan) instead of class logits created a visualization showing which pixel belongs to which of the four regions in a block (Fig. 11, middle row). Using \(H(\mathbf{P_{B}^{i}})\) instead created a visualization highlighting pixels in gray or even white which belong to regions which contain more than one class according to the ground truth (Fig. 11, bottom row). Using our novel loss with optimized values for \(\mu_{i}\) created region maps with the desired minimum region size and sharp region boundaries. Furthermore, the average \(H(\mathbf{P_{B}^{i}})\) is smaller compared to using cross-entropy. Cross-entropy also created blurry boundaries and seemed to associate certain regions with certain classes: the green region was used for buildings (blue in the ground truth), the cyan region was used for trees (red in the ground truth) and the red region for the remaining classes. Using our novel loss but with unoptimized \(\mu_{i}\), in particular setting \(\mu_{2}\) (pure regions containing only one class according to the ground truth) and \(\mu_{3}\) (minimum region size) to \(0\), showed that the individual losses have the desired effects. The average \(H(\mathbf{P_{B}^{i}})\) is slightly higher (hard to see in Fig. 11) and certain regions (blue and cyan) are never used. Still, the region boundaries are sharp since \(\mu_{4}\) is sufficiently high.
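For reference, a minimal sketch of the Gini impurity underlying the bottom-row visualizations, assuming the standard definition \(H(p)=1-\sum_{c}p_{c}^{2}\):

```python
import numpy as np

def gini_impurity(class_distribution):
    """Gini impurity of a discrete class distribution (assumed to sum to 1)."""
    p = np.asarray(class_distribution, dtype=float)
    return 1.0 - float(np.sum(p ** 2))

print(gini_impurity([1.0, 0.0, 0.0]))   # 0.0  -> pure region
print(gini_impurity([0.5, 0.3, 0.2]))   # 0.62 -> region mixing several classes
```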
### _Ablation Study_
In an ablation study, shown in Table I, we examined the effect of each contribution individually. As before, we used SegForestNet with MobileNetv2 as backbone. The new region map computation (Eqs. 5 to 7) had the greatest impact. Not only did it improve performance and reduce the variance, it was also required for the novel loss to actually provide a benefit. Without the new region map computation, the new decoder architecture also provided a significant performance improvement, but without the reduction in variance. With the new region map computation there was still a small performance improvement provided by the new decoder architecture. Furthermore, the new decoder architecture reduced the total model size by \(5\%\) due to each decoder using only \(85.3\)k parameters instead of \(137.7\)k. As mentioned before, the novel loss was only beneficial in combination with the new region map computation, and even then the improvements were rather small. Overall, there was an improvement of almost \(1\%\) when using all contributions. We did not include class-specific trees in this ablation study as their benefit is dataset dependent, which we show in the next section.
### _Comparison to State-Of-The-Art_
Finally, we compared SegForestNet to state-of-the-art models. Results are shown in Table II and Fig. 12 with samples of predicted segmentations in Fig. 13. We grouped models according to their number of parameters (Table III) in order to be able to compare models of similar size and complexity with each other. We created four groups, small models (less than five million parameters), medium size models (around ten million parameters), large models (20 to 30 million parameters) and very large models (more than 30 million parameters). Since all tested models except U-Net and U-Net++ have interchangeable backbones we tested all models in a smaller variant, using MobileNetv2 as a backbone, and a larger variant, using Xception as a backbone. With these two backbones, SegForestNet appears in the small model and the large model group. Additionally, SegForestNet was tested in two variants, the base variant in which only a single BSP tree per \(8\times 8\) block was predicted, and a variant, SegForestNet*, in which one BSP tree per class per block was predicted. Some older models, e.g., U-Net, still show competitive performance and thus we chose a selection of
\begin{table}
\begin{tabular}{c c c|c} new decoder & new region & novel & \\ & map computation & loss & mIoU \\ \hline ✓ & ✓ & ✓ & \(77.4\%\pm 0.2\%\) \\ ✓ & ✓ & & \(77.3\%\pm 0.1\%\) \\ ✓ & & ✓ & \(76.8\%\pm 1.5\%\) \\ ✓ & & & \(77.3\%\pm 0.3\%\) \\ \hline & ✓ & ✓ & \(77.3\%\pm 0.2\%\) \\ & ✓ & & \(77.1\%\pm 0.2\%\) \\ & & ✓ & \(73.1\%\pm 1.9\%\) \\ & & & \(76.5\%\pm 1.5\%\) \\ \end{tabular}
\end{table} TABLE I: Ablation study. The top-most row is SegForestNet with all our contributions, while the bottom-most row is the original BSPsegNet. The last column shows max. validation mIoU on Buxtehude.
Fig. 10: Validation mIoU on Buxtehude across \(\mu_{i}\) (Eq. 14). As all \(\mu_{i}\) except \(\mu_{3}\) and \(\mu_{4}\) were optimized in separate random searches, the corresponding mIoU ranges differ. However, as only finding the proper settings for \(\mu_{i}\) is important, the exact mIoU ranges do not matter anyway. The highlighted intervals show the optimal range for each \(\mu_{i}\).
older and newer models to compare SegForestNet to: FCN [25], DeepLab v3+ [4], (S-)RA-FCN [29], PFNet [24], FarSeg [50], U-Net++ [51] and U-Net [32].
We trained all models ourselves: we used optimized parameters suggested by the respective authors, except that we optimized the learning rates via random search. All models were trained ten times. To avoid overfitting, we used the validation mIoU for each training run to identify the epoch in which the model performed best and then used the parameters from this best epoch to determine the mIoU on the test set. All results shown are the average test mIoUs.
There is no clear winner between SegForestNet and its variant SegForestNet*. Sometimes, e.g., when using the MobileNetv2 backbone to train a model on Potsdam or iSAID, SegForestNet performs better, while at other times, e.g., with the same backbone but training on Hannover or Toulouse, SegForestNet* performs better. The situation may even differ for different backbones on the same dataset, e.g., when using the Xception backbone to train a model on Potsdam, SegForestNet* outperforms SegForestNet. Therefore, no clear recommendation whether to use class-specific segmentations (SegForestNet*) or not (SegForestNet) can be given. Rather, this has to be determined based on the backbone and dataset to be used.
On all datasets except iSAID most models performed well, even older models. The exceptions were RA-FCN and FarSeg. RA-FCN consistently performed noticeably worse than all other models. It also has scaling issues, as its number of parameters increased far more than that of other models when going from the MobileNetv2 backbone to the Xception backbone, while at the same time its performance became worse instead of better. FarSeg's performance with MobileNetv2 was decent (when ignoring the full model's relatively large size), but its performance became very unstable when using Xception. Due to these noticeable performance deficits, we excluded RA-FCN and FarSeg from Fig. 12.
Generally, the best performing models appeared to be SegForestNet (in at least one variant), PFNet, U-Net++ and U-Net. In the small and large model brackets the best
Fig. 11: Visualization of the region map **R** and the Gini impurity \(H\) (Eq. 11) for different loss functions and different values for \(\mu_{i}\) (Eq. 14). The second to last column used optimized values for \(\mu_{i}\), while the last column used \(\mu_{1}=0.95\), \(\mu_{2}=\mu_{3}=0\), and \(\mu_{4}=0.05\). These visualizations show that our novel loss achieves the desired effects of sharp region boundaries and minimum region sizes (middle row) and pure regions containing just a single class according to the ground truth (bottom row).
Fig. 12: Visualization of the columns for Potsdam and Hannover from Table II. Dots show means and bars indicate the standard deviation. The symbol \(\dagger\) is used as in Table II.
TABLE IV: Count of how often each model appears in the Pareto front of a given dataset (compare Fig. 12). The SegForestNet entry combines SegForestNet and SegForestNet*: if either of the two is in the Pareto front of a given dataset, the count for SegForestNet is increased by 1. The symbol \(\dagger\) is used as in Table II.
\begin{table}
\begin{tabular}{c|c c c c c c c c} & Hannover & Nienburg & Bustrehude & Potsdam & Vaihingen & Toulouse & iSAID \\ \hline FCN & 72.6\(\pm 0.2\%\) & 73.2\(\pm 0.4\%\) & 76.2\(\pm 0.2\%\) & 78.1\(\pm 0.2\%\) & 72.0\(\pm 0.2\%\) & 54.1\(\pm 2.9\%\) & 38.3\(\pm 0.9\%\) \\ SegForestNet & 72.9\(\pm 0.5\%\) & 78.3\(\pm 0.3\%\) & 76.2\(\pm 0.3\%\) & 78.9\(\pm 0.3\%\) & 72.9\(\pm 0.3\%\) & 52.9\(\pm 3.2\%\) & 44.5\(\pm 0.7\%\) \\ DeepLab v3+ & 72.9\(\pm 0.3\%\) & 73.8\(\pm 0.3\%\) & 76.5\(\pm 0.3\%\) & 77.8\(\pm 0.8\%\) & 72.2\(\pm 0.3\%\) & 52.8\(\pm 3.8\%\) & 34.9\(\pm 1.2\%\) \\ RA-FCN & 71.0\(\pm 0.4\%\) & 71.0\(\pm 0.4\%\) & 74.1\(\pm 0.8\%\) & 74.1\(\pm 1.1\%\) & 70.3\(\pm 0.3\%\) & 49.4\(\pm 2.2\%\) & 35.4\(\pm 4.1\%\) \\ SegForestNet & **73.6\(\pm 0.2\%\)** & 74.1\(\pm 0.3\%\) & 76.2\(\pm 0.2\%\) & 78.8\(\pm 0.3\%\) & **72.9\(\pm 0.2\%\)** & **54.2\(\pm 1.6\%\)** & 42.8\(\pm 0.5\%\) \\ PFNet & 73.0\(\pm 0.4\%\) & **74.2\(\pm 0.3\%\)** & **76.8\(\pm 0.4\%\)** & **78.9\(\pm 0.3\%\)** & 72.6\(\pm 0.2\%\) & 53.9\(\pm 1.3\%\) & **45.8\(\pm 0.4\%\)** \\ \hline FarSeg & 72.9\(\pm 0.3\%\) & 74.2\(\pm 0.2\%\) & 76.7\(\pm 0.2\%\) & **77.6\(\pm 0.5\%\)** & **72.2\(\pm 0.3\%\)** & 54.5\(\pm 1.9\%\) & 43.4\(\pm 0.8\%\) \\ U-Net+ & **73.8\(\pm 0.3\%\)** & **75.6\(\pm 0.2\%\)** & **77.4\(\pm 0.2\%\)** & 77.3\(\pm 0.3\%\) & 71.7\(\pm 0.2\%\) & **56.8\(\pm 0.2\%\)** & **45.4\(\pm 0.6\%\)** \\ \hline FCN & 73.4\(\pm 0.1\%\) & 74.1\(\pm 0.2\%\) & 76.4\(\pm 0.2\%\) & 79.4\(\pm 0.2\%\) & 72.7\(\pm 0.2\%\) & 54.3\(\pm 1.1\%\) & 48.4\(\pm 0.5\%\) \\ SegForestNet\({}^{\dagger}\) & 73.7\(\pm 0.5\%\) & 74.5\(\pm 0.3\%\) & 76.8\(\pm 0.1\%\) & 79.1\(\pm 0.3\%\) & 73.1\(\pm 0.4\%\) & 55.1\(\pm 0.7\%\) & 50.4\(\pm 0.6\%\) \\ DeepLab v3+ & 73.5\(\pm 0.2\%\) & 74.7\(\pm 0.2\%\) & 76.6\(\pm 0.2\%\) & 78.8\(\pm 0.2\%\) & 72.7\(\pm 0.3\%\) & 55.2\(\pm 0.6\%\) & 48.9\(\pm 0.3\%\) \\ SegForestNet\({}^{\dagger}\) & **74.0\(\pm 0.3\%\)** & 74.3\(\pm 0.5\%\) & 76.9\(\pm 0.2\%\) & 79.5\(\pm 0.2\%\) & 72.9\(\pm 0.3\%\) & 53.8\(\pm 1.3\%\) & 49.2\(\pm 0.8\%\) \\ PFNet\({}^{\dagger}\) & 73.9\(\pm 0.2\%\) & **74.9\(\pm 0.2\%\)** & **77.2\(\pm 0.2\%\)** & **79.6\(\pm 0.4\%\)** & **73.3\(\pm 0.3\%\)** & **55.2\(\pm 1.6\%\)** & **50.6\(\pm 0.2\%\)** \\ FarSeg\({}^{\dagger}\) & 54.3\(\%\) & 29.6\(\%\) & 61.7\(\pm 25.9\%\) & 56.3\(\pm 29.6\%\) & 64.3\(\pm 27.6\%\) & 59.5\(\pm 25.9\%\) & 54.5\(\pm 1.8\%\) & 46.3\(\pm 3.2\%\) \\ \hline U-Net & **74.4\(\pm 0.1\%\)** & **75.8\(\pm 0.2\%\)** & **77.2\(\pm 0.5\%\)** & **78.2\(\pm 0.5\%\)** & **71.5\(\pm 0.3\%\)** & **57.3\(\pm 0.9\%\)** & **47.8\(\pm 0.3\%\)** \\ RA-FCN\({}^{\dagger}\) & \(67.0\pm 1.5\%\) & 69.1\(\pm 0.3\%\) & 71.4\(\pm 0.8\%\) & 72.6\(\pm 0.8\%\) & 65.5\(\pm 1.5\%\) & 48.2\(\pm 1.9\%\) & 34.0\(\pm 1.4\%\) \\ \end{tabular}
\end{table} TABLE II: Average test mIoU across ten runs for various models. Our models are SegForestNet and SegForestNet*. Models marked with \(\dagger\) use Xception as the backbone, while all others use MobileNetv2 instead. The two U-Net variants are an exception, since they have a fixed architecture rather than exchangeable backbones. The models are grouped according to their size (Table III) with the best model in each bracket highlighted in bold.
\begin{table}
\begin{tabular}{c|c} \# of model parameters [M] \\ \hline FCN & 1.8 \\ SegForestNet & 2.0 \\ DeepLab v3+ & 2.3 \\ RA-FCN & 2.3 \\ SegForestNet* & 2.9 - 4.7 \\ PFNet & 4.9 \\ \hline Farseg & 9.1 \\ U-Net++ & 9.2 \\ \hline FCN & 20.9 \\ SegForestNet\({}^{\dagger}\) & 21.1 \\ DeepLab v3+ & 22.2 \\ SegForestNet* & 22.2 - 24.4 \\ PFNet\({}^{\dagger}\) & 26.4 \\ FarSeg\({}^{\dagger}\) & 29.2 \\ \hline U-Net & 31.4 \\ RA-FCN\({}^{\dagger}\) & 40.0 \\ \end{tabular}
\end{table} TABLE III: Size of the tested models in terms of number of model parameters as a proxy for model complexity. We grouped the models in four brackets: small (less than 5M), medium (about 10M), large (20-30M), and very large (more than 30M). The symbol \(\dagger\) is used as in Table II.
the dataset and the available computational budget.
Fig. 13 shows a qualitative comparison of the best models. All of these models provide good results, while no model is perfect. SegForestNet generally performs better for cars. The other models more often overlook cars entirely and tend to have multiple distinct car segments merge into each other. Areas in shadow prove difficult for all shown models, with sealed surfaces sometimes mistaken for buildings, or low vegetation/grass mistaken for sealed surface. SegForestNet tends to have straighter lines where appropriate, e.g., buildings and cars, while still being able to predict sufficiently rounded shapes, e.g., trees. However, when errors occur, e.g., when low vegetation in shadow gets mistaken for a sealed surface, SegForestNet produces visible block artefacts.
## 5 Conclusion
In this paper we present a model for the semantic segmentation of aerial images using binary space partitioning trees, as well as three modifications to this model to improve its
Fig. 13: Samples of validation and test examples from Hannover. All models except U-Net use Xception as backbone.
performance. The first two modifications, a refined decoder and a new region map computation strategy, are aimed at improving gradients during backpropagation, while the third is a novel loss function improving the shape of the predicted segments. The last contribution is an extension which enables class-specific segmentations. Taking all modifications together, our model achieves state-of-the-art performance while using up to \(60\%\) fewer model parameters with a small backbone model or up to \(20\%\) fewer model parameters with a large backbone model. In the future, we want to investigate how to let our model learn by itself what the optimal type of tree, signed distance function in each inner node, and number of trees is for a given dataset. This would reduce the number of design decisions that have to be made when applying our model. Additionally, we want to expand our model to be able to predict instance IDs in order to perform instance and/or panoptic segmentation.
|
2310.01794 | GNNX-BENCH: Unravelling the Utility of Perturbation-based GNN Explainers
through In-depth Benchmarking | Numerous explainability methods have been proposed to shed light on the inner
workings of GNNs. Despite the inclusion of empirical evaluations in all the
proposed algorithms, the interrogative aspects of these evaluations lack
diversity. As a result, various facets of explainability pertaining to GNNs,
such as a comparative analysis of counterfactual reasoners, their stability to
variational factors such as different GNN architectures, noise, stochasticity
in non-convex loss surfaces, feasibility amidst domain constraints, and so
forth, have yet to be formally investigated. Motivated by this need, we present
a benchmarking study on perturbation-based explainability methods for GNNs,
aiming to systematically evaluate and compare a wide range of explainability
techniques. Among the key findings of our study, we identify the Pareto-optimal
methods that exhibit superior efficacy and stability in the presence of noise.
Nonetheless, our study reveals that all algorithms are affected by stability
issues when faced with noisy data. Furthermore, we have established that the
current generation of counterfactual explainers often fails to provide feasible
recourses due to violations of topological constraints encoded by
domain-specific considerations. Overall, this benchmarking study empowers
stakeholders in the field of GNNs with a comprehensive understanding of the
state-of-the-art explainability methods, potential research problems for
further enhancement, and the implications of their application in real-world
scenarios. | Mert Kosan, Samidha Verma, Burouj Armgaan, Khushbu Pahwa, Ambuj Singh, Sourav Medya, Sayan Ranu | 2023-10-03T04:42:44Z | http://arxiv.org/abs/2310.01794v3 | GnnX-Bench: Unravelling the Utility of Perturbation-based Gnn Explainers through In-depth Benchmarking
###### Abstract
Numerous explainability methods have been proposed to shed light on the inner workings of Gnns. Despite the inclusion of empirical evaluations in all the proposed algorithms, the interrogative aspects of these evaluations lack diversity. As a result, various facets of explainability pertaining to Gnns, such as a comparative analysis of counterfactual reasoners, their stability to variational factors such as different Gnn architectures, noise, stochasticity in non-convex loss surfaces, feasibility amidst domain constraints, and so forth, have yet to be formally investigated. Motivated by this need, we present a benchmarking study on perturbation-based explainability methods for Gnns, aiming to systematically evaluate and compare a wide range of explainability techniques. Among the key findings of our study, we identify the Pareto-optimal methods that exhibit superior efficacy and stability in the presence of noise. Nonetheless, our study reveals that all algorithms are affected by stability issues when faced with noisy data. Furthermore, we have established that the current generation of counterfactual explainers often fails to provide feasible recourses due to violations of topological constraints encoded by domain-specific considerations. Overall, this benchmarking study empowers stakeholders in the field of Gnns with a comprehensive understanding of the state-of-the-art explainability methods, potential research problems for further enhancement, and the implications of their application in real-world scenarios.
## 1 Introduction and Related Work
Gnns have shown state-of-the-art performance in various domains including social networks [32; 13], drug discovery [56; 35; 36], modeling of physical systems [44; 8; 9; 10], event detection [12; 25], and recommendation engines [58]. Unfortunately, like other deep-learning models, Gnns are black boxes due to their lack of transparency and interpretability. This lack of interpretability is a significant barrier to their adoption in critical domains such as healthcare, finance, and law enforcement. In addition, the ability to explain predictions is critical towards understanding potential flaws in the model and generating insights for further refinement. To impart interpretability to Gnns, several algorithms that explain the inner workings of Gnns have been proposed. The diversified landscape of Gnn explainability research is visualized in Fig. 1. We summarize each of the categories below:
* **Model-level:** Model-level or global explanations [60, 19, 54] are concerned about the overall behavior of the model and searches for patterns in the set of predictions made by the model.
* **Instance-level:** Instance-level or local explainers [59, 30, 40, 62, 18, 61, 29, 43, 27, 6, 1, 50] provide explanations for specific predictions made by a model. For instance, these explanations reason about why a particular instance or input is classified or predicted in a certain way.
* **Gradient-based:** The gradient-based explainers [33, 7] follow the idea of the rate of change being represented by gradients. Additionally, the gradient of the prediction with respect to the input represents the sensitivity of the prediction with respect to the input. This sensitivity gives the importance scores and helps in finding explanations.
* **Decomposition-based:** These explainers [33, 7, 39, 38] consider the prediction of the model to be decomposed and distributed backwards in a layer by layer fashion and the score of different parts of the input can be construed as its importance to the prediction.
* **Perturbation-based:** These methods [59, 30, 62, 18, 29, 43, 27, 6, 31, 1, 50] utilize perturbations of the input to identify important subgraphs that serve as factual or counterfactual explanations.
* **Surrogate:** Surrogate methods [64, 47, 18] use the generic intuition that in a smaller range of input values, the relationship between input and output can be approximated by interpretable functions. The methods fit a simple and interpretable surrogate model in the locality of the prediction.
In addition to the methodology employed by explanation algorithms, the type of explanation offered represents a crucial component. Explanations can be broadly classified into two categories: _factual_ reasoning [59, 30, 40, 62, 18] and _counterfactual_ reasoning [29, 43, 31, 6, 1, 50].
* **Factual explanations** provide insights into the rationale behind a specific prediction by identifying the minimal subgraph that is sufficient to yield the same prediction as the entire input graph.
* **Counterfactual explanations**, on the other hand, elucidate why a particular prediction was not made by presenting alternative scenarios that could have resulted in a different decision. In the context of graphs, this involves identifying the smallest perturbation to the input graph that alters the prediction of the Gnn. Perturbations typically involve the removal of edges or modifications to node features. Counterfactual reasoners possess an additional advantage compared to factual reasoning, as they provide a recourse mechanism [46]. For instance, in the domain of drug discovery [21, 52], mutagenicity represents an undesirable property that impedes a molecule's potential as a drug [23]. While factual explainers can attribute the subgraph responsible for mutagenicity, counterfactual reasoners can identify this subgraph along with the alterations required to render the molecule non-mutagenic.
### Existing Benchmarking Studies on Gnn Explainability
GraphFrameX [4] and GraphXAI [2] represent two notable benchmarking studies. While both investigations have contributed valuable insights into Gnn explainers, certain unresolved investigative aspects persist.
* **Inclusion of counterfactual explainability:** GraphFrameX and GraphXAI have focused on factual explainers for GNNs. [34] has discussed methods and challenges, but benchmarking on counterfactual explainers remains underexplored.
* **Achieving Comprehensive coverage:** Existing literature encompasses seven perturbation-based factual explainers. However, GraphFrameX and GraphXAI collectively assess only GnnExplainer [59], PGExplainer [30], and SubgraphX [62].
Figure 1: Structuring the space of the existing methods on Gnn explainability as follows. **Gradient:** SA [7], Guided-BP [7], Grad-CAM [33]; **Decomposition:** Excitation-BP [33], GNN-LRP [38], CAM [33]; **Perturbation:** GNNExplainer [59], PGExplainer [30], SubgraphX [62], GEM [27], TAGExplainer [51], CF\({}^{2}\)[43], RCExplainer [6],CF-GNNExplainer [29], CLEAR [31]; **Surrogate:** GraphLime [18], Relex [64], PGM-Explainer [47]; **Global:** XGNN [60], GLG-Explainer [5], Xuanyuan et al. [54], GCFExplainer [19].
* **Empirical investigations:** How susceptible are the explanations to topological noise, variations in Gnn architectures, or optimization stochasticity? Do the counterfactual explanations provided align with the structural and functional integrity of the underlying domain? To what extent do these explainers elucidate the Gnn model as opposed to the underlying data? Are there standout explainers that consistently outperform others in terms of performance? These are critical empirical inquiries that necessitate attention.
### Contributions
In this benchmarking study, we systematically study perturbation-based factual and counter-factual explainers and identify their strengths and limitations in terms of their ability to provide accurate, meaningful, and actionable explanations for Gnn predictions. Overall, we make the following key contributions:
* **Comprehensive evaluation encompassing counterfactual explainers:** The benchmarking study encompasses seven factual explainers and four counterfactual explainers. The proposed work is the first benchmarking study on counterfactual explainers for Gnns.
* **Novel insights:** The findings of our benchmarking study unveil stability to noise and variational factors, and generating feasible counterfactual recourses as two critical technical deficiencies that naturally lead us towards open research challenges.
* **Codebase:** As a by-product, a meticulously curated, publicly accessible code base is provided ([https://github.com/Armagaan/gnn-x-bench](https://github.com/Armagaan/gnn-x-bench)).
To keep the benchmarking study focused, we investigate only the perturbation-based explainability methods (highlighted in green in Fig. 1). While model-level explainers operate on the Gnn model, other forms of instance-level explainers yield diverse outputs spanning Directed Acyclic Graphs in PGMExplainer [47], model weights in Graph-Lime [18], node sets in Grad-CAM [33], among others. Consequently, enforcing a standardized set of inquiries across all explainers is not meaningful.
## 2 Preliminaries and Background
We use the notation \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) to represent a graph, where \(\mathcal{V}\) denotes the set of nodes and \(\mathcal{E}\) denotes the set of edges. Each node \(v_{i}\in\mathcal{V}\) is associated with a feature vector \(x_{i}\in\mathbb{R}^{d}\). We assume there exists a Gnn \(\Phi\) that has been trained on \(\mathcal{G}\) (or a set of graphs). Existing work has predominantly assumed the Gnn model \(\Phi\) to be a message-passing architecture such as Gcn[24], GraphSage[17], Gat[45], and Gin[53]. Therefore, we will base our subsequent discussion on this assumption.
The literature on Gnn explainability has primarily focused on _graph classification_ and _node classification_, and hence the output space is assumed to be categorical. In graph classification, we are given a set of graphs as input, each associated with a class label. The task of the Gnn \(\Phi\) is to correctly predict this label. In the case of node classification, class labels are associated with each node and the predictions are performed on nodes. In a message-passing Gnn of \(\ell\) layers, the embedding of a node is a function of its \(\ell\)-hop neighborhood. We use the term _inference subgraph_ to refer to this \(\ell\)-hop neighborhood. Henceforth, we will assume that graph refers to the inference subgraph for node classification. Factual and counterfactual reasoning over Gnns are defined as follows.
**Definition 1** (Perturbation-based Factual Reasoning): _Let \(\mathcal{G}\) be the input graph and \(\Phi(\mathcal{G})\) the prediction on \(\mathcal{G}\). Our task is to identify the smallest subgraph \(\mathcal{G}_{S}\subseteq\mathcal{G}\) such that \(\Phi(\mathcal{G})=\Phi(\mathcal{G}_{S})\). Formally, the optimization problem is expressed as follows:_
\[\mathcal{G}_{S}=\arg\min_{\mathcal{G}^{\prime}\subseteq\mathcal{G},\;\Phi(\mathcal{G})=\Phi(\mathcal{G}^{\prime})}||\mathcal{A}(\mathcal{G}^{\prime})|| \tag{1}\]
_Here, \(\mathcal{A}(\mathcal{G}_{S})\) denotes the adjacency matrix of \(\mathcal{G}_{S}\), and \(||\mathcal{A}(\mathcal{G}_{S})||\) is its L1 norm which is equivalent to the number of edges. Note that if the graph is undirected, the number of edges is half of the L1 norm. Nonetheless, the optimization problem remains the same._
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
**Method** & **Subgraph Extraction Strategy** & **Scoring function** & **Constraints** & **NFE** & **Task** & **Nature** \\ \hline GNNExplainer [57] & Continuous relaxation & Mutual Information & Size & Yes & GC+NC & Transductive \\ PGExplainer [30] & Parametrized edge selection & Mutual Information & Size and/or connectivity & No & GC+NC & Inductive \\ TAGExplainer [51] & Sampling & Mutual Information & Size, Entropy & No & GC+NC & Inductive \\ GEM [27] & Granger Causality+Autoencoder & Causal Contribution & Size, Connectivity & No & GC+NC & Inductive \\ SubgraphX [62] & Monte Carlo Tree Search & Shapley Value & Size, connectivity & No & GC & Transductive \\ GStarX [63] & Monte Carlo sampling & HN value & Size & No & GC & Inductive \\ \hline \hline \end{tabular}
\end{table}
Table 1: Key highlights of the _perturbation-based_ factual methods. The “NFE” column implies _Node Feature Explanation_. “GC” and “NC” indicate whether the method is applicable to graph classification and node classification respectively.
While the subgraph generally concerns only the topology of the graph, graphs in our case may be annotated with features, and some algorithms therefore formulate the minimization problem in the joint space of topology and features. Specifically, in addition to identifying the smallest subgraph, we also want to minimize the number of features required to characterize the nodes in this subgraph.
**Definition 2** (Counterfactual Reasoning): _Let \(\mathcal{G}\) be the input graph and \(\Phi(\mathcal{G})\) the prediction on \(\mathcal{G}\). Our task is to introduce the minimal set of perturbations to form a new graph \(\mathcal{G}^{*}\) such that \(\Phi(\mathcal{G})\neq\Phi(\mathcal{G}^{*})\). Mathematically, this entails to solving the following optimization problem._
\[\mathcal{G}^{*}=\arg\min_{\mathcal{G}^{\prime}\in\mathbb{G},\ \Phi(\mathcal{G})\neq\Phi(\mathcal{G}^{\prime})}dist(\mathcal{G},\mathcal{G}^ {\prime}) \tag{2}\]
_where \(dist(\mathcal{G},\mathcal{G}^{\prime})\) quantifies the distance between graphs \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) and \(\mathbb{G}\) is the set of all graphs one may construct by perturbing \(\mathcal{G}\)._
Typically, distance is measured in terms of the number of edge perturbations while keeping the node set fixed. Under this assumption, \(dist(\mathcal{G},\mathcal{G}^{\prime})\) and \(\mathbb{G}\) are defined as:
\[\mathbb{G} =\{\mathcal{G}^{\prime}(\mathcal{V},\mathcal{E}^{\prime})\ |\ \mathcal{E}^{\prime}\subseteq\mathcal{V}\times\mathcal{V}\}\] \[dist(\mathcal{G},\mathcal{G}^{\prime}) =\|\mathcal{A}_{\mathcal{G}}-\mathcal{A}_{\mathcal{G}^{\prime}}\|\]
where \(\mathcal{A}_{\mathcal{G}}\) denotes the adjacency matrix of \(\mathcal{G}\).
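As a concrete illustration, this edge-perturbation distance between two graphs on the same node set can be computed directly from their adjacency matrices (minimal NumPy sketch; note that for undirected graphs every edit is counted twice under this norm).

```python
import numpy as np

def counterfactual_distance(adj_original, adj_counterfactual):
    """L1 norm of the difference of two adjacency matrices,
    i.e. the number of edge edits between the two graphs."""
    diff = np.abs(np.asarray(adj_original) - np.asarray(adj_counterfactual))
    return int(diff.sum())

A    = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path graph 0-1-2
A_cf = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # triangle
print(counterfactual_distance(A, A_cf))  # 2: the single added edge {0,2}, counted twice
```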
### Review of Perturbation-based Gnn Reasoning
**Factual [61; 22]:** The perturbation schema for factual reasoning usually consists of two crucial components: the subgraph extraction module and the scoring function module. Given an input graph \(\mathcal{G}\), the subgraph extraction module extracts a subgraph \(\mathcal{G}_{s}\), and the scoring function module evaluates the model predictions \(\Phi(\mathcal{G}_{s})\) for the subgraphs, comparing them with the actual predictions \(\Phi(\mathcal{G})\). For instance, **GNNExplainer**[57] identifies an explanation in the form of a subgraph that has the maximum influence on the prediction. In a follow-up work, **PGExplainer**[30] extends the same idea with the additional assumption that the graph is a random Gilbert graph. Unlike the existing explainers, **TAGExplainer**[51] takes a two-step approach where the first step trains an embedding explainer using a self-supervised training framework without any information about the downstream task. The causality-based method **GEM**[27] uses _Granger causality_ to generate ground-truth explanations which are used to train the explainer. **SubgraphX**[62] and **GStarX**[63] use cooperative game theoretic techniques. In particular, SubgraphX applies the Shapley value [41] to measure the importance of the subgraphs, and GStarX uses HN values [16] to compute importance scores of a node for both graph and node classification tasks. Table 1 summarizes the key highlights.
**Counterfactual [61; 22]: CF-GNNExplainer**[29] aims to perturb the computational graph by using a binary mask matrix. The corresponding loss function quantifies the accuracy of the produced counterfactual, and captures the distance (or similarity) between the counterfactual graph and the original graph. In follow-up work, **CF\({}^{2}\)**[43] extends this method by including a contrastive loss that jointly optimizes the quality of both the factual and the counterfactual explanation. **RCExplainer**[6], being both a factual and a counterfactual method, aims to identify a resilient subset of edges whose removal alters the prediction of the remaining graph. Finally, **CLEAR**[31] generates counterfactual graphs by using a graph variational autoencoder. Table 2 summarizes the key highlights.
## 3 Benchmarking Framework
In this section, we outline the investigations we aim to conduct and the rationale behind them.
**Comparative Analysis:** We evaluate algorithms for both factual and counterfactual reasoning across a set of carefully chosen benchmark datasets (SS 4). Based on our holistic evaluation, we identify the pareto-optimal methods in terms of their performance, elucidating the trade-offs between different explainability techniques.
**Stability:** Stability of explanations, when faced with minor variations in the evaluation framework, is a crucial aspect that ensures their reliability and trustworthiness. Stability is quantified by taking the _Jaccard similarity_ between the set of edges in the original explanation vs. those obtained after introducing the variation (details in SS 4). In order to evaluate this aspect, we consider the following perspectives:
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Method** & **Explanation Type** & **Task** & **Target/Method** & **Nature** \\ \hline RCExplainer [6] & Instance level & GC+NC & Neural Network & Inductive \\ CF\({}^{2}\)[43] & Instance level & GC+NC & Original graph & Transductive \\ CF-GNNExplainer [29] & Instance level & NC & Inference subgraph & Transductive \\ CLEAR [31] & Instance level & GC+NC & Variational Autoencoder & Inductive \\ \hline \hline \end{tabular}
\end{table}
Table 2: Key highlights of the counterfactual methods. “GC” and “NC” indicate whether the method is applicable to graph classification and node classification respectively.
* **Perturbations in topological space:** If we inject minor perturbations to the topology through a small number of edge deletions or additions, then that should not affect the explanations.
* **Model parameters:** The explainers are deep-learning models themselves and optimize a non-convex loss function. As a consequence of non-convexity, when two separate instances of the explainer starting from different seeds are applied to the same Gnn model, they generate dissimilar explanations. Our benchmarking study investigates the impact of this stochasticity on the quality and consistency of the explanations produced.
* **Model architectures:** Message-passing Gnns follow a similar computation framework, differing mainly in their message aggregation functions. We explore the stability of explanations under variations in the model architecture.
**Necessity and Reproducibility:** The objective in this experiment is to quantify how central the explanation subgraph is for the Gnn towards making the prediction. We approach this question from the perspectives of necessity and reproducibility. Factual explanations are _necessary_ if the removal of the explanation subgraph from the graph results in a significant decrease in prediction accuracy. Reproducibility [26], on the other hand, measures whether, if the Gnn is retrained on the residual graph following the removal of the explanation, it can recover the original prediction.
**Feasibility:** One notable characteristic of counterfactual reasoning is its ability to offer recourse options. Nonetheless, in order for these recourses to be effective, they must adhere to the specific domain constraints. For instance, in the context of molecular datasets, the explanation provided must correspond to a valid molecule. Likewise, if the domain involves consistently connected graphs, the recourse must maintain this property. The existing body of literature on counterfactual reasoning with Gnns has not adequately addressed this aspect, a gap we address in our benchmarking study.
## 4 Empirical Evaluation
In this section, we execute the investigation plan outlined in SS 3. Unless mentioned specifically, the base black-box Gnn is a Gcn. Details of the set up (e.g., hardware) are provided in App. A.
**Datasets:** Table 3 showcases the principal statistical characteristics of each dataset employed in our experiments, along with the corresponding tasks evaluated on them. The Tree-Cycles, Tree-Grid, and BA-Shapes datasets serve as benchmark graph datasets for counterfactual analysis. These datasets incorporate ground-truth explanations [43; 27; 29]. Each dataset contains an undirected base graph to which predefined motifs are attached at random nodes, and additional edges are randomly added to the overall graph. The class label assigned to a node is determined by its membership in a motif. For more comprehensive information regarding the datasets, please refer to Appendix A.1.
**Methods:** The methods used in this benchmarking study are those delineated in the green branch of Figure 1.
**Metrics:** We use the following metrics in our study:
* **Size:** In factual explanations, size denotes the number of edges in the explanation. In counterfactual explanations, size denotes the number of edges perturbed to flip the label. Regardless of the explanation type, it is desirable for the explanation to be small.
* **Sufficiency (Fidelity):** Sufficiency encodes the ratio of graphs for which the prediction derived from the explanation matches the prediction obtained from the complete graph [43]. Its value spans between 0 and 1. For factual explanations, higher values indicate superior performance, while for counterfactual explanations lower is better since the objective is to flip the class label. Some works have used the term _fidelity_ instead of sufficiency. In addition, some papers have reported _necessity_, which is simply 1 minus sufficiency/fidelity.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \#Graphs & \#Nodes & \#Edges & \#Features & \#Classes & Task & F/CF \\ \hline Mutagenicity [37; 23] & 4337 & 131488 & 133447 & 14 & 2 & GC & F+CF \\ Proteins [11; 15] & 1113 & 43471 & 81044 & 32 & 2 & GC & F+CF \\ IMDB-B [55] & 1000 & 19773 & 96531 & 136 & 2 & GC & F+CF \\ AIDS [20] & 2000 & 31385 & 32390 & 42 & 2 & GC & F+CF \\ MUTAG [20] & 188 & 3371 & 3721 & 7 & 2 & GC & F+CF \\ NCI1 [48] & 4110 & 122747 & 132753 & 37 & 2 & GC & F \\ Graph-SST2 [61] & 70042 & 714325 & 644283 & 768 & 2 & GC & F \\ DD [15] & 1178 & 334925 & 843046 & 89 & 2 & GC & F \\ REDDIT-B [55] & 2000 & 895294 & 995508 & 3063 & 2 & GC & F \\ ogbg-molhiv [3] & 41127 & 1049163 & 259376 & 9 & 2 & GC & CF \\ Tree-Cycles [57] & 1 & 871 & 1950 & 10 & 2 & NC & CF \\ Tree-Grid [57] & 1 & 1231 & 3410 & 10 & 2 & NC & CF \\ BA-Shapes [57] & 1 & 700 & 4100 & 10 & 4 & NC & CF \\ \hline \hline \end{tabular}
\end{table}
Table 3: The statistics of the datasets. Here, “F” and “CF” in the “F/CF” column indicate whether the dataset is used for factual or counterfactual reasoning. “GC” and “NC” in the _Task_ column indicate whether the dataset is used for graph classification and node classification respectively.
* **Accuracy:** Accuracy represents the percentage of correct explanations. Computing accuracy is feasible only when ground-truth explanations are available, which in our case restricts us to node classification in the three datasets of Tree-Cycles, Tree-Grid, and BA-Shapes. In line with the standards set by Cf\({}^{2}\), CF-GnnExplainer, and Gem, this metric pertains to the percentage of edges within the counterfactual that belong to the motif which decides the class.
### Comparative Analysis
**Factual Explainers:** Fig. 2 illustrates the sufficiency analysis of various factual reasoners in relation to size. Each algorithm assigns a score to edges, indicating their likelihood of being included in the factual explanation. To control the size, we adopt a greedy approach by selecting the highest-scoring edges. Both Cf\({}^{2}\) and RCExplainer necessitate a parameter to balance factual and counterfactual explanations. We set this parameter to \(1\), corresponding to solely factual explanations.
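This size-controlled evaluation can be sketched as follows: edges are ranked by the score the explainer assigns, the top-k edges form the explanation, and sufficiency is the fraction of graphs whose prediction on the explanation matches the prediction on the full graph. The function below is a simplified illustration; `predict` stands for the black-box Gnn's label prediction on a set of edges and is an assumption, not part of our framework's API.

```python
def sufficiency_at_size(graphs, edge_scores, predict, k):
    """graphs: list of edge lists; edge_scores: per-edge scores aligned with
    each edge list; predict(edges) -> predicted class label."""
    matches = 0
    for edges, scores in zip(graphs, edge_scores):
        ranked = sorted(zip(scores, edges), key=lambda t: t[0], reverse=True)
        explanation = [e for _, e in ranked[:k]]          # top-k highest-scoring edges
        matches += int(predict(explanation) == predict(edges))
    return matches / len(graphs)
```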
Across the majority of datasets, PGExplainer consistently delivers inferior results compared to other baseline methods. This behavior is more prominently visible in the spider plot of the same data (see Fig. N in the Appendix). However, it is challenging to identify a definitive best factual explainer, as no single technique dominates across all datasets. For instance, while RCExplainer performs exceptionally well in the Mutag dataset, it exhibits subpar performance in Imdb-B and GraphSST2. Similar observations are also made for GnnExplainer in Reddit-B vs. Mutag and Nci1. Overall, we recommend using either RCExplainer or GNNExplainer as the preferred choices. The spider plot in Fig. N more prominently substantiates this suggestion.
In Fig. 2, the sufficiency does not always increase monotonically with explanation size (such as PGExplainer in Mutag). This behavior arises due to the combinatorial nature of the problem. Specifically, the impact of adding an edge to an existing explanation on the Gnn prediction is a function of both the edge being added and the edges already included in the explanation. An explainer seeks to learn a proxy function that mimics the true combinatorial output of a set of edges. When this proxy function fails to predict the marginal impact of adding an edge, it could potentially select an edge that exerts a detrimental influence on the explanation's quality.
**Counterfactual Explainers:** We present separate analyses for explanations generated over graph classification and node classification tasks.
Table 4 presents the results on graph classification. RCExplainer is the best-performing explainer across the majority of the datasets and metrics. However, it is important to acknowledge that RCExplainer's sufficiency, when viewed in absolute terms, consistently remains high, which is undesirable. For
Figure 2: Sufficiency of the factual explainers against the explanation size. For factual explanations, higher is better. We omit those methods for a dataset that threw an out of memory (OOM) error.
instance, in the case of AIDS, the sufficiency of RCExplainer reaches a value of \(0.9\), signifying its inability to generate counterfactual explanations for \(90\%\) of the graphs. This observation suggests that there exists considerable potential for further enhancement. We also note that while Clear achieves the best (lowest) sufficiency in AIDS, the number of perturbations it requires (size) is too high for it to be useful in practical use cases.
Table 5 presents the results on node classification. We observe that CF-GnnExplainer consistently outperforms Cf\({}^{2}\) (\(\alpha=0\) makes the method entirely counterfactual). We note that our result contrasts with the reported results in Cf\({}^{2}\)[43], where Cf\({}^{2}\) was shown to outperform CF-GnnExplainer. Finally, when compared to graph classification, the sufficiency produced by the best methods in the node classification task is significantly lower, indicating that it is an easier task. One possible reason might be that the space of counterfactuals is smaller in node classification.
### Stability
We next examine the stability of the explanations against topological noise, model parameters, and the choice of Gnn architecture. Given a graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), let \(\mathcal{E}_{X}\subset\mathcal{E}\) and \(\mathcal{E}_{X}^{\prime}\subseteq\mathcal{E}\) be the sets of edges in the original and perturbed explanations respectively. We measure stability by computing the _Jaccard similarity_ between the two sets, i.e., \(\frac{|\mathcal{E}_{X}\cap\mathcal{E}_{X}^{\prime}|}{|\mathcal{E}_{X}\cup \mathcal{E}_{X}^{\prime}|}\). In App. B, we present the impact of the above-mentioned factors on other metrics of interest such as sufficiency and explanation size. In addition, we also present the impact of feature perturbations and topological adversarial attacks in App. B.
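A minimal sketch of this stability metric, with explanations given as edge lists and edges reduced to canonical tuples:

```python
def jaccard_stability(explanation_edges, perturbed_explanation_edges):
    """Jaccard similarity |E_X ∩ E_X'| / |E_X ∪ E_X'| of two explanations."""
    original = {tuple(sorted(e)) for e in explanation_edges}
    perturbed = {tuple(sorted(e)) for e in perturbed_explanation_edges}
    if not original and not perturbed:
        return 1.0
    return len(original & perturbed) / len(original | perturbed)

print(jaccard_stability([(0, 1), (1, 2), (2, 3)], [(1, 2), (2, 3), (3, 4)]))  # 0.5
```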
**Factual-stability against topological noise:** Fig. 3 illustrates the Jaccard coefficient as a function of the noise volume. Similar to Fig. 2, edge selection for the explanation involves a greedy approach that prioritizes the highest-scoring edges. RCExplainer (executed at \(\alpha=1\)) and PGExplainer consistently exhibit higher stability. This consistent performance reinforces the claim that RCExplainer is the preferred factual explainer. The stability of RCExplainer can be attributed to its strategy of selecting a subset of edges that is resistant to changes, such that the removal of these edges significantly impacts the prediction made by the remaining graph [6]. PGExplainer also incorporates a form of inherent stability within its framework. It builds upon the concept introduced in GNNExplainer through the assumption that the explanatory graph can be modeled as a random Gilbert graph, where the probability distribution of edges is conditionally independent and can be parameterized. This generic assumption holds the potential to enhance the stability of the method. Conversely, TagExplainer exhibits lower stability than RCExplainer and PGExplainer, likely due to its reliance solely on gradients in a task-agnostic manner [51]. The exclusive reliance on gradients makes it more susceptible to overfitting, resulting in reduced stability.
**Factual-Stability against explainer instances:** Table 6 presents the stability of explanations provided across three different explainer instances on the same black-box Gnn. A similar trend is observed, with RCExplainer remaining the most robust method, while GnnExplainer exhibits the least stability. For GnnExplainer, the Jaccard coefficient hovers around \(0.5\), indicating significant variance in explaining the same Gnn. This lack of stability may hinder the practical adoption of factual explainers for real-world use cases.
Table 4: Sufficiency and size of the counterfactual explainers (RCExplainer, Cf\({}^{2}\), and CLEAR) on graph classification. Lower values are better for both metrics. CF-GnnExplainer is omitted since it is limited to node classification only. OOM indicates that the technique ran out of memory.
Figure 3: Stability of factual explainers in terms of Jaccard similarity of explanations against topological noise.
**Factual-Stability against Gnn architectures:** Finally, we explore the stability of explainers across different Gnn architectures in Table 7, which has not yet been investigated in the existing literature. For each combination of architectures, we assess the stability by computing the Jaccard coefficient between the explained predictions of the indicated Gnn architecture and the default Gcn model. One notable finding is that the stability of explainers exhibits a strong correlation with the dataset used. Specifically, in five out of six datasets, the best performing explainer across all architectures is unique. However, it is important to highlight that the Jaccard coefficients across architectures consistently remain low, indicating that stability against different architectures is the hardest objective to achieve, owing to the variations in their message aggregation schemes.
**Stability of counterfactual explainers:** Table 8 provides an overview of the stability exhibited among explainer instances trained using three distinct seeds. Notably, we observe a substantial Jaccard index, indicating favorable stability, in the case of the RCExplainer and CF\({}^{2}\) explainers. Conversely, CLEAR fails to demonstrate comparable stability. These findings align with the outcomes derived from Table 4. Specifically, when RCExplainer and CF\({}^{2}\) are successful in identifying a counterfactual, the resulting counterfactual graphs are obtained through a small number of perturbations. Consequently, the counterfactual graphs exhibit similarities to the original graph, rendering them akin to one another. However, this trend does not hold true for CLEAR, as it necessitates a significantly greater number of perturbations.
Similar observations are made concerning stability in the presence of topological noise and various Gnn architectures, owing to the aforementioned reasons. For detailed results, please refer to App. B.
### Necessity and Reproducibility
Recall the definitions of necessity and reproducibility from § 3. In both cases, we evaluate performance using sufficiency as the metric. Sufficiency, in this context, measures the fraction of graphs for which the Gnn prediction on the residual graph is the same as on the original graph. Following the removal of the explanation, we expect sufficiency to be low.
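A minimal sketch of how sufficiency could be computed for this purpose follows; the `predict` callable is a toy stand-in for the trained Gnn, which is not modeled here.

```python
# Sufficiency: fraction of graphs whose prediction is unchanged after deleting the
# explanation edges (after removal of the explanation, low sufficiency is desirable).
from typing import Callable, FrozenSet, Iterable, Set, Tuple

Edge = Tuple[int, int]
Graph = FrozenSet[Edge]

def sufficiency(graphs: Iterable[Graph],
                explanations: Iterable[Set[Edge]],
                predict: Callable[[Graph], int]) -> float:
    graphs = list(graphs)
    same = 0
    for g, expl in zip(graphs, explanations):
        residual = frozenset(e for e in g if e not in expl)
        same += int(predict(residual) == predict(g))
    return same / len(graphs)

# Toy stand-in classifier: label 1 iff the graph has at least 3 edges.
toy_predict = lambda g: int(len(g) >= 3)
gs = [frozenset({(0, 1), (1, 2), (2, 3), (0, 3)}), frozenset({(0, 1), (1, 2), (2, 3)})]
expls = [{(0, 1)}, {(0, 1)}]
print(sufficiency(gs, expls, toy_predict))   # 0.5: the second prediction flips
```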
The results are presented in the form of plots in App. C (due to space limitations). While sufficiency is indeed low in necessity, this is not the case in reproducibility. These findings suggest that while current factual explainers effectively explain the model, they do not provide a comprehensive explanation of the underlying data. The fact that the Gnn can regain prediction accuracy when retrained indicates the presence of other signals that the initial factual explanation failed to capture.
### Feasibility
Counterfactual explanations serve as recourses and are expected to generate graphs that adhere to the feasibility constraints of the pertinent domain. Given that the benchmarked algorithms are
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**Tree-Cycles**} & \multicolumn{3}{c|}{**Tree-Grid**} & \multicolumn{3}{c}{**BA-Shapes**} \\ \hline Method / Metric & **Sufficiency** \(\downarrow\) & **Size** \(\downarrow\) & **Acc.(\%)** \(\uparrow\) & **Sufficiency** \(\downarrow\) & **Size** \(\downarrow\) & **Acc.(\%)** \(\uparrow\) & **Sufficiency** \(\downarrow\) & **Size** \(\downarrow\) & **Acc.(\%)** \(\uparrow\) \\ \hline
**CF-GnnExplainer** & 0.54\(\pm\)0.08 & 1.03\(\pm\)0.16 & 100.00\(\pm\)0.00 & 0.09\(\pm\)0.06 & 1.42\(\pm\)0.05 & 92.00\(\pm\)4.99 & 0.37\(\pm\)0.05 & 1.37\(\pm\)0.39 & 91.58\(\pm\)4.36 \\
**CF\({}^{2}\)** & 0.76\(\pm\)0.06 & 4.55\(\pm\)4.18 & 74.71\(\pm\)18.70 & 0.99\(\pm\)0.02 & 7.00\(\pm\)0.40 & 14.29\(\pm\)0.0 & 0.25\(\pm\)0.08 & 4.24\(\pm\)1.70 & 60.89\(\pm\)12.28 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of counterfactual explainers on node classification. Shaded cells indicate the best result in a column. Note that only CF-GnnExplainer and CF\({}^{2}\) can explain node classification.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{PGExplainer} & \multicolumn{3}{c|}{TAGExplainer} & \multicolumn{3}{c|}{CF\({}^{2}\)} & \multicolumn{3}{c|}{RCExplainer} & \multicolumn{3}{c}{GNNExplainer} \\ \hline Dataset / Seeds & 1\(vs2\) & 1\(vs3\) & 2\(vs3\) & 1\(vs2\) & 1\(vs3\) & 2\(vs3\) & 1\(vs2\) & 1\(vs3\) & 2\(vs3\) & 1\(vs2\) & 1\(vs3\) & 2\(vs3\) & 1\(vs2\) & 1\(vs3\) & 2\(vs3\) \\ \hline
Mutagenicity & 0.69 & 0.75 & 0.62 & 0.76 & 0.78 & 0.74 & 0.77 & 0.77 & 0.77 & 0.75 & 0.71 & 0.71 & 0.46 & 0.47 & 0.47 \\
Proteins & 0.38 & 0.51 & 0.38 & 0.55 & 0.48 & 0.46 & 0.34 & 0.34 & 0.35 & 0.88 & 0.85 & 0.81 & 0.81 & 0.28 & 0.28 \\
Mutag & 0.5 & 0.54 & 0.51 & 0.36 & 0.43 & 0.72 & 0.78 & 0.79 & 0.79 & 0.86 & 0.92 & 0.74 & 0.57 & 0.57 & 0.58 \\
IMDB-B & 0.67 & 0.76 & 0.67 & 0.67 & 0.60 & 0.56 & 0.32 & 0.32 & 0.32 & 0.75 & 0.73 & 0.70 & 0.18 & 0.19 & 0.18 \\
AIDS & 0.88 & 0.87 & 0.82 & 0.81 & 0.83 & 0.87 & 0.85 & 0.85 & 0.85 & 0.96 & 0.74 & 0.80 & 0.80 & 0.80 \\
NCI1 & 0.58 & 0.55 & 0.64 & 0.69 & 0.84 & 0.65 & 0.60 & 0.60 & 0.60 & 0.74 & 0.71 & 0.46 & 0.44 & 0.44 & 0.44 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Stability of explanations provided by factual explainers across runs. We fix the size to 10 for all explainers. The most stable explainer for each dataset (row) in each of the three categories \(1vs2\), \(1vs3\) and \(2vs3\) is highlighted through gray, yellow and cyan shading respectively.
agnostic to specific domains, our initial evaluation entails examining whether these algorithms preserve the topological properties of the test set. To achieve this, we compare the number of graphs forming a single connected component in the test set with those in their corresponding counterfactual explanations. Connectedness is a significant aspect of consideration, particularly in domains such as molecules, where disconnected graphs are rare occurrences and might not be meaningful. The results for RCExplainer are presented in Table 9.
We use RCExplainer for this experiment since it is the most robust and well-performing algorithm among those we have benchmarked. Notably, we observe statistically significant deviations from the expected values in two out of four molecular datasets. This suggests a heightened probability of predicting counterfactuals that do not correspond to feasible molecules. Importantly, this finding underscores a limitation of counterfactual explainers, which has received limited attention within the research community.
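For illustration, the sketch below counts single-connected-component graphs in the test set and among the counterfactuals and compares the two with a one-sample binomial test. The choice of test and the `networkx`/`scipy` usage are assumptions made here for the sketch; the exact test behind the reported \(p\)-values is not specified in this section.

```python
import networkx as nx
from scipy.stats import binomtest

def count_connected(graphs):
    return sum(1 for g in graphs if g.number_of_nodes() > 0 and nx.is_connected(g))

def feasibility_test(test_graphs, counterfactual_graphs):
    expected_rate = count_connected(test_graphs) / len(test_graphs)
    n = len(counterfactual_graphs)
    observed = count_connected(counterfactual_graphs)
    # Expected count under the test-set rate, observed count, and a binomial p-value.
    return n * expected_rate, observed, binomtest(observed, n, expected_rate).pvalue

# Toy usage: mostly connected path graphs, with one disconnected graph on each side.
test = [nx.path_graph(5) for _ in range(9)] + [nx.empty_graph(5)]
cf = [nx.path_graph(5) for _ in range(9)] + [nx.empty_graph(5)]
print(feasibility_test(test, cf))
```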
### Visualization-based Analysis
We include visualization-based analysis of the explanations in App. D. Due to space limitations, we omit the details here. Our analysis reveals that statistically good performance does not always align with human judgement, indicating an urgent need for datasets annotated with ground-truth explanations. Furthermore, the visualization analysis reinforces the need to incorporate feasibility as a desirable component in counterfactual reasoning.
## 5 Concluding Insights
Our benchmarking study has yielded several insights that can streamline the development of explanation algorithms. We summarize the key findings below.
* **Performance and Stability:** Among the explainers evaluated, RCExplainer consistently outperformed others in terms of efficacy and stability to noise and variational factors (§ 4.1 and § 4.2).
* **Stability Concerns:** Most factual explainers demonstrated significant deviations across explainer instances, vulnerability to topological perturbations, and produced significantly different sets of explanations across different Gnn architectures. These notions of stability should therefore be embraced as desirable factors alongside other performance metrics.
* **Model Explanation vs. Data Explanation:** Our experiments on reproducibility (§ 4.3) revealed that even without the factual explanation, the Gnn model predicted accurately on the residual graph. This suggests that explainers only capture specific signals learned by the Gnn and do not encompass all underlying data signals.
* **Feasibility Issues:** Counterfactual explanations showed deviations in topological distribution from the original graphs, raising feasibility concerns (§ 4.4).
We hope that the aforementioned insights would offer new directions for advancing Gnn explainers, allowing researchers to address limitations and enhance the overall quality and interpretability of GNNs.
**Limitations.** Our study focuses only on perturbation-based, instance-specific methods. We hope to replicate the same study on the other branches shown in Fig. 1.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{PGExplainer} & \multicolumn{3}{c|}{TAGExplainer} & \multicolumn{3}{c|}{CF\({}^{2}\)} & \multicolumn{3}{c|}{RCExplainer} & \multicolumn{3}{c}{GNNExplainer} \\ \hline Dataset / Architecture & GAT & GIN & SAGE & GAT & GIN & SAGE & GAT & GIN & SAGE & GAT & GIN & SAGE & GAT & GIN & SAGE \\ \hline
Mutagenicity & 0.63 & 0.65 & 0.1 & 0.24 & 0.25 & 0.32 & 0.52 & 0.47 & 0.54 & 0.56 & 0.52 & 0.46 & 0.43 & 0.42 & 0.43 \\
Proteins & 0.22 & **0.47** & 0.38 & 0.45 & 0.41 & 0.18 & 0.28 & 0.28 & 0.28 & 0.37 & 0.41 & 0.42 & 0.28 & 0.28 & 0.28 \\
Mutag & 0.57 & 0.58 & 0.26 & **0.60** & 0.65 & 0.64 & 0.58 & 0.56 & 0.62 & 0.47 & 0.76 & 0.54 & 0.55 & 0.57 & 0.55 \\
IMDB-B & 0.48 & 0.45 & 0.49 & 0.44 & 0.35 & 0.47 & 0.17 & 0.23 & 0.17 & 0.30 & 0.33 & 0.26 & 0.17 & 0.17 & 0.17 \\
AIDS & 0.81 & 0.85 & 0.88 & 0.83 & 0.83 & 0.84 & 0.80 & 0.80 & 0.80 & 0.81 & 0.85 & 0.81 & 0.8 & 0.8 \\
NCI1 & 0.39 & 0.41 & 0.37 & 0.45 & 0.17 & 0.28 & 0.37 & 0.38 & 0.38 & 0.49 & 0.55 & 0.52 & 0.37 & 0.38 & 0.39 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Stability of factual explainers against the Gnn architecture. We fix the size to \(10\). We report the Jaccard coefficient of explanations obtained for each architecture against the explanation provided over Gcn. The best explainers for each dataset (row) are highlighted in gray, yellow and cyan shading for GAT, Gin, and GraphSAGE, respectively. GraphSAGE is denoted by SAGE.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Dataset & Expected Count & Observed Count & \(p\)-value \\ \hline Mutagenicity & 233.05 & 70 & 0.00001 \\ Proteins & 11.68 & 12 & 0.93 \\ Mutag & 11.0 & 9.0 & 0.55 \\ AIDS & 17.6 & 8 & 0.02 \\ \hline \hline \end{tabular}
\end{table}
Table 9: **Feasibility:** Assessing the statistical significance of deviations in the number of connected graphs between the test set and their corresponding counterfactual explanations on molecular datasets. Statistically significant deviations with \(p\)-value\(<0.05\) are highlighted. |
2305.05744 | Neck pinch singularities and Joyce conjectures in Lagrangian mean
curvature flow with circle symmetry | In this article we consider the Lagrangian mean curvature flow of compact,
circle-invariant, almost calibrated Lagrangian surfaces in hyperk\"ahler
4-manifolds with circle symmetry. We show that this Lagrangian mean curvature
flow can be continued for all time, through finite time singularities, and
converges to a chain of special Lagrangians, thus verifying various aspects of
Joyce's conjectures in this setting. We show that the singularities of the flow
are neck pinches in the sense conjectured by Joyce. We also give examples where
such finite time singularities are guaranteed to occur. | Jason D. Lotay, Goncalo Oliveira | 2023-05-09T19:51:18Z | http://arxiv.org/abs/2305.05744v1 | Neck pinch singularities and Joyce conjectures in Lagrangian mean curvature flow with circle symmetry
###### Abstract.
In this article we consider the Lagrangian mean curvature flow of compact, circle-invariant, almost calibrated Lagrangian surfaces in hyperkahler \(4\)-manifolds with circle symmetry. We show that this Lagrangian mean curvature flow can be continued for all time, through finite time singularities, and converges to a chain of special Lagrangians, thus verifying various aspects of Joyce's conjectures [11] in this setting. We show that the singularities of the flow are neck pinches in the sense conjectured by Joyce [11]. We also give examples where such finite time singularities are guaranteed to occur.
###### Contents
* 1 Introduction
* 2 The Gibbons-Hawking ansatz
* 3 Lagrangian mean curvature flow and modified curve shortening flow
* 4 Finite time singularities
* 5 Flow through singularities and long-time behaviour
* 6 Monotonicity of the Lagrangian angles
## 1. Introduction
### Context
A standing conjecture of Thomas [14], motivated by mirror symmetry, asserts that there is a stability condition for compact graded Lagrangians in a Calabi-Yau manifold, which is expected to determine the existence (and uniqueness) of a special Lagrangian in a given Hamiltonian isotopy class. However, this stability condition is hard to work with and the conjecture has so far remained unproven in its full generality. In [1], the authors proved the Thomas conjecture for circle-invariant Lagrangians in a large class of hyperkahler \(4\)-manifolds which includes all complete examples with finite topology obtained from the Gibbons-Hawking ansatz. This contains all ALE and ALF hyperkahler \(4\)-manifolds admitting a tri-Hamiltonian circle action.
Soon after stating the above conjecture, Thomas-Yau [13] proposed that a similar stability condition on compact graded Lagrangians controls the long-time existence and convergence of the Lagrangian mean curvature flow. Later developments by Neves [14] showed explicitly that the initial Lagrangian must at least be almost calibrated, as had implicitly been assumed in [13]; otherwise, finite time singularities of the flow are inevitable. Under the almost calibrated assumption the authors proved, also in [1], the circle-invariant Thomas-Yau conjecture in the same class of examples mentioned before.
Both the Thomas and Thomas-Yau conjectures pre-date Bridgeland's definition of stability conditions on triangulated categories [1, 2], which can be applied to the study of Lagrangians in Calabi-Yau manifolds by using Fukaya categories where special Lagrangians become (semi-)stable objects. Using this perspective, Joyce [11] updated the Thomas/Thomas-Yau conjectures by making use of the notion of Bridgeland stability condition. As a way to tackle this conjecture, Joyce also proposed a detailed programme for Lagrangian mean curvature flow, describing the finite time singularities the flow is expected to develop and how the flow should be continued through them.
Our second main result shows that, in our setting, there are a finite number of singular times along the flow and, moreover, a Lagrangian mean curvature flow _through singularities_ exists for all time and converges to a finite union of special Lagrangian spheres (with possibly different phases). This proves, in our setting, several parts of the program in [12, Section 3.2 & Conjecture 3.9].
**Theorem 1.4** (Decomposition into special Lagrangians).: _Let \(X\) be an ALE or ALF hyperkahler \(4\)-manifold admitting a tri-Hamiltonian circle action and \(L\) a compact, connected, embedded, circle-invariant, almost calibrated Lagrangian in \(X\). There is a continuous family \(\{L_{t}\}_{t\in[0,+\infty)}\) of almost calibrated, circle-invariant, Lagrangian integral currents with \(L_{0}=L\) so that the following holds._
1. _There is a finite number of singular times_ \(0<T_{1}\leq\ldots\leq T_{l}<\infty\) _such that the family_ \(\{L_{t}\}_{t\in[0,\infty)\setminus\{T_{1},\ldots,T_{l}\}}\) _satisfies Lagrangian mean curvature flow._
2. _At each singular time_ \(T_{i}\) _the flow undergoes a "neck pinch" singularity as in Theorem_ 1.1_._
3. _There is_ \(k\in\mathbb{N}\) _and an_ \(A_{k}\)_-chain of embedded, special Lagrangians spheres_ \(\{L_{1}^{\infty},\ldots,L_{k}^{\infty}\}\) _such that_ \(L_{t}\) _converges uniformly to_ \(\cup_{j=1}^{k}L_{j}^{\infty}\) _as_ \(t\to+\infty\) _and we have current convergence:_ \[\lim_{t\to+\infty}L_{t}=L_{1}^{\infty}+\ldots+L_{k}^{\infty}.\]
4. _If the grading on_ \(L\) _is a perfect Morse function, then the phases_ \(\theta_{j}\) _of the special Lagrangians_ \(L_{j}^{\infty}\) _from_ (c) _can be chosen to satisfy_ \(\theta_{1}\geq\ldots\geq\theta_{k}\)_._
_Furthermore, in_ (c) _we have that \(k=1\) if \(L\) is flow stable, and \(k>1\) if \(L\) is flow unstable._
**Remark 1.5**.: The uniform convergence in (c) can always be improved to smooth convergence unless in the \(A_{k}\)-chain of spheres \(\{L_{1}^{\infty},\ldots,L_{k}^{\infty}\}\) the phases of two adjacent spheres are equal. In this case there is the possibility of an infinite time singularity of the flow: this is the so-called "semi-stable" case. Therefore, if we assume that we are in the setting where no three singularities of the potential \(\phi\) lie on a line, then we will always have smooth convergence. We can always make a small perturbation of the hyperkahler structure on \(X\) so that we are in this regime.
**Acknowledgements.** The first author would like to thank Dominic Joyce for useful conversations. The first author was partially supported by EPSRC grant EP/T012749/1 during the course of this project. The second author wants to thank the members of the Hausel group at IST Austria, who hosted him while part of this project was being carried out. The second author is currently funded by FCT 2021.02151.CEECIND and previously by the NOMIS foundation. Both authors would like to thank the Simons Laufer Mathematical Sciences Institute (formerly known as MSRI), Berkeley for hospitality during the latter stages of this project.
## 2. The Gibbons-Hawking ansatz
We provide a short description of the Gibbons-Hawking ansatz, and circle-invariant Lagrangians and notions of stability in this context, as necessary for our study. For further details, we refer the reader to [1].
### Hyperkahler 4-manifolds with tri-Hamiltonian circle action
Let \((X^{4},g)\) be hyperkahler and equipped with a circle action preserving the three Kahler forms \(\omega_{1},\omega_{2},\omega_{3}\), associated with the orthogonal complex structures \(I_{1},I_{2},I_{3}\) on \(X\) satisfying the quaternionic relations. Denote by \(\xi\) the infinitesimal generator of the circle action and let \(\hat{X}=\{x\in X\mid\xi_{x}\neq 0\}\) be the open dense set where the action is free. Then, \(\hat{X}\) is a \(\mathrm{U}(1)\)-bundle
\[\pi:\hat{X}\to Y^{3}\]
over an open \(3\)-manifold \(Y^{3}\). Equip this bundle with a connection \(\eta\in\Omega^{1}(\hat{X},\mathbb{R})\) whose horizontal spaces \(\ker(\eta)=\xi^{\perp}\) are \(g\)-orthogonal to \(\xi\) and so that \(\eta(\xi)=1\). Note that \(\iota_{\xi}d\eta=0\).
Consider the positive \(\mathrm{U}(1)\)-invariant function \(\phi:\hat{X}\to\mathbb{R}\) determined by \(\phi^{-1}=|\xi|_{g}^{2}\) and define the \(1\)-forms \(\alpha_{i}:=I_{i}(\phi^{-1}\eta)\), for \(i=1,2,3\). The hyperkahler metric \(g\) may then be written on \(\hat{X}\) as
\[g=\phi^{-1}\eta^{2}+\alpha_{1}^{2}+\alpha_{2}^{2}+\alpha_{3}^{2} \tag{2.1}\]
and the associated Kahler forms are given by:
\[\omega_{1}=\phi^{-1/2}\eta\wedge\alpha_{1}+\alpha_{2}\wedge\alpha_{3},\quad \omega_{2}=\phi^{-1/2}\eta\wedge\alpha_{2}+\alpha_{3}\wedge\alpha_{1},\quad \omega_{3}=\phi^{-1/2}\eta\wedge\alpha_{3}+\alpha_{1}\wedge\alpha_{2}. \tag{2.2}\]
With the orientation induced by the volume form \(\phi^{-1}\eta\wedge\alpha_{1}\wedge\alpha_{2}\wedge\alpha_{3}\), the forms \(\omega_{1},\omega_{2},\omega_{3}\) give a trivialization of \(\Lambda_{+}^{2}\hat{X}\), the bundle of self-dual \(2\)-forms on \(\hat{X}\). Conversely, if we define \(2\)-forms \(\omega_{1},\omega_{2},\omega_{3}\) as in (2.2) and fix the volume form \(\phi^{-1}\eta\wedge\alpha_{1}\wedge\alpha_{2}\wedge\alpha_{3}\), we can recover the metric \(g\) as in (2.1), and it follows from [1, Lemma 4.1] that \(g\) is hyperkahler if and only if \(d\omega_{i}=0\) for \(i=1,2,3\). Using this characterization we have the following (cf. [1, Proposition 2.1]).
**Proposition 2.1**.: _Using the notation above, the metric \(g\) in (2.1) equips \(X^{4}\) with a hyperkahler structure so that the \(\mathrm{U}(1)\) action generated by \(\xi\) preserves \(g\) and \(\omega_{1},\omega_{2},\omega_{3}\) in (2.2) if and only if the following hold._
* _The symmetric_ \(2\)_-tensor_ \[g_{E}=\phi^{-2}(\alpha_{1}^{2}+\alpha_{2}^{2}+\alpha_{3}^{2})\] _is the pullback of a flat metric on_ \(Y^{3}\)_._
* _The pair_ \((\eta,\phi)\) _is a Dirac monopole on_ \(Y^{3}\)_, i.e._ (2.3) \[*_{E}d\eta=-d\phi,\] _where_ \(*_{E}\) _denotes the Hodge star operator associated with the metric_ \(g_{E}\) _on_ \(Y^{3}\) _from (_a_)._
* _There are local coordinates_ \((\mu_{1},\mu_{2},\mu_{3})\) _on_ \(Y^{3}\) _such that_ \(\alpha_{i}=\phi^{\frac{1}{2}}d\mu_{i}\)_, and the hyperkahler metric can be written as_ (2.4) \[g=\frac{1}{\phi}\eta^{2}+\phi\left(d\mu_{1}^{2}+d\mu_{2}^{2}+d\mu_{3}^{2} \right).\] _Moreover,_ (2.5) \[\omega_{1}=\eta\wedge d\mu_{1}+\phi d\mu_{2}\wedge d\mu_{3},\quad\omega_{2}= \eta\wedge d\mu_{2}+\phi d\mu_{3}\wedge d\mu_{1},\quad\omega_{3}=\eta\wedge d \mu_{3}+\phi d\mu_{1}\wedge d\mu_{2}.\]
We shall consider the case when \(Y^{3}\) is simply connected, in which case the coordinates \((\mu_{1},\mu_{2},\mu_{3})\) can be taken to be global and form the hyperkahler moment map
\[\pi:X\to\mathbb{R}^{3}.\]
In [1, Section 2.2] we give examples of hyperkahler manifolds arising from this construction including the flat, Taub-NUT, Eguchi-Hanson, Ooguri-Vafa and Anderson-Kronheimer-LeBrun examples. Here we shall simply recall the multi-Taub-NUT and multi-Eguchi-Hanson examples.
**Example 2.2**.: Choose \(k\geq 1\) points \(p_{1},...,p_{k}\) in \(\mathbb{R}^{3}\) and \(m\geq 0\). Set \(Y=\mathbb{R}^{3}\backslash\{p_{1},...,p_{k}\}\) and
\[\phi=m+\sum_{i=1}^{k}\frac{1}{2|x-p_{i}|}, \tag{2.6}\]
where the norm is the Euclidean metric on \(\mathbb{R}^{3}\). Then there is a connection \(\eta\) satisfying (2.3) and the metric \(g\) given by (2.4) extends smoothly across the points \(p_{1},\ldots,p_{k}\) where the circle action collapses, thus completing \(\hat{X}=\pi^{-1}(\mathbb{R}^{3}\backslash\{p_{1},\ldots,p_{k}\})\) to a hyperkahler manifold \((X^{4},g)\) by adding in \(k\) points.
Suppose that \(m=0\). In this case, when \(k=1\) we obtain the flat metric on \(\mathbb{R}^{4}\) and for \(k=2\) we get the Eguchi-Hanson metric. For \(k>2\) the resulting metric is called the multi-Eguchi-Hanson metric. We always have \(\lim_{r\to\infty}\phi=0\) (where \(r\) is the distance to a fixed point, say the origin,
in \(\mathbb{R}^{3}\)) and thus \(g\) is asymptotic to the flat metric on \(\mathbb{R}^{4}/\mathbb{Z}_{k}\), so \((X^{4},g)\) is _asymptotically locally Euclidean_ (ALE).
Suppose now that \(m>0\). When \(k=1\) we obtain the Taub-NUT metric on \(\mathbb{R}^{4}\) and so if \(k>1\) we call the resulting metric multi-Taub-NUT. (Note that if we allowed \(k=0\) we would obtain the product metric on \(\mathbb{S}^{1}\times\mathbb{R}^{3}\).) In this situation \(\lim_{r\to\infty}\phi=m>0\) and so the circle generated by \(\xi\) has finite length at infinity. The metric \(g\) is therefore asymptotic to one on a circle bundle over \(\mathbb{R}^{3}\) with the fibers having constant length proportional to \(m^{-1/2}\). Such metrics are called _asymptotically locally flat_ (ALF).
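As a quick numerical illustration (not part of the original text) of the ALE/ALF dichotomy, the sketch below evaluates the potential \(\phi\) from (2.6) for two singular points and shows \(\phi\to 0\) when \(m=0\) while \(\phi\to m\) when \(m>0\), as the distance \(r\) to the origin grows.

```python
# Gibbons-Hawking potential phi = m + sum_i 1/(2|x - p_i|) from (2.6); illustration only.
import numpy as np

def gibbons_hawking_potential(x, points, m=0.0):
    dists = np.linalg.norm(np.asarray(points, dtype=float) - np.asarray(x, dtype=float), axis=1)
    return m + np.sum(1.0 / (2.0 * dists))

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # k = 2 singular points
for r in (10.0, 100.0, 1000.0):
    x = (r, 0.0, 0.0)
    print(r,
          gibbons_hawking_potential(x, points, m=0.0),   # Eguchi-Hanson-type (ALE): -> 0
          gibbons_hawking_potential(x, points, m=1.0))   # multi-Taub-NUT-type (ALF): -> 1
```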
**Remark 2.3**.: Example 2.2 describes all of the ALE and ALF gravitational instantons which arise from the Gibbons-Hawking ansatz.
### Notation
Throughout this article we will be working on a hyperkahler \(4\)-manifold \(X^{4}\) given by the Gibbons-Hawking ansatz which is an ALE or ALF gravitational instanton. We will therefore use the notation in Proposition 2.1 and Example 2.2 for the remainder of the article.
### Lagrangian submanifolds
On any hyperkahler \(4\)-manifold, the twistor sphere defines a \(2\)-sphere of Kahler structures for which one can study Lagrangians. It is important to understand this \(2\)-sphere explicitly in our setting so that we may identify circle-invariant Lagrangians in \(X\) with certain curves in the base of the fibration \(\pi:X\to\mathbb{R}^{3}\). Throughout, we shall only consider connected Lagrangians.
To this end, we see that the twistor sphere of \(X\) can be identified with the unit \(2\)-sphere \(\mathbb{S}^{2}\subseteq\mathbb{R}^{3}\) as follows. Given \(v=(v_{1},v_{2},v_{3})\in\mathbb{S}^{2}\subseteq\mathbb{R}^{3}\), we have a complex structure \(I_{v}\) on \(X\) such that the associated \(2\)-form using the hyperkahler metric is (recalling (2.5))
\[\omega_{v}=\sum_{i=1}^{3}v_{i}\left(\eta\wedge d\mu_{i}+\phi\ d\mu_{j}\wedge d \mu_{k}\right), \tag{2.7}\]
with \((i,j,k)\) denoting a cyclic permutation of \((1,2,3)\). For example, if \(v=(0,0,1)\) then \(\omega_{v}=\omega_{3}\).
Any circle-invariant surface \(L\) in \(X\) corresponds, via \(\pi:X\to\mathbb{R}^{3}\), to a curve \(\gamma\subseteq\mathbb{R}^{3}\). For \(v\in\mathbb{S}^{2}\), a short computation (cf. [1, Section 5.1]) yields, for \(L=\pi^{-1}(\gamma)\),
\[\omega_{v}|_{L}=\langle\gamma^{\prime},v\rangle\mathrm{vol}_{L},\]
where \(\gamma^{\prime}\) is the velocity of \(\gamma\) with respect to Euclidean arclength and \(\langle.,.\rangle\) is the Euclidean inner product. We deduce the following.
**Lemma 2.4**.: _In the notation above, a circle-invariant surface \(L^{2}=\pi^{-1}(\gamma)\) in \(X^{4}\) for a curve \(\gamma\subseteq\mathbb{R}^{3}\) is Lagrangian with respect to \(\omega_{v}\) in (2.7) if and only if \(\gamma\) lies in a plane orthogonal to \(v\)._
Let \(S_{\phi}=\{p_{1},\ldots,p_{k}\}\subseteq\mathbb{R}^{3}\) be the set of singularities of \(\phi\) as in Example 2.2. We then see that for \(L=\pi^{-1}(\gamma)\) to be compact and embedded \(\gamma\) cannot have self intersections and is either:
* a simple closed curve not intersecting \(S_{\phi}\), in which case \(L=\pi^{-1}(\gamma)\cong T^{2}\); or
* a simple arc with end points \(p_{i},p_{j}\in S_{\phi}\) with \(i\neq j\) and otherwise not meeting \(S_{\phi}\), in which case \(\pi^{-1}(\gamma)\cong S^{2}\).
**Remark 2.5**.: In our setting \(S_{\phi}\) is finite, but more generally it can be infinite as in the Ooguri-Vafa and Anderson-Kronheimer-LeBrun metrics (see [1, Examples 2.7 & 2.8]).
We will only be interested in graded Lagrangians, as we shall now define.
**Definition 2.6**.: Given a Calabi-Yau structure \((\omega,\Omega)\) on \(X\), determined by a Kahler form \(\omega\) and holomorphic volume form \(\Omega\), an oriented Lagrangian \(L\) in \(X\) is said to be _graded_ by \(\theta:L\to\mathbb{R}\) if the restriction of \(\Omega\) to \(L\) satisfies
\[e^{-i\theta}\Omega|_{L}=\mathrm{vol}_{L}, \tag{2.8}\]
where \(\mathrm{vol}_{L}\) is the Riemannian volume form associated with the induced metric on \(L\). We will denote a Lagrangian \(L\) graded by \(\theta\) as \((L,\theta)\) where necessary and we will refer to \(\theta\) as the grading. The choice of grading \(\theta\) is also called the _Lagrangian angle_ of \(L\).
**Remark 2.7**.: Notice, in particular, that for a compact graded Lagrangian \(L\) the quantity
\[\arg\int_{L}\Omega\]
is well-defined up to integer multiples of \(2\pi\). Moreover, given such an \(L\) we can always multiply \(\Omega\) by a unit complex number so that \(\arg\int_{L}\Omega=0\) modulo \(2\pi\).
It will be useful to study a distinguished subclass of the graded Lagrangians as follows.
**Definition 2.8**.: Let \((\omega,\Omega)\) be a Calabi-Yau structure on \(X\). An oriented Lagrangian \(L\) in \((X,\omega,\Omega)\) is _almost calibrated_ if there is a choice of grading \(\theta\) on \(L\) such that, for some \(\delta>0\),
\[\sup_{L}\theta-\inf_{L}\theta\leq\pi-\delta.\]
If \(L\) is compact, \(L\) is almost calibrated if and only if there is a constant \(\theta_{0}\) so that \(\operatorname{Re}(e^{-i\theta_{0}}\Omega)|_{L}>0\).
In our situation, given \(v\in\mathbb{S}^{2}\) and \(\omega_{v}\) as in (2.7), there is a circle of holomorphic volume forms \(\Omega_{v}\) so that \((X,\omega_{v},\Omega_{v})\) is Calabi-Yau, namely
\[\Omega_{v}=\omega_{v_{1}}+i\omega_{v_{2}}, \tag{2.9}\]
where \(\{v,v_{1},v_{2}\}\) is a positively oriented orthonormal basis for \(\mathbb{R}^{3}\). For example, if \(v=(0,0,1)\) then \(\Omega_{v}=e^{i\alpha}(\omega_{1}+i\omega_{2})\) for some \(e^{i\alpha}\in\mathbb{S}^{1}\). We see that
\[\Omega_{v}|_{L}=\big{(}\langle\gamma^{\prime},v_{1}\rangle+i\langle\gamma^{ \prime},v_{2}\rangle\big{)}\operatorname{vol}_{L}\]
and so the Lagrangian angle of \(L\) satisfies \(\cos\theta=\langle\gamma^{\prime},v_{1}\rangle\) and \(\sin\theta=\langle\gamma^{\prime},v_{2}\rangle\), i.e. \(\theta\) coincides (mod \(2\pi\)) with the angle that \(\gamma^{\prime}\) makes with \(v_{1}\). See [LO, Section 5] for more details. As a consequence, any compact, embedded, graded Lagrangian of the form \(L=\pi^{-1}(\gamma)\) must have \(\gamma\) be a simple arc joining two singularities of \(\phi\) and meeting no other singularities, in which case \(L\) is a \(2\)-sphere. Moreover, such an \(L\) is almost calibrated if and only if the variation of the angle which \(\gamma^{\prime}\) makes with \(v_{1}\) is strictly less than \(\pi\): we shall call such curves almost calibrated, by abuse of notation. Note that almost calibrated curves are automatically embedded.
**Remark 2.9**.: Suppose that \(L=\pi^{-1}(\gamma)\) is an embedded, compact, almost calibrated, circle-invariant Lagrangian in \(X\), for some choice of Calabi-Yau structure \((\omega,\Omega)=(\omega_{v},\Omega_{v})\) for some \(v\in\mathbb{S}^{2}\) as above. With no loss of generality we can suppose that \(\gamma\) is perpendicular to the \(\mu_{3}\)-axis and \(\Omega=\omega_{1}+i\omega_{2}\). Furthermore, up to a translation we may set the initial point of \(\gamma\) to be \((0,0,0)\) and denote its final point by \((x,y,0)\). Then,
\[\int_{L}\Omega=2\pi\int_{\gamma}(d\mu_{1}+id\mu_{2})=2\pi(x+iy),\]
and so \(\arg\int_{L}\Omega\) is the angle between the straight-line \(\overline{\gamma}\) connecting the endpoints of \(\gamma\) and the \(\mu_{1}\)-axis. Moreover,
\[\left|\int_{L}\Omega\right|=2\pi\sqrt{x^{2}+y^{2}}=2\pi\text{Length}(\overline {\gamma}),\]
and \(2\pi\text{Length}(\overline{\gamma})=\text{Area}(\overline{L})\) where \(\overline{L}=\pi^{-1}(\overline{\gamma})\) is the area-minimizer in the homology class of \(L\). We also notice that \(\arg\int_{L}\Omega=0\) if and only if \(y=0\) and \(x>0\), i.e. the endpoint of \(\gamma\) lies on the positive \(\mu_{1}\)-axis.
### Further notation
We shall from now on fix a circle-invariant Calabi-Yau structure \((\omega,\Omega)=(\omega_{v},\Omega_{v})\) on \(X\) for some \(v\in\mathbb{S}^{2}\) as in (2.7) and (2.9). We see from our discussion above that for there to be any compact, embedded, graded, circle-invariant Lagrangians in \((X,\omega,\Omega)\) we require there to be at least \(2\) singularities of \(\phi\), i.e. we need \(k>1\) in the notation of Example 2.2.
### Stability and flow stability
We shall be using the notions of stability for Lagrangians introduced by Thomas [10] and Thomas-Yau [17]. After the introduction of the notion of Bridgeland stability conditions [11] such notions may be modified in order to use such a framework, but we shall not pursue this here. A key reason for this is that it is not yet known whether Bridgeland stability conditions exist in some version of the Fukaya category relevant for our study. Moreover, it is part of Joyce's programme [10] that one should use Lagrangian mean curvature flow to even define the Bridgeland stability condition. For further discussion of the possible relation between Bridgeland stability conditions and (modifications of) the Thomas-Yau conjecture, we refer the reader to [10, 14].
**Definition 2.10** (Stability).: Let \((L,\theta)\) be a compact graded Lagrangian in \((X,\omega,\Omega)\). Then, \(L\) is _unstable_ if its Hamiltonian isotopy class can be decomposed as a graded connect sum \(L_{1}\#L_{2}\), where \(L_{1},L_{2}\) are compact graded Lagrangians with variations of their gradings less than \(2\pi\) and
\[\arg\int_{L_{1}}\Omega\geq\arg\int_{L_{2}}\Omega.\]
Moreover, if strict inequality can be achieved \((L,\theta)\) is said to be _strictly unstable_. If only equality occurs, then \((L,\theta)\) is called _semi-stable_.
Finally, the compact graded Lagrangian \((L,\theta)\) is called _stable_ if it is not unstable.
**Definition 2.11** (Flow stability).: Let \((L,\theta)\) be a compact, almost calibrated Lagrangian in \((X,\omega,\Omega)\) satisfying \(\arg\int_{L}\Omega=0\).2 Then \((L,\theta)\) is _flow stable_ if for any possible decomposition of the Hamiltonian isotopy class of \((L,\theta)\) as a graded Lagrangian connect sum \(L_{1}\#L_{2}\) for \(L_{1}\) and \(L_{2}\) compact and almost calibrated, we have
Footnote 2: Recall by Remark 2.7 that there is essentially no loss of generality here.
* \(\left[\arg\int_{L_{1}}\Omega,\arg\int_{L_{2}}\Omega\right]\nsubseteq(\inf_{L} \theta,\sup_{L}\theta)\), or
* \(\operatorname{Area}(L)<|\int_{L_{1}}\Omega|+|\int_{L_{2}}\Omega|\).
Note that the condition (b) implies that \(\operatorname{Area}(L)<\operatorname{Area}(L_{1})+\operatorname{Area}(L_{2})\). We say that \((L,\theta)\) is _flow unstable_ if it is not flow stable (i.e. (a) and (b) are both violated), and _strictly flow unstable_ if \(\arg\int_{L_{1}}\Omega<\arg\int_{L_{2}}\Omega\) in (a) or the reverse of the inequality in (b) holds strictly. If \((L,\theta)\) is not strictly flow unstable we say it is _flow semi-stable_.
Recall the notation from Remark 2.9, where \(L=\pi^{-1}(\gamma)\) is a compact, almost calibrated, embedded, circle-invariant Lagrangian in \(X\) with projection \(\pi:X\to\mathbb{R}^{3}\). We shall now recast the notions of stability/flow stability in Definitions 2.10 and 2.11 in this circle-invariant setting, i.e. in terms of curves.
Consider any decomposition of the curve \(\gamma\) as \(\gamma_{1}\#\gamma_{2}\), for \(\gamma_{1},\gamma_{2}\) almost calibrated, and denote by \(\overline{\gamma_{1}},\overline{\gamma_{2}}\) the straight-lines with the same endpoints and orientations as \(\gamma_{1}\) and \(\gamma_{2}\) respectively. With no loss of generality suppose that the endpoints of \(\gamma\) are on the \(\mu_{1}\)-axis, oriented so the \(\mu_{1}\)-coordinate increases from one endpoint of \(\gamma\) to the other. Let \(\theta\), \(\overline{\theta}_{1}\), \(\overline{\theta}_{2}\) denote the angles that \(\gamma\), \(\overline{\gamma}_{1}\), \(\overline{\gamma}_{2}\) make with the \(\mu_{1}\)-axis respectively. Then we have the following observation.
**Lemma 2.12**.: _The graded curve \(\gamma\), equivalently \(L=\pi^{-1}(\gamma)\), is stable if for all decompositions \(\gamma=\gamma_{1}\#\gamma_{2}\) into graded curves we have (in the notation above)_
\[\overline{\theta}_{1}<\overline{\theta}_{2}.\]
_Similarly, an almost calibrated curve \(\gamma\) is flow stable if for all decompositions \(\gamma=\gamma_{1}\#\gamma_{2}\) we have (in the notation above)_
1. \([\min\{\overline{\theta}_{1},\overline{\theta}_{2}\},\max\{\overline{\theta}_{1 },\overline{\theta}_{2}\}]\nsubseteq(\inf_{\gamma}\theta,\sup_{\gamma}\theta)\)_, or_
2. \(\operatorname{Length}(\gamma)<\operatorname{Length}(\overline{\gamma_{1}})+ \operatorname{Length}(\overline{\gamma_{2}})\)_._
The notions of flow unstable, strictly flow unstable and flow semi-stable clearly extend from Definition 2.11 to graded curves, following Lemma 2.12.
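As a rough numerical illustration of how the conditions in Lemma 2.12 might be checked, the sketch below tests a single candidate decomposition of a polygonal curve. It is a simplification under several assumptions: the split is taken at a vertex of the discretized curve itself, the grading is approximated by the (unwrapped) angles of the segments, and angle branch choices are glossed over; in the text, the decomposition is of the Hamiltonian isotopy class at an intermediate singular point of \(\phi\).

```python
import numpy as np

def segment_angles(pts):
    # Angles of the segments of the polygonal curve with the mu_1-axis (unwrapped).
    d = np.diff(pts, axis=0)
    return np.unwrap(np.arctan2(d[:, 1], d[:, 0]))

def chord_angle_and_length(pts):
    v = pts[-1] - pts[0]
    return np.arctan2(v[1], v[0]), np.linalg.norm(v)

def flow_stable_for_split(gamma, split_index):
    gamma = np.asarray(gamma, dtype=float)
    theta = segment_angles(gamma)                       # grading along gamma (approximate)
    t1, l1 = chord_angle_and_length(gamma[: split_index + 1])
    t2, l2 = chord_angle_and_length(gamma[split_index:])
    length = np.sum(np.linalg.norm(np.diff(gamma, axis=0), axis=1))
    # Condition (a): [min, max] of the chord angles is NOT inside (inf theta, sup theta).
    cond_a = not (theta.min() < min(t1, t2) and max(t1, t2) < theta.max())
    # Condition (b): the curve is strictly shorter than the sum of the two chords.
    cond_b = length < l1 + l2
    return cond_a or cond_b

# Toy usage: an arc from (0,0) to (1,0) with grading variation close to pi, split midway.
s = np.linspace(0.0, np.pi, 41)
arc = np.column_stack([0.5 - 0.5 * np.cos(s), 0.2 * np.sin(s)])
print(flow_stable_for_split(arc, split_index=20))       # False: this split violates both (a) and (b)
```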
## 3. Lagrangian mean curvature flow and modified curve shortening flow
The goal of this section is to give some preliminary general results regarding the evolution of circle-invariant Lagrangians under mean curvature flow, or equivalently curves under the modified curve shortening flow (3.1) below.
Recall, from [11, Proposition 4.5], that a circle-invariant Lagrangian \(L_{t}=\pi^{-1}(\gamma_{t})\) in \(X\) evolves through the Lagrangian mean curvature flow if and only if the planar curve \(\gamma_{t}\) satisfies
\[\partial_{t}\gamma_{t}=\phi^{-1}\partial_{s}^{2}\gamma_{t}, \tag{3.1}\]
where \(s\) is the Euclidean arc-length parameter and \(\phi\) is the potential on \(X\) as in Proposition 2.1. We will assume for ease of notation that \(\gamma_{t}\subset\{0\}\times\mathbb{R}^{2}\subseteq\mathbb{R}^{3}\) and we identify \(\{0\}\times\mathbb{R}^{2}\cong\mathbb{R}^{2}\cong\mathbb{C}\).
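For intuition only, the following crude numerical sketch evolves a polygonal curve by an explicit Euler discretization of (3.1), with the arc-length second derivative approximated by centred differences and the endpoints pinned at two singular points of \(\phi\). The step size, point count and planar reduction are assumptions of the sketch, not part of the paper's analysis.

```python
import numpy as np

def phi(x, points, m=0.0):
    # Gibbons-Hawking potential restricted to the plane of the curve.
    d = np.linalg.norm(np.asarray(points, dtype=float) - x, axis=1)
    return m + np.sum(1.0 / (2.0 * d))

def flow_step(gamma, points, dt, m=0.0):
    new = gamma.copy()
    for i in range(1, len(gamma) - 1):                  # endpoints stay fixed
        d_prev = np.linalg.norm(gamma[i] - gamma[i - 1])
        d_next = np.linalg.norm(gamma[i + 1] - gamma[i])
        # Second derivative with respect to Euclidean arc length (non-uniform spacing).
        dss = 2.0 * ((gamma[i + 1] - gamma[i]) / d_next
                     - (gamma[i] - gamma[i - 1]) / d_prev) / (d_prev + d_next)
        new[i] = gamma[i] + dt * dss / phi(gamma[i], points, m)
    return new

points = np.array([[0.0, 0.0], [1.0, 0.0]])             # two singularities of phi
s = np.linspace(0.0, np.pi, 60)
gamma = np.column_stack([0.5 - 0.5 * np.cos(s), 0.3 * np.sin(s)])   # arc joining them
for _ in range(2000):
    gamma = flow_step(gamma, points, dt=1e-4)
print(np.max(np.abs(gamma[:, 1])))                      # the bump has visibly flattened
```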
### Evolution of the grading and curvature
Recall that for a graded Lagrangian \((L,\theta)\) the Lagrangian angle \(\theta\) satisfies (2.8). Under Lagrangian mean curvature flow, the grading \(\theta_{t}\) of \(L_{t}\) evolves via
\[\frac{\partial\theta_{t}}{\partial t}=-\Delta\theta_{t}=-d^{*}d\theta_{t}. \tag{3.2}\]
(Here, and throughout, we will use the "geometer's Laplacian" \(\Delta=d^{*}d\), so that the evolution equation (3.2) for \(\theta\) is the heat equation.) Using (3.2) we quickly see that
\[\left(\frac{\partial}{\partial t}+\Delta\right)|d\theta_{t}|^{2}=-2|\nabla d \theta_{t}|^{2}.\]
It turns out that, in our setting, the grading \(\theta\) of a circle-invariant Lagrangian \(L=\pi^{-1}(\gamma)\) can be related to the curvature \(\kappa\) of \(\gamma\), defined by \(\partial_{s}^{2}\gamma=\kappa N\), where \(\{\partial_{s}\gamma,N\}\) is an oriented orthonormal basis of the plane containing \(\gamma\). Viewing \(\theta\) as a function on \(\gamma\), by [11, Lemma 5.4] we have that
\[\kappa=\partial_{s}\theta. \tag{3.3}\]
Hence, the evolution equation (3.2) for the grading \(\theta_{t}\) of \(L_{t}=\pi^{-1}(\gamma_{t})\) should yield an equation for the evolution of the curvature \(\kappa_{t}\) of \(\gamma_{t}\). This is stated in the next result.
**Proposition 3.1**.: _Let \(\gamma_{t}\) be a solution of (3.1) with curvature \(\kappa_{t}\). Then,_
\[\partial_{t}\kappa_{t}=\partial_{s}^{2}(\phi^{-1}\kappa_{t})+\phi^{-1}\kappa_ {t}^{3}, \tag{3.4}\]
_and_
\[\partial_{t}(\phi^{-1}\kappa_{t})\geq\phi^{-1}\partial_{s}^{2}(\phi^{-1} \kappa_{t})+(\kappa_{t}-2\phi)(\phi^{-1}\kappa_{t})^{2}. \tag{3.5}\]
Proof.: Recall that the evolution equation (3.1) can be written as
\[\partial_{t}\gamma_{t}=\phi^{-1}\kappa_{t}N_{t}\]
where, if \(I\) denotes multiplication by \(i\) in \(\mathbb{C}\) and \({}^{\prime}=\partial_{s}\),
\[N_{t}=I\gamma_{t}^{\prime}\quad\text{and}\quad\kappa_{t}=\langle\gamma_{t}^{ \prime\prime},N_{t}\rangle.\]
Now, in order to commute derivatives we use a fixed parameter \(x(s)\) of \(\gamma_{t}\) independent of \(t\). Furthermore, this may be done so that at a fixed space-time point \((t_{0},x_{0})\) we have \(x^{\prime}(s)=1\) and \(x^{\prime\prime}(s)=0\), or equivalently \(|\partial_{x}\gamma_{t}|=1\) and \(\langle\partial_{x}^{2}\gamma_{t},\partial_{x}\gamma_{t}\rangle=0\). Using such a parametrization we have
\[\kappa_{t}=|\partial_{x}\gamma_{t}|^{-2}\langle\partial_{x}^{2}\gamma_{t},N_{t}\rangle\]
and so
\[\partial_{t}\kappa_{t}=|\partial_{x}\gamma_{t}|^{-2}\langle\partial_{t}\partial _{x}^{2}\gamma_{t},N_{t}\rangle+|\partial_{x}\gamma_{t}|^{-2}\langle\partial_{ x}^{2}\gamma_{t},\partial_{t}N_{t}\rangle-2|\partial_{x}\gamma_{t}|^{-2} \langle\partial_{x}\gamma_{t},\partial_{t}\partial_{x}\gamma_{t}\rangle\kappa _{t}. \tag{3.6}\]
Before continuing we make a few elementary observations which will prove useful during the computation. As \(\langle\partial_{x}\gamma_{t},N_{t}\rangle=0\), we find that
\[\langle\partial_{x}^{2}\gamma_{t},N_{t}\rangle+\langle\partial_{x}\gamma_{t}, \partial_{x}N_{t}\rangle=0.\]
Since \(\langle N_{t},\partial_{x}N_{t}\rangle=0\), we deduce that
\[\partial_{x}N_{t}=-\kappa_{t}\partial_{x}\gamma_{t}. \tag{3.7}\]
We now compute each term of (3.6) separately at the point \((t_{0},x_{0})\) in question. For the first term we have, using (3.1) and (3.7),
\[\langle\partial_{t}\partial_{x}^{2}\gamma_{t},N_{t}\rangle =\langle\partial_{x}^{2}(\phi^{-1}\kappa_{t}N_{t}),N_{t}\rangle\] \[=\partial_{x}\langle\partial_{x}(\phi^{-1}\kappa_{t}N_{t}),N_{t} \rangle-\langle\partial_{x}(\phi^{-1}\kappa_{t}N_{t}),\partial_{x}N_{t}\rangle\] \[=\partial_{x}\left(\partial_{x}\langle\phi^{-1}\kappa_{t}N_{t},N_ {t}\rangle-\langle\phi^{-1}\kappa_{t}N_{t},\partial_{x}N_{t}\rangle\right)- \phi^{-1}\kappa_{t}^{3}\] \[=\partial_{x}^{2}(\phi^{-1}\kappa_{t})-\phi^{-1}\kappa_{t}^{3}\]
at \((t_{0},x_{0})\). Since \(\langle N_{t},\partial_{t}N_{t}\rangle=0\) and \(\partial_{x}^{2}\gamma_{t}\) is a multiple of \(N_{t}\) at \((t_{0},x_{0})\), we see that the second term in (3.6) vanishes there:
\[\langle\partial_{x}^{2}\gamma_{t},\partial_{t}N_{t}\rangle=0.\]
For the last term we again use (3.1) and (3.7) and find that at \((t_{0},x_{0})\) we have
\[\langle\partial_{x}\gamma_{t},\partial_{t}\partial_{x}\gamma_{t}\rangle= \langle\partial_{x}\gamma_{t},\partial_{x}(\phi^{-1}\kappa_{t}N_{t})\rangle= \langle\partial_{x}\gamma_{t},\phi^{-1}\kappa_{t}\partial_{x}N_{t}\rangle=- \phi^{-1}\kappa_{t}^{2},\]
so the last term in (3.6) is given by \(2\phi^{-1}\kappa_{t}^{3}\). Inserting all these formulae gives (3.4).
For the estimate (3.5) we observe first that, since
\[\phi=m+\sum_{i=1}^{k}\frac{1}{2r_{i}}\]
where \(r_{i}\) is the Euclidean distance to \(p_{i}\), we have
\[|\nabla\phi|=|\sum_{i=1}^{k}\frac{dr_{i}}{2r_{i}^{2}}|\leq\sum_{i=1}^{k}\frac{1 }{2r_{i}^{2}}\leq 2\phi^{2}\]
because \(|dr_{i}|=1\) for all \(i\) and \(\sum_{i=1}^{k}\frac{1}{2r_{i}^{2}}=2\sum_{i=1}^{k}\left(\frac{1}{2r_{i}}\right)^{2}\leq 2\left(\sum_{i=1}^{k}\frac{1}{2r_{i}}\right)^{2}\leq 2\phi^{2}\). Therefore, using this estimate together with (3.1) and (3.4) we compute
\[\partial_{t}(\phi^{-1}\kappa_{t}) =-\phi^{-2}(\partial_{t}\phi)\kappa_{t}+\phi^{-1}\partial_{t} \kappa_{t}\] \[=-\phi^{-2}(\nabla_{\partial_{t}\gamma}\phi)\kappa_{t}+\phi^{-1}( (\phi^{-1}\kappa_{t})^{\prime\prime}+\phi^{-1}\kappa_{t}^{3})\] \[=-\phi^{-3}(\nabla_{N_{t}}\phi)\kappa_{t}^{2}+\phi^{-1}(\phi^{-1} \kappa_{t})^{\prime\prime}+\phi^{-2}\kappa_{t}^{3}\] \[\geq-2\phi^{-1}\kappa_{t}^{2}+\phi^{-1}(\phi^{-1}\kappa_{t})^{ \prime\prime}+\phi^{-2}\kappa_{t}^{3}.\]
This gives (3.5).
We want to show the relationship between Proposition 3.1 and convexity of curves along the flow (3.1). To do this, we make a definition.
**Definition 3.2**.: Let \(\gamma\subseteq\mathbb{R}^{3}\) be an embedded connected planar curve. We say that \(\gamma\) is _convex_ if it is a subset of a curve bounding a convex region in the plane containing \(\gamma\), which is equivalent to saying that the curvature \(\kappa\) of \(\gamma\) is either everywhere non-negative or everywhere non-positive. We say that \(\gamma\) is _strictly convex_ if \(|\kappa|>0\) at every interior point of \(\gamma\).
**Remark 3.3**.: Let \(\gamma\subseteq\mathbb{R}^{3}\) be an embedded planar arc connecting two singularities of \(\phi\) and meeting no other singularities of \(\phi\). By (3.3), we see that \(\gamma\) is strictly convex if and only if the Lagrangian angle \(\theta\) is a perfect Morse function on \(L=\pi^{-1}(\gamma)\).
**Proposition 3.4**.: _Let \(\gamma_{0}\subseteq\mathbb{R}^{3}\) be an embedded planar arc connecting two singularities \(p_{1},p_{2}\) of \(\phi\) and which meets no other singularities of \(\phi\). Suppose further that \(\gamma_{0}\) is strictly convex._
_Let \(\{\gamma_{t}\}_{t\in[0,T]}\), for \(T>0\), be a smooth solution of (3.1) with fixed endpoints at \(p_{1},p_{2}\) which meets no other singularities of \(\phi\). Then \(\gamma_{t}\) is strictly convex for all \(t\in[0,T]\)._
Proof.: Suppose without loss of generality that \(\kappa_{0}>0\) away from \(p_{1},p_{2}\). Since \(\phi^{-1}>0\) away from \(p_{1},p_{2}\), we see that in the interior of the arc \(\gamma_{t}\) we have \(\kappa_{t}>0\) if and only if \(f_{t}=\phi^{-1}\kappa_{t}>0\).
Notice that we can rewrite (3.5) as (using \({}^{\prime}=\partial_{s}\))
\[\partial_{t}f_{t}\geq\phi^{-1}f_{t}^{\prime\prime}+\kappa_{t}(\phi^{-1}\kappa _{t}-2)f_{t}.\]
Since we are assuming that \(\gamma_{t}\) is a smooth solution to (3.1), we know that there exists some \(c>0\) such that
\[\kappa_{t}(\phi^{-1}\kappa_{t}-2)\geq-c\quad\text{for all $t\in[0,T]$}.\]
Therefore, on the region of space-time where \(f_{t}\geq 0\) (which includes \(t=0\) by assumption), we have
\[\partial_{t}f_{t}\geq\phi^{-1}f_{t}^{\prime\prime}-cf_{t}. \tag{3.8}\]
Let \(\varepsilon>0\) and define
\[f_{t}^{\varepsilon}=f_{t}+\varepsilon t.\]
We see from (3.8) that
\[\partial_{t}f_{t}^{\varepsilon} \geq\phi^{-1}(f_{t}^{\varepsilon})^{\prime\prime}-cf_{t}+\varepsilon\] \[=\phi^{-1}(f_{t}^{\varepsilon})^{\prime\prime}-cf_{t}^{ \varepsilon}+\varepsilon(ct+1) \tag{3.9}\] \[>\phi^{-1}(f_{t}^{\varepsilon})^{\prime\prime}-cf_{t}^{\varepsilon}.\]
Notice that \(f_{0}^{\varepsilon}=f_{0}>0\) away from \(p_{1},p_{2}\). Suppose that \((t_{0},x_{0})\) is the first space-time point away from \(p_{1},p_{2}\) where \(f_{t}^{\varepsilon}=0\). Then \(f_{t}^{\varepsilon}>0\) away from \(p_{1},p_{2}\) for all \(t<t_{0}\) and \(f_{t_{0}}^{\varepsilon}\geq 0\) with \(f_{t_{0}}^{\varepsilon}(x_{0})=0\). Therefore, we must have that \(\partial_{t}f_{t}^{\varepsilon}\leq 0\) at \((t_{0},x_{0})\), while \(x_{0}\) is a local minimum of \(f_{t_{0}}^{\varepsilon}\) and so \((f_{t}^{\varepsilon})^{\prime\prime}\geq 0\) at \((t_{0},x_{0})\). However, we see from (3.9) that, at \((t_{0},x_{0})\),
\[0\geq\partial_{t}f_{t}^{\varepsilon}>\phi^{-1}(f_{t}^{\varepsilon})^{\prime \prime}-cf_{t}^{\varepsilon}\geq 0,\]
which is a contradiction.
We deduce that
\[f_{t}^{\varepsilon}=f_{t}+\varepsilon t>0\]
away from \(p_{1},p_{2}\) for all \(t\in[0,T]\) for all \(\varepsilon>0\). Letting \(\varepsilon\) tend to \(0\) gives that \(f_{t}\) and hence \(\kappa_{t}\) is non-negative on \([0,T]\).
Now that we know \(f_{t}\geq 0\) everywhere on \([0,T]\), the inequality (3.8) holds for all \(t\in[0,T]\). This is a parabolic inequality away from \(p_{1},p_{2}\) and so if \(f_{0}>0\) away from \(p_{1},p_{2}\) then by the strong maximum principle we have that \(f_{t}>0\) away from \(p_{1},p_{2}\), which gives the result.
For possible future study we also record the following (easier) convexity result which is immediate from Proposition 3.1 and the strong parabolic maximum principle.
**Proposition 3.5**.: _Let \(\gamma_{0}\subseteq\mathbb{R}^{3}\) be an embedded planar curve meeting no singularities of \(\phi\) so that the curvature \(\kappa_{0}\) of \(\gamma_{0}\) is non-negative. Let \(\{\gamma_{t}\}_{t\in[0,T]}\), for \(T>0\), be a smooth solution of (3.1) which meets no singularities of \(\phi\). Then the curvature \(\kappa_{t}\) of \(\gamma_{t}\) is positive for all \(t\in(0,T]\)._
### Evolution of area of bounding holomorphic disks
We shall now consider a special situation which will be of interest in setting up the analysis of singularities. Suppose we have a connected immersed minimal Lagrangian \(L_{\infty}\) (in particular, it could be the union of two special Lagrangians with different phases intersecting at a point) and a solution \(L_{t}\) to Lagrangian mean curvature flow, with grading \(\theta_{t}\), which intersects \(L_{\infty}\) at two points \(p_{+}\) and \(p_{-}\) for all \(t\). Let \(D\) be the unit disk in \(\mathbb{C}\) and write
\[\partial D\setminus\{1,-1\}=\partial D^{+}\sqcup\partial D^{-}\]
as the disjoint union of two connected components with \(D^{\pm}\) contained in the upper/lower half-plane. Suppose further that \(\sigma_{t}:D\to X\) is a family of holomorphic disks with two marked points \(\pm 1\) such that
\[\gamma_{t}=\sigma_{t}(\partial D^{+})\subseteq L_{t},\quad\gamma_{\infty}= \sigma_{t}(\partial D^{-})\subseteq L_{\infty},\quad\sigma_{t}(\pm 1)=p_{\pm},\]
where \(\gamma_{\infty}\) is independent of \(t\). (We could allow \(p_{\pm}\) and \(\gamma_{\infty}\) to vary inside \(L_{\infty}\) and we will obtain the same answer, but this is not required for our purposes.) The situation of particular interest to us is shown in Figure 3.1, where we call \(\sigma_{t}(D)\) a "holomorphic pacman disk" (for obvious reasons).
We may then compute the evolution of the area of the holomorphic disks \(\sigma_{t}(D)\) as follows.
**Lemma 3.6**.: _Using the notation above, the area of the holomorphic disks \(\sigma_{t}(D)\), which is given by_
\[A(t):=\int_{D}\sigma_{t}^{*}\omega, \tag{3.10}\]
_satisfies_
\[\dot{A}(t)=\theta_{t}(p_{+})-\theta_{t}(p_{-}). \tag{3.11}\]
Proof.: Differentiating (3.10) with respect to time yields (using the fact that \(d\omega=0\) and \(\gamma_{\infty}\) is fixed)
\[\dot{A}=\int_{D}\sigma_{t}^{*}d(\iota_{\partial_{t}\sigma_{t}}\omega)=\int_{ \partial D}\sigma_{t}^{*}(\iota_{\partial_{t}\sigma_{t}}\omega)=\int_{\gamma _{t}}\iota_{\partial_{t}\gamma_{t}}\omega.\]
Recall that \(L_{t}\) is evolving via mean curvature flow
\[\partial_{t}L_{t}=H_{t}=J\nabla\theta_{t}.\]
Therefore, as \(\gamma_{t}\subseteq L_{t}\), the component of \(\partial_{t}\gamma_{t}\) normal to \(L_{t}\) must be \(J\nabla\theta_{t}\). Moreover, we do not see the component of \(\partial_{t}\gamma_{t}\) tangential to \(L_{t}\) in \(\iota_{\partial_{t}\gamma_{t}}\omega\) since \(L_{t}\) is Lagrangian for all \(t\). Therefore,
\[\iota_{\partial_{t}\gamma_{t}}\omega=\omega(J\nabla\theta_{t},\cdot)=-g( \nabla\theta_{t},\cdot)=-d\theta_{t}.\]
By the fundamental theorem of calculus, since we oriented \(\gamma_{t}\) in the anticlockwise direction so that it is compatible with the orientation induced by \(D\), we find that
\[\dot{A}=-\left[\theta_{t}(\gamma_{t}(-1))-\theta_{t}(\gamma_{t}(1))\right]= \theta_{t}(p_{+})-\theta_{t}(p_{-}),\]

as claimed.

Figure 3.1. Evolving holomorphic pacman disk.
**Remark 3.7**.: In the simplest case, we shall be interested in applying this result when \(L_{t}\) is Hamiltonian isotopic through circle-invariant Lagrangians to \(L_{-}\#L_{+}\) for two special Lagrangians \(L_{-}\) and \(L_{+}\) intersecting transversely at a point \(p\) and \(\sigma_{t}(D)\) has boundary components on \(L_{t}\) and \(L_{\infty}=L_{-}\cup L_{+}\), as suggested by Figure 3.1. Notice that only the first of these boundary components needs to be smooth for the computation in the proof of Lemma 3.6.
## 4. Finite time singularities
This section establishes conditions under which finite time singularities of the circle-invariant Lagrangian mean curvature flow develop and proves, in this setting, a conjecture of Joyce on the local structure of such singularities. This is stated as Theorem 4.6.
Along the way, we shall also prove, in Theorem 4.13, that the Lagrangian mean curvature flow starting at a strictly unstable Lagrangian \(L=\pi^{-1}(\gamma)\) with \(\gamma\) convex develops finite time singularities. This will be improved later in the article by dropping the hypothesis that \(\gamma\) be convex.
### Location of singularities
Before we begin we use some key results from [LO] to provide some important tools for our singularity analysis. The first is the following general result.
**Proposition 4.1**.: _If the curve \(\gamma_{t}\subseteq\mathbb{R}^{3}\) solves (3.1) then \(\phi^{-1}\kappa_{t}\) is the projection of the mean curvature of \(L_{t}=\pi^{-1}(\gamma_{t})\) to \(\mathbb{R}^{3}\), and the flow exists as long as the norm of the second fundamental form of \(L_{t}\) is bounded, which is equivalent to requiring that_
\[\phi^{-1/2}|\kappa_{t}|\quad\text{and}\quad\phi^{-1/2}|\nabla^{\perp}_{ \mathbb{R}^{3}}\log\phi|\]
_are bounded on \(\gamma_{t}\), where norms and \(\nabla^{\perp}_{\mathbb{R}^{3}}\) are taken with respect to the Euclidean metric on \(\mathbb{R}^{3}\)._
Proof.: This follows from [LO, Propositions 4.5 and 4.6], though it should be noted that the factor of \(\phi^{-1/2}\) was omitted from the second term there; this does not affect any of the arguments in that reference.
Using Proposition 4.1 we have the following theorem based on [LO, Section 6].
**Proposition 4.2**.: _Let \(p_{1},\dots,p_{k}\in\mathbb{R}^{3}\) for \(k>1\) be the singularities of \(\phi\). Let \(\gamma_{0}\subseteq\mathbb{R}^{3}\) be a compact, almost calibrated, planar arc connecting \(p_{1},p_{2}\) and meeting no other singularities of \(\phi\). Let \(\gamma_{t}\) be the solution of (3.1) starting at \(\gamma_{0}\) with the fixed endpoints \(p_{1},p_{2}\) and suppose that \(\gamma_{t}\) has a finite time singularity at a point \(p\). Then \(p=p_{i}\) for some \(i>2\) and \(\phi^{-1}|\kappa_{t}|\) tends to zero at the singularity while \(\phi^{-1/2}|\nabla^{\perp}_{\mathbb{R}^{3}}\log\phi|\) blows up._
Proof.: Suppose that \(p\neq p_{i}\) for any \(i>2\). By [LO, Lemma 6.6], the almost calibrated assumption ensures that \(|\nabla^{\perp}_{\mathbb{R}^{3}}\log\phi|\) remains bounded as long as \(\gamma_{t}\) never reaches any other singularities of \(\phi\). Hence, by Proposition 4.1, we must have that \(\phi^{-1/2}|\kappa_{t}|\) blows up at \(p\).
However, [LO, Lemmas 6.8, 6.9 and 6.10] can then be used to reach a contradiction (these results do not require the stability assumption, only that \(\phi^{-1/2}|\kappa_{t}|\) blows up). Specifically, Lemma 6.8 states that \(p\) must be either \(p_{1}\) or \(p_{2}\), but then this possibility is ruled out by Lemmas 6.9 and 6.10 which respectively show that in this situation we cannot have \(\phi^{-1}|\kappa_{t}|\to\infty\) at \(p_{1},p_{2}\) or \(\phi^{-1}|\kappa_{t}|\) bounded at \(p_{1},p_{2}\). This finishes the proof that \(p=p_{i}\) for some \(i>2\).
We now see that, near \(p=p_{i}\), we have that \(\log\phi\sim-\log|x-p|\) and so, for points \(\gamma_{t}(s)\) near \(p\),

\[\phi(\gamma_{t}(s))^{-1/2}|\nabla^{\perp}_{\mathbb{R}^{3}}\log\phi(\gamma_{t}(s))|\sim\frac{1}{|\gamma_{t}(s)-p|^{1/2}}\left|\left\langle\frac{\gamma_{t}(s)-p}{|\gamma_{t}(s)-p|},I\gamma_{t}^{\prime}(s)\right\rangle\right|\leq\frac{1}{|\gamma_{t}(s)-p|^{1/2}}\sim\phi(\gamma_{t}(s))^{1/2}, \tag{4.1}\]

with equality if and only if we are at a closest point to \(p\) on \(\gamma_{t}\). Hence \(\phi^{-1/2}|\nabla_{\mathbb{R}^{3}}^{\perp}\log\phi|\) blows up at the finite time singularity at \(p=p_{i}\).
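For the reader's benefit, we record where (4.1) comes from, assuming the standard local model for the Gibbons-Hawking potential near a singular point, namely \(\phi(x)\sim\tfrac{1}{2}|x-p|^{-1}\) up to bounded harmonic terms (this model is an assumption of the sketch below, not an additional hypothesis of the proposition). In this model,

\[\nabla\log\phi(x)\sim-\frac{x-p}{|x-p|^{2}},\qquad|\nabla^{\perp}_{\mathbb{R}^{3}}\log\phi(\gamma_{t}(s))|\sim\frac{1}{|\gamma_{t}(s)-p|}\left|\left\langle\frac{\gamma_{t}(s)-p}{|\gamma_{t}(s)-p|},I\gamma_{t}^{\prime}(s)\right\rangle\right|,\]

since the normal direction of the planar curve \(\gamma_{t}\) is spanned by \(I\gamma_{t}^{\prime}(s)\); multiplying by \(\phi^{-1/2}\sim(2|\gamma_{t}(s)-p|)^{1/2}\) then gives (4.1) up to uniform constants.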
We now show that \(\phi^{-1}|\kappa_{t}|\) tends to zero at the singularity. We begin with the following.
**Lemma 4.3**.: _The quantity \(\phi^{-1}|\kappa|\) is bounded at the singularity._
Proof.: Suppose that we have a sequence of spacetime points \((s_{i},t_{i})\) with \(x_{i}:=\gamma_{t_{i}}(s_{i})\to p\) as \(i\to\infty\) and
\[|\kappa_{t_{i}}(s_{i})||x_{i}-p|\to\infty\quad\text{as $i\to\infty$}. \tag{4.2}\]
We deduce from (4.1) that \(|\kappa_{t_{i}}(s_{i})|\) dominates \(\phi(\gamma_{t_{i}}(s))^{-1/2}|\nabla_{\mathbb{R}^{3}}^{\perp}\log\phi(\gamma_{t_{i}}(s))|\) as \(i\to\infty\) if the ratio
\[\frac{|x_{i}-p|}{|\gamma_{t_{i}}(s)-p|} \tag{4.3}\]
is bounded as \(i\to\infty\).
We can now adapt the argument of [10, Lemma 6.9] to reach a contradiction. If we consider the balls \(B_{i}=B(x_{i},|x_{i}-p|/2)\), then we see that \(p\) does not lie in \(B_{i}\) (and nor does any other singularity of \(\phi\) for \(i\) sufficiently large) and the ratio (4.3) is bounded for points \(\gamma_{t_{i}}(s)\in B_{i}\). Translating \(x_{i}\) to the origin and rescaling by \(|\kappa_{t_{i}}(s_{i})|\) we obtain balls which exhaust \(\mathbb{R}^{3}\) as \(i\to\infty\) by (4.2) and contain the image of the curve \(\gamma_{t_{i}}\), whose curvature is \(1\) at the origin.
Recall that the norm of the second fundamental form of the Lagrangian \(\pi^{-1}(\gamma)\) is controlled by \(\phi^{-1/2}|\kappa|\) (notice the extra factor of \(\phi^{-1/2}\) which tends to zero near \(p\)) and \(\phi^{-1/2}|\nabla_{\mathbb{R}^{3}}^{\perp}\log\phi|\). Since the ratio (4.3) is bounded on \(B_{i}\) as \(i\to\infty\), we deduce that the norm of the second fundamental form of \(\pi^{-1}(\gamma_{t_{i}})\) on \(B_{i}\) is bounded above by \(|\kappa_{t_{i}}(s_{i})||\gamma_{t_{i}}(s_{i})-p|^{1/2}\). Dividing by \(|\kappa_{t_{i}}(s_{i})|\) then gives an upper bound which tends to zero as \(i\to\infty\).
We may then take a subsequential limit to obtain a non-straight curve \(\gamma_{\infty}\) (since the curvature is \(1\) at the origin), which is the projection of a totally geodesic circle-invariant Lagrangian \(L_{\infty}\) in \(\mathbb{R}^{4}\), giving our required contradiction.
Suppose now that \(\phi^{-1}|\kappa|\) does not tend to zero at the singularity. By Lemma 4.3, we know that, after possibly choosing a subsequence and relabeling, there is a sequence \(x_{i}=\gamma_{t_{i}}(s_{i})\) tending to \(p\), maximizing \(\phi^{-1}|\kappa_{t_{i}}|\), and a constant \(\delta>0\) such that
\[|\kappa_{t_{i}}(s_{i})||x_{i}-p|\to\delta>0\quad\text{as $i\to\infty$}. \tag{4.4}\]
Notice that this forces
\[|\kappa_{t_{i}}(s_{i})||x_{i}-p|^{1/2}\to\infty\quad\text{as $i\to\infty$}. \tag{4.5}\]
We then consider the annuli
\[A_{i}=B(x_{i},|x_{i}-p|^{1/2})\setminus\overline{B}(p,|x_{i}-p|/2). \tag{4.6}\]
On these annuli, for large \(i\), we have that \(\phi^{-1/2}|\nabla_{\mathbb{R}^{3}}^{\perp}\log\phi|\) is bounded above by \(|x_{i}-p|^{-1/2}\), up to multiplication by a uniform constant. Hence, if we translate \(x_{i}\) to \(0\) and scale by \(|\kappa_{t_{i}}(s_{i})|\), we see that \(\phi^{-1/2}|\nabla_{\mathbb{R}^{3}}^{\perp}\log\phi|\) is then bounded above by \(|\kappa_{t_{i}}(s_{i})|^{-1}|x_{i}-p|^{-1/2}\), which tends to zero by (4.5). We also see that, by our choice of spacetime points as maximizers of \(\phi^{-1}|\kappa_{t_{i}}|\) and our annuli in (4.6), we can now bound \(\phi^{-1/2}|\kappa|\) for points \(\gamma_{t_{i}}(s)\in A_{i}\) for large \(i\) as follows:
\[\phi^{-1/2}(\gamma_{t_{i}}(s))|\kappa_{t_{i}}(s)| \sim|\gamma_{t_{i}}(s)-p|^{1/2}|\kappa_{t_{i}}(s)|\] \[=\frac{|x_{i}-p|}{|\gamma_{t_{i}}(s)-p|^{1/2}}|\kappa_{t_{i}}(s_{i })|\frac{|\gamma_{t_{i}}(s)-p||\kappa_{t_{i}}(s)|}{|x_{i}-p||\kappa_{t_{i}}(s_{ i})|}\leq\sqrt{2}|x_{i}-p|^{1/2}|\kappa_{t_{i}}(s_{i})|.\]
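To unpack the final inequality in the display above (a sketch, using the identification \(\phi^{-1}(x)\sim 2|x-p|\) valid near \(p\)): the lower bound \(|\gamma_{t_{i}}(s)-p|\geq|x_{i}-p|/2\) coming from the annuli \(A_{i}\) in (4.6) controls the first factor, while the maximizing property of the spacetime points \((s_{i},t_{i})\) bounds the second, namely

\[\frac{|x_{i}-p|}{|\gamma_{t_{i}}(s)-p|^{1/2}}\leq\frac{|x_{i}-p|}{(|x_{i}-p|/2)^{1/2}}=\sqrt{2}\,|x_{i}-p|^{1/2},\qquad\frac{|\gamma_{t_{i}}(s)-p|\,|\kappa_{t_{i}}(s)|}{|x_{i}-p|\,|\kappa_{t_{i}}(s_{i})|}\sim\frac{\phi^{-1}(\gamma_{t_{i}}(s))\,|\kappa_{t_{i}}(s)|}{\phi^{-1}(x_{i})\,|\kappa_{t_{i}}(s_{i})|}\lesssim 1.\]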
Hence, after dividing by \(|\kappa_{t_{i}}(s_{i})|\), we have an upper bound by a constant multiple of \(|x_{i}-p|^{1/2}\), which also tends to zero as \(i\to\infty\).
We observe that the outer radius of the rescaled and translated annuli now tends to infinity after the rescaling by \(|\kappa_{t_{i}}(s_{i})|\) by (4.5), while after passing to a subsequence the inner ball converges to \(\overline{B}=\overline{B}(x,\delta/2)\) for some point \(x\neq 0\) (the subsequential limit of \(|\kappa_{t_{i}}(s_{i})|(p-x_{i})\)) by (4.4). Notice that \(0\) cannot lie in \(\overline{B}\) by construction of \(A_{i}\) in (4.6).
Therefore, after possibly passing to a further subsequence, we again obtain a (possibly disconnected) non-straight limit curve \(\gamma_{\infty}\) (now on \(\mathbb{R}^{3}\setminus\overline{B}\) with a point of curvature \(1\) at the origin) which is the projection of a totally geodesic circle-invariant Lagrangian. This is another contradiction and so \(\phi^{-1}|\kappa|\) does indeed tend to zero as claimed.
We can also deal with the case of some non-compact curves as follows.
**Proposition 4.4**.: _Suppose that \(p_{1},\dots,p_{k}\in\mathbb{R}^{3}\) for \(k\geq 1\) are the singularities of \(\phi\) and let \(\ell_{+},\ell_{-}\) be rays starting at \(p_{1}\) and meeting no other singularities of \(\phi\). Let \(\gamma_{0}\subseteq\mathbb{R}^{3}\) be an almost calibrated planar arc which lies in the same plane as \(\ell_{-}\cup\ell_{+}\) and is asymptotic to \(\ell_{-}\cup\ell_{+}\) at infinity in the sense that, outside a compact set, \(\gamma_{0}\) is a smooth graph of a function \(u\) over \(\ell_{-}\cup\ell_{+}\) so that \(|u|\to 0\) at infinity. Suppose further that \(\gamma_{0}\) meets no singularities of \(\phi\)._
_There is a unique short-time solution \(\gamma_{t}\) of (3.1) starting at \(\gamma_{0}\) which remains asymptotic to \(\ell_{-}\cup\ell_{+}\) at infinity. Moreover, if \(\gamma_{t}\) has a finite time singularity at a point \(p\) then \(p=p_{i}\) for some \(i\), \(\phi^{-1}|\kappa_{t}|\) tends to zero at the singularity but \(\phi^{-1/2}|\nabla^{\perp}_{\mathbb{R}^{3}}\log\phi|\) blows up._
Proof.: Since there are only finitely many singularities of \(\phi\), outside of some compact set, the flow (3.1) is uniformly equivalent to the usual curve shortening flow. Hence, we may apply the theory of the curve shortening flow, including pseudolocality, to deduce that a unique solution \(\gamma_{t}\) to (3.1) exists and remains asymptotic to \(\ell_{-}\cup\ell_{+}\) at infinity.
The proof now proceeds exactly as for Proposition 4.2 because all the analysis of finite time singularities is local and there are no finite time singularities outside of a compact set in \(\mathbb{R}^{3}\).
**Remark 4.5**.: The existence and uniqueness of the flow \(\gamma_{t}\) in Proposition 4.4 can also be deduced by considering the Lagrangian mean curvature flow \(L_{t}=\pi^{-1}(\gamma_{t})\) and using methods from [20].
### Neck pinches
We now turn to the local structure at the finite time singularity of Lagrangian mean curvature flow as considered thus far in this article. We shall use the results in the previous subsection to show that by rescaling up a finite size neighbourhood of the singular point, the Lagrangian mean curvature flow approaches a fixed Lawlor neck. Consequently, the family of rescaled Lawlor necks gives a first order approximation for the evolution of the flow as it develops a finite time singularity. This result, given in the two theorems below, proves Theorem 1.1(a). Recall the notation that \(X\) is a hyperkahler \(4\)-manifold with a circle action as described in Subsection 2.2.
**Theorem 4.6**.: _Let \(L\) be an embedded, almost calibrated, circle-invariant Lagrangian in \(X\) which is either compact or asymptotic at infinity to a pair of planes. Suppose that \(\{L_{t}\}_{t\in[0,T)}\) is an embedded, almost calibrated, circle-invariant solution to Lagrangian mean curvature flow in \(X\) starting at \(L\), that is also either compact or asymptotic to a pair of planes, respectively, which develops a finite time singularity at \(p\in X\) when \(t\to T<\infty\)._
_Then, for any sequence of times \(t_{i}\nearrow T\) as \(i\to\infty\), after passing to a subsequence which we also call \(t_{i}\), there are:_
* _open neighbourhoods_ \(U\) _of_ \(p\) _in_ \(X\) _and_ \(V\) _of_ \(0\) _in_ \(T_{p}X\cong\mathbb{C}^{2}\)_;_
* _a pointed isomorphism_ \(\varphi:U\to V\) _at_ \(p\)_;_
* _a nullsequence_ \(\varepsilon_{i}\searrow 0\)__
_such that \(\varepsilon_{i}^{-1}\varphi(L_{t_{i}}\cap U)\) converges on compact subsets of \(\mathbb{C}^{2}\) to a Lawlor neck \(\hat{L}=\pi_{H}^{-1}(\hat{\gamma})\), where \(\pi_{H}:\mathbb{C}^{2}\to\mathbb{R}^{3}\) is the radially extended Hopf fibration and \(\hat{\gamma}\) is a straight line at distance \(1\) from the origin._
Proof.: By assumption we know that \(L_{t}=\pi^{-1}(\gamma_{t})\) where \(\{\gamma_{t}\}_{t\in[0,T)}\) is a family of planar curves in \(\mathbb{R}^{3}\) which is a solution to (3.1) of the type considered in Proposition 4.2 or 4.4. Hence, if \(\{\gamma_{t}\}_{t\in[0,T)}\) develops a finite time singularity at time \(T\), this must occur at a point \(p=p_{i}\in\mathbb{R}^{3}\) for \(i>2\) and \(\phi^{-1/2}|\nabla_{\mathbb{R}^{3}}^{\perp}\log\phi|\) blows up there whilst \(\phi^{-1}|\kappa_{t}|\) stays bounded. It is also useful to recall the estimate (4.1) for \(\phi^{-1/2}|\nabla_{\mathbb{R}^{3}}^{\perp}\log\phi|\).
Parametrize the curves \(\{\gamma_{t}\}_{t\in[0,T)}\) using their respective arc-length parameters and take a sequence of space-time points \((s_{i},t_{i})\) such that \(t_{i}\nearrow T\) as \(i\to\infty\) and
\[\lambda_{i}:=|\gamma_{t_{i}}(s_{i})-p|^{-1}=\max\{|\gamma_{t}(s)-p|^{-1}\ :\ s \geq 0,\ t\leq t_{i}\}. \tag{4.7}\]
In particular, \(\gamma_{t_{i}}(s_{i})\) is a closest point to \(p\) on \(\gamma_{t_{i}}\). We have that \(\lambda_{i}\nearrow+\infty\) and so we can find \(c>0\) such that the ball \(B_{c}(p)\subseteq\mathbb{R}^{3}\) has the property that \(\gamma_{t_{i}}\cap B_{c}(p)\) is non-empty for all \(i\) sufficiently large and the metric on \(\pi^{-1}(B_{c}(p))\) is close to the Euclidean metric on \(\mathbb{R}^{4}\). The latter property of \(B_{c}(p)\) is equivalent to saying that \(2\phi(x)\sim|x-p|^{-1}\) on \(B_{c}(p)\).
For \(t\in[-\lambda_{i}^{2}t_{i},0)\) and \(i\) sufficiently large we consider the following curves on \(B_{c\lambda_{i}}(0)\subseteq\mathbb{R}^{3}\):
\[\hat{\gamma}_{t}^{i}:=\lambda_{i}(\gamma_{t_{i}+\lambda_{i}^{-2}t}-p). \tag{4.8}\]
We denote the corresponding Lagrangians in \(\mathbb{C}^{2}\) by \(\hat{L}_{t}^{i}=\pi_{H}^{-1}(\hat{\gamma_{t}}^{i})\). Notice that we are blowing up balls centered at \(p\) rather than balls centered at \(\gamma_{t_{i}}(s_{i})\).
Observe that the curvature \(\hat{\kappa}_{t}^{i}\) of \(\hat{\gamma}_{t}^{i}\) satisfies
\[|\hat{\kappa}_{t}^{i}|=\lambda_{i}^{-1}|\kappa_{t_{i}+\lambda_{i}^{-2}t}|\sim \frac{|\gamma_{t_{i}}(s_{i})-p|}{|\gamma_{t_{i}+\lambda_{i}^{-2}t}-p|}\phi( \gamma_{t_{i}+\lambda_{i}^{-2}t})^{-1}|\kappa_{t_{i}+\lambda_{i}^{-2}t}| \tag{4.9}\]
by definition of \(\lambda_{i}\) in (4.7) and our assumptions about the choice of ball \(B_{c}(p)\). By choice of \(\gamma_{t_{i}}(s_{i})\) in (4.7) we see that the quotient on the right-hand side of (4.9) is bounded above by \(1\), since \(t<0\). By Propositions 4.2 and 4.4, we deduce from (4.9) that for all \(i\) sufficiently large we have that \(|\hat{\kappa}_{t}^{i}|\) is bounded above by a uniform constant and tends to \(0\) as \(i\to\infty\).
As a result, after passing to a subsequence, we may extract a smooth limit \(\hat{\gamma}_{t}^{\infty}\) of the sequence \(\hat{\gamma}_{t}^{i}\) of (4.8) which must be a smooth embedded curve with curvature \(0\) and so is a single straight line \(\hat{\gamma}\). Furthermore, under the translation and scaling of \(B_{c}(p)\), we see that \(p\) gets mapped to \(0\) and we let \(y_{i}\) be the image of \(\gamma_{t_{i}}(s_{i})\). Then,
\[|y_{i}|=\lambda_{i}|\gamma_{t_{i}}(s_{i})-p|=1\]
and by definition of \((s_{i},t_{i})\) in (4.7) we find that \(y_{i}\) is the closest point to the origin in \(\hat{\gamma}_{t}^{i}\). Therefore, \(\hat{\gamma}\) is a straight line at distance \(1\) from the origin. We deduce that \(\hat{L}=\pi_{H}^{-1}(\hat{\gamma})\) is a Lawlor neck, which is the limit of the sequence \(\hat{L}_{t}^{i}\). The result then follows.
We can now use work from [10] to improve Theorem 4.6 as follows.
**Theorem 4.7**.: _Let \(L\subseteq X\), \(\{L_{t}\}_{t\in[0,T)}\), \(p\in X\) and \(\pi_{H}:\mathbb{C}^{2}\to\mathbb{R}^{3}\) be as in Theorem 4.6. There are_
* _open neighbourhoods_ \(U\) _of_ \(p\) _in_ \(X\) _and_ \(V\) _of_ \(0\) _in_ \(T_{p}X\cong\mathbb{C}^{2}\)_;_
* _a pointed isomorphism_ \(\varphi:U\to V\) _at_ \(p\)_;_
* _a small_ \(\delta>0\) _and a smooth function_ \(\varepsilon:(T-\delta^{2},T)\to(0,\delta)\)_, with_ \(\varepsilon(t)\searrow 0\) _as_ \(t\nearrow T\)_,_
_such that \(\varepsilon(t)^{-1}\varphi(L_{t}\cap U)\) converges on compact subsets of \(\mathbb{C}^{2}\) to a unique Lawlor neck \(\hat{L}=\pi_{H}^{-1}(\hat{\gamma})\), where \(\hat{\gamma}\) is a unique straight line at distance \(1\) from the origin._
Proof.: Theorem 4.6 shows that at least one tangent flow at the singularity at \(p\) is a special Lagrangian union of two transverse planes in \(\mathbb{C}^{2}\), which are the asymptotics of the Lawlor neck \(\hat{L}\) in the statement. Noting that \(L_{t}\) is almost calibrated, and thus zero Maslov, and also exact for all \(t\), this is precisely a finite time singularity of Lagrangian mean curvature flow as studied in [10].
As stated, the results of [10] only apply to Lagrangian mean curvature flow in \(\mathbb{C}^{2}\) or in a compact Calabi-Yau 2-fold. Since our analysis of the flow here takes place solely within a fixed small neighbourhood of \(p\) in \(X\), the analysis from [10] carries over to this setting.
In particular, [10, Theorem 8.2] shows that the tangent flow at \(p\) is in fact unique. Then [10, Theorem 8.3] shows that for all times \(t\) near \(T\) we can find the scalings \(\varepsilon(t)\) so that the rescaled flow \(\varepsilon(t)^{-1}\varphi(L_{t}\cap U)\) is a small \(C^{1}\) graph over a unique Lawlor neck of a given scale. This Lawlor neck must therefore be \(\hat{L}\) given in Theorem 4.6.
The only difference between the statements in [10, Theorem 8.3] and the one claimed is that we work with the fixed ball centred at \(p\), rather than working with balls with different centres. This can be achieved since it can be done for one sequence of times \(t_{i}\nearrow T\) by Theorem 4.6.
**Remark 4.8**.: Theorem 4.7 shows that a unique Lawlor neck \(\hat{L}\) models \(\{L_{t}\}_{t\in[0,T)}\) in a fixed size neighbourhood of \(p\) after rescaling by \(\varepsilon(t)^{-1}\). The neighbourhood in question is \(\pi^{-1}(B_{c}(p))\), where \(c>0\) is fixed and \(\pi:X\to\mathbb{R}^{3}\) is the hyperkahler moment map, which is locally modelled on \(\pi_{H}\) near \(p\). Alternatively, we can view \(\varphi^{-1}(\varepsilon(t)\hat{L}\cap V)\) as modelling \(L_{t}\) on \(U\), i.e. the scaled down Lawlor necks give an approximation to the flow, once we identify a neighbourhood of \(0\) in \(\mathbb{C}^{2}\) with a neighbourhood of \(p\) in \(X\).
### Finite time singularities and the area of pacman disks
In this subsection we show that strictly unstable Lagrangians develop finite time singularities under Lagrangian mean curvature flow. The main part of the argument uses the maximum principle and a barrier construction. To formulate the barriers, it is useful to introduce the following concepts.
**Definition 4.9**.: Let \(p_{-},p_{+},p\) be singularities of \(\phi\) lying in a plane and consider a configuration of two oriented embedded planar curves \(\gamma_{-}\), \(\gamma_{+}\) respectively starting and ending at \(p_{-}\), \(p\) and \(p\), \(p_{+}\), such that \(\gamma_{\pm}\) are unions of straight lines connecting singularities of \(\phi\). Let \(\ell\) denote the oriented straight line from \(p_{-}\) to \(p_{+}\). If we have an oriented embedded curve \(\gamma\) starting at \(p_{-}\) and ending at \(p_{+}\) so that the interior of the region bounded by \(\gamma\cup\gamma_{-}\cup\gamma_{+}\) is connected and \(p\) lies in the interior of the region bounded by \(\gamma\cup\ell\), we shall call the triple \((\gamma_{-},\gamma_{+},\gamma)\) a _triad with vertices_\((p_{-},p_{+},p)\): see Figure 4.1.
Given such a triad, we let \(\ell_{-},\ell_{+}\) denote the oriented straight lines from \(p_{-}\) to \(p\) and \(p\) to \(p_{+}\) respectively and let \(\theta_{\pm}\) denote the angles made by \(\ell_{\pm}\) with \(\ell\). In Figure 4.1 we have that \(\ell_{\pm}=\gamma_{\pm}\).
**Definition 4.10**.: Let \((\gamma_{-},\gamma_{+},\gamma)\) be a triad with vertices \((p_{-},p_{+},p)\) as in Definition 4.9. The interior of the planar region enclosed by \(\gamma\) and \(\gamma_{\infty}=\gamma_{-}\cup\gamma_{+}\) will be called a _pacman disk_ associated with the triad: see Figure 4.1.
We shall now show how the area of a pacman disk varies when the curve \(\gamma\) in the triad evolves via the flow (3.1), utilizing Lemma 3.6.
Figure 4.1. The triad and its associated pacman disk.
**Proposition 4.11**.: _Let \((\gamma_{-},\gamma_{+},\gamma_{0})\) be a triad with vertices \((p_{-},p_{+},p)\) and let \(\gamma_{t}\) evolve via the flow (3.1). Let \(\theta_{\pm}\) be as in Definition 4.9._
_Then \((\gamma_{-},\gamma_{+},\gamma_{t})\) remains a triad with vertices \((p_{-},p_{+},p)\) as long as the flow does not meet another singularity of \(\phi\). If \(A(t)\) is the area of the pacman disk associated with this triad as in Definition 4.10, we have_
\[\dot{A}\leq\theta_{+}-\theta_{-}. \tag{4.10}\]
_In particular, if \(\theta_{+}<\theta_{-}\) then \(A(t)\leq A(0)-(\theta_{-}-\theta_{+})t\) and so \(\gamma_{t}\) meets \(p\) in finite time \(T\leq A(0)/(\theta_{-}-\theta_{+})\), if it does not reach some other singularity of \(\phi\) before \(t=T\)._
Proof.: We first observe that the conditions on \((\gamma_{-},\gamma_{+},\gamma_{0})\) being a triad mean that \(\gamma_{0}\) does not intersect \(\gamma_{\infty}=\gamma_{-}\cup\gamma_{+}\) except at \(p_{-},p_{+}\). Since \(\gamma_{\infty}\) is a union of straight lines, it remains stationary along the flow (3.1). Hence, by the maximum principle, \(\gamma_{\infty}\) acts as a barrier along the flow, except at any points along \(\gamma_{\infty}\) which are singularities of \(\phi\). Thus, under the assumption that the flow does not meet a singularity of \(\phi\), \((\gamma_{-},\gamma_{+},\gamma_{t})\) remains a triad with vertices \((p_{-},p_{+},p)\). Note that a singularity cannot develop at \(p_{-},p_{+}\) by the work in [10, SS6], where only the graded assumption (which is possible to make since \(\gamma_{t}\) is embedded) is used to rule out singularities at the endpoints.
Since the pacman disks associated with the triads \((\gamma_{-},\gamma_{+},\gamma_{t})\) must lie in a fixed plane for all time, we may assume without loss of generality they lie in the \((\mu_{2},\mu_{3})\)-plane. We may then lift these pacman disks to holomorphic pacman disks in \(X\) as in SS3.2, with boundary on curves contained in \(L_{t}=\pi^{-1}(\gamma_{t})\) and \(L_{\infty}=\pi^{-1}(\gamma_{\infty})\) that project to \(\gamma_{t}\) and \(\gamma_{\infty}\) under \(\pi\). Since \(\gamma_{\infty}\) is a connected union of straight line segments, \(L_{\infty}\) is a connected immersed minimal Lagrangian, and \(L_{t}\) satisfies Lagrangian mean curvature flow as \(\gamma_{t}\) satisfies (3.1).
We may therefore apply Lemma 3.6 to give that
\[\dot{A}(t)=\theta_{t}(p_{+})-\theta_{t}(p_{-}). \tag{4.11}\]
Since \(\gamma_{\infty}\) is a barrier for the flow \(\gamma_{t}\), as we argued above, the lines \(\ell_{\pm}\) are parts of this barrier. Therefore, there is a constant \(c\in\mathbb{R}\) (which is a multiple of \(\pi\)) such that \(\theta_{t}(p_{+})\leq\theta_{+}+c\) and \(\theta_{t}(p_{-})\geq\theta_{-}+c\), since this is true initially (i.e. for \(t=0\)) by the conditions on the triple \((\gamma_{-},\gamma_{+},\gamma_{0})\) being a triad with vertices \((p_{-},p_{+},p)\) as in Definition 4.9. The inequality (4.10) then follows from (4.11). The final result is then an easy consequence of (4.10).
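As a toy illustration of the final statement of Proposition 4.11 (with invented numbers, purely to fix ideas): if the initial pacman disk has area \(A(0)=1\) and the angle defect is \(\theta_{-}-\theta_{+}=\pi/4\), then (4.10) forces

\[A(t)\leq 1-\frac{\pi}{4}t,\qquad\text{so}\qquad T\leq\frac{A(0)}{\theta_{-}-\theta_{+}}=\frac{4}{\pi},\]

provided the flow does not reach another singularity of \(\phi\) first.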
A consequence of Proposition 4.11 is that we may now prove most of part (b) of Theorem 1.1, which also constitutes a solution to a version of Problem 3.12(a) in [15].
**Theorem 4.12**.: _Let \(X\neq\mathbb{R}^{3}\times\mathbb{S}^{1}\) be an ALE or ALF hyperkahler \(4\)-manifold constructed via the Gibbons-Hawking ansatz. Then, there is an almost calibrated, embedded, circle-invariant Lagrangian \(L_{0}\subset X\), diffeomorphic to \(\mathbb{S}^{1}\times\mathbb{R}\), such that the Lagrangian mean curvature flow starting at \(L_{0}\) develops a finite time singularity as in Theorem 4.6._
Proof.: If \(X\neq\mathbb{R}^{3}\times\mathbb{S}^{1}\) then \(\phi\) has at least one singularity which, with no loss of generality, we suppose to be located at the origin in \(\mathbb{R}^{3}\). Then, we define two infinite non-collinear planar rays \(\gamma_{-},\gamma_{+}\) emanating from \(0\) which meet no other singularities of \(\phi\). We orient these rays so that \(\gamma_{-}\) is directed towards \(0\) and \(\gamma_{+}\) is oriented towards its noncompact end. Notice that both \(\pi^{-1}(\gamma_{\pm})\) are special Lagrangian \(\mathbb{R}^{2}\)'s in \(X\). Then, we set \(\gamma_{0}\subset\mathbb{R}^{3}\) to be an infinite planar curve asymptotic to \(\gamma_{-}\) and \(\gamma_{+}\) at its two ends such that: it goes round \(0\) to connect its two ends through the side of \(0\) that makes an angle \(\alpha>\pi\); and the area, \(A\), of the infinite region enclosed by \(\gamma_{0}\cup\gamma_{-}\cup\gamma_{+}\) is finite. This may be thought of as the triad depicted in Figure 4.1 with the points \(p_{\pm}\) sent to infinity. Then, the same argument as in Proposition 4.11 shows that
\[\dot{A}\leq\theta_{+}-\theta_{-},\]
where \(\theta_{\pm}\) are the angles made by \(\gamma_{\pm}\) with a fixed line in the plane. We may always choose these rays so that \(\theta_{-}>\theta_{+}\). Then, the Lagrangian mean curvature flow starting at \(L_{0}=\pi^{-1}(\gamma_{0})\) must reach \(0\) in a finite time less than \(A(0)/(\theta_{-}-\theta_{+})\) and so must develop a finite time singularity at least by that time. Note that \(L_{0}\) is topologically \(\mathbb{S}^{1}\times\mathbb{R}\) by construction.
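One concrete way (among many; a sketch only, and not necessarily the choice intended in the proof) to arrange the finite-area condition above is to take \(\gamma_{0}\), outside a large ball \(B_{R}(0)\), to be the graph over \(\gamma_{-}\cup\gamma_{+}\) of the function \(u(r)=e^{-r}\) in the arc-length parameter \(r\) along each ray; the contribution of the two ends to the enclosed area is then at most

\[2\int_{R}^{\infty}e^{-r}\,dr=2e^{-R}<\infty,\]

and one then joins the two graphical ends by a compact arc going around \(0\) on the prescribed side, checking that the resulting curve can be made smooth, embedded and almost calibrated.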
### Unstable strictly convex curves
Proposition 4.11 has an important consequence: namely that circle-invariant strictly unstable Lagrangians develop finite time singularities. Here we shall not yet prove this result in its full generality but a simplified version for a certain class of strictly unstable circle-invariant Lagrangians \(L=\pi^{-1}(\gamma)\) for which \(\gamma\) is strictly convex. We shall see that, as a consequence, we can complete the proof of Theorem 1.1(b). Recall the notation that \(X\) is a hyperkahler 4-manifold with a circle action as described in Subsection 2.2.
**Theorem 4.13**.: _Let \(L=\pi^{-1}(\gamma)\) be a compact, embedded, almost calibrated, circle-invariant Lagrangian in \(X\). If \(\gamma\) is strictly convex and strictly unstable or flow unstable, then the Lagrangian mean curvature flow \(L_{t}=\pi^{-1}(\gamma_{t})\) starting at \(L_{0}=L\) attains a finite time singularity at \(\pi^{-1}(p)\) for some singularity \(p\) of \(\phi\) in the region bounded by \(\gamma\) and the straight line connecting its endpoints._
Proof.: From the assumptions on \(L\) and \(\gamma\), we know that there are singularities \(p_{-},p_{+}\) of \(\phi\) so that \(\gamma\) is a planar embedded arc from \(p_{-}\) to \(p_{+}\) and that \(\gamma\) meets no other singularities of \(\phi\). If \(\gamma\) is strictly convex then, using the notation of Definition 4.9, the region \(\Delta\) bounded by \(\gamma\cup\ell\) is convex.
As \(\gamma\) is strictly unstable or flow unstable, there must be singularities of \(\phi\) in the interior of \(\Delta\) which are not equal to \(p_{-},p_{+}\); let \(\Delta_{\infty}\subseteq\Delta\) be the convex hull of the union of these singularities with \(p_{-},p_{+}\). Then \(\Delta_{\infty}\) is a convex polygon with boundary given by a union of straight lines connecting singularities of \(\phi\), with one side given by \(\ell\).
Let \(p\) be any singularity of \(\phi\) on the boundary of \(\Delta_{\infty}\) not equal to \(p_{-},p_{+}\) and let \(\gamma_{-}\) be the oriented union of the sides of \(\Delta_{\infty}\) connecting \(p_{-}\) to \(p\), and let \(\gamma_{+}\) be the oriented union of the sides of \(\Delta_{\infty}\) connecting \(p\) to \(p_{+}\). By construction \((\gamma_{-},\gamma_{+},\gamma)\) is a triad with vertices \((p_{-},p_{+},p)\) and since \(\Delta_{\infty}\) is convex we know, in the notation of Definition 4.9, that \(\theta_{+}<\theta_{-}\).
Proposition 3.4 states that the solution \(\gamma_{t}\) to (3.1) remains strictly convex, as well as almost calibrated and embedded, as long as the flow does not meet any other singularities of \(\phi\). Therefore, the flow is pointing into the region \(\Delta\) for all \(t\) for which the flow exists smoothly. We deduce that \(\gamma_{t}\) lies in \(\Delta\) and \((\gamma_{-},\gamma_{+},\gamma_{t})\) is a triad with vertices \((p_{-},p_{+},p)\) for all \(t\) before \(\gamma_{t}\) reaches a singularity of \(\phi\) by Proposition 4.11.
Proposition 4.11 then implies that \(\gamma_{t}\) reaches \(p\) in finite time unless it reaches some other singularity of \(\phi\), which must be in \(\Delta\), before then. Since there are only a finite number of singularities of \(\phi\) on the boundary of \(\Delta_{\infty}\), there must be a first finite time \(T\) so that \(\gamma_{t}\) reaches a singularity (which we now call \(p\)) of \(\phi\) not equal to \(p_{\pm}\) lying on \(\partial\Delta_{\infty}\).
Since \(L_{T}\) would be given topologically by the union of at least two spheres meeting at a point, whereas \(L_{t}\) is a single sphere for \(t<T\), the flow must have a singularity at \(p\) at time \(t=T\).
An immediate corollary of Theorem 4.13 is the following result, which guarantees the existence of a compact Lagrangian whose Lagrangian mean curvature flow develops a neck-pinch singularity, and completes the proof of Theorem 1.1(b).
**Theorem 4.14**.: _Let \(X\) be an ALE or ALF hyperkahler 4-manifold constructed via the Gibbons-Hawking ansatz so that \(\phi\) has at least three singular points that lie in a plane but which are not collinear. Then, there is a compact, almost calibrated, embedded, circle-invariant Lagrangian \(L_{0}\subset X\), diffeomorphic to \(\mathbb{S}^{2}\), such that the Lagrangian mean curvature flow starting at \(L_{0}\) develops a finite time singularity as in Theorem 4.6._
Proof.: Since \(\phi\) has, by assumption, three coplanar but not collinear singularities, we may label them \(p_{-},p_{+},p\) and arrange them in a triangle as in Figure 4.1. Let \(\ell\) be the straight line from \(p_{-}\) to \(p_{+}\). We may then clearly choose a strictly convex curve \(\gamma\) connecting \(p_{-}\) and \(p_{+}\), which lies in the same plane as \(p_{-},p_{+},p\), meets no other singularities of \(\phi\) and so that the region bounded by \(\gamma\cup\ell\) contains \(p\). Since \(p_{-},p_{+},p\) are not collinear, we deduce that \(\gamma\) is strictly unstable, and so we may apply Theorem 4.13. Note that in this case, by the construction of \(\gamma\), we have that \(\pi^{-1}(\gamma)\) is a \(2\)-sphere.
## 5. Flow through singularities and long-time behaviour
In this section we shall prove that the Lagrangian mean curvature flow starting at an embedded, almost calibrated, circle invariant Lagrangian exists and can be continued through its finite time singularities, giving rise to a flow that exists for all time. Furthermore, we prove that at infinite time, such a Lagrangian mean curvature flow with surgeries converges to a union of special Lagrangian submanifolds. The main result is stated as Theorem 5.9.
### Flow of piecewise smooth curves
In this subsection we shall prove that for the modified curve shortening flow (3.1) for suitable planar curves \(\gamma_{t}\subset P\cong\mathbb{R}^{2}\subset\mathbb{R}^{3}\), we can flow through each finite time singularity which occurs. This flow of curves through singularities then gives rise to a Lagrangian mean curvature flow \(L_{t}:=\pi^{-1}(\gamma_{t})\) in the total space of \(\pi:X\to\mathbb{R}^{3}\).
To state the properties of this flow with surgeries we make the following definition.
**Definition 5.1**.: Let \(I\subset\mathbb{R}\) be an interval. A continuous family of piecewise smooth curves \(\{\gamma_{t}\}_{t\in I}\) in \(\mathbb{R}^{3}\) is said to be a solution of the flow (3.1) if the following hold.
1. For all \(t\in I\), any singular points of \(\gamma_{t}\) are singularities of \(\phi\).
2. Away from a finite set of times in \(I\), each smooth component \(\gamma_{t}^{(i)}\) of \(\gamma_{t}\) satisfies (3.1), i.e. \[\partial_{t}\gamma_{t}^{(i)}=\phi^{-1}(\gamma_{t}^{(i)})^{\prime\prime}.\]
Note that a piecewise smooth, embedded, planar curve \(\gamma\) has a grading \(\theta\), which is a lift of the angle that its tangent vector makes with a fixed line, and that is only defined where \(\gamma^{\prime}\) is well-defined. We can then clearly extend the definition of almost calibrated to such curves.
The main result of this subsection is the following.
**Proposition 5.2**.: _Let \(\gamma_{0}\) be an almost calibrated planar curve in some \(2\)-plane \(P\subset\mathbb{R}^{3}\) and let \(S\) be the singularities of \(\phi\) in \(P\). Suppose that \(\gamma_{0}\) is either an embedded arc with endpoints in \(S\) or asymptotic to a pair of distinct lines so that, in both cases, the interior of \(\gamma_{0}\) does not meet \(S\). Then, there is a continuous family of piecewise smooth almost calibrated curves \(\{\gamma_{t}\}_{t\geq 0}\subset P\), which is a solution of the flow (3.1). This family of curves is real analytic in space-time except for a finite set of spatial points which lie in \(S\). Furthermore, there is a time \(T\) such that for \(t\geq T\) the number of smooth components of \(\gamma_{t}\) stays constant, so the flow has no further singular times._
**Remark 5.3**.: An example of a flow through singularities produced by Proposition 5.2, with three finite singular times, is shown in Figure 5.1.
Proof.: Let \(\gamma_{0}\) be an almost calibrated arc in \(P\) and write \(S=\{p_{1},\dots,p_{k}\}\), where \(p_{1},p_{2}\) are the endpoints of \(\gamma_{0}\). Given a family of curves \(\{\gamma_{t}\}_{t\in[0,T_{1})}\subset P\) starting at \(\gamma_{0}\) and evolving through (3.1), Proposition 4.2 shows that the flow exists as long as \(\gamma_{t}\) does not meet \(p_{i}\) for \(i>2\).
Thus, since the flow (3.1) has short-time existence, the first finite time singularity \(T_{1}<\infty\) (if it exists) is characterized as the least \(T_{1}>0\) such that
\[\limsup_{t\nearrow T_{1}}\operatorname{dist}(\gamma_{t},S\backslash\{p_{1},p _{2}\})=0.\]
Let \(S^{T_{1}}\) be the subset of \(S\) contained in the limit set of \(\gamma_{t}\) as \(t\nearrow T_{1}\), i.e.
\[S^{T_{1}}=S\cap\bigcap_{t\in[0,T_{1})}\overline{\bigcup_{t^{\prime}\in[t,T_{1 })}\gamma_{t^{\prime}}},\]
which we write as \(S^{T_{1}}=\{p_{1}^{T_{1}},\dots,p_{k_{1}+1}^{T_{1}}\}\subset S\). Notice that \(k_{1}+1>2=|\{p_{1},p_{2}\}|\), since there is a singularity of the original flow \(\gamma_{t}\) at \(p_{i}\) for \(i>2\). Since \(p_{1},p_{2}\in S^{T_{1}}\), with no loss of generality we may set \(p_{1}^{T_{1}}=p_{1}\) and \(p_{k_{1}+1}^{T_{1}}=p_{2}\) and note that \(|S\setminus S^{T_{1}}|=k-k_{1}-1<k-2\). Note also that, by Proposition 4.2, the speed of the flow (3.1) for \(\gamma_{t}\) tends to zero as \(t\nearrow T_{1}\) at the points of \(S^{T_{1}}\).
We now describe the procedure for flowing past the singular time \(T_{1}\).
1. We let \(\gamma_{T_{1}}\) be the limit set of the curves \(\gamma_{t}\) as \(t\nearrow T_{1}\), which is a piecewise smooth curve with smooth components \(\gamma_{T_{1}}^{(i)}\) with endpoints \(p_{i}^{T_{1}},p_{i+1}^{T_{1}}\in S^{T_{1}}\) for \(i=1,\ldots,k_{1}\), i.e. \[\gamma_{T_{1}}:=\bigcap_{t\in[0,T_{1})}\overline{\bigcup_{t^{\prime}\in[t,T_ {1})}\gamma_{t^{\prime}}}=\gamma_{T_{1}}^{(1)}\cup\gamma_{T_{1}}^{(2)}\cup \ldots\cup\gamma_{T_{1}}^{(k_{1})}.\] Note that \(\gamma_{T_{1}}\) is \(C^{1}\) near each point in \(S^{T_{1}}\) by Theorem 1.1 and [13, Theorem 1.2].
2. Note that each smooth arc \(\gamma_{T_{1}}^{(i)}\) can be graded so that, away from the points of \(S^{T_{1}}\setminus\{p_{1},p_{2}\}\), a grading \(\theta(T_{1})\) of \(\gamma_{T_{1}}\) is defined so that \(\theta(t)\to\theta(T_{1})\) as \(t\nearrow T_{1}\). Given that \(\gamma_{0}\) is almost calibrated and the grading \(\theta(t)\) of \(\gamma_{t}\) evolves through the heat equation (3.2) along \(\gamma_{t}\) for \(t<T_{1}\), there is a \(\delta>0\) such that \[\sup_{\gamma_{t}}\theta(t)-\inf_{\gamma_{t}}\theta(t)\leq\sup_{\gamma_{0}}\theta(0)-\inf_{\gamma_{0}}\theta(0)\leq\pi-\delta.\] Therefore, by continuity, the grading \(\theta(T_{1})\) of \(\gamma_{T_{1}}\) satisfies the same inequality above. Hence, \(\gamma_{T_{1}}\) is almost calibrated, as are all of the smooth arcs \(\gamma_{T_{1}}^{(i)}\) for \(i=1,\ldots,k_{1}\).
3. Let \(i\in\{1,\ldots,k_{1}\}\). Since \(\phi^{-1}(\gamma_{T_{1}}^{(i)})^{\prime\prime}\) tends to zero at the endpoints \(p_{i}^{T_{1}},p_{i+1}^{T_{1}}\) of \(\gamma_{T_{1}}^{(i)}\), we may restart the flow (3.1) at time \(T_{1}\) with initial condition the almost calibrated arc \(\gamma_{T_{1}}^{(i)}\), fixing the endpoints of the evolving arcs, and it will remain almost calibrated. According to Definition 5.1, this means that the piecewise smooth almost calibrated curves \(\gamma_{t}=\bigcup_{i=1}^{k_{1}}\gamma_{t}^{(i)}\), for \(t>T_{1}\), solve (3.1) with initial condition \(\gamma_{T_{1}}\). Furthermore, as \(\gamma_{t}\) solves a parabolic equation away from the points of \(S^{T_{1}}\), it is real analytic except possibly at those points.
Figure 5.1. Flow with finite time singularities at \(0<T_{1}<T_{2}<T_{3}=T\).
We now proceed by induction, applying the procedure above to each independent flow \(\gamma_{t}^{(i)}\) of smooth arcs with fixed endpoints lying in \(S^{T_{1}}\) until the next finite time singularity \(T_{2}>T_{1}\) (if it exists). Through this procedure, we obtain a continuous family \(\{\gamma_{t}\}_{t\geq 0}\) of piecewise smooth, almost calibrated curves solving (3.1) (in the sense of Definition 5.1) with finite time singularities at times \(T_{1}<T_{2}<\ldots<T_{l}<\ldots\). Note that the grading defining the almost calibrated condition flows through the singularities (cf. [13, Theorem 1.2]) and each element of \(S\) can occur at most once as a finite time singularity, otherwise \(\gamma_{t}\) would contain a loop, which would violate the almost calibrated condition. Therefore, using Proposition 4.2, we see that the subset \(S^{T_{l}}\) of \(S\) contained in the limit set of \(\gamma_{t}\) as \(t\nearrow T_{l}\) satisfies \(|S\setminus S^{T_{l}}|<k-l-1\), so there can be at most \(k-2\) finite time singularities. Hence, for sufficiently large \(t\), the number of smooth components of \(\gamma_{t}\) stays constant, which completes the proof in the case when \(\gamma_{0}\) is an almost calibrated planar embedded arc.
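Schematically, the counting behind the bound on the number of singular times (a restatement of the argument just given, recorded for convenience) is that each singular time \(T_{l}\) adds at least one new point of \(S\setminus\{p_{1},p_{2}\}\) to the limit set and no point of \(S\) can recur, so

\[|S^{T_{l}}|\geq 2+l\qquad\Longrightarrow\qquad\#\{\text{finite singular times}\}\leq|S|-2=k-2.\]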
The case when \(\gamma_{0}\) is instead an almost calibrated planar curve asymptotic to a pair of distinct lines follows from the same argument with minor modifications, using Proposition 4.4 in place of Proposition 4.2.
**Remark 5.4**.: The proof of Proposition 5.2 shows that the maximal number of smooth components of \(\gamma_{t}\) for sufficiently large \(t\) is \(\#S-1\) because \(\gamma_{t}\) cannot have loops.
### Unstable curves
We are now in a position to improve our results on strictly convex, strictly unstable, almost calibrated, compact curves in Theorem 4.13 in two different directions. First, we shall investigate the long-time behaviour for the flow of such unstable curves, and then use this to prove that all almost calibrated unstable curves develop finite time singularities.
**Corollary 5.5**.: _Let \(P\) be a \(2\)-plane in \(\mathbb{R}^{3}\) and let \(S\) be the singularities of \(\phi\) in \(P\). Let \(\gamma_{0}\) be an almost calibrated, strictly convex arc in \(P\) with endpoints \(p_{1},p_{2}\in S\), which otherwise does not meet \(S\). Let \(\ell\) be the straight line connecting \(p_{1},p_{2}\) and let \(\mathcal{R}\) be the open region in \(P\) bounded by \(\gamma_{0}\cup\ell\). Then, there is an ordered set of straight lines \(\{\ell_{1},\ldots,\ell_{k}\}\) with consecutive endpoints such that \(\gamma_{t}\) converges uniformly to their union,_
\[\lim_{t\to+\infty}\gamma_{t}=\ell_{1}\cup\ldots\cup\ell_{k},\]
_and \(\ell\cup\ell_{1}\cup\ldots\cup\ell_{k}\) is the boundary of the convex hull of \(S\cap\mathcal{R}\) in \(P\)._
Proof.: After possibly applying a rotation and translation of \(\mathbb{R}^{3}\) (which each induce isometries of the metric on the ambient hyperkahler \(4\)-manifold \(X\)), we may assume that \(\ell\) lies along the \(x_{1}\)-axis in \(\mathbb{R}^{3}\) and that \(P\) is the plane with \(x_{3}=0\). Without loss of generality, we may assume that \(p_{1}\) has \(x_{1}\) coordinate less than \(p_{2}\) and that \(\gamma\) lies in the region with \(x_{2}\geq 0\).
Let \(\{\gamma_{t}\}_{t\geq 0}\) be the piecewise smooth almost calibrated curves solving (3.1) given by Proposition 5.2 and let \(T\geq 0\) be so that the number of smooth components of \(\gamma_{t}\) stays constant for \(t\geq T\) and there are no further singular times. By Proposition 5.2 and its proof, we may decompose \(\gamma_{t}=\gamma_{t}^{(1)}\cup\ldots\cup\gamma_{t}^{(l)}\) for \(t\geq T\) into smooth components and let \(q_{1},\ldots,q_{l-1}\in S\setminus\{p_{1},p_{2}\}\) be ordered so that their \(x_{1}\)-coordinates are increasing and so that the curves \(\gamma_{t}^{(i)}\) connect \(p_{1}\) to \(q_{1}\), \(q_{1}\) to \(q_{2}\) and so on until the last curve connects \(q_{l-1}\) to \(p_{2}\). Since strict convexity is preserved along the flow for finite time by Proposition 3.4, we find that each of the curves \(\gamma_{t}^{(i)}\) is strictly convex. We then set \(\ell_{i}\) to be the straight line in \(P\) with the same endpoints as \(\gamma_{t}^{(i)}\) for \(i=1,\ldots,l\).
Since the flows \(\gamma_{t}^{(i)}\) are strictly convex and almost calibrated, but have no finite time singularities for \(t\geq T\), we deduce from Theorem 4.13 that \(\gamma_{T}^{(i)}\) must be semi-stable and flow semi-stable. By [11, Corollary 6.11], a flow stable curve will converge along the flow (3.1) to the straight line connecting its endpoints. Therefore, if \(\gamma_{T}^{(i)}\) is flow stable, the flow \(\gamma_{t}^{(i)}\) converges smoothly to \(\ell_{i}\). Hence, if all \(\gamma_{T}^{(i)}\) are flow stable the proof is complete with \(k=l\).
Suppose that there is some \(i\in\{1,\ldots,l\}\) such that \(\gamma_{T}^{(i)}\) is flow semi-stable but not flow stable. The semi-stability of \(\gamma_{T}^{(i)}\) means there must be no singularities of \(\phi\) in the region bounded by \(\gamma_{T}^{(i)}\) and \(\ell_{i}\), but that there must be a singularity of \(\phi\) lying on \(\ell_{i}\). We may then decompose \(\ell_{i}\) into a union of straight lines \(\ell_{i}^{(1)}\cup\ldots\cup\ell_{i}^{(k_{i})}\) with consecutive endpoints connecting singularities of \(\phi\) so that there are no singularities of \(\phi\) in the interiors of each \(\ell_{i}^{(j)}\). Notice that each \(\ell_{i}^{(j)}\) must have the same angle as \(\ell_{i}\). Therefore, since \(\gamma_{t}^{(i)}\) exists smoothly for all \(t\geq T\), the flow can only become singular at singularities of \(\phi\), and the Lagrangian angle evolves through the heat equation (3.2), we see that the Lagrangian angle must converge uniformly to a constant as \(t\to\infty\) (cf. [11, p. 1109-1110]), which is the same angle as \(\ell_{i}\). Hence, \(\gamma_{t}^{(i)}\) converges uniformly to the union of straight lines \(\ell_{i}=\ell_{i}^{(1)}\cup\ldots\cup\ell_{i}^{(k_{i})}\) as \(t\to\infty\), which completes the proof (with \(k=\sum_{i=1}^{l}k_{i}\), setting \(k_{i}=1\) if \(\gamma_{T}^{(i)}\) is flow stable).
**Remark 5.6**.: Figure 5.1 gives an example of the result of Corollary 5.5, showing how an initial curve converges to a union of five straight line segments \(\cup_{j=1}^{5}\ell_{j}\). Notice that \(\ell_{2},\ell_{3}\) in Figure 5.1 have the same angle, which gives an example of the flow semi-stable but not flow stable situation considered at the end of the proof of Corollary 5.5.
**Corollary 5.7**.: _Let \(\gamma_{0}\) be an almost calibrated, strictly unstable, planar arc in \(\mathbb{R}^{3}\) so that its intersection with the singularities of \(\phi\) consists of its endpoints. Then, the flow \(\{\gamma_{t}\}_{t\in[0,+\infty)}\) obtained through Proposition 5.2 attains a finite time singularity._
Proof.: Let \(S\) be singularities of \(\phi\) in the \(2\)-plane \(P\) containing \(\gamma_{0}\). If \(\gamma_{0}\) is strictly unstable, then there is a non-empty subset \(\{q_{1},\ldots,q_{l}\}\subseteq S\) contained in the interior of the region bounded by \(\gamma_{0}\) and the straight line \(\ell\) connecting its endpoints (notice that even if \(\gamma_{0}\) and \(\ell\) intersect, they enclose a possibly disconnected region). Then, we choose an almost calibrated, strictly convex, planar arc \(\hat{\gamma}_{0}\) in \(P\) with the same endpoints as \(\ell\) (and \(\gamma_{0}\)), meeting no other elements of \(S\) and bounding a segment of \(\gamma_{0}\) which, together with \(\ell\), encloses some of the singularities \(\{q_{1},\ldots,q_{l^{\prime}}\}\subseteq\{q_{1},\ldots,q_{l}\}\) of \(\phi\). (Such a curve \(\hat{\gamma}_{0}\) exists: see Figure 5.2 for an example.)
Let \(\{\hat{\gamma}_{t}\}_{t\geq 0}\) and \(\{\gamma_{t}\}_{t\geq 0}\) be the piecewise smooth solutions of the flow given by Proposition 5.2 starting at \(\hat{\gamma}_{0}\) and \(\gamma_{0}\) respectively. The maximum principle implies that \(\hat{\gamma}_{t}\) and \(\gamma_{t}\) do not intersect away from \(S\), hence \(\hat{\gamma}_{t}\) can be taken as a barrier for \(\gamma_{t}\).
Let \(\Delta\) be the convex hull of the elements of \(S\) contained in the closure of the region bounded by \(\hat{\gamma}\cup\ell\). Its boundary \(\partial\Delta\) is given by a union of straight lines \(\ell\cup\ell_{1}\cup\ldots\cup\ell_{k}\) and \(\lim_{t\to+\infty}\hat{\gamma}_{t}=\ell_{1}\cup\ldots\cup\ell_{k}\) by Corollary 5.5. Suppose that \(q_{i}\in\partial\Delta\) for some \(i\in\{1,\ldots,l^{\prime}\}\). Then, \(\hat{\gamma}_{t}\) has a finite time singularity at \(q_{i}\) by Corollary 5.5 and its proof. The maximum principle then also implies that \(\gamma_{t}\) develops a finite time singularity at \(q_{i}\) as required.
Suppose instead that \(\{q_{1},\ldots,q_{l^{\prime}}\}\) is contained in the interior of \(\Delta\). As \(\lim_{t\to+\infty}\hat{\gamma}_{t}=\ell_{1}\cup\ldots\cup\ell_{k}\) and \(\hat{\gamma}_{t}\) is a barrier for \(\gamma_{t}\), we find that for sufficiently large \(T>0\) the curve \(\gamma_{T}\) must pass through the interior of \(\Delta\). Hence, if \(\gamma_{t}\) does not develop a finite time singularity before \(t=T\), we can find another strictly convex, almost calibrated curve \(\hat{\gamma}_{0}^{\prime}\) with the same endpoints as \(\gamma_{t}\), totally contained inside \(\Delta\), and which bounds the segment of \(\gamma_{T}\) itself bounding \(\{q_{1},\ldots,q_{l^{\prime}}\}\) together with \(\ell\). We may then replace \(\hat{\gamma}_{0}\) with \(\hat{\gamma}_{0}^{\prime}\) to produce a new flow \(\hat{\gamma}_{t}^{\prime}\) by Proposition 5.2, which serves as a new barrier for \(\gamma_{t}\). Since \(S\) is finite, this procedure can be applied iteratively until at least one element of \(\{q_{1},\ldots,q_{l^{\prime}}\}\) is contained in the boundary of the convex hull of the singularities enclosed by the latest barrier curve. We are then in the previous situation and so \(\gamma_{t}\) does develop a finite time singularity as claimed.
### Long time behaviour for LMCF through singularities
In this final subsection we put together the results of the previous subsections to prove Theorem 1.4. We begin with a natural definition following Definition 5.1.
**Definition 5.8**.: Let \(L_{0}=\pi^{-1}(\gamma_{0})\) be an embedded, circle-invariant, almost calibrated Lagrangian in the hyperkahler \(4\)-manifold \(X\). Let \(\{\gamma_{t}\}_{t\geq 0}\) be a piecewise smooth solution of the flow equation (3.1) in the sense of Definition 5.1. We say that the continuous family \(\{L_{t}=\pi^{-1}(\gamma_{t})\}_{t\geq 0}\) is a _Lagrangian mean curvature flow through singularities_ starting at \(L_{0}\).
We now prove one of our main results which, together with Theorem 1.1, will account for items (a)-(d) in Theorem 1.4, except for the convergence of currents in (c) of that result.
**Theorem 5.9**.: _Let \(L_{0}=\pi^{-1}(\gamma_{0})\) be a compact, connected, embedded, circle-invariant, almost calibrated Lagrangian in the hyperkahler \(4\)-manifold \(X\)._
* _A compact, connected, circle-invariant, almost calibrated Lagrangian mean curvature flow through singularities \(\{L_{t}\}_{t\geq 0}\) exists for all time and has a finite number of finite time singularities._
* _There is an_ \(A_{k}\) _chain_ \(\{L_{1}^{\infty},\ldots,L_{k}^{\infty}\}\)_, in the sense of Definition_ 1.3_, of circle-invariant, embedded, special Lagrangian spheres such that_ \(\{L_{t}\}_{t\geq 0}\) _uniformly converges to_ \(L_{1}^{\infty}\cup\ldots\cup L_{k}^{\infty}\)_. Moreover, if the grading on_ \(L_{0}\) _is a perfect Morse function, then the phases of these special Lagrangians can be arranged to be non-increasing._
* _The number_ \(k\) _of special Lagrangians in the_ \(A_{k}\) _chain is exactly one if_ \(\gamma_{0}\) _is flow stable and strictly greater than one if_ \(\gamma_{0}\) _is flow unstable._
Proof.: Let \(\gamma_{0}\) be the planar curve in \(\mathbb{R}^{3}\) such that \(L_{0}=\pi^{-1}(\gamma_{0})\) and let \(\ell\) be the straight line connecting its endpoints. By Proposition 5.2, the piecewise smooth flow \(\{\gamma_{t}\}_{t\geq 0}\) exists for all time. Hence, so does the Lagrangian mean curvature flow through singularities \(\{L_{t}\}_{t\geq 0}\), which gives (a). Moreover, there is a finite time \(T\geq 0\) so that the flow \(\gamma_{t}\) (and hence \(L_{t}\)) has no singularities and the number of smooth components of \(\gamma_{t}\) stays constant.
We may then decompose \(\gamma_{t}\) for \(t\geq T\) into smooth components as \(\gamma_{t}=\gamma_{t}^{(1)}\cup\ldots\cup\gamma_{t}^{(l)}\) with the components ordered so that they have consecutive endpoints. For each \(i\in\{1,\ldots,l\}\), \(\gamma_{t}^{(i)}\) is an almost calibrated planar arc only meeting the singularities of \(\phi\) at its endpoints and with no singularity along the flow. Corollary 5.7 implies that \(\gamma_{t}^{(i)}\) is flow semi-stable for all \(i\).
By the same arguments leading to the conclusion of Corollary 5.5 (which one may notice do not use any convexity assumption on \(\gamma_{t}\)) we deduce that there is an ordered set of straight lines \(\{\ell_{1},\ldots,\ell_{k}\}\) with consecutive endpoints which are singularities of \(\phi\) so that
\[\gamma_{t}\to\ell_{1}\cup\ldots\cup\ell_{k}\]
uniformly as \(t\to\infty\). Setting \(L_{i}^{\infty}=\pi^{-1}(\ell_{i})\) for \(i=1,\ldots,k\) gives the first part of (b).
If the Lagrangian angle on \(L_{0}\) is a perfect Morse function, then \(\gamma_{0}\) is strictly convex by (3.3), so Corollary 5.5 applies and \(\ell\cup\ell_{1}\cup\ldots\cup\ell_{k}\) is the boundary of the convex hull of the singularities of \(\phi\) contained in the region enclosed by \(\gamma_{0}\) and \(\ell\). Hence the non-increasing property in (b) can indeed be arranged as claimed.
The fact that \(k=1\) if \(\gamma_{0}\) is flow stable follows from the proof of the circle-invariant Thomas-Yau conjecture in [LO]. The fact that \(k>1\) if \(\gamma_{0}\) is flow unstable is a consequence of Corollary 5.7 if \(\gamma_{0}\) is strictly flow unstable, and otherwise follows from the argument at the end of the proof of Corollary 5.5 (again noticing that the convexity is not used there) since we are then in the flow semi-stable but not flow stable setting.
**Remark 5.10**.: Figure 5.1 gives an example where the grading is a perfect Morse function and the \(A_{k}\) chain (where \(k=5\) in the example, corresponding to the lines \(\{\ell_{1},\ldots,\ell_{5}\}\)) is arranged so that the phases are non-increasing. Notice that the phases corresponding to \(\ell_{2},\ell_{3}\) are equal, which shows that one cannot always ensure that the phases are decreasing.
To complete the proof of Theorem 1.4, it remains to prove the statements about the continuity and convergence as currents, which are consequences of the results we have proven so far.
**Proposition 5.11**.: _In the setting of Theorem 5.9, the family \(\{L_{t}\}_{t\in[0,+\infty)}\) varies continuously as an integral Lagrangian current with the following current convergence as \(t\to\infty\):_
\[\lim_{t\to+\infty}L_{t}=L_{1}^{\infty}+\ldots+L_{k}^{\infty}.\]
Proof.: By Theorem 5.9, there are \(0<T_{1}<\ldots<T_{l}\) such that, for \(t\in[0,+\infty)\backslash\{T_{1},\ldots,T_{l}\}\), \(L_{t}\) is a union of smooth Lagrangians solving Lagrangian mean curvature flow, so the claimed continuity of \(t\mapsto L_{t}\) follows immediately for these times. We are therefore left with proving continuity at the singular times \(T_{i}\). For this, for any compactly supported \(2\)-form \(\alpha\) on \(X\), we show that
\[t\mapsto\int_{L_{t}}\alpha, \tag{5.1}\]
is continuous at \(t=T_{i}\) for \(i=1,\ldots,l\). Recall by the proof of Proposition 5.2 that, for \(\varepsilon>0\) sufficiently small,
\[L_{T_{i}}:=\bigcap_{s\in[T_{i}-\varepsilon,T_{i})}\overline{\bigcup_{s^{ \prime}\in[s,T_{i})}L_{s^{\prime}}}.\]
Hence, using the fact that \(L_{t}=\pi^{-1}(\gamma_{t})\) and \(L_{T_{i}-\varepsilon}\cup-L_{T_{i}}\) is the boundary of \(\pi^{-1}(\overline{\cup_{s\in[T_{i}-\varepsilon,T_{i})}\gamma_{s}})\) and Proposition 5.2, we find
\[\big{|}\int_{L_{T_{i}-\varepsilon}}\alpha-\int_{L_{T_{i}}}\alpha\big{|}=\big{|} \int_{\partial\pi^{-1}(\overline{\cup_{s\in[T_{i}-\varepsilon,T_{i})}\gamma_ {s}})}\alpha\big{|}=\big{|}\int_{\pi^{-1}(\overline{\cup_{s\in[T_{i}- \varepsilon,T_{i})}\gamma_{s}})}d\alpha\big{|}\lesssim\varepsilon\|d\alpha\|_{ L^{\infty}},\]
thus proving that (5.1) is continuous.
Finally, the statement that \(\lim_{t\to+\infty}L_{t}=L_{1}^{\infty}+\ldots+L_{k}^{\infty}\) as currents follows from the uniform convergence of \(L_{t}\) to \(L_{1}^{\infty}\cup\ldots\cup L_{k}^{\infty}\) from Theorem 5.9.
## 6. Monotonicity of the Lagrangian angles
Conjecture 3.34(e) in [11] states that the decomposition into special Lagrangians \(L_{1}^{\infty},\ldots,L_{k}^{\infty}\), as in Theorem 1.4, should be chosen to have their phases satisfying \(\theta_{1}\geq\ldots\geq\theta_{k}\). In Theorem 1.4(d), when the grading on the initial Lagrangian is a perfect Morse function, we see that this ordering by phase coincides with the ordering as an \(A_{k}\)-chain. In this section we shall consider one example that shows how the decomposition of an initial Lagrangian using our flow through singularities works using the monotonicity of the Lagrangian angles, but which does not coincide with the ordering from the \(A_{k}\)-chain.
We consider \(X\) to be an ALE or ALF gravitational instanton obtained via the Gibbons-Hawking ansatz with \(\phi\) having at least \(4\) planar singularities \(\{p_{1},p_{2},q_{1},q_{2}\}\) arranged as in Figure 6.1 and let \(\gamma_{0}\) be the planar almost calibrated arc indicated there.
By Theorem 5.9 there is a Lagrangian mean curvature flow through singularities \(L_{t}=\pi^{-1}(\gamma_{t})\) starting at \(L_{0}=\pi^{-1}(\gamma_{0})\) which as \(t\to+\infty\) converges to an \(A_{k}\) chain of special Lagrangian spheres.
We claim that this \(A_{k}\) chain is \(\cup_{j=1}^{3}L_{j}^{\infty}\) where \(L_{j}^{\infty}=\pi^{-1}(\ell_{j})\). To prove this, we define two triads: \((\gamma_{-}^{1},\gamma_{+}^{1},\hat{\gamma}^{1})\) with vertices \((p_{1},q_{2},q_{1})\) and \((\gamma_{-}^{2},\gamma_{+}^{2},\hat{\gamma}^{2})\) with vertices \((p_{2},q_{1},q_{2})\). These triads are chosen so that \(\hat{\gamma}^{j}\) is almost calibrated for \(j=1,2\) and as shown in Figure 6.2, which also shows the pacman disks associated with the triads.
The flows \(\hat{\gamma}_{t}^{j}\) starting at \(\hat{\gamma}^{j}\) have finite time singularities at \(q_{j}\) by Proposition 4.11. By construction, these flows act as barriers for \(\gamma_{t}\), and since the only places where \(\gamma_{t}\) can have finite time singularities are \(q_{1},q_{2}\), the flow \(\gamma_{t}\) must have singularities at both points in finite time.
Hence, there are two singular times \(T_{1}\leq T_{2}\) after which the family \(\gamma_{t}\) consists of a union of three smooth flows of embedded arcs which exist for all time and converge to the straight lines connecting their endpoints. We therefore have that \(L_{t}\) converges to \(\cup_{j=1}^{3}L_{j}^{\infty}\) where \(L_{j}^{\infty}=\pi^{-1}(\ell_{j})\) as claimed, and moreover
\[\lim_{t\to+\infty}L_{t}=L_{1}^{\infty}+L_{2}^{\infty}+L_{3}^{\infty}.\]
Now, one needs to take care when exhibiting the limiting destabilizing configuration for \(L_{0}\) since, as in [14, Conjecture 3.34(e)], one requires a monotonicity condition on the Lagrangian angles of \(L_{j}^{\infty}\). Since the Lagrangian angles \(\theta_{j}\) of \(L_{j}^{\infty}\) are equal (up to the addition of a fixed constant, independent of \(j\)) to the angle \(\ell_{j}\) makes with the line from \(p_{1}\) to \(p_{2}\), we see that \(\theta_{1}>\theta_{3}>0>\theta_{2}\). Hence, there is no natural ordering of the \(L_{j}^{\infty}\) as a chain so that the desired monotonicity of the Lagrangian angles is achieved.
Figure 6.2. Pacman disks for the flow through singularities starting at \(\gamma_{0}\).
Instead, notice that we can decompose the Hamiltonian isotopy class of \(L_{0}\) into a graded Lagrangian connect sum of the special Lagrangian spheres as \(L_{1}^{\infty}\#(L_{3}^{\infty}\#L_{2}^{\infty})\). This shows that \(L_{1}^{\infty}\) destabilizes the class of \(L_{0}\) with "quotient" \(L_{3}^{\infty}\#L_{2}^{\infty}\), which is itself destabilized by \(L_{3}^{\infty}\) with "quotient" the stable special Lagrangian \(L_{2}^{\infty}\). This sequence of destabilizations occurs in the correct order of monotonicity as \(\theta_{1}>\theta_{3}>\theta_{2}\). The reader may wonder what would happen if we had considered a similar configuration to Figure 6.1 but with \(\theta_{3}>\theta_{1}>\theta_{2}\) instead of \(\theta_{1}>\theta_{3}>\theta_{2}\). This is explained by the fact that we have Hamiltonian isotopies
\[L_{1}^{\infty}\#(L_{3}^{\infty}\#L_{2}^{\infty})\sim L_{0}\sim L_{3}^{\infty} \#(L_{1}^{\infty}\#L_{2}^{\infty}),\]
and so we also have destabilizations ordered \(L_{3}^{\infty},L_{1}^{\infty},L_{2}^{\infty}\) which is compatible with \(\theta_{3}>\theta_{1}>\theta_{2}\). Therefore, though we have two different destabilizing configurations for \(L_{0}\) using the special Lagrangians \(L_{j}^{\infty}\), only one is compatible with the required monotonicity of the Lagrangian angles.
In conclusion, the Lagrangian mean curvature flow starting at \(L_{0}=\pi^{-1}(\gamma_{0})\) as in Figure 6.1 shows how the limit of the conjectured Lagrangian mean curvature flow through singularities envisaged in [10] can be presented uniquely so that all of the conditions required there are satisfied, particularly involving the monotonicity of the Lagrangian angles.
|
2304.12111 | Non planar free boundary minimal disks into ellipsoids | We prove the existence of embedded non planar free boundary minimal disks
into rotationally symmetric ellipsoids of $\mathbb{R}^3$. The construction
relies on the optimization of combinations of first and second Steklov
eigenvalues renormalized by the length of the boundary, among metrics on the
disk. We also prove that a non planar free boundary harmonic map from a disk
into an ellipsoid of $\mathbb{R}^3$ such that the coordinate functions are first
or second Steklov eigenfunctions with respect to the associated critical metric
is a minimal immersion (without branched points) and that if the critical
metric is even with respect to the coordinates of the disk, then the minimal
immersion is an embedding. | Romain Petrides | 2023-04-24T14:14:13Z | http://arxiv.org/abs/2304.12111v2 | # Non planar free boundary minimal disks into ellipsoids
###### Abstract.
We prove the existence of embedded non planar free boundary minimal disks into rotationally symmetric ellipsoids of \(\mathbb{R}^{3}\). The construction relies on the optimization of combinations of first and second Steklov eigenvalues renormalized by the length of the boundary, among metrics on the disk. We also prove that a non planar free boundary harmonic map from a disk into an ellipsoid of \(\mathbb{R}^{3}\) such that the coordinate functions are first or second Steklov eigenfunctions with respect to the associated critical metric is a minimal immersion (without branch points), and that if the critical metric is even with respect to the coordinates of the disk, then the minimal immersion is an embedding.
We consider the following \(2\)-ellipsoid parametrized by \(\sigma:=(\sigma_{1},\sigma_{2},\sigma_{3})\):
\[\mathcal{E}_{\sigma}:=\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3};\sigma_{1}x_{1}^{ 2}+\sigma_{2}x_{2}^{2}+\sigma_{3}x_{3}^{2}=1\}\]
with semi-axes \((\sigma_{i})^{-\frac{1}{2}}\) for \(i=1,2,3\). The equatorial curves \(\{x_{i}=0\}\cap\mathcal{E}_{\sigma}\) for \(i=1,2,3\) are simple closed geodesics in \(\mathcal{E}_{\sigma}\). By a result of Morse [17] (see also [10]), they are also the only ones for ellipsoids close to the sphere. In a celebrated result, proved by combining the min-max method of Lusternik and Schnirelmann [1] with the curve shortening flow methods of Grayson [1], any closed Riemannian two-sphere contains at least \(3\) simple closed geodesics. This was a refinement of the min-max method initiated by Birkhoff [1] to prove the existence of simple closed geodesics in a Riemannian topological sphere. Generic ellipsoids \(\mathcal{E}_{\sigma}\) thus realize the minimal number of simple closed geodesics, namely \(3\). Later, Viesel [14] proved the existence of ellipsoids that contain an arbitrary number of simple closed geodesics (see also [10]).
In the same spirit, we ask for embedded free boundary minimal disks into \(\mathcal{E}_{\sigma}\). By definition, free boundary minimal disks into a surface \(\Sigma\subset\mathbb{R}^{3}\) are topological disks that are critical points of the area functional among all disks \(D\) such that \(\partial D\subset\Sigma\). They are exactly the minimal disks \(D\) of \(\mathbb{R}^{3}\) such that \(\partial D\subset\Sigma\) and \(D\) meets \(\Sigma\) orthogonally along \(\partial D\). The planar disks \(\{x_{i}=0\}\cap co\left(\mathcal{E}_{\sigma}\right)\) are the first trivial examples of free boundary minimal embedded disks with \(\Sigma=\mathcal{E}_{\sigma}\). In the current paper we are interested in the following question:
**Question 1** (Dierkes, Hildebrandt, Kuster, Wohlrab, 1993, [13] p335).: _Are there non-planar embedded free boundary minimal disks into ellipsoids of \(\mathbb{R}^{3}\)?_
An analogous question was raised by Yau in 1987 [23] in the case of \(2\)-spheres into \(3\)-ellipsoids: are the only minimal \(2\)-spheres in an ellipsoid centered about the origin in \(\mathbb{R}^{4}\) the planar ones? Haslhofer and Ketover recently proved that the answer is no [10] for sufficiently elongated ellipsoids, thanks to subtle min-max and mean curvature flow methods. Since then, other constructions of non planar minimal spheres have been given by different methods in [1] and [11]. In [1], Bettiol and Piccione build an arbitrary number of non planar minimal spheres for sufficiently elongated rotationally symmetric ellipsoids by bifurcation methods. In [20], we use techniques analogous to those of the current paper, based on the optimization of combinations of eigenvalues of the Laplacian.
Notice that free boundary minimal 2-disks with boundary on a 2-d surface can be seen as a 1-d problem into this surface in a non local setting: they correspond to half-harmonic maps into the surface. More precisely, it is classical that if \(f:\mathbb{R}_{+}^{2}\to\mathbb{R}\) is a harmonic function, then \(-\partial_{y}f=\Delta^{\frac{1}{2}}f\) on \(\mathbb{R}\times\{0\}\). Following the definition by Da Lio and Riviere [1], if \(\Sigma\subset\mathbb{R}^{3}\) is a surface, we say that \(f:\mathbb{R}\to\Sigma\) is a half harmonic map into \(\Sigma\) if it is a critical point of the energy \(\int_{\mathbb{R}}\left|\Delta^{\frac{1}{4}}f\right|^{2}\) under the constraint \(f\in\Sigma\) a.e. The Euler-Lagrange equation reads as
\[\Delta^{\frac{1}{2}}f\perp T_{f}\Sigma\text{ on }\mathbb{R}\times\{0\}.\]
We set \(\Phi=\hat{f}\circ h:\mathbb{D}\to\mathbb{R}^{3}\), where \(h:\mathbb{D}\to\mathbb{R}^{2}_{+}\) is the biholomorphism such that \(h(1)=0\), \(h(0)=1\) and \(h(-1)=\infty\) and \(\hat{f}:\mathbb{R}^{2}_{+}\to\mathbb{R}^{3}\) is the harmonic extension of \(f\). Then \(f\) is a half-harmonic map into \(\Sigma\) if and only if \(\Phi\) is a free boundary harmonic map into \(\Sigma\) (defined as critical points of \(\int_{\mathbb{D}}|\nabla\Psi|^{2}\) under the constraint \(\Psi(\mathbb{S}^{1})\subset\Sigma\) a.e.). In this case, the equation reads as:
\[\begin{cases}\Delta\Phi=0\text{ in }\mathbb{D}\\ \partial_{r}\Phi\parallel\Delta^{\frac{1}{2}}f\perp T_{\Phi}\Sigma\text{ on } \mathbb{S}^{1}.\end{cases}\]
The restriction at the boundary then satisfies a non local version of the equation of closed geodesics on \(\Sigma\). By a classical argument on the Hopf differential, if \(\Phi\) is a free boundary harmonic map, then it is a branched conformal immersion and then a free boundary minimal branched immersed disk into \(\Sigma\). If \(\Sigma=\mathcal{E}_{\sigma}\), there is a natural weaker question than the previous one:
**Question 2**.: _Is there a non planar simple closed half harmonic map \(\Phi:\mathbb{S}^{1}\to\mathcal{E}_{\sigma}\)?_
The answer to Questions 1 and 2 is no if the ellipsoid is a round sphere (\(\sigma_{1}=\sigma_{2}=\sigma_{3}\)), by Nitsche [21]: the only free boundary minimal disks into spheres are the planar ones. This result is the same as the classical one for 2-spheres into 3-spheres by Almgren [1]. In this context, free-boundary minimal disks are more rigid, because even if we increase the dimension of the target sphere, the only free boundary branched immersed minimal disks are the planar ones [14], while we have existence of many non planar minimal spheres into higher dimensional spheres (leading to a theory initiated by Calabi [1] and Barbosa [1]).
The construction of free boundary minimal disks has been investigated by Struwe [22], Fraser [15] (with methods inspired by Sacks and Uhlenbeck [16]), Laurain, Petrides [17] and Lin, Sun, Zhou [18] (with min-max methods inspired by the replacement or shortening process of Birkhoff and of Colding and Minicozzi [13]), Gruter-Jost [11] and Li-Zhou [18] (with min-max methods coming from the geometric measure theory initiated by Almgren [1]). In all these constructions, the variational methods are not as refined as we need to obtain new _embedded_ free boundary minimal _disks_ into _2-ellipsoids_. As in the closed case, the free boundary version [23] by Wang of the proof of the celebrated Yau conjecture by Song [24] is not helpful for building new disks, since these results do not give information on the topological type of the minimal surfaces they produce.
The above-mentioned methods in the closed case ([10], [1]) might still be adapted to address the free boundary case, but this involves new difficulties. For instance, in [1] the authors make spectacular use of a rotational invariance assumption to prove the existence of an arbitrary number of embedded non-planar rotationally invariant minimal spheres into sufficiently elongated rotationally symmetric 3-d ellipsoids with bifurcation methods. However, we cannot build any embedded non-planar rotationally invariant free boundary minimal disk into a rotationally symmetric 2-ellipsoid.
In the current paper, we shall use an indirect way: the shape optimization of combinations of Steklov eigenvalues. Indeed, in [12] (see also [12] and [13]), the author noticed that any possibly branched conformally immersed minimal map \(\Phi\) from a 2-surface with boundary \(\Sigma\) into an \(n\)-ellipsoid \(\mathcal{E}\) can be seen as a shape associated to a critical metric of some functional depending on combinations of Steklov eigenvalues with respect to the metric. In addition, the \(k\)-th parameter of the ellipsoid is associated to a Steklov eigenvalue involved in the functional at the critical metric, and the \(k\)-th coordinate of the immersion is an eigenfunction associated to this eigenvalue. This is a generalization of a result by Fraser and Schoen when only one eigenvalue appears in the functional: the target manifold is then a ball. Their discovery led to a method for the construction of free boundary minimal surfaces of genus 0 into 2-spheres [14], extended in [12] to higher genera and higher eigenvalues. Applying the variational methods developed for combinations of Steklov eigenvalues in [12] and [12], we are in a position to prove the following result:
**Theorem 0.1**.: _For any \(s\neq 0\), there is a one parameter family \((p_{s,t})_{t>t_{*}(s)}\) for some \(t_{*}(s)>0\) such that there is an embedded free boundary non planar minimal topological disk \(D_{s,t}\) into the rotationally symmetric ellipsoid_
\[\mathcal{E}_{\sigma_{s,t}}:=\{x\in\mathbb{R}^{3};p_{s,t}x_{0}^{2}+x_{1}^{2}+x_ {2}^{2}=1\}\]
_and such that the coordinates \(x_{0},x_{1},x_{2}\) are first and second Steklov eigenfunctions for an induced metric \(g_{s,t}=e^{2v_{s,t}}\xi\) on \(D_{s,t}\) such that_
\[e^{v_{s,t}}=\frac{1}{\left(p_{s,t}^{2}x_{0}^{2}+x_{1}^{2}+x_{2}^{2}\right)^{ \frac{1}{2}}}\text{ on }\partial D_{s,t}\text{ and }L_{s,t}=\int_{\partial D_{s,t}}dL_{g_{s,t}}\]
_where \(\xi\) is the standard Euclidean metric. For any \(s\) and \(t_{1}<t_{2}\), \(D_{s,t_{1}}\) is not isometric to \(D_{s,t_{2}}\). Moreover \(t\mapsto L_{s,t}p_{s,t}\) is decreasing and \(t\mapsto L_{s,t}\) is increasing, and we have that \(p_{s,t}\to 0\) and \(L_{s,t}\to 4\pi\) as \(t\to+\infty\) and that \(D_{s,t}\) converges as varifolds to the disk \(\{0\}\times\mathbb{D}^{2}\) with multiplicity \(2\) as \(t\to+\infty\)._
This theorem gives a positive answer to Question 1 and Question 2. We obtain \(D_{s,t}=\Phi\left(\mathbb{D}\right)\) as the image of the minimal free boundary immersion \(\Phi:\mathbb{D}\to\mathcal{E}_{\sigma_{s,t}}\) associated to a critical metric \(g_{s,t}=e^{2v_{s,t}}(dx^{2}+dy^{2})\) for a minimization problem with respect to the functionals
\[F_{s,t}:g\mapsto\left(\bar{\sigma}_{1}(g)^{-s}+t\bar{\sigma}_{2}(g)^{-s} \right)^{\frac{1}{s}}\]
for \(t>0\) and \(s\neq 0\), where for \(k\geq 0\), \(\bar{\sigma}_{k}(g)=\sigma_{k}(g)L_{g}(\mathbb{S}^{1})\) denotes the renormalized \(k\)-th Steklov eigenvalue \(\sigma_{k}(g)\) by the length \(L_{g}(\mathbb{S}^{1})\) of the boundary of \(\mathbb{D}\) with respect to \(g\). By convention, notice that \(\sigma_{0}=0\) is the simple eigenvalue associated to constant functions because \(\mathbb{D}\) is connected and \(\sigma_{1}(g)>0\) is the first non-zero eigenvalue and that we can have
\(\sigma_{2}(g)=\sigma_{1}(g)\) if \(\sigma_{1}(g)\) is a multiple eigenvalue. Notice that one could also consider many other functionals depending on \(\bar{\sigma}_{1}\) and \(\bar{\sigma}_{2}\) besides \(F_{s,t}=:h_{s,t}(\bar{\sigma}_{1},\bar{\sigma}_{2})\).
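For orientation, here is a small numerical sketch of \(h_{s,t}\) (the helper `h` and the sample parameters below are ours, purely illustrative and not part of the paper), evaluated at the two boundary-value configurations that play a role below: the flat disk, for which \((\bar{\sigma}_{1},\bar{\sigma}_{2})=(2\pi,2\pi)\), and the degenerate limit \((\bar{\sigma}_{1},\bar{\sigma}_{2})\to(0,4\pi)\).

```python
import math

def h(s, t, sigma1, sigma2):
    # h_{s,t}(sigma1, sigma2) = (sigma1^{-s} + t * sigma2^{-s})^{1/s},
    # with the convention 0^{-s} = 0 if s < 0 and = +infinity if s > 0.
    def neg_s_power(sig):
        if sig == 0.0:
            return 0.0 if s < 0 else math.inf
        return sig ** (-s)
    return (neg_s_power(sigma1) + t * neg_s_power(sigma2)) ** (1.0 / s)

s, t = -1.0, 2.0  # sample parameters, chosen only for illustration
flat_disk = h(s, t, 2 * math.pi, 2 * math.pi)   # value at the flat disk
obstruction = h(s, t, 0.0, 4 * math.pi)         # the value h_{s,t}(0, 4*pi)
print(flat_disk, obstruction)
```

For these sample parameters (\(s=-1\), \(t=2\)) the flat-disk value exceeds \(h_{s,t}(0,4\pi)\), which is consistent with the discussion below, where the flat disk is excluded as a minimizer once \(t\geq\frac{1}{2^{-s}-1}\).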
With the notations of Theorem 0.1, we have \(\bar{\sigma}_{1}\left(D_{s,t},g_{s,t}\right)=p_{s,t}L_{s,t}\) and \(\bar{\sigma}_{2}\left(D_{s,t},g_{s,t}\right)=L_{s,t}\). Since \(\bar{\sigma}_{2}(g_{s,t})\to 4\pi\), (0.3) shows that \(g_{s,t}\) is a particular maximizing sequence for \(\bar{\sigma}_{2}\) as \(t\to+\infty\). Notice also that
\[L_{s,t}= \int_{\partial D_{s,t}}dL_{g_{s,t}}=\int_{\partial D_{s,t}}(p_{s, t}x_{0}^{2}+x_{1}^{2}+x_{2}^{2})dL_{g_{s,t}}\] \[= \int_{D_{s,t}}\left(|\nabla x_{0}|^{2}_{g_{s,t}}+|\nabla x_{1}|^{ 2}_{g_{s,t}}+|\nabla x_{2}|^{2}_{g_{s,t}}\right)dA_{g_{s,t}}=2A(D_{s,t})\]
where \(A(D_{s,t})\) is the area of the minimal surface. As explained below, we emphasize that \(\Phi:\mathbb{D}\to\mathcal{E}_{\sigma_{s,t}}\) is an embedding and its geometric properties (symmetries, intersection with equatorial planes) are described in Claim 1.3.
Let's explain the steps of proof for Theorem 0.1. It is not _a priori_ clear that the following properties hold true:
1. \(F_{s,t}\) has a minimizer, and then there is a minimal (possibly branched) free boundary immersion \(\Phi\) from \(\mathbb{D}\) into a \(n\)-ellipsoid associated to this minimal metric.
2. Up to rearrangement of coordinates, \(\Phi\) has at most \(3\) coordinates and the target of \(\Phi\) is the convex hull of a \(2\)-ellipsoid \(\mathcal{E}_{\sigma_{s,t}}\) for \(\sigma_{s,t}=(\sigma_{1}(g_{s,t}),\sigma_{2}(g_{s,t}),\sigma_{2}(g_{s,t}))\).
3. \(D_{s,t}=\Phi\left(\mathbb{D}\right)\) is non-planar.
4. \(\Phi\) is an embedding.
Let's explain every difficulty separately:
**(1)**: For the choice \(t=0\), we know by the Weinstock inequality [10]
\[\bar{\sigma}_{1}\leq 2\pi \tag{0.1}\]
and that \(2\pi\) is only realized for the flat disk. Weinstock arguments also give that for \(t=1\) and \(s=1\),
\[\frac{1}{\bar{\sigma}_{1}}+\frac{1}{\bar{\sigma}_{2}}\geq\frac{1}{\pi} \tag{0.2}\]
and the unique minimizer is the flat disk. For \(t=\infty\), the only eigenvalue which appears is \(\bar{\sigma}_{2}\); then, by Hersch, Payne and Schiffer [11]
\[\bar{\sigma}_{2}<4\pi \tag{0.3}\]
and it is not realized (see [1]). \(4\pi\) corresponds to the disjoint union of two flat disks of same boundary length. In particular, there are maximizing sequences approaching \(4\pi\) which blow up and "bubble converge" to a disjoint union of two disks (see for instance Theorem 0.2 below). In [12], we give an optimal assumption on the functional to ensure that bubbling cannot occur and then to ensure existence of a minimizer. In our context, setting \(h_{s,t}(\sigma_{1},\sigma_{2})=\left(\sigma_{1}^{-s}+t\sigma_{2}^{-s}\right)^ {\frac{1}{s}}\), the condition reads as
\[\inf_{g}F_{s,t}<h_{s,t}(0,4\pi). \tag{0.4}\]
This condition is automatic if \(s>0\) (in that case \(h_{s,t}(0,4\pi)=+\infty\)), but needs to be discussed if \(s<0\). In Section 3, we prove the following
**Theorem 0.2**.: _There is a one parameter family of metrics \(h_{\varepsilon}=e^{2v_{\varepsilon}}(dx^{2}+dy^{2})\) such that_
\[\bar{\sigma}_{1}(h_{\varepsilon})=\frac{2\pi}{\ln\left(\frac{1}{\varepsilon} \right)}+O\left(\frac{1}{\ln\left(\frac{1}{\varepsilon}\right)^{2}}\right)\text { and }\bar{\sigma}_{2}(h_{\varepsilon})=4\pi-16\pi\varepsilon+o(\varepsilon)\]
_as \(\varepsilon\to 0\) and \(e^{2v_{\varepsilon}}\) satisfies the following symmetry properties_
\[\forall(x,y)\in\mathbb{D},e^{2v_{\varepsilon}}(x,y)=e^{2v_{\varepsilon}}(-x,y )=e^{2v_{\varepsilon}}(x,-y).\]
This theorem ensures that a minimum also exists for \(F_{s,t}\) with \(s<0\) and \(t>0\).
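In fact, this can be read off directly from the expansions in Theorem 0.2 (we record the short computation, which only uses the statement of the theorem and is our own check): for \(s<0\) and any \(t>0\),
\[F_{s,t}(h_{\varepsilon})=\Big(\bar{\sigma}_{1}(h_{\varepsilon})^{-s}+t\,\bar{\sigma}_{2}(h_{\varepsilon})^{-s}\Big)^{\frac{1}{s}}=\Big(t(4\pi)^{-s}+(2\pi)^{-s}\left(\ln\tfrac{1}{\varepsilon}\right)^{s}(1+o(1))+O(\varepsilon)\Big)^{\frac{1}{s}},\]
and since the logarithmic term is positive and dominates the \(O(\varepsilon)\) term while \(\frac{1}{s}<0\), we get \(F_{s,t}(h_{\varepsilon})<\big(t(4\pi)^{-s}\big)^{\frac{1}{s}}=h_{s,t}(0,4\pi)\) for \(\varepsilon\) small, so that condition (0.4) holds.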
**(2)**: The coordinate functions of the minimal free boundary immersion \(\Phi:\mathbb{D}\rightarrow\mathbb{R}^{n+1}\) into an \(n\)-ellipsoid are associated to first and second Steklov eigenvalues with respect to the minimal metric for \(F_{s,t}\). Up to rearrangement of coordinates, we can assume that they are independent eigenfunctions. Knowing, by Jammes [1], that the multiplicities of \(\sigma_{1}\) and \(\sigma_{2}\) on the disk are at most \(2\), we obtain that \(n+1=2\) or \(n+1=3\). In all the cases, the target surface is the convex hull of \(\mathcal{E}_{\sigma_{s,t}}\) with \(\sigma_{s,t}=(\bar{\sigma}_{1}(g_{s,t}),\bar{\sigma}_{2}(g_{s,t}),\bar{\sigma} _{2}(g_{s,t}))\), where if \(n+1=2\), we set by convention \(\phi_{3}=0\) and the minimal immersion is planar.
**(3)**: As noticed in (0.1) and (0.2), for \(t=0\) or for \(s=1\) and \(t\leq 1\) the only minimizers are flat disks, so that we do not build non planar minimal disks into ellipsoids in these cases. We therefore have to choose the parameters \(s\) and \(t\) carefully to ensure that minimizers cannot be planar ellipses. In the current paper (see Proposition 1.2), we prove that any planar possibly branched immersed free boundary minimal disk into the convex hull of an ellipse by first and second eigenfunctions has to be a diffeomorphism onto this convex hull. Then if \(\Phi\left(\mathbb{D}\right)\) is planar, there is only one candidate and the associated critical metric \(g_{c}\) satisfies
\[\bar{\sigma}_{1}(g_{c})=2\pi\sqrt{\frac{p}{q}}\text{ and }\bar{\sigma}_{2}(g_{ c})=2\pi\sqrt{\frac{q}{p}}\]
where up to a dilatation, \(\left(\mathbb{D},g_{c}\right)\) is isometric to \(E_{p,q}=\left\{(x,y)\in\mathbb{R}^{2};px^{2}+qy^{2}\leq 1\right\}\) endowed with a metric \(e^{2v}(dx^{2}+dy^{2})\) such that the conformal factor satisfies
\[\forall(x,y)\in\partial E_{p,q},e^{v}(x,y)=\left(p^{2}x^{2}+q^{2}y^{2}\right)^ {-\frac{1}{2}}.\]
Notice that if \(\sqrt{\frac{q}{p}}\geq 2\), then by (0.3), \(\bar{\sigma}_{2}\) cannot correspond to a second eigenvalue and we obtain a contradiction. We denote by \(\theta_{\star}\) the minimal value such that for any \(\theta:=\frac{q}{p}>\theta_{\star}\), \(q\) is not a second Steklov eigenvalue of \(E_{p,q}\): we have \(\theta_{\star}\leq 4\).
If in addition, \(\Phi\left(\mathbb{D}\right)\) is not a flat disk (that is \(p<q\)), we have an extra mass condition given in [11] (see also [10] and Proposition 2.1) coming from the choice of the combination \(h_{s,t}\) in the variational problem \(F_{s,t}\), written as
\[\frac{p}{q}=\frac{\partial_{2}h_{s,t}(p,q)}{\partial_{1}h_{s,t}(p,q)}\]
so that \(p^{-s}=tq^{-s}\) and
\[\bar{\sigma}_{1}(g_{c})=2\pi t^{-\frac{1}{2s}}\text{ and }\bar{\sigma}_{2}(g_{c})= 2\pi t^{\frac{1}{2s}}\]
It implies that if \(s>0\) then \(1<t\leq\theta_{\star}^{s}<4^{s}\) and that if \(s<0\) then \(4^{s}<\theta_{\star}^{s}\leq t<1\).
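For the reader's convenience, here is the computation behind the previous two displays (a direct check from the definition of \(h_{s,t}\)): since \(h_{s,t}(p,q)=\left(p^{-s}+tq^{-s}\right)^{\frac{1}{s}}\), we have
\[\frac{\partial_{2}h_{s,t}(p,q)}{\partial_{1}h_{s,t}(p,q)}=\frac{tq^{-s-1}}{p^{-s-1}}=t\left(\frac{p}{q}\right)^{s+1},\]
so the mass condition \(\frac{p}{q}=\frac{\partial_{2}h_{s,t}(p,q)}{\partial_{1}h_{s,t}(p,q)}\) is equivalent to \(t\left(\frac{p}{q}\right)^{s}=1\), that is \(p^{-s}=tq^{-s}\) and \(\frac{q}{p}=t^{\frac{1}{s}}\). Plugging this into \(\bar{\sigma}_{1}(g_{c})=2\pi\sqrt{p/q}\) and \(\bar{\sigma}_{2}(g_{c})=2\pi\sqrt{q/p}\) gives the values \(2\pi t^{-\frac{1}{2s}}\) and \(2\pi t^{\frac{1}{2s}}\) displayed above.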
Moreover, we can compare the value of the functional on the flat disk and on \(E_{p,q}\)
\[h_{s,t}(2\pi t^{-\frac{1}{2s}},2\pi t^{\frac{1}{2s}})=\frac{\left(2\sqrt{t} \right)^{\frac{1}{s}}}{2\pi}\text{ and }h_{s,t}(2\pi,2\pi)=\frac{(1+t)^{\frac{1}{s}}}{2\pi}\]
Since \(2\sqrt{t}<1+t\) for \(t\neq 1\), we deduce that if \(s>0\) and \(t>1\), then the flat disk \(E_{1,1}\) is never a minimizer, and if \(s<0\), then \(E_{p,q}\) for \(p<q\) is never a minimizer.
Finally, if \(s<0\) and if \(\Phi\left(\mathbb{D}\right)\) is a flat disk, we obtain that
\[\frac{(1+t)^{\frac{1}{s}}}{2\pi}=h_{s,t}(2\pi,2\pi)=\inf_{g}F_{s,t}<h_{s,t}(0, 4\pi)=\frac{t^{\frac{1}{s}}}{4\pi}\]
so that \(t<\frac{2^{s}}{1-2^{s}}\).
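Explicitly (elementary algebra, recorded here for convenience; recall that \(s<0\), so raising both sides of an inequality between positive numbers to the power \(s\) reverses it):
\[\frac{(1+t)^{\frac{1}{s}}}{2\pi}<\frac{t^{\frac{1}{s}}}{4\pi}\iff 2(1+t)^{\frac{1}{s}}<t^{\frac{1}{s}}\iff 2^{s}(1+t)>t\iff t\left(1-2^{s}\right)<2^{s}\iff t<\frac{2^{s}}{1-2^{s}}=\frac{1}{2^{-s}-1}.\]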
As a conclusion, we obtain that if
\[s>0\text{ and }t>\theta_{\star}^{s}\text{ or }s<0\text{ and }t\geq\frac{1}{2^{-s}-1}\]
then \(\Phi\left(\mathbb{D}\right)\) has to be non planar.
**(4)**: While we easily prove that planar possibly branched immersed free boundary minimal disks into an ellipsoid by first and second eigenfunctions have to be embedded (see Proposition 1.2), the non-planar case is less obvious. In the current paper, we prove it under symmetry assumptions,
**Theorem 0.3**.: _Let \(\Phi:\mathbb{D}\to co\left(\mathcal{E}_{\sigma}\right)\) be a non-planar possibly branched free boundary minimal immersion such that \(\sigma=(\sigma_{1},\sigma_{2},\sigma_{2})\) only contains first and second Steklov eigenvalues with respect to the critical weight on \(\mathbb{S}^{1}\)_
\[e^{v}:=\frac{|\Phi_{\theta}|}{\left(\sigma_{1}^{2}\phi_{0}^{2}+\sigma_{2}^{2} \left(\phi_{1}^{2}+\phi_{2}^{2}\right)\right)^{\frac{1}{2}}}=\frac{|\Phi_{r}| }{\left(\sigma_{1}^{2}\phi_{0}^{2}+\sigma_{2}^{2}\left(\phi_{1}^{2}+\phi_{2} ^{2}\right)\right)^{\frac{1}{2}}}\]
_then \(\Phi\) does not have any branched point. Moreover, if_
\[\forall(x,y)\in\mathbb{S}^{1},e^{v}(x,y)=e^{v}(-x,y)=e^{v}(x,-y),\]
_then \(\Phi\) is an embedding._
Theorem 0.3 is a result similar to the embeddedness of (possibly branched) minimal free boundary immersions into balls _by first Steklov eigenfunctions_ for surfaces with boundary of _genus_ \(0\) [11], but the proof of Theorem 0.3 needs much more refinement (see Section 1.5). We do not know if it is possible to remove the symmetry assumptions on \(e^{v}\) and get the same conclusion, or at least the weaker conclusion that \(\Phi:\mathbb{S}^{1}\to\mathcal{E}_{\sigma}\) parametrizes a simple curve.
At that stage, we do not _a priori_ know if a minimizer of \(F_{s,t}\) has symmetries. In order to obtain Theorem 0.1, we then perform a variational method on combinations of eigenvalues under symmetry constraints on the metrics (see Section 2). Thanks to the symmetry properties of the test functions of Theorem 0.2, condition (0.4) is still realized for the infimum among symmetric metrics. Then again, no blow-up happens along symmetric minimizing sequences of \(F_{s,t}\) for \(t\geq t_{\star}(s)\). This leads to the existence of the expected embedded non planar free boundary minimal disk into some ellipsoid. By other simple arguments, we deduce the complete proof of Theorem 0.1 (see Section 2.4).
The current paper is fairly self-contained up to several small lemmas already written in [12], and all the steps of our construction can also be performed in the analogous closed case (see [12]). At the end of Section 1.2, we also suggest ways to make the construction more explicit, in the same spirit as the bifurcation methods in the closed case [10]. New branches of embedded free boundary minimal disks should appear as soon
as the parameter of the ellipsoid crosses values having multiple eigenvalues for a Steklov problem on ellipses endowed with a metric conformal to the flat one. Such a bifurcation method would even give an arbitrary number of embedded disks for sufficiently elongated ellipsoids.
## 1. Properties of the free boundary minimal disks into an ellipsoid
### Free boundary minimal surfaces into ellipsoids and Steklov eigenvalues
We start with the following remark: let \(\mathcal{E}\subset\mathbb{R}^{n}\) be an ellipsoid of parameters \(\sigma=diag\left(\sigma_{1},\cdots,\sigma_{n}\right)\), with \(\sigma_{i}>0\), defined by
\[\mathcal{E}=\{(x_{1},\cdots,x_{n})\in\mathbb{R}^{n};\sigma_{1}x_{1}^{2}+\cdots+ \sigma_{n}x_{n}^{2}=1\}\;,\]
endowed with the induced metric of the Euclidean metric \(\xi\). We know that the coordinate functions \(x_{i}\) are harmonic on the solid ellipsoid \(co(\mathcal{E})\) bounded by \(\mathcal{E}\). We compute the outward normal derivative of the position map \(x\) on \(\mathcal{E}\):
\[\partial_{\nu}x=\nu\;,\]
where the outward normal of the ellipsoid is denoted by \(\nu=\frac{\sigma x}{|\sigma x|}\) where \(|\sigma x|=\left(\sum_{i=1}^{n}\sigma_{i}^{2}x_{i}^{2}\right)^{\frac{1}{2}}\). Therefore, if we endow the boundary \(\mathcal{E}\) with the weight \(e^{v}=\frac{1}{|\sigma x|}\), that is the metric \(g=\frac{\xi}{|\sigma x|}\) along \(\mathcal{E}\), each coordinate \(x_{i}\) is a Steklov eigenfunction of \(co(\mathcal{E})\) associated to the eigenvalue \(\sigma_{i}\).
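Spelled out componentwise (a one-line check of the last assertion): since \(\nu=\frac{\sigma x}{|\sigma x|}\),
\[\partial_{\nu}x_{i}=\nu_{i}=\frac{\sigma_{i}x_{i}}{|\sigma x|}=\sigma_{i}\,e^{v}x_{i}\qquad\text{with }e^{v}=\frac{1}{|\sigma x|},\]
which is exactly the Steklov boundary condition with eigenvalue \(\sigma_{i}\) for the boundary weight \(e^{v}\).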
Now, let \(\Phi:(\Sigma,h)\to\mathbb{R}^{n}\) be such that \(\Phi(\partial\Sigma)\subset\mathcal{E}\), an \((n-1)\)-dimensional ellipsoid of parameter \(\sigma=(\sigma_{1},\cdots,\sigma_{n})\). A well-known characterisation of \(\Phi:(\Sigma,h)\to\mathbb{R}^{n}\) to be minimal with free boundary in \(\mathcal{E}\) is free boundary harmonicity in \(\mathcal{E}\) and conformality. We recall that \(\Phi\) is harmonic in \(\mathcal{E}\) with free boundary if it is a critical point of the energy
\[E(\Phi)=\frac{1}{2}\int_{\Sigma}\left|\nabla\Phi\right|_{h}^{2}dA_{h}\]
under the constraint \(\Phi(\partial\Sigma)\subset\mathcal{E}\). The Euler-Lagrange characterization is
\[\Delta_{h}\Phi=0\text{ in }\Sigma\text{ and }\partial_{\nu}\Phi\in\left(T_{ \Phi}\mathcal{E}\right)^{\perp}\text{ on }\partial\Sigma\]
Then \(\partial_{\nu}\Phi=f\nu\) for some function \(f=\Phi.\partial_{\nu}\Phi\). Conformality is characterized by the vanishing of
\[0=\left|\nabla\Phi\right|_{h}^{2}\frac{h}{2}-d\Phi\otimes d\Phi:=\sum_{i=1}^{ n}\left(\left|\nabla\Phi_{i}\right|_{h}^{2}\frac{h}{2}-d\Phi_{i}\otimes d \Phi_{i}\right)\;.\]
For a smooth positive function \(e^{2u}\), such that \(g=e^{2u}h\), we have
\[\Delta_{g}f=e^{-2u}\Delta_{h}f\text{ and }\partial_{\nu_{g}}f=e^{-u}\partial_{ \nu_{h}}f\;,\]
and if \(\Phi:(\Sigma,h)\to\mathbb{R}^{n}\) is a minimal isometric immersion with free boundary in \(\mathcal{E}\), setting \(g=e^{2u}h\) for any function \(u\) extending the following formula on the boundary,
\[e^{u}=\Phi.\partial_{\nu}\Phi=\frac{1}{|\sigma\Phi|}\text{ on }\partial\Sigma\;,\]
the coordinates of \(\Phi\) are Steklov eigenfunctions on \((\Sigma,g)\) with eigenvalues \(\sigma_{1},\cdots,\sigma_{n}\).
### The case of planar ellipses
Let's consider the simplest example: we assume that \(\Phi=id:\mathcal{E}_{p}\to\mathcal{E}_{p}\) into a 2-d ellipse \(\mathcal{E}_{p}\) of parameters \(p=(p_{1},p_{2})\). Setting the metric conformal to \(\xi\)
\[g_{p}=e^{2u}\xi\text{ where }e^{u}=\frac{1}{|\sigma x|}=\frac{1}{\sqrt{p_{1}^{ 2}x_{1}^{2}+p_{2}^{2}x_{2}^{2}}}\text{ on }\mathcal{E}_{p}\;,\]
on \((\mathcal{E}_{p},g_{p})\) we then have
\[\Delta_{g_{p}}x_{i}=0\text{ and }\partial_{\nu_{p}}x_{i}=p_{i}x_{i}\]
for all \(i=1,2\). This means that the coordinate functions are eigenfunctions on \((\mathcal{E}_{p},g_{p})\) with eigenvalues \(p_{1},p_{2}\). However, these are not necessarily eigenfunctions associated to the first and second eigenvalues of \((\mathcal{E}_{p},g_{p})\). Let's compute their renormalized eigenvalues. By invariance of the indices of eigenvalues under dilatation of the ellipse, we study \(\mathcal{E}_{p}=\{px^{2}+y^{2}\leq 1\}\), for some \(0\leq p\leq 1\), where the boundary is endowed with the conformal factor \(e^{u_{p}(x,y)}=\left(p^{2}x^{2}+y^{2}\right)^{-\frac{1}{2}}\). We know that there are \(k_{1}\leq k_{2}\) such that \(\sigma_{k_{1}}(\mathcal{E}_{p},e^{u_{p}})=p\) and \(\sigma_{k_{2}}(\mathcal{E}_{p},e^{u_{p}})=1\). Parametrizing the boundary ellipse by \((\frac{1}{\sqrt{p}}\cos\theta,\sin\theta)\), we have that
\[dL_{\mathcal{E}_{p}}=\sqrt{\frac{1}{p}\sin^{2}\theta+\cos^{2}\theta}d\theta= \frac{1}{\sqrt{p}}e^{-u}d\theta\]
We obtain that the total length of the boundary is
\[L_{\mathcal{E}_{p}}(e^{u_{p}}dL_{\mathcal{E}_{p}})=\int_{\mathcal{E}_{p}}e^{u _{p}}dL_{\mathcal{E}_{p}}=\int_{0}^{2\pi}\frac{d\theta}{\sqrt{p}}=\frac{2\pi} {\sqrt{p}}\]
and the renormalized eigenvalues are
\[\bar{\sigma}_{k_{1}}(\mathcal{E}_{p},e^{u_{p}})=\sigma_{k_{1}}(\mathcal{E}_{p},e^{u_{p}})\,L=pL=2\pi\sqrt{p}\text{ and }\bar{\sigma}_{k_{2}}(\mathcal{E}_{p},e^{u_{p}})=\sigma_{k_{2}}(\mathcal{E}_{p},e^{u_{p}})\,L=L=\frac{2\pi}{\sqrt{p}}\]
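As a quick numerical sanity check of the boundary-length computation above (the script and the sample values of \(p\) are ours and purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

def weighted_boundary_length(p):
    # Boundary of E_p = {p x^2 + y^2 <= 1}, parametrized by (cos(t)/sqrt(p), sin(t)).
    # Arc-length element: |gamma'(t)| = sqrt(sin(t)^2 / p + cos(t)^2).
    # Boundary weight:    e^{u_p} = (p^2 x^2 + y^2)^(-1/2) = (p cos(t)^2 + sin(t)^2)^(-1/2).
    integrand = lambda t: np.sqrt(np.sin(t) ** 2 / p + np.cos(t) ** 2) \
        / np.sqrt(p * np.cos(t) ** 2 + np.sin(t) ** 2)
    value, _ = quad(integrand, 0.0, 2.0 * np.pi)
    return value

for p in (1.0, 0.5, 0.1):
    print(p, weighted_boundary_length(p), 2.0 * np.pi / np.sqrt(p))
```

The integrand simplifies to the constant \(1/\sqrt{p}\), so the printed values agree with \(L=2\pi/\sqrt{p}\), and the renormalized eigenvalues \(pL=2\pi\sqrt{p}\) and \(L=2\pi/\sqrt{p}\) follow.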
Then, for degenerating ellipses, we have that
\[\bar{\sigma}_{k_{1}}(\mathcal{E}_{p},e^{u_{p}})\to 0\text{ and }\bar{\sigma}_{k_{2}}( \mathcal{E}_{p},e^{u_{p}})\to+\infty\text{ as }p\to 0\;.\]
We know that \(\bar{\sigma}_{k}(\mathcal{E}_{p},e^{u_{p}})\) has to be bounded by \(2\pi k\) for any \(k\). Since \(\sigma_{k_{2}}(\mathcal{E}_{p},g_{p})=1\), we have that \(k_{2}\to+\infty\) as \(p\to 0\). It would be interesting to know the value of
\[k_{1}(p)=\inf\{k\in\mathbb{N};\sigma_{k}(\mathcal{E}_{p},g_{p})=p\}\text{ and }k_{2}(p)=\inf\{k\in\mathbb{N};\sigma_{k}(\mathcal{E}_{p},g_{p})=1\}\]
for \(i=1,2\) on the Riemannian manifold \((\mathcal{E}_{p},g_{p})\) defined above. For \(p=1\), \(k_{i}(p)=1\) since the ellipse is a circle. The points \(p\) such that \(k_{i}\) jumps are points of bifurcation of eigenvalues. Computing precisely the set \(\{p\in\mathbb{R}^{2}\setminus\{0\};\forall i\in\{1,2\},k_{i}(p)\leq 2\}\) would give the minimal value \(\theta_{\star}\) such that if \(\frac{1}{p}>\theta_{\star}\), \(k_{2}(p)\geq 3\). More generally, we conjecture that \(k_{1}(p)=1\) for any \(0\leq p\leq 1\) and that a new bifurcation branch of non planar free boundary minimal surfaces is created if \(\frac{1}{p}\) crosses
\[\theta_{\star}^{k}:=\inf\left\{\frac{1}{p};0\leq p\leq 1\text{ and }k_{2}(p)=k\right\}\]
as \(p\) decreases.
### Minimal immersions into ellipsoids do not have branch points at the boundary
Let \(\Phi:\Sigma\to\mathbb{R}^{n}\) be some (possibly branched) conformal minimal immersion into an ellipsoid
\[\mathcal{E}=\{(x_{1},\cdots,x_{n})\in\mathbb{R}^{n};\sigma_{1}x_{1}^{2}+\cdots+ \sigma_{n}x_{n}^{2}=1\}\;,\]
parametrized by \(\sigma=diag(\sigma_{1},\cdots,\sigma_{n})\). We let \(e^{2u}\left(dx^{2}+dy^{2}\right)=\Phi^{*}eucl\) be the pull-back metric of the Euclidean one by \(\Phi\). Notice that branched points correspond to singularities \(u(x)=-\infty\). We know that the coordinate functions of \(\Phi\) are Steklov eigenfunctions with respect to the conformal factor \(e^{v}=\frac{e^{u}}{|\sigma\Phi|}\) at the boundary. We first prove the following
**Claim 1.1**.: \(\Phi\) _cannot have any branched points at the boundary. In other words \(e^{u}\) and \(e^{v}\) are positive everywhere._
Proof.: Let \(x\in\partial\Sigma\). We set \(\psi=\sigma_{1}\phi_{1}(x)\phi_{1}+\cdots+\sigma_{n}\phi_{n}(x)\phi_{n}\). We have that for any \(y\in\mathbb{D}\),
\[\psi(y)=\langle\sigma\phi(x),\phi(y)\rangle\leq\sqrt{\langle\sigma\phi(x), \phi(x)\rangle}\sqrt{\langle\sigma\phi(y),\phi(y)\rangle}=\sqrt{\langle\sigma \phi(y),\phi(y)\rangle}\]
The function \(f:y\mapsto\langle\sigma\phi(y),\phi(y)\rangle\) is subharmonic since
\[\Delta f=-\left\langle\sigma\nabla\phi,\nabla\phi\right\rangle\leq 0\;,\]
so that \(f\) realizes its maximum at the boundary, and \(f=1\) at the boundary gives that
\[\psi(y)\leq 1=\psi(x)\;.\]
\(\psi\) is a harmonic function that realizes its maximum at \(x\in\mathbb{S}^{1}\). Since \(\psi\) is harmonic, by the Hopf lemma, we must have \(\partial_{\nu}\psi(x)\neq 0\). And we have that
\[e^{u(x)}\left|\sigma\Phi(x)\right|=\left|\sigma\Phi(x)\right|^{2}e^{v(x)}=\left\langle\sigma\Phi(x), \partial_{\nu}\Phi(x)\right\rangle=\partial_{\nu}\psi(x)\neq 0\;,\]
and since \(\left|\sigma\Phi(x)\right|>0\), we conclude that \(e^{u(x)}>0\) and \(e^{v(x)}>0\).
### Free boundary minimal disks by first and second eigenfunctions do not have branched points
The main property of the subsection relies on the following claim:
**Claim 1.2**.: _Let \(x\in\mathbb{D}\) and let \(\psi\) be a first or second Steklov eigenfunction on the disk, associated to some positive weight \(e^{v}\) on the boundary. Then_
\[\psi(x)=0\Rightarrow\nabla\psi(x)\neq 0\;.\]
Proof.: By the Courant nodal theorem, \(\psi\) has at most three nodal domains. Moreover, the nodal set is either a smooth curve having two ends on the boundary or the disjoint union of two connected curves having two ends at the boundary. Indeed, since eigenfunctions are non constant and harmonic, the nodal set cannot contain a closed curve, and it cannot contain a singularity in the interior of the disk, since otherwise the eigenfunction would have at least four nodal domains. Now let \(x\in\mathbb{D}\) be such that \(\psi(x)=0\). Let \(D\) be a nodal domain such that \(x\in\partial D\). Then \(\partial D\) is smooth at \(x\) and \(x\) is an extremal point of the harmonic function \(\psi\) on \(D\). By the Hopf lemma \(\partial_{\nu}\psi(x)\neq 0\). This ends the proof of the claim.
It is clear that \(\Phi\) has at least two coordinates since first and second eigenfunctions cannot be constant. We know by [16] that the multiplicity of the first Steklov eigenvalue on the disk is at most \(2\) and that the multiplicity of the second Steklov eigenvalue on the disk is at most \(2\). Therefore, \(\Phi\) has at most \(3\) coordinates. We consider the cases \(n=2\) and \(n=3\) separately.
**Proposition 1.1**.: _We first assume \(\Phi=(\phi_{0},\phi_{1},\phi_{2}):\mathbb{D}\to\mathbb{R}^{3}\) is a possibly branched free boundary minimal immersion into \(\mathcal{E}=\{x\in\mathbb{R}^{3};\sigma_{1}x_{0}^{2}+\sigma_{2}(x_{1}^{2}+x_{2}^ {2})=1\}\), where \(\sigma_{1}<\sigma_{2}\) and \(\phi_{0}\) is a first eigenfunction and \(\phi_{1}\) and \(\phi_{2}\) are second eigenfunctions. Then \(\Phi\) does not have any branched point._
Proof.: Notice that by Claim 1.1, \(\Phi\) does not have any branched point at the boundary. It remains to prove that for \(z\in\mathbb{D}\), we have that \(\nabla\Phi(z)\neq 0\). In fact, we will even prove that \(\nabla\eta(z)\neq 0\), where \(\eta=(\phi_{1},\phi_{2})\). Let \(v\in\mathbb{S}^{1}\) be such that \(\left\langle v,\eta(z)\right\rangle=0\). Then \(\left\langle v,\eta\right\rangle\) vanishes at \(z\) and the previous claim implies that \(\left\langle v,\nabla\eta(z)\right\rangle=\nabla\left(\left\langle v,\eta\right\rangle\right)(z)\neq 0\).
In the following proposition, we even obtain that planar free boundary minimal disks into ellipsoids by first and second eigenfunctions are embeddings.
**Proposition 1.2**.: _We assume that \(\Phi=(\phi_{1},\phi_{2}):\mathbb{D}\to\mathbb{R}^{2}\) is a possibly branched free boundary minimal immersion into \(\mathcal{E}=\{x\in\mathbb{R}^{2};\sigma_{1}x_{1}^{2}+\sigma_{2}x_{2}^{2}=1\}\), where \(\sigma_{1}\leq\sigma_{2}\) and \(\phi_{1}\) is a first eigenfunction and \(\phi_{2}\) is either a first or second eigenfunction. Then \(\Phi\) is a diffeomorphism._
Proof.: We first prove that the curve \(\Phi_{|\mathbb{S}^{1}}:\mathbb{S}^{1}\to\mathcal{E}\) is an embedding.
Since \(\Phi\) is conformal, we must have that \(\left|\partial_{r}\Phi\right|^{2}=\left|\partial_{\theta}\Phi\right|^{2}\) on \(\mathbb{S}^{1}\). Moreover, by Claim 1.1, \(\left|\partial_{r}\Phi\right|^{2}=\left|\sigma\Phi\right|^{2}e^{2v}\) never vanishes. Therefore, \(\Phi_{|\mathbb{S}^{1}}:\mathbb{S}^{1}\to\mathcal{E}\) is an immersion. Then it is a covering map. Knowing that \(\phi_{1}\) is a first eigenfunction, \(\phi_{1}\) has at most two nodal domains, and the nodal line meets the boundary at two points. Therefore, the degree of \(\Phi_{|\mathbb{S}^{1}}\) must be equal to \(1\) and \(\Phi_{|\mathbb{S}^{1}}\) is an embedding.
As a conclusion, since the ellipse \(\mathcal{E}\) is a convex curve and \(\Phi\) is harmonic in \(\mathbb{D}\), a classical result by Kneser [10] (see also Choquet [12] or a more general result by Alessandrini-Nesi [1]) gives that \(\Phi\) is a diffeomorphism onto the convex domain enclosed by \(\mathcal{E}\).
In the following subsection, we also prove embeddedness of immersed non planar free boundary minimal disks into ellipsoids by first and second eigenfunctions under symmetry assumptions.
### Non planar free boundary minimal disks with symmetry properties are embedded
From now to the end of the subsection, we assume that \(\Phi=(\phi_{0},\phi_{1},\phi_{2}):\mathbb{D}\to\mathbb{R}^{3}\) is a free boundary minimal immersion into \(\mathcal{E}=\{x\in\mathbb{R}^{3};\sigma_{1}x_{0}^{2}+\sigma_{2}\left(x_{1}^{2}+ x_{2}^{2}\right)=1\}\), where \(\sigma_{1}<\sigma_{2}\) and \(\phi_{0}\) is a first eigenfunction and \(\phi_{1}\) and \(\phi_{2}\) are second eigenfunctions with respect to the positive function \(e^{v}=\Phi.\partial_{r}\Phi\) on \(\mathbb{S}^{1}\) (positive thanks to Claim 1.1). We also assume that \(e^{v}\) satisfies the following symmetry assumptions:
\[\forall(x,y)\in\mathbb{S}^{1},e^{v(x,y)}=e^{v(-x,y)}=e^{v(x,-y)}\]
**Claim 1.3**.: _Up to reparametrization and rotation of \(\Phi\), we must have that_
\[\forall x,y\in\mathbb{D},\phi_{0}(x,-y)=-\phi_{0}(x,y)\text{ and }\phi_{0}(-x,y)=\phi_{0}(x,y)\]
\[\forall x,y\in\mathbb{D},\phi_{1}(x,-y)=\phi_{1}(x,y)\text{ and }\phi_{1}(-x,y)=-\phi_{1}(x,y)\]
\[\forall x,y\in\mathbb{D},\phi_{2}(x,-y)=\phi_{2}(x,y)\text{ and }\phi_{2}(-x,y)=\phi_{2}(x,y)\]
_where \(\phi_{0}\) and \(\phi_{1}\) have exactly \(2\) nodal domains and \(\phi_{2}\) has exactly \(3\) nodal domains. Moreover, \(\phi_{2}\) does not vanish on \([-1,1]\times\{0\}\cup\{(0,\pm 1)\}\)._
Proof.: The proof of the claim is based on the 3 following simple facts
**Fact 1:** For any Steklov eigenfunction \(\phi:\mathbb{D}\to\mathbb{R}\), if for all \((x,y)\in\mathbb{D}\), \(\phi(x,y)=\phi(-x,y)=\phi(x,-y)\) then \(\phi\) has at least 3 nodal domains and if for all \((x,y)\in\mathbb{D}\), \(\phi(x,y)=-\phi(-x,y)=-\phi(x,-y)\) then \(\phi\) has at least 4 nodal domains.
**Fact 2:** If \(\phi\) is a Steklov eigenfunction on \(\mathbb{D}\) associated to the symmetric weight \(e^{v}:\mathbb{S}^{1}\to\mathbb{R}\), then \(\phi(-x,y)\) and \(\phi(x,-y)\) are also Steklov eigenfunctions.
**Fact 3:** A second eigenfunction \(\phi\) vanishes on at most 4 points at the boundary. If \(\phi\) vanishes at 2 points \(p_{1},p_{2}\in\partial D\), then it has exactly 2 nodal domains and the nodal set is a smooth curve ending at \(p_{1}\) and \(p_{2}\). If \(\phi\) vanishes at 3 points \(p_{0},p_{1},p_{2}\in\partial\mathbb{D}\), then (up to a permutation of the indices of \(p_{i}\)) it has exactly 3 nodal domains and the nodal set is the union of two smooth curves: one ending at \(p_{0}\) and \(p_{1}\), the other one ending at \(p_{0}\) and \(p_{2}\) and they only intersect at \(p_{0}\). If \(\phi\) vanishes at 4 points \(p_{1},p_{2},p_{3},p_{4}\in\partial D\), then (up to a permutation of the indices of \(p_{i}\)) it has exactly 3 nodal domains and the nodal set is a disjoint union of a smooth curve ending at \(p_{1}\) and \(p_{2}\) and a smooth curve ending at \(p_{3}\) and \(p_{4}\).
**Step 1** Up to a quarter rotation in the set of parametrizations, \(\phi_{0}\) has the expected symmetries, and there are orthogonal second eigenfunctions \(\eta_{1}\) and \(\eta_{2}\) such that
\[\forall(x,y)\in\mathbb{D},\eta_{i}(x,y)=\varepsilon_{i}^{1}\eta_{i}(-x,y)= \varepsilon_{i}^{2}\eta_{i}(x,-y).\]
**Proof of Step 1**
From fact 2, since the first eigenvalue is simple, we must have that
\[\phi_{0}(x,y)=\pm\phi_{0}(-x,y)=\pm\phi_{0}(x,-y).\]
From fact 1 and the Courant nodal theorem (\(\phi_{0}\) has at most two nodal domains), we deduce that, up to a quarter rotation on the set of parametrization, we have the expected symmetries on \(\phi_{0}\).
From fact 2, up to replacing \(\phi_{i}\) by \(\frac{\phi_{i}(x,y)+\phi_{i}(-x,y)+\phi_{i}(x,-y)+\phi_{i}(-x,-y)}{4}\) for \(i=1,2\), we can find two Steklov eigenfunctions \(\eta_{1}\) and \(\eta_{2}\) associated to \(\sigma_{2}(e^{v})\) such that for \(i=1,2\), \(j=1,2\), there are \(\varepsilon_{i}^{j}\in\{\pm 1\}\) such that
\[\forall(x,y)\in\mathbb{D},\eta_{i}(x,y)=\varepsilon_{i}^{1}\eta_{i}(-x,y)= \varepsilon_{i}^{2}\eta_{i}(x,-y).\]
**Step 2** Up to exchanging the indices of \(\eta_{1}\) and \(\eta_{2}\), we have that \(\varepsilon_{1}^{2}=\varepsilon_{2}^{2}=1\), that \(\varepsilon_{1}^{1}=-1\) and that \(\varepsilon_{2}^{1}=1\):
\[\forall x,y\in\mathbb{D},\eta_{1}(x,-y)=\eta_{1}(x,y)\text{ and }\eta_{1}(-x,y)=-\eta_{1}(x,y)\]
\[\forall x,y\in\mathbb{D},\eta_{2}(x,-y)=\eta_{2}(x,y)\text{ and }\eta_{2}(-x,y)=\eta_{2}(x,y)\]
**Proof of Step 2:**
Let \(i\in\{1,2\}\). From fact 1 and the Courant nodal theorem, we cannot have \(\varepsilon_{i}^{1}=\varepsilon_{i}^{2}=-1\).
More precisely, let's prove that \(\varepsilon_{1}^{2}=\varepsilon_{2}^{2}=1\). We assume by contradiction that for some \(i\in\{1,2\}\), \(\varepsilon_{i}^{2}=-1\). Then \(\varepsilon_{i}^{1}=1\), and knowing that the function \(\eta_{i}\) is orthogonal to \(\phi_{0}\), \(\eta_{i}\) must vanish at some point \((x_{0},y_{0})\in\mathbb{S}^{1}\) with \(y_{0}\neq 0\). By symmetries this implies that \(\eta_{i}\) vanishes at \((x_{0},-y_{0})\in\mathbb{S}^{1}\). Since \(\{y=0\}\) is a nodal set, there are at least two
nodal domains in \(\mathbb{D}_{+}\) and two other nodal domains in \(\mathbb{D}_{-}\). Then \(\eta_{i}\) has at least four nodal domains: this contradicts the Courant nodal theorem. Hence \(\varepsilon_{1}^{2}=\varepsilon_{2}^{2}=1\).
By a similar argument, we can prove that \(1\in\{\varepsilon_{1}^{1},\varepsilon_{2}^{1}\}\). Indeed, if not, \(\eta_{1}\) and \(\eta_{2}\) vanish on \(\{x=0\}\). Since \(\eta_{1}\) is orthogonal to \(\eta_{2}\), symmetries and the Courant nodal theorem give a contradiction.
Now, we set \(\eta=(\eta_{1},\eta_{2})\): we have that \(\forall(x,y)\in\mathbb{D},\eta(x,-y)=\eta(x,y)\). By contradiction, we assume that we also have \(\varepsilon_{1}^{1}=\varepsilon_{2}^{1}=1\), that is \(\forall(x,y)\in\mathbb{D},\eta(-x,y)=\eta(x,y)\). By fact 1, it follows that for any \(v\in\mathbb{S}^{1}\), \(\langle\eta,v\rangle\) is a second eigenfunction having at least 3 nodal domains and therefore exactly 3 nodal domains by the Courant nodal theorem. By continuity, the map \(N:v\in\mathbb{S}^{1}\mapsto\)_the number of positive nodal domains of \(\langle\eta,v\rangle\)_ is a constant map. We obtain a contradiction because for any \(v\in\mathbb{S}^{1}\), \(\{N(v),N(-v)\}=\{1,2\}\). Then, up to exchanging the indices of \(\eta_{1}\) and \(\eta_{2}\), we can assume that \(\varepsilon_{1}^{1}=-1\) and that \(\varepsilon_{2}^{1}=1\).
**Step 3:**\(\eta_{1}\) has exactly 2 nodal domains and \(\eta_{1}^{-1}(\{0\})=\{0\}\times[-1,1]\). \(\eta_{2}\) has exactly 3 nodal domains and \(\eta_{2}\) cannot vanish on \([-1,1]\times\{0\}\cup\{(0,\pm 1)\}\).
**Proof of Step 3:**
The statement for \(\eta_{1}\) is immediate: since \(\eta_{1}(-x,y)=-\eta_{1}(x,y)\), \(\eta_{1}\) vanishes on \(\{0\}\times[-1,1]\); if its nodal set contained any other point, the symmetries would produce at least \(4\) nodal domains, contradicting the Courant nodal theorem, so \(\eta_{1}\) has exactly \(2\) nodal domains and \(\eta_{1}^{-1}(\{0\})=\{0\}\times[-1,1]\). We now turn to \(\eta_{2}\). First of all, we assume by contradiction that \(\eta_{2}\) vanishes at one of the points \((\pm 1,0),(0,\pm 1)\), for instance at \((1,0)\). By symmetry \((x,y)\mapsto(x,-y)\), \((1,0)\) has to be the ending point of at least two smooth vanishing curves. By symmetry \((x,y)\mapsto(-x,y)\), \((-1,0)\) is also a zero of \(\eta_{2}\) and the ending point of at least two smooth curves. This contradicts Fact 3. Then, \(\eta_{2}\) vanishes at \(z\in\mathbb{S}^{1}\setminus\{(\pm 1,0),(0,\pm 1)\}\) and by symmetries at \(z,\bar{z},-z,-\bar{z}\).
By Fact 3, \(\{z,\bar{z},-z,-\bar{z}\}\) is the nodal set at the boundary and the two disjoint smooth nodal curves \(C_{1}\) and \(C_{2}\) must not connect \(z\) and \(-z\) or \(\bar{z}\) and \(-\bar{z}\). Let's prove that, up to changing the indices of \(C_{i}\), \(C_{1}\) connects \(z\) to \(-\bar{z}\) and \(C_{2}\) connects \(-z\) to \(\bar{z}\). We assume by contradiction that \(C_{1}\) connects \(z\) to \(\bar{z}\) and \(C_{2}\) connects \(-z\) to \(-\bar{z}\). Then, let's study the nodal set of the second eigenfunctions \(\langle\eta,v\rangle\) as \(v\) describes \(\mathbb{S}^{1}\). In particular, following Fact 3, we can write
\[\mathbb{S}^{1}=A_{2}\sqcup A_{3}\sqcup A_{4}\]
where \(A_{k}\) is the set of \(v\in\mathbb{S}^{1}\) such that \(\langle\eta,v\rangle\) vanishes at exactly \(k\) points at the boundary. \(A_{2}\) and \(A_{4}\) are open sets and \(A_{3}\) is a closed set. For instance, \((0,1)\in A_{4}\) and \((1,0)\in A_{2}\). Let \(v_{0}\in A_{3}\) be a point at the boundary of the connected component of \(A_{4}\) containing \((0,1)\). Then by symmetries, the point \(z_{0}\) intersecting the two smooth branches of the nodal set of \(\langle\eta,v_{0}\rangle\) has to belong to \(\{(\pm 1,0);(0,\pm 1)\}\). If \(z_{0}=\pm(0,1)\), then by symmetry of \(\langle\eta,v_{0}\rangle\) with respect to \((x,y)\mapsto(x,-y)\), \(-z_{0}\) is also such a singular vanishing point. This contradicts fact 3. Then \(z_{0}=\pm(1,0)\). However, following the nodal set of \(\langle\eta,v\rangle\) for \(v\in](0,1),v_{0}]\subset A_{4}\) we obtain that the nodal set of \(\langle\eta,v_{0}\rangle\) contains a smooth curve having two ends at \(z_{0}\) : it does not satisfy fact 3. It is a contradiction.
Therefore, \(C_{1}\) connects \(z\) to \(-\bar{z}\) and \(C_{2}\) connects \(-z\) to \(\bar{z}\). Finally, for \(i=1,2\), \(C_{i}\cap[-1,1]\times\{0\}=\emptyset\) because if we assume \(C_{i}\cap[-1,1]\times\{0\}\neq\emptyset\), then the set contains at least two points and by symmetries, \(\eta_{2}\) would have a nodal domain included in the interior of \(\mathbb{D}\): this is absurd.
**Conclusion:**
Up to replacing \(\Phi\) by \((\phi_{0},\cos\theta\phi_{1}+\sin\theta\phi_{2},-\sin\theta\phi_{1}+\cos\theta \phi_{2})\), we can assume that \(\phi_{1}=\alpha\eta_{1}\) for some \(\alpha\in\mathbb{R}\), so that \(\phi_{1}\) has the same symmetries as \(\eta_{1}\). Knowing that
\[\sigma_{1}\phi_{0}^{2}+\sigma_{2}\left(\phi_{1}^{2}+\phi_{2}^{2}\right)=1\text{ on }\mathbb{S}^{1}\]
we obtain that \(\forall(x,y)\in\mathbb{S}^{1},\phi_{2}(x,y)^{2}=\phi_{2}(x,-y)^{2}=\phi_{2}( -x,y)^{2}\). By harmonic extension, it is also true in \(\mathbb{D}\) and \(\phi_{2}\) must have the same symmetries as \(\eta_{2}\).
**Proposition 1.3**.: \(\Phi\) _is an embedding._
Proof.: We aim at proving that a projection of the map \(\Phi\), the map \(\eta:=(\phi_{1},\phi_{2}):\mathbb{D}_{+}\to\eta(\mathbb{D}_{+})\), is a diffeomorphism. By the symmetries, this will prove that \(\Phi:\mathbb{D}\to\mathbb{R}^{3}\) is injective and since it is an immersion, we will obtain that \(\Phi\) is an embedding.
**Step 1:**\(0\notin\eta\left(\partial\left(\mathbb{D}_{+}\right)\right)\) and \(\frac{\eta}{|\eta|}:\partial\left(\mathbb{D}_{+}\right)\to\mathbb{S}^{1}\) is a homeomorphism. In other words, \(\eta:\partial\left(\mathbb{D}_{+}\right)\to\mathbb{R}^{2}\) is an injective closed curve and, up to changing the orientation, \(\eta\wedge\eta_{\theta}\) is non-negative on \(\mathbb{S}^{1}_{+}\) and \(\eta\wedge\eta_{x}\) is non-negative on \([-1,1]\times\{0\}\). In particular, \(\eta\left(\partial\left(\mathbb{D}_{+}\right)\right)\) encloses a star-shaped domain with respect to \(0\).
**Proof of Step 1:**\(\phi_{1}^{-1}(\{0\})=\{0\}\times(-1,1)\) so that \(\phi_{1}^{-1}(\{0\})\cap\partial\left(\mathbb{D}_{+}\right)=\{(0,0),(0,1)\}\) and \(\phi_{2}\) does not vanish on this set. Therefore, we can consider \(\frac{\eta}{|\eta|}:\partial\left(\mathbb{D}_{+}\right)\to\mathbb{S}^{1}\) and prove that it is monotone. In other words, let's prove that for any \(v\in\mathbb{S}^{1}\), and every \(x\in\partial\left(\mathbb{D}_{+}\right)\) such that \(\eta(x)\in D_{v}\), there is an arc \((a,b)\subset\partial\left(\mathbb{D}_{+}\right)\) such that \(x\in(a,b)\) and
\[\forall y\in(a,x),\eta(y)\in H_{v}^{+}\text{ and }\forall y\in(x,b),\eta(y)\in H_{v}^ {-}\]
where
\[D_{v}=\{z\in\mathbb{R}^{2};\langle z,v\rangle=0\},H_{v}^{+}=\{z\in\mathbb{R}^ {2};\langle z,v\rangle>0\},H_{v}^{-}=\{z\in\mathbb{R}^{2};\langle z,v\rangle<0\}.\]
We assume that it is not true for some \(x\in\partial\left(\mathbb{D}_{+}\right)\) and \(v\in\mathbb{S}^{1}\).
If \(x\neq(\pm 1,0)\), \(\eta\) is analytic at \(x\) and we deduce that there is an arc \((a,b)\subset\partial\left(\mathbb{D}_{+}\right)\) such that \(x\in(a,b)\) and either for all \(y\in(a,b)\), \(\eta(y)\in D_{v}\) or for all \(y\in(a,b)\setminus\{x\}\), \(\eta(y)\in H_{v}^{+}\) or for all \(y\in(a,b)\setminus\{x\}\), \(\eta(y)\in H_{v}^{-}\). Since \(\langle\eta,v\rangle\) cannot vanish on an arc of \(\mathbb{S}^{1}\), nor on an arc of \([-1,1]\times\{0\}\) by symmetries, up to taking \(-v\), we assume that for all \(y\in(a,b)\setminus\{x\}\), \(\eta(y)\in H_{v}^{+}\). Then, knowing that the nodal set is smooth in the interior of \(\mathbb{D}\), \(\langle\eta,v\rangle\) has at least \(3\) distinct nodal domains in a neighborhood of \(x\). By symmetries, \(\langle\eta,v\rangle\) has at least four nodal domains and it is a contradiction.
Now we assume that \(x=(\pm 1,0)\). We assume that there is an arc \((a,b)\subset\partial\left(\mathbb{D}_{+}\right)\) such that \(x\in(a,b)\) and either for all \(y\in(a,b)\setminus\{x\}\), \(\eta(y)\in H_{v}^{+}\) or for all \(y\in(a,b)\setminus\{x\}\), \(\eta(y)\in H_{v}^{-}\). Then, knowing that the nodal set is smooth in the interior of \(\mathbb{D}\), \(\langle\eta,v\rangle\) has at least \(3\) nodal domains in a neighborhood of \(x\) in \(\mathbb{D}_{+}\). By symmetries, \(\langle\eta,v\rangle\) has at least \(5\) nodal domains and it is a contradiction. Therefore, such an arc \((a,b)\) does not exist, but by analyticity of \(\eta\) on \(\partial\left(\mathbb{D}_{+}\right)\setminus\{(\pm 1,0)\}\), such an arc \((a,b)\) would exist in a neighborhood of a point \(y\in\partial\left(\mathbb{D}_{+}\right)\setminus\{(\pm 1,0)\}\) and it is not possible.
In order to conclude step 1, we just notice that \(\phi_{1}\) and \(\phi_{2}\) vanish only twice on \(\partial\left(\mathbb{D}_{+}\right)\), so that the degree of \(\frac{\eta}{|\eta|}:\partial\left(\mathbb{D}_{+}\right)\to\mathbb{S}^{1}\) is \(1\) and it is a homeomorphism.
**Step 2:**\(\eta_{r}\wedge\eta_{\theta}=\eta_{x}\wedge\eta_{y}\) does not vanish on \(\mathbb{S}^{1}_{+}\).
**Proof of Step 2:** We denote \(n=e^{-2u}\Phi_{x}\wedge\Phi_{y}\) a normal vector of the free-boundary minimal surface. Then \(n_{0}=e^{-2u}\eta_{x}\wedge\eta_{y}\) and \(n\) satisfies the equation of harmonic maps
into \(\mathbb{S}^{2}\). Notice that
\[\begin{cases}n_{x}=-ee^{-2u}\Phi_{x}-fe^{-2u}\Phi_{y}\\ n_{y}=-fe^{-2u}\Phi_{x}+ee^{-2u}\Phi_{y}\end{cases}\]
Let \(z\in\mathbb{S}^{1}_{+}\). Notice in particular that \(z\neq(\pm 1,0)\) and \(\phi_{0}(z)\neq 0\). We assume by contradiction that \(n_{0}(z)=0\). By step 1, \(n_{0}\) has a constant sign in a neighborhood of \(z\). Therefore, we must have that \(n_{0,\theta}(z)=0\). Up to a rotation in the set of parametrization, we assume that, at the point \(z\), we have \(f_{r}=f_{x}\) and \(f_{\theta}=f_{y}\) for any function \(f:\mathbb{D}\to\mathbb{R}\). By a Taylor expansion,
\[\Phi(z+h)= \Phi(z)+h_{1}\Phi_{x}(z)+h_{2}\Phi_{y}(z)\] \[+\frac{1}{2}\left((h_{1})^{2}\Phi_{xx}(z)+2h_{1}h_{2}\Phi_{xy}(z )+(h_{2})^{2}\Phi_{yy}(z)\right)+o(|h|^{2})\]
Since \(n_{0}(z)=0\), we have that \(\left\langle\nabla\eta(z),\tilde{n}(z)\right\rangle=\left\langle\nabla\Phi(z ),n(z)\right\rangle=0\), and by the Steklov eigenfunction equation, \(\sigma_{2}e^{v(z)}\left\langle\eta(z),\tilde{n}(z)\right\rangle=\left\langle\eta_{r}(z),\tilde{n}(z)\right\rangle=0\), hence \(\left\langle\eta(z),\tilde{n}(z)\right\rangle=0\), so that
\[\left\langle\eta(z+h),\tilde{n}(z)\right\rangle=\frac{1}{2}\left(\left((h_{1} )^{2}-(h_{2})^{2}\right)e+2h_{1}h_{2}f\right)+o(|h|^{2})\]
Since the nodal set of the eigenfunction \(\left\langle\eta,\tilde{n}(z)\right\rangle\) has only one branch starting from \(z\) (see Step 1), then by the Morse lemma, we must have \(e=0\). Since \(n_{0,y}(z)=0\), we obtain that \(-f\phi_{0,x}(z)=0\). Since by the Steklov eigenfunction equation \(\phi_{0,x}(z)=\sigma_{1}e^{v(z)}\phi_{0}(z)\), we obtain that either \(\phi_{0}(z)=0\) or \(f=0\). Since \(\phi_{0}(z)\neq 0\) we obtain \(f=0\). Now, since \(\left\langle\eta,\tilde{n}(z)\right\rangle\) is harmonic and smooth up to the boundary of \(\mathbb{D}\), it has a harmonic extension in a neighborhood of \(z\). Since \(e=f=0\) at \(z\), the nodal set of \(\left\langle\eta,\tilde{n}(z)\right\rangle\) has a singularity of multiplicity higher than 3 at \(z\), so that the nodal set of \(\left\langle\eta,\tilde{n}(z)\right\rangle\) has at least two branches starting from \(z\) in \(\mathbb{D}\), and it is a contradiction.
**Step 3:**\(\eta\wedge\eta_{x}\) does not vanish on \((-1,1)\times\{0\}\).
**Proof of Step 3:** We assume that for some \(z\in(-1,1)\times\{0\}\), \(\eta(z)\wedge\eta_{x}(z)=0\). By symmetries, we know that \(\phi_{0,x}(z)=0\) and \(\eta_{y}(z)=0\). Therefore, if we denote \(n=(n_{0},\tilde{n})\), and \(\varphi=\left\langle\eta,\tilde{n}(z)\right\rangle\), we obtain that \(\nabla\varphi(z)=0\) and \(\varphi(z)=0\), so that the nodal set of \(\varphi\) is singular at \(z\): it is a contradiction.
**Step 4:** Setting \(e=\left\langle\Phi_{xx},n\right\rangle=-\left\langle\Phi_{yy},n\right\rangle\) and \(\tilde{e}=\left\langle\Phi_{xxx},n\right\rangle=-\left\langle\Phi_{yyx},n\right\rangle\), where we denote \(n=e^{-2u}\Phi_{x}\wedge\Phi_{y}\), then \(e\) and \(\tilde{e}\) do not vanish simultaneously at the points \((\pm 1,0)\).
**Proof of Step 4:** We have by a Taylor expansion that for \(z\in\{(\pm 1,0)\}\)
\[\eta(z+h)=\sum_{k=0}^{+\infty}\frac{1}{k!}\cos(k\theta)r^{k}\partial_{x}^{k}\eta(z)\]
for \(h=r(\cos\theta,\sin\theta)\), because \(\eta\) is harmonic and satisfies \(\eta(x,-y)=\eta(x,y)\). Now, thanks to the Steklov equation, \(\eta_{x}(z)\) and \(\eta(z)\) are parallel vectors and are orthogonal to \(\tilde{n}(z)\), where \(n(z)=(n_{0}(z),\tilde{n}(z))\), since \(n_{0}(z)=0\). Then, if \(e=\tilde{e}=0\),
\[\left\langle\eta(z+h),\tilde{n}(z)\right\rangle=c\cos(k\theta)r^{k}+o(r^{k})\]
for a constant \(c\neq 0\) and \(k\geq 4\) and it is impossible because of the structure of the nodal set of the second eigenfunction \(\left\langle\eta,\tilde{n}(z)\right\rangle\).
**Step 5:** We still denote by \(\Phi=(\phi_{0},\eta):\mathbb{D}_{r_{0}}\to\mathbb{R}^{3}\) a harmonic extension of \(\Phi\), where \(r_{0}>1\). There is \(\varepsilon_{0}>0\) such that for \(\varepsilon\leq\varepsilon_{0}\), there is a smooth function \(\eta_{\varepsilon}:\mathbb{D}_{r_{0}}\to\mathbb{R}^{2}\) satisfying that
* \(\eta_{\varepsilon}:\partial\left(\mathbb{D}_{+}\right)\to\mathbb{R}^{2}\) parametrizes a Jordan curve,
* \(\eta_{\varepsilon,x}\wedge\eta_{\varepsilon,y}\) is positive on \(\partial\left(\mathbb{D}_{+}\right)\),
* there is a positive smooth function \(\sigma_{\varepsilon}\) such that \[-div\left(\sigma_{\varepsilon}\nabla\eta_{\varepsilon}\right)=0\text{ in }\mathbb{D}_{+}\]
* \(\eta_{\varepsilon}\) converges to \(\eta\) in \(\mathcal{C}^{2}\left(\mathbb{D}_{r_{0}}\right)\) as \(\varepsilon\to 0\).
**Proof of Step 5:** By Step 1 and Step 2, \(\eta_{x}\wedge\eta_{y}=\eta_{r}\wedge\eta_{\theta}\) is positive on \(\mathbb{S}_{+}^{1}\). Setting \(\tilde{\eta}:=\frac{\eta}{1+\varepsilon\phi_{0}}\), a straightforward computation using the Steklov eigenvalue equation gives
\[\tilde{\eta}_{x}\wedge\tilde{\eta}_{y}=\tilde{\eta}_{r}\wedge\tilde{\eta}_{ \theta}=\frac{1+\varepsilon\left(1-\frac{\sigma_{1}}{\sigma_{2}}\right)\phi_{0 }}{\left(1+\varepsilon\phi_{0}\right)^{3}}\eta_{r}\wedge\left(\eta_{\theta}- \frac{\varepsilon\phi_{0,\theta}}{1+\varepsilon\phi_{0}}\eta\right)=\frac{1+ \varepsilon\left(1-\frac{\sigma_{1}}{\sigma_{2}}\right)\phi_{0}}{\left(1+ \varepsilon\phi_{0}\right)^{3}}\eta_{r}\wedge\eta_{\theta}>0\]
on \(\mathbb{S}_{+}^{1}\) for \(\varepsilon>0\) small enough. Now, we compute \(\tilde{\eta}_{x}\wedge\tilde{\eta}_{y}\) on \([-1,1]\times\{0\}\). By the symmetries, we know that \(\eta_{y}=0\) and \(\phi_{0}=\phi_{0,x}=0\). Then
\[\tilde{\eta}_{x}\wedge\tilde{\eta}_{y}=\eta_{x}\wedge\left(-\varepsilon\phi_{0,y}\eta\right)=\varepsilon\phi_{0,y}\left(\eta\wedge\eta_{x}\right)\]
on \([-1,1]\times\{0\}\). Up to replacing \(\phi_{0}\) by \(-\phi_{0}\), we assume that \(\phi_{0}\) is positive on \(\mathbb{D}_{+}\), so that by the Hopf lemma, \(\phi_{0,y}>0\). By Step 1 and Step 3, \(\tilde{\eta}_{x}\wedge\tilde{\eta}_{y}\) is then positive on \((-1,1)\times\{0\}\).
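For completeness, the formula for \(\tilde{\eta}_{x}\wedge\tilde{\eta}_{y}\) on \([-1,1]\times\{0\}\) displayed above follows from a direct differentiation: there, \(\phi_{0}=\phi_{0,x}=0\) and \(\eta_{y}=0\), so that

\[\tilde{\eta}_{x}=\frac{\eta_{x}}{1+\varepsilon\phi_{0}}-\frac{\varepsilon\phi_{0,x}\,\eta}{(1+\varepsilon\phi_{0})^{2}}=\eta_{x}\quad\text{ and }\quad\tilde{\eta}_{y}=\frac{\eta_{y}}{1+\varepsilon\phi_{0}}-\frac{\varepsilon\phi_{0,y}\,\eta}{(1+\varepsilon\phi_{0})^{2}}=-\varepsilon\phi_{0,y}\,\eta,\]

and taking the wedge product gives \(\tilde{\eta}_{x}\wedge\tilde{\eta}_{y}=\varepsilon\phi_{0,y}\left(\eta\wedge\eta_{x}\right)\).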
Now, we assume that \(e(1,0)\neq 0\) and we set \(\eta_{\varepsilon}=\frac{\eta+\alpha_{\varepsilon}\eta_{y}}{1+\varepsilon\phi _{0}}\) where \(\alpha_{\varepsilon}>0\) and \(\alpha_{\varepsilon}\to 0\) as \(\varepsilon\to 0\). By similar computations as for \(\tilde{\eta}\),
\[\eta_{\varepsilon,x}\wedge\eta_{\varepsilon,y}=\frac{1+\varepsilon\left(1- \frac{\sigma_{1}}{\sigma_{2}}\right)\phi_{0}}{\left(1+\varepsilon\phi_{0} \right)^{3}}\eta_{r}\wedge\eta_{\theta}+\alpha_{\varepsilon}\left(\eta_{xy} \wedge\eta_{y}+\eta_{x}\wedge\eta_{yy}\right)+O\left(\alpha_{\varepsilon}^{2}+ \varepsilon^{2}\right)\]
on \(\mathbb{S}_{+}^{1}\) and
\[\eta_{\varepsilon,x}\wedge\eta_{\varepsilon,y}=\varepsilon\phi_{0,y}\left(\eta\wedge\eta_{x} \right)+\alpha_{\varepsilon}\eta_{x}\wedge\eta_{yy}+O\left(\alpha_{\varepsilon }^{2}+\varepsilon^{2}\right)\]
on \([-1,1]\times\{0\}\), knowing that \(\eta_{xy}=0\) on \([-1,1]\times\{0\}\). Notice that in both formulas, \(O\left(\alpha_{\varepsilon}^{2}+\varepsilon^{2}\right)\) is uniform in \(\mathbb{D}\). Moreover, we have by the symmetries \(\phi_{1}(-x,y)=-\phi_{1}(x,y)\) and \(\phi_{2}(-x,y)=\phi_{2}(x,y)\) that
\[\left(\eta_{x}\wedge\eta_{yy}\right)(-1,0)=\left(\eta_{x}\wedge\eta_{yy} \right)(1,0)\]
and that \(\eta_{x}\wedge\eta_{yy}(1,0)=-e(1,0)\eta_{x}(1,0)\wedge\tilde{n}(1,0)\neq 0\). Then, up to taking \(-\phi_{1}\) instead of \(\phi_{1}\) and choosing \(\alpha_{\varepsilon}\) such that \(\alpha_{\varepsilon}=o(\varepsilon)\) and \(\varepsilon^{2}=o(\alpha_{\varepsilon})\), there is \(\varepsilon_{0}>0\) such that for any \(\varepsilon\leq\varepsilon_{0}\), \(\eta_{\varepsilon}:\mathbb{D}_{r_{0}}\to\mathbb{R}^{2}\) is a local diffeomorphism in a neighbourhood of any point of \(\partial\mathbb{D}_{+}\) and \(\eta_{\varepsilon}:\partial\left(\mathbb{D}_{+}\right)\to\mathbb{R}^{2}\) parametrizes a Jordan curve. Finally, setting \(\sigma_{\varepsilon}=(1+\varepsilon\phi_{0})^{2}\), we have
\[-div\left(\sigma_{\varepsilon}\nabla\eta_{\varepsilon}\right)= -div\left((1+\varepsilon\phi_{0})\nabla\left(\eta+\alpha_{ \varepsilon}\eta_{y}\right)-\left(\eta+\alpha_{\varepsilon}\eta_{y}\right) \nabla\left((1+\varepsilon\phi_{0})\right)\right)\] \[= (1+\varepsilon\phi_{0})\Delta\left(\eta+\alpha_{\varepsilon}\eta_{ y}\right)-\left(\eta+\alpha_{\varepsilon}\eta_{y}\right)\Delta(1+\varepsilon\phi_{0})=0\]
and the expected equation is satisfied. Notice that if \(e=0\), then by Step 4, \(\tilde{e}\neq 0\) and the same argument applies by setting \(\eta_{\varepsilon}=\frac{\eta+\alpha_{\varepsilon}\eta_{xy}}{1+\varepsilon\phi _{0}}\).
**Conclusion:** By Alessandrini and Nesi [1], we obtain that \(\eta_{\varepsilon}:\Omega_{\varepsilon}\to\eta_{\varepsilon}(\Omega_{\varepsilon})\) is a diffeomorphism, where \(\Omega_{\varepsilon}\) is chosen as a smooth domain such that
and \(\eta_{\varepsilon}:\partial\Omega_{\varepsilon}\to\mathbb{R}^{2}\) is a parametrization of a Jordan curve. Then \(\eta_{\varepsilon}:\mathbb{D}_{+}\to\eta(\mathbb{D}_{+})\) is a diffeomorphism. Letting \(\varepsilon\to 0\), we obtain that \(\eta_{x}\wedge\eta_{y}\geq 0\) on \(\mathbb{D}_{+}\). The zero set of \(\eta_{x}\wedge\eta_{y}\) is the same as the zero set of the first coordinate \(n_{0}\) of the normal vector to the surface, which satisfies an eigenfunction equation \(\Delta n_{0}=|\nabla n|^{2}\,n_{0}\) (because \(n\) is a harmonic map into the sphere). \(n_{0}\) has only one nodal domain. Then \(\eta_{x}\wedge\eta_{y}>0\) on \(\mathbb{D}_{+}\). Then \(\eta_{\varepsilon}^{-1}\) converges in \(\mathcal{C}^{2}(K)\) for any compact subset \(K\) of \(\eta\left(\mathbb{D}_{+}\right)\) to a function \(\psi:\eta\left(\mathbb{D}_{+}\right)\to\mathbb{D}_{+}\) that is the inverse function of \(\eta\). In particular, \(\eta\) is injective on \(\mathbb{D}_{+}\). By symmetries, \(\Phi\) is injective on \(\mathbb{D}\) and we obtain the expected result. \(\diamondsuit\)
## 2. Optimization of combinations of Steklov eigenvalues
In the current section, we prove Theorem 0.1. In particular, we gather the previous techniques for the maximization of combinations of Steklov eigenvalues [10] and [10], and adapt them, for the first time, to a restriction of the admissible conformal factors \(e^{v}\) at the boundary to symmetric ones on a surface \(\Sigma\). For simplicity, we choose to write it in our case \(\Sigma=\mathbb{D}\) and
\[\forall x,y\in\mathbb{S}^{1},e^{v(x,y)}=e^{v(-x,y)}=e^{v(x,-y)}\]
but it appears that such a method for symmetry constraints can be set up in a general framework.
### Critical metrics
We denote by \(\sigma_{k}(f)\) the \(k\)-th non-zero Steklov eigenvalue with respect to the density \(f:\mathbb{S}^{1}\to\mathbb{R}_{+}^{*}\) on \(\mathbb{D}\), that is, the restriction to the boundary of the density of the measure associated to some metric conformal to the flat one on the disk. We set \(\bar{\sigma}_{k}(f)=\sigma_{k}(f)L_{f}(\mathbb{S}^{1})\), where \(L_{f}\left(\mathbb{S}^{1}\right)=\int_{\mathbb{S}^{1}}fd\theta\) is the length of the boundary with respect to \(f\). We set
\[V=\mathcal{C}^{0}\left(\mathbb{S}^{1}\right)\text{ and }X=\{f\in\mathcal{C}^{0}( \mathbb{S}^{1}),f>0\text{ and }\forall x,y\in\mathbb{S}^{1},f(x,y)=f(-x,y)=f(x,-y)\}\]
and for \(f\in X\) we set
\[E(f):=F\left(\bar{\sigma}_{1}(f),\cdots,\bar{\sigma}_{m}(f)\right)\]
where \(F:\mathbb{R}^{m}\to\mathbb{R}_{+}\) is a \(\mathcal{C}^{1}\) function such that for all \(i\in\{1,\cdots,m\}\), \(\partial_{i}F\leq 0\) everywhere. Notice that in the current paper, \(F=h_{s,t}\) only depends on the first and second Steklov eigenvalues. We denote by \(\partial E(f)\) the subdifferential of \(E\) at \(f\).
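For concreteness, with the combination \(h_{s,t}\) used in the current paper (see Proposition 2.1 below), \(m=2\) and this reads

\[E(f)=\left(\bar{\sigma}_{1}(f)^{-s}+t\,\bar{\sigma}_{2}(f)^{-s}\right)^{\frac{1}{s}},\qquad\partial_{1}h_{s,t}=-\left(\bar{\sigma}_{1}^{-s}+t\bar{\sigma}_{2}^{-s}\right)^{\frac{1}{s}-1}\bar{\sigma}_{1}^{-s-1},\qquad\partial_{2}h_{s,t}=-t\left(\bar{\sigma}_{1}^{-s}+t\bar{\sigma}_{2}^{-s}\right)^{\frac{1}{s}-1}\bar{\sigma}_{2}^{-s-1},\]

so that the sign condition \(\partial_{i}F\leq 0\) indeed holds for every \(s\neq 0\) and \(t>0\).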
We proved in [10]
\[\partial E(f)\subset\overline{\omega}_{(\phi_{1},\cdots,\phi_{m})\in \mathcal{O}_{E(f)}}\left\{\sum_{i=1}^{m}d_{i}\bar{\sigma}_{i}(f)\left(\frac{1 }{\int_{\mathbb{S}^{1}}f}-\left(\phi_{i}\right)^{2}\right)\right\}\!, \tag{2.1}\]
where \((\phi_{1},\cdots,\phi_{m})\) lies in the set \(\mathcal{O}_{E(f)}\) of \(L^{2}(\partial\Sigma,fd\theta)\)-orthonormal families where \(\phi_{i}\in E_{i}(f)\) and \(d_{i}=\partial_{i}F\left(\bar{\sigma}_{1}(f),\cdots,\bar{\sigma}_{m}(f)\right)\leq 0\). Notice that in this case the subdifferential is a space of functions on \(\mathbb{S}^{1}\), while it is defined as a subspace of \(V^{\star}\), the set of Radon measures on \(\mathbb{S}^{1}\): here, we identify a function \(\psi\) with the measure \(\psi d\theta\) on \(\mathbb{S}^{1}\).
From this (see also [10]) and the multiplicity results by [11] for the first and second eigenvalues of the disk (and Proposition 1.2 and Proposition 1.3 for the embedding part), we obtain in our case \(F=h_{s,t}\) that the critical densities \(e^{u}\) (densities such that \(0\in\partial E(e^{u})\)) satisfy:
**Proposition 2.1**.: _If \(e^{u}\) is a critical density of \(h_{s,t}(\bar{\sigma}_{1},\bar{\sigma}_{2})=\left(\bar{\sigma}_{1}^{-s}+t\bar{ \sigma}_{2}^{-s}\right)^{\frac{1}{s}}\), then there is a free boundary minimal immersion \(\Phi=(\phi_{0},\phi_{1},\phi_{2}):\mathbb{D}\to co(\mathcal{E}_{\sigma}),\) where \(\sigma=(\bar{\sigma}_{1}(e^{u}),\bar{\sigma}_{2}(e^{u}),\bar{\sigma}_{2}(e^{u }))\), \(\phi_{0}\) is a Steklov eigenfunction associated to \(\sigma_{1}(e^{u})\) and \(\phi_{1}\) and \(\phi_{2}\) are associated to \(\sigma_{2}(e^{u})\), such that \((\phi_{0},\phi_{1},\phi_{2})\) are independent functions in \(L^{2}(\mathbb{S}^{1},e^{u}d\theta)\), or \(\phi_{2}=0\) and \((\phi_{0},\phi_{1})\) are independent. Moreover,_
\[\int_{\mathbb{S}^{1}}\phi_{0}^{2}e^{u}d\theta=\frac{\bar{\sigma}_{1}(e^{u})^{- s-1}}{f_{s,t}(\bar{\sigma}_{1}(e^{u}),\bar{\sigma}_{2}(e^{u}))}\int_{\mathbb{S}^{1} }e^{u}d\theta\]
_and_
\[\int_{\mathbb{S}^{1}}\left(\phi_{1}^{2}+\phi_{2}^{2}\right)e^{u}d\theta=\frac {t\bar{\sigma}_{2}(e^{u})^{-s-1}}{f_{s,t}(\bar{\sigma}_{1}(e^{u}),\bar{\sigma} _{2}(e^{u}))}\int_{\mathbb{S}^{1}}e^{u}d\theta.\]
_If in addition \(e^{u}\) is symmetric with respect to the two axes of \(\mathbb{S}^{1}\), then \(\Phi\) is an embedding._
### Construction of Palais-Smale sequences
We set
\[|\partial E(f)|=-\inf_{h\in Y}\max_{\psi\in\partial E(f)}\int_{\mathbb{S}^{1} }h\psi d\theta\]
where \(Y=\{h\in\mathcal{C}^{0}(\mathbb{S}^{1});h\geq 0,\int_{\mathbb{S}^{1}}hd\theta=1, \forall x,y\in\mathbb{S}^{1},h(x,y)=h(-x,y)=h(x,-y)\}\). We proved in [22] the following deformation lemma:
**Proposition 2.2**.: _If there is \(\varepsilon_{0}>0\) and \(\delta>0\) such that_
\[\forall f\in X;\forall\varepsilon\in(0,\varepsilon_{0}),|E(f)-c|\leq \varepsilon\Rightarrow|\partial E(f)|\geq\delta\;,\]
_Then \(\forall\varepsilon\in(0,\varepsilon_{0})\), there is \(\eta:X\to X\) such that_
* \(\eta(f)=f\) _for any_ \(f\in\{E\geq c+\varepsilon_{0}\}\cup\{E\leq c-\varepsilon_{0}\}\)__
* \(\forall f\in X,E(\eta(f))\leq E(f)\)__
* \(\eta(\{E\leq c+\varepsilon\})\subset\{E\leq c-\varepsilon\}\)__
Thanks to this lemma, we obtain the existence of a Palais-Smale minimizing sequence \(f_{\varepsilon}\in X\), that is,
\[|E(f_{\varepsilon})-c|\leq\varepsilon\text{ and }\delta_{\varepsilon}=| \partial E(f_{\varepsilon})|\to 0\text{ as }\varepsilon\to 0.\]
We rewrite \(\delta_{\varepsilon}=|\partial E(f_{\varepsilon})|\) as
\[\forall\tau\in\bar{Y},\exists\psi\in\partial E(f_{\varepsilon});-\int_{ \mathbb{S}^{1}}\psi d\tau\leq\delta_{\varepsilon}\]
where \(\bar{Y}=\{\tau\in\mathcal{P}(\mathbb{S}^{1});\tau=s_{1}^{\star}\tau=s_{2}^{ \star}\tau\}\) is the subset of the set of probability measures \(\mathcal{P}(\mathbb{S}^{1})\) invariant under the symmetries \(s_{1}(x,y)=(-x,y)\) and \(s_{2}(x,y)=(x,-y)\). It can be rewritten as
\[\forall\tau\in\bar{Y},\exists\psi\in\partial E(f_{\varepsilon});-\int_{ \mathbb{S}^{1}}(\psi+\delta_{\varepsilon})d\tau\leq 0\]
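To unpack this rewriting (a short sketch; the case of a general measure \(\tau\in\bar{Y}\) is obtained from the case of densities \(h\in Y\) by a weak-\(\star\) approximation argument): since \(\partial E(f_{\varepsilon})\) is compact (as used below), for every \(h\in Y\) the maximum below is attained and

\[\max_{\psi\in\partial E(f_{\varepsilon})}\int_{\mathbb{S}^{1}}h\psi\,d\theta\geq\inf_{h'\in Y}\max_{\psi\in\partial E(f_{\varepsilon})}\int_{\mathbb{S}^{1}}h'\psi\,d\theta=-\delta_{\varepsilon},\]

that is, \(-\int_{\mathbb{S}^{1}}h\psi\,d\theta\leq\delta_{\varepsilon}\) for some \(\psi\in\partial E(f_{\varepsilon})\); the last formulation uses that \(\tau\) has total mass \(1\).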
Notice that since for any eigenfunction \(\phi\), \(\phi\circ s_{1}\) and \(\phi\circ s_{2}\) are eigenfunctions with the same mass:
\[\psi\in\partial E(f_{\varepsilon})\Rightarrow\psi\circ s_{1}\in\partial E(f_{ \varepsilon})\text{ and }\psi\circ s_{2}\in\partial E(f_{\varepsilon})\]
and we can deduce that
\[-(\partial E(f_{\varepsilon})+\{\delta_{\varepsilon}\})\cap\{a\in\mathcal{C}^{ 0};a\leq 0\}\neq\emptyset. \tag{2.2}\]
Indeed, if not, we use the classical Hahn-Banach theorem to separate these two spaces (the first one is compact, the second one is closed in \(\mathcal{C}^{0}\)) by \(\tau\in\left(\mathcal{C}^{0}\right)^{\star}\) satisfying
\[\forall\psi\in\partial E(f_{\varepsilon}),\int_{\mathbb{S}^{1}}-(\psi+ \delta_{\varepsilon})d\tau>0\]
\[\forall a\in\mathcal{C}^{0};a\leq 0,\left\langle\tau,a\right\rangle\leq 0.\]
The second condition implies that \(\tau\) is a non-negative Radon measure. Up to a renormalization, and up to a symmetrization, we assume that \(\tau\in\bar{Y}\) and we obtain a contradiction. Then the Palais-Smale condition (PS) can be written as the assumption of Proposition 2.3.
### Bubble convergence of Palais-Smale sequences
In [10] we stated the following proposition (in our case \(\Sigma=\mathbb{D}\), endowed with the Euclidean metric)
**Proposition 2.3**.: _Let \(e^{u_{\varepsilon}}\), \(\sigma_{\varepsilon}\), \(\Phi_{\varepsilon}:\Sigma\to\mathbb{R}^{m}\) be a smooth sequence of maps satisfying the Palais-Smale assumption \((PS)\) as \(\varepsilon\to 0\), that is_
* \(\sigma_{\varepsilon}=\text{diag}(\sigma_{k_{1}^{\varepsilon}}^{\varepsilon}, \cdots,\sigma_{k_{m}^{\varepsilon}}^{\varepsilon})\) _where the diagonal terms are eigenvalues associated to_ \(e^{u_{\varepsilon}}\) _with uniformly bounded spectral indices_ \(k_{i}^{\varepsilon}\)_, and_ \(\sigma_{\varepsilon}\to\sigma\) _as_ \(\varepsilon\to 0\)_._
* \(\Delta_{g}\Phi_{\varepsilon}=0\) _in_ \(\Sigma\) _and_ \(\partial_{\nu}\Phi_{\varepsilon}=e^{u_{\varepsilon}}\sigma_{\varepsilon} \Phi_{\varepsilon}\) _on_ \(\partial\Sigma\)__
* \(\int_{\partial\Sigma}e^{u_{\varepsilon}}dL_{g}=\int_{\partial\Sigma}|\Phi_{ \varepsilon}|_{\sigma_{\varepsilon}}^{2}\,e^{u_{\varepsilon}}dL_{g}=\int_{ \Sigma}|\nabla\Phi_{\varepsilon}|_{g}^{2}\,dA_{g}=1\)__
* \(|\Phi_{\varepsilon}|_{\sigma_{\varepsilon}}^{2}\geq 1-\delta_{\varepsilon}\) _uniformly on_ \(\partial\Sigma\)_, where_ \(\delta_{\varepsilon}\to 0\) _as_ \(\varepsilon\to 0\)_._
_Then, up to a subsequence \(\Phi_{\varepsilon}\) bubble tree converges in \(W^{1,2}\) to \(\Phi_{0}:\Sigma\to co\left(\mathcal{E}_{\sigma}\right)\), and \(\Phi_{j}:\mathbb{D}\to co\left(\mathcal{E}_{\sigma}\right)\) for \(j=1,\cdots,l\) (\(l\geq 0\)) with an energy identity:_
\[1=\int_{\Sigma}\left|\nabla\Phi_{0}\right|_{g}^{2}dA_{g}+\sum_{j=1}^{l}\int_{ \mathbb{D}}\left|\nabla\Phi_{j}\right|_{h}^{2}dA_{h}\]
_Moreover, \(\Phi_{j}\) are smooth harmonic maps with free boundary for \(j=0,\cdots,l\) and their \(i\)-th coordinates are eigenfunctions associated to \(\lambda_{k_{i}}\) on the surface \(\Sigma\cup\bigcup_{1\leq j\leq l}\mathbb{D}\) with respect to the metrics \(e^{2u}g\) on \(\Sigma\) such that \(e^{u}=\partial_{\nu}\Phi_{0}.\Phi_{0}\) on \(\partial\Sigma\) and \(e^{2v_{j}}\xi\) on \(\mathbb{D}\) such that \(e^{v_{j}}=\partial_{\nu}\Phi_{j}.\Phi_{j}\) on \(\mathbb{S}^{1}\)._
We proved it in [10] under the assumption that \(\sigma_{k_{j}^{\varepsilon}}^{\varepsilon}\) does not converge to \(0\), leaving to the reader the remaining case, which is analogous to the Laplacian case proved in all generality in [10]. In the current paper, for the sake of completeness, we shall add the missing arguments for convergence of Palais-Smale sequences in the specific case \(\Sigma=\mathbb{D}\), and \(e^{u_{\varepsilon}}\) symmetric with respect to the two axes, \(\sigma_{\varepsilon}=(\sigma_{1}^{\varepsilon},\sigma_{2}^{\varepsilon}, \sigma_{2}^{\varepsilon})\), \(\Phi_{\varepsilon}=(\phi_{0}^{\varepsilon},\phi_{1}^{\varepsilon},\phi_{2}^{ \varepsilon})=(\phi_{0}^{\varepsilon},\eta_{\varepsilon})\), where \(\phi_{0}^{\varepsilon}\) is an eigenfunction, \(\phi_{1}^{\varepsilon}\) and \(\phi_{2}^{\varepsilon}\) are independent second eigenfunctions (or possibly \(\phi_{2}^{\varepsilon}=0\)), \(\sigma_{1}^{\varepsilon}\to 0\) and \(\sigma_{2}^{\varepsilon}\) uniformly lower bounded as \(\varepsilon\to 0\).
Following [10] (with a slight improvement in the current paper) we first have:
**Claim 2.1**.: _Let \(\omega_{\varepsilon}\) be the harmonic extension of \(\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}\) in \(\mathbb{D}\) and \(\widetilde{\Phi_{\varepsilon}}=\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\). Then_
\[\int_{\mathbb{D}}\left|\nabla\omega_{\varepsilon}\right|^{2}+\int_{\mathbb{D}} \left|\nabla\left(\Phi_{\varepsilon}-\widetilde{\Phi_{\varepsilon}}\right) \right|_{\sigma_{\varepsilon}}^{2}=O\left(\delta_{\varepsilon}\right)\text{ as }\varepsilon\to 0. \tag{2.3}\]
Proof.: We have
\[\int_{\mathbb{D}}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{ \varepsilon}}\right|_{\sigma_{\varepsilon}}^{2} -\int_{\mathbb{D}}\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{ \varepsilon}}^{2}-\int_{\mathbb{D}}\left|\nabla\left(\Phi_{\varepsilon}-\frac{ \Phi_{\varepsilon}}{\omega_{\varepsilon}}\right)\right|_{\sigma_{\varepsilon}}^ {2}\] \[=2\int_{\mathbb{D}}\left\langle\nabla\Phi_{\varepsilon},\nabla \left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}} \right)\right\rangle_{\sigma_{\varepsilon}}\] \[=2\int_{\mathbb{D}}\Delta\Phi_{\varepsilon}\sigma_{\varepsilon} \cdot\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}} \right)+2\int_{\mathbb{S}^{1}}\partial_{r}\Phi_{\varepsilon}.\sigma_{ \varepsilon}\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{ \varepsilon}}\right)\] \[=2\int_{\mathbb{S}^{1}}e^{u_{\varepsilon}}\sigma_{\varepsilon} \cdot\Phi_{\varepsilon}.\sigma_{\varepsilon}.\left(\Phi_{\varepsilon}-\frac{ \Phi_{\varepsilon}}{\omega_{\varepsilon}}\right)\] \[=2\int_{\mathbb{S}^{1}}e^{u_{\varepsilon}}\frac{\left|\sigma_{ \varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}\left( \omega_{\varepsilon}^{2}-\omega_{\varepsilon}\right)=O\left(\delta_{ \varepsilon}\right)\]
where we integrated \(\Delta\Phi_{\varepsilon}=0\) in \(\mathbb{D}\) and \(\partial_{r}\Phi_{\varepsilon}=\sigma_{\varepsilon}e^{u_{\varepsilon}}\Phi_ {\varepsilon}\) on \(\mathbb{S}^{1}\) against \(\sigma_{\varepsilon}.\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{ \omega_{\varepsilon}}\right)\), and we used that \(\omega_{\varepsilon}^{2}=\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}\geq 1-\delta_{\varepsilon}\) and that \(\int\left(\omega_{\varepsilon}^{2}-1\right)e^{u_{\varepsilon}}=0\). In particular, we have
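The last identity used above is just the normalization (iii): on \(\mathbb{S}^{1}\) one has \(\omega_{\varepsilon}=\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}\), so that

\[\int_{\mathbb{S}^{1}}\left(\omega_{\varepsilon}^{2}-1\right)e^{u_{\varepsilon}}d\theta=\int_{\mathbb{S}^{1}}\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}e^{u_{\varepsilon}}d\theta-\int_{\mathbb{S}^{1}}e^{u_{\varepsilon}}d\theta=1-1=0.\]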
\[0\leq\int_{\mathbb{D}}\left|\nabla\left(\Phi_{\varepsilon}-\frac{\Phi_{ \varepsilon}}{\omega_{\varepsilon}}\right)\right|_{\sigma_{\varepsilon}}^{2} \leq\int_{\mathbb{D}}\left(\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{ \varepsilon}}\right|_{\sigma_{\varepsilon}}^{2}-\left|\nabla\Phi_{ \varepsilon}\right|_{\sigma_{\varepsilon}}^{2}\right)+O(\delta_{\varepsilon})\]
and knowing that, by straightforward computations,
\[\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|_{\sigma_ {\varepsilon}}^{2}-\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon }}^{2}=\left(1-\omega_{\varepsilon}^{2}\right)\left|\nabla\frac{\Phi_{ \varepsilon}}{\omega_{\varepsilon}}\right|_{\sigma_{\varepsilon}}^{2}+\left| \nabla\omega_{\varepsilon}\right|^{2}\frac{\left|\Phi_{\varepsilon}\right|_{ \sigma_{\varepsilon}}^{2}}{\omega_{\varepsilon}^{2}}-\nabla\ln\omega_{ \varepsilon}\nabla\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}\]
and knowing that
\[\int_{\mathbb{D}}\nabla\ln\omega_{\varepsilon}\nabla\left|\Phi_ {\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}= \int_{\mathbb{D}}\Delta\ln\omega_{\varepsilon}\left|\Phi_{ \varepsilon}\right|_{\sigma_{\varepsilon}}^{2}+\int_{\mathbb{S}^{1}}\omega_{ \varepsilon}^{2}\partial_{r}\ln\omega_{\varepsilon}\] \[= \int_{\mathbb{D}}\left|\nabla\omega_{\varepsilon}\right|^{2}\frac{ \left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{\omega_{ \varepsilon}^{2}}+\int_{\mathbb{D}}\Delta\frac{\omega_{\varepsilon}^{2}}{2}\] \[= \int_{\mathbb{D}}\left|\nabla\omega_{\varepsilon}\right|^{2}\frac{ \left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{\omega_{ \varepsilon}^{2}}+\int_{\mathbb{D}}\left|\nabla\omega_{\varepsilon}\right|^{2}\]
we obtain that
\[0\leq\int_{\mathbb{D}}\left|\nabla\omega_{\varepsilon}\right|^{2}+\int_{ \mathbb{D}}\left|\nabla\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{ \omega_{\varepsilon}}\right)\right|_{\sigma_{\varepsilon}}^{2}\leq\int_{\mathbb{ D}}\left(1-\omega_{\varepsilon}^{2}\right)\left|\nabla\frac{\Phi_{\varepsilon}}{ \omega_{\varepsilon}}\right|_{\sigma_{\varepsilon}}^{2}+O(\delta_{\varepsilon}) \leq O\left(\delta_{\varepsilon}\right)\]
as \(\varepsilon\to 0\).
Following [10] we define concentration points as points \(p\in\mathbb{S}^{1}\) such that
\[\lim_{r\to 0}\liminf_{\varepsilon\to 0}\int_{\mathbb{S}^{1}\cap \mathbb{D}_{r}(p)}e^{u_{\varepsilon}}d\theta>0\]
and bad points as points \(p\in\mathbb{S}^{1}\) such that there are sequences \(p_{\varepsilon}\in\mathbb{S}^{1}\) and \(r_{\varepsilon}>0\) with \(p_{\varepsilon}\to p\) and \(r_{\varepsilon}\to 0\) as \(\varepsilon\to 0\) such that the eigenvalue
\[\sigma_{\star}\left(p_{\varepsilon},r_{\varepsilon},e^{u_{\varepsilon}}d\theta \right):=\inf_{\varphi\in\mathcal{C}_{c}^{\infty}\left(\mathbb{D}_{r_{ \varepsilon}}(p_{\varepsilon})\cap\overline{\mathbb{D}}\right)\setminus\{0\}} \frac{\int_{\mathbb{D}_{r_{\varepsilon}}(p_{\varepsilon})}\left|\nabla\varphi \right|^{2}}{\int_{\mathbb{D}_{r_{\varepsilon}}(p_{\varepsilon})\cap \mathbb{S}^{1}}\varphi^{2}e^{u_{\varepsilon}}d\theta}\]
satisfies
\[\sigma_{\star}\left(p_{\varepsilon},r_{\varepsilon},e^{u_{\varepsilon}}d\theta \right)\leq\sigma_{2}^{\varepsilon}\]
and good points are the other points of \(\mathbb{S}^{1}\).
**Claim 2.2**.: _Let \(p\in\mathbb{S}^{1}\) be a good point. Then_
\[\lim_{r\to 0}\limsup_{\varepsilon\to 0}\int_{\mathbb{S}^{1}\cap \mathbb{D}_{r}(p)}e^{u_{\varepsilon}}d\theta=\lim_{r\to 0}\limsup_{\varepsilon\to 0}\int_{ \mathbb{D}\cap\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{ \varepsilon}}\right|_{\sigma_{\varepsilon}}^{2}\] \[\qquad\qquad=\lim_{r\to 0}\limsup_{\varepsilon\to 0}\int_{ \mathbb{D}\cap\mathbb{D}_{r}(p)}\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_ {\varepsilon}}^{2}=\lim_{r\to 0}\limsup_{\varepsilon\to 0}\int_{ \mathbb{D}\cap\mathbb{D}_{r}(p)}\left|\nabla\eta_{\varepsilon}\right|^{2}=0\]
Proof.: We choose \(r\) small enough such that \(\sigma_{\star}\left(p,\sqrt{r},e^{u_{\varepsilon}}d\theta\right)\geq\sigma_{ 2}^{\varepsilon}\). Let \(\eta\in\mathcal{C}_{c}^{\infty}\left(\mathbb{D}_{\sqrt{r}}(p)\right)\) such that \(0\leq\eta\leq 1\), \(\eta=1\) in \(\mathbb{D}_{r}(p)\) and \(\int_{\mathbb{R}^{2}}\left|\nabla\eta\right|^{2}\leq\frac{C}{\ln\frac{1}{r}}\)
\[\int_{\mathbb{S}^{1}}\eta e^{u_{\varepsilon}}\leq\left(\int_{\mathbb{S}^{1}} \eta^{2}e^{u_{\varepsilon}}\right)^{\frac{1}{2}}\leq\left(\frac{1}{\sigma_{ \star}\left(p,\sqrt{r},e^{u_{\varepsilon}}d\theta\right)}\int_{\mathbb{D}_{ \sqrt{r}}(p)}\left|\nabla\eta\right|^{2}\right)^{\frac{1}{2}}\leq\left(\frac{1 }{\sigma_{2}^{\varepsilon}}\frac{C}{\ln\frac{1}{r}}\right)^{\frac{1}{2}}\]
so that \(p\) is not a concentration point. Then we integrate the equation \(\Delta\Phi_{\varepsilon}=0\) and \(\partial_{r}\Phi_{\varepsilon}=\sigma_{\varepsilon}e^{u_{\varepsilon}}\Phi_{\varepsilon}\) against \(\eta\sigma_{\varepsilon}\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}\) and we obtain
\[\int_{\mathbb{D}}\eta\left\langle\nabla\frac{\Phi_{\varepsilon}}{\omega_{ \varepsilon}^{2}},\nabla\Phi_{\varepsilon}\right\rangle_{\sigma_{\varepsilon }}+\int_{\mathbb{D}}\nabla\eta\cdot\left\langle\nabla\Phi_{\varepsilon},\frac{ \Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}\right\rangle_{\sigma_{ \varepsilon}}=\int_{\mathbb{S}^{1}}\eta\frac{\left|\sigma_{\varepsilon}\Phi_{ \varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}e^{u_{\varepsilon}}d\theta\]
and since we have that
\[\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}} \right|_{\sigma_{\varepsilon}}^{2}= \left\langle\frac{\nabla\Phi_{\varepsilon}}{\omega_{\varepsilon}},\omega_{\varepsilon}\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}} +\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}\nabla\omega_{\varepsilon }\right\rangle_{\sigma_{\varepsilon}}-\left\langle\Phi_{\varepsilon}\frac{ \nabla\omega_{\varepsilon}}{\omega_{\varepsilon}^{2}},\frac{\nabla\Phi_{ \varepsilon}}{\omega_{\varepsilon}}-\Phi_{\varepsilon}\frac{\nabla\omega_{ \varepsilon}}{\omega_{\varepsilon}^{2}}\right\rangle_{\sigma_{\varepsilon}}\] \[= \left\langle\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2 }},\nabla\Phi_{\varepsilon}\right\rangle_{\sigma_{\varepsilon}}+\frac{\left| \nabla\omega_{\varepsilon}\right|^{2}\left|\Phi_{\varepsilon}\right|_{\sigma_ {\varepsilon}}^{2}}{\omega_{\varepsilon}^{4}}\]
we obtain
\[\int_{\mathbb{D}}\eta\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon} }\right|_{\sigma_{\varepsilon}}^{2}\leq\sigma_{2}^{\varepsilon}\int_{\mathbb{ S}^{1}}\eta e^{u_{\varepsilon}}+\left(\frac{C}{\ln\frac{1}{r}}\int_{ \mathbb{D}}\left|\nabla\Phi_{\varepsilon}\right|^{2}\right)^{\frac{1}{2}}\sup _{\mathbb{D}}\frac{\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|}{ \omega_{\varepsilon}^{2}}+\int_{\mathbb{D}}\eta\frac{\left|\nabla\omega_{ \varepsilon}\right|^{2}\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}} {\omega_{\varepsilon}^{4}}\]
so that \(\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|_{\sigma_{ \varepsilon}}^{2}\) does not concentrate at \(p\). Moreover, the estimate
\[\int_{\mathbb{D}}\eta\sigma_{2}^{\varepsilon}\left|\nabla\eta_{ \varepsilon}\right|_{\sigma_{\varepsilon}}^{2}\leq \int_{\mathbb{D}}\eta\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{ \varepsilon}}^{2}\leq \int_{\mathbb{D}}\eta\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{ \varepsilon}}\right|_{\sigma_{\varepsilon}}^{2}+\int_{\mathbb{D}}\left|\nabla \left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}} \right)\right|_{\sigma_{\varepsilon}}\left|\nabla\left(\Phi_{\varepsilon}+\frac{ \Phi_{\varepsilon}}{\omega_{\varepsilon}}\right)\right|_{\sigma_{\varepsilon}}\] \[= \int_{\mathbb{D}}\eta\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{ \varepsilon}}\right|_{\sigma_{\varepsilon}}^{2}+O\left(\delta_{\varepsilon}^{ \frac{1}{2}}\right)\]
completes the proof of the claim.
We deduce thanks to test function arguments (see [21]) that since \(\sigma_{2}^{\varepsilon}\) is uniformly lower bounded, \(e^{u_{\varepsilon}}d\theta\) has at most two concentration points and that
* If \(e^{u_{\varepsilon}}d\theta\) does not have concentration points, then in addition, by symmetries, any \(p\in\mathbb{S}^{1}\) has to be a good point (see test function arguments in [21]).
* If \(e^{u_{\varepsilon}}d\theta\) has a concentration point, then by symmetry, and up to a quarter rotation, the concentration points are \((1,0)\) and \((-1,0)\). Then, up to taking half of the density \(1_{x_{1}>0}.e^{u_{\varepsilon}}\) and rescaling by the Mobius group of \(\mathbb{D}\), we have thanks to the symmetry that \[\lim_{r\to 0}\liminf_{\varepsilon\to 0}\int_{\mathbb{S}^{1}\setminus \mathbb{D}_{r}((-1,0))}e^{u_{\varepsilon}}d\theta=\frac{1}{2}\] and that \(e^{u_{\varepsilon}}d\theta\) does not concentrate on \(\mathbb{S}^{1}\setminus\{(-1,0)\}\). In addition, any \(p\in\mathbb{S}^{1}\setminus\{(-1,0)\}\) has to be a good point (see test function arguments in [2]).
Without loss of generality, we assume that the second case occurs (we recall this assumption in Proposition 2.4)
**Claim 2.3**.: _For any \(r>0\), there is \(C_{r}>0\) such that for \(j=0,1,2\),_
\[\int_{\mathbb{S}^{1}\cap\mathbb{D}_{r}((-1,0))}\left(\phi_{\varepsilon}^{j}- m_{\varepsilon,r}^{j}\right)^{2}d\theta\leq C_{r}\int_{\mathbb{D}}\left| \nabla\phi_{\varepsilon}^{j}\right|^{2}, \tag{2.4}\]
_where \(M_{\varepsilon,r}=(m_{\varepsilon,r}^{j})_{j=0,1,2}\) is given by_
\[M_{\varepsilon,r}=\frac{\int_{\mathbb{S}^{1}\setminus\mathbb{D}_{r}((-1,0))} \Phi_{\varepsilon}e^{u_{\varepsilon}}d\theta}{\int_{\mathbb{S}^{1}\setminus \mathbb{D}_{r}((-1,0))}e^{u_{\varepsilon}}d\theta}\]
_and satisfies \(\left|M_{\varepsilon,r}\right|_{\sigma_{\varepsilon}}\leq 1\). In particular, \(\eta_{\varepsilon}\) is bounded in \(H^{1}_{loc}\left(\mathbb{D}\setminus\{(-1,0)\}\right)\) and \(\sqrt{\sigma_{1}^{\varepsilon}}\phi_{0}^{\varepsilon}\) converges to a constant function in \(H^{1}_{loc}\left(\mathbb{D}\setminus\{(-1,0)\}\right)\)._
This Claim is the consequence of a Poincaré inequality coming from a uniform \(H^{-\frac{1}{2}}\) control of \(\frac{e^{u_{\varepsilon}}}{\int_{\mathbb{S}^{1}\setminus\mathbb{D}_{r}((-1,0 ))}e^{u_{\varepsilon}}d\theta}\) on \(\mathbb{S}^{1}\setminus\mathbb{D}_{r}((-1,0))\) as \(\varepsilon\to 0\), because all the points of \(\mathbb{S}^{1}\setminus\{(-1,0)\}\) are good points. It is left to the reader (follow the proof of Claim 2.6 in [2]). Thanks to the previous claim, we have the assumptions of Proposition 2.4.
**Proposition 2.4**.: _We assume that (up to some rescaling by the Mobius group of the disk) \((\sigma_{1}^{\varepsilon})^{\frac{1}{2}}\,\phi_{0}^{\varepsilon}\) converges to a constant function \(c\) in \(H^{1}_{loc}(\mathbb{D}\setminus\{(-1,0)\})\). Then either \(|c|=1\) and \(\eta_{\varepsilon}\) bubble tree converges to two constant functions in \(W^{1,2}\), or \(|c|<1\) and, up to a subsequence as \(\varepsilon\to 0\), \(\sqrt{\frac{\sigma_{2}^{\varepsilon}}{1-c^{2}}}\,\eta_{\varepsilon}\) bubble tree converges in \(W^{1,2}\) to two identity maps of \(\mathbb{D}\)._
Notice that we a posteriori have that for maximizing sequences, \(\eta_{\varepsilon}\) cannot bubble tree converge to constant functions since \(\sigma_{2}^{\varepsilon}\) does not converge to \(0\). We do not prove the case \(c=1\) because it is not possible in the current paper with the choice \(h_{s,t}\) because \(c=0\) in these cases (see Proposition 2.1). In the proof of Proposition 2.4, we will use the following
**Claim 2.4**.: _([12], Lemma 3.1) Let \(\Phi:\mathbb{D}^{+}\to\mathbb{R}^{n}\) be a Euclidean harmonic map. We assume that \(|\Phi|\geq 1\) on \([-1,1]\times\{0\}\). Then for any \(\alpha>0\) there is \(\varepsilon_{\alpha}>0\) such that if_
\[\int_{\mathbb{D}_{+}}\left|\nabla\Phi\right|^{2}\leq\varepsilon_{\alpha}\]
_then \(|\Phi|^{2}\geq 1-\alpha\) on \(\mathbb{D}^{+}_{\frac{1}{2}}\)._
Proof of Proposition 2.4.: We assume \(c<1\). We can extend (the rescaling by the Mobius group of the disk of) \(\Phi_{\varepsilon}\) to a map still denoted \(\Phi_{\varepsilon}=(\phi_{0,\varepsilon},\eta_{\varepsilon}):\mathbb{R}^{2} \to\mathbb{R}^{3}\) by \(\Phi_{\varepsilon}(z)=\Phi_{\varepsilon}\left(\frac{z}{|z|^{2}}\right)\) for \(|z|\geq 1\).
**Step 1**: We define a suitable replacement of \(\eta_{\varepsilon}\), denoted by \(\psi_{\varepsilon}\)
Let \(p\in\mathbb{S}^{1}\) be a good point, let \(\delta>0\) be a constant to be chosen later, and let \(r>0\) be such that for any \(\varepsilon>0\),
\[\int_{\mathbb{D}_{2\sqrt{r}}(p)}\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{ \varepsilon}}^{2}\leq\delta.\]
By the Courant-Lebesgue lemma, we can find \(r_{\varepsilon}\in(r,\sqrt{r})\) such that
\[\int_{\partial\mathbb{D}_{r_{\varepsilon}}(p)}\left|\partial_{\theta}\Phi_{ \varepsilon}\right|_{\sigma_{\varepsilon}}^{2}d\theta\leq\frac{1}{\ln 2}\int_{ \mathbb{D}_{\sqrt{r}}(p)}\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{ \varepsilon}}^{2}\leq\frac{1}{\ln 2}\delta \tag{2.5}\]
and as a consequence
\[\forall q,q^{\prime}\in\partial\mathbb{D}_{r_{\varepsilon}}(p);\sigma_{1}^{ \varepsilon}(\phi_{0,\varepsilon}(q)-\phi_{0,\varepsilon}(q^{\prime}))^{2}+ \sigma_{2}^{\varepsilon}\left|\eta_{\varepsilon}(q)-\eta_{\varepsilon}(q^{ \prime})\right|^{2}\leq\frac{\pi}{\ln 2}\delta. \tag{2.6}\]
By the classical trace \(L^{2}\) embedding into \(H^{1}\) and (2.4), we have a sequence \(m_{\varepsilon,\rho}^{0}\) such that
\[\int_{\partial\mathbb{D}_{r_{\varepsilon}}(p)}\left(\sqrt{\sigma_{1}^{ \varepsilon}}\phi_{0,\varepsilon}-\sqrt{\sigma_{1}^{\varepsilon}}m_{ \varepsilon,\rho}^{0}\right)^{2}=O\left(\sigma_{1}^{\varepsilon}\right)\]
as \(\varepsilon\to 0\) and using this and (2.6), we obtain for \(\varepsilon\) small enough:
\[\forall q\in\partial\mathbb{D}_{r_{\varepsilon}}(p);\sigma_{1}^{\varepsilon} \left|\phi_{0,\varepsilon}(q)-m_{\varepsilon,\rho}^{0}\right|^{2}\leq\frac{ \pi}{\ln 2}\delta+O\left(\sqrt{\sigma_{1}^{\varepsilon}}\right).\]
We also have that
\[\sigma_{1}^{\varepsilon}\left(m_{\varepsilon,\rho}^{0}\right)^{2}=c^{2}+o(1)\]
Then, given \(\alpha>0\), we apply Claim 2.4, where, up to reducing \(r\), we can reduce \(\delta\) so that \(\delta\leq\varepsilon_{\frac{\alpha}{4}}\) and \(2\sqrt{\frac{\pi\delta}{\ln 2}}\left|c\right|+\frac{\pi}{\ln 2}\delta\leq \frac{\alpha}{4}\), and we obtain on \(\partial\mathbb{D}_{r_{\varepsilon}}(p)\),
\[\sigma_{2}^{\varepsilon}\left|\eta_{\varepsilon}\right|^{2}= \left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}-\sigma_{1}^{ \varepsilon}\left(\phi_{0}^{\varepsilon}\right)^{2} \geq 1-O(\delta_{\varepsilon})-\left(\sqrt{\sigma_{1}^{ \varepsilon}}m_{\varepsilon,\rho}^{0}+\sqrt{\frac{\pi}{\ln 2}\delta}\right)^{2}\] \[\geq 1-c^{2}-\alpha\]
for \(\varepsilon\) small enough, so that we can define \(\frac{\eta_{\varepsilon}}{\left|\eta_{\varepsilon}\right|}\) on \(\partial\mathbb{D}_{r_{\varepsilon}}(p)\). Let \(\tilde{\eta}_{\varepsilon}:\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D} \rightarrow\mathbb{D}\) be a free boundary harmonic extension (with respect to \(\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{S}^{1}\)) of \(\frac{\eta_{\varepsilon}}{\left|\eta_{\varepsilon}\right|}:\partial\mathbb{D} _{r_{\varepsilon}}(p)\cap\mathbb{D}\). Such a map is unique, as soon as its energy is small enough, by the energy convexity result (see [10]), which we apply to any map \(\varphi_{\varepsilon}\) such that \(\varphi_{\varepsilon}=\frac{\eta_{\varepsilon}}{\left|\eta_{\varepsilon} \right|}\) on \(\partial\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}\) and \(\left|\varphi_{\varepsilon}\right|=1\) on \(\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{S}^{1}\):
\[\frac{1}{2}\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla \left(\varphi_{\varepsilon}-\tilde{\eta}_{\varepsilon}\right)\right|^{2}\leq \int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla\varphi_{ \varepsilon}\right|^{2}-\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}} \left|\nabla\tilde{\eta}_{\varepsilon}\right|^{2} \tag{2.7}\]
Before using this, we have to check that the energy of \(\tilde{\eta}_{\varepsilon}\) is small enough. Let \(h_{\varepsilon}\) be the harmonic extension of \(\eta_{\varepsilon}:\partial\mathbb{D}_{r_{\varepsilon}}(p)\rightarrow\mathbb{R} ^{2}\) in \(\mathbb{D}_{r_{\varepsilon}}(p)\). By the maximum principle and by (2.6), we obtain that \(\sigma_{2}^{\varepsilon}\left|h_{\varepsilon}\right|^{2}\geq 1-c^{2}-2\alpha\), so that
\[\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla \tilde{\eta}_{\varepsilon}\right|^{2}\leq\int_{\mathbb{D}_{r_{\varepsilon}}(p) \cap\mathbb{D}}\left|\nabla\frac{h_{\varepsilon}}{\left|h_{\varepsilon} \right|}\right|^{2}\leq\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\frac{\left| \nabla h_{\varepsilon}\right|^{2}}{\left|h_{\varepsilon}\right|^{2}}\] \[\leq\frac{\sigma_{2}^{\varepsilon}}{1-c^{2}-2\alpha}\int_{\mathbb{ D}_{2\sqrt{r}}(p)}\left|\nabla h_{\varepsilon}\right|^{2}\leq\frac{\sigma_{2}^{ \varepsilon}}{1-c^{2}-2\alpha}\delta\]
and up to reduce \(r\) again, the energy of \(\tilde{\eta}_{\varepsilon}\) is small enough to have (2.7). We set
\[\tilde{\omega}_{\varepsilon}=\sqrt{\frac{\omega_{\varepsilon}^{2}-\sigma_{1}^{ \varepsilon}\varphi_{\varepsilon}^{2}}{\sigma_{2}^{\varepsilon}}},\]
where \(\varphi_{\varepsilon}\) is the harmonic extension of \(\phi_{0}^{\varepsilon}:\partial\mathbb{D}_{r_{\varepsilon}}(p)\to\mathbb{R}\) in \(\mathbb{D}_{r_{\varepsilon}}(p)\). We will use \(\psi_{\varepsilon}=\tilde{\omega}_{\varepsilon}\tilde{\eta}_{\varepsilon}\) as the suitable replacement of \(\eta_{\varepsilon}\) to prove the \(H^{1}\)-bubble convergence.
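Note that, directly from this definition,

\[\sigma_{1}^{\varepsilon}\left(\varphi_{\varepsilon}\right)^{2}+\sigma_{2}^{\varepsilon}\tilde{\omega}_{\varepsilon}^{2}=\omega_{\varepsilon}^{2},\]

so that \(\left(\varphi_{\varepsilon},\psi_{\varepsilon}\right)\) has \(\sigma_{\varepsilon}\)-norm equal to \(\omega_{\varepsilon}\) wherever \(\left|\tilde{\eta}_{\varepsilon}\right|=1\); this is what makes \(\psi_{\varepsilon}\) a natural replacement of \(\eta_{\varepsilon}\).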
**Step 2**: We test \(\frac{\psi_{\varepsilon}-\eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon} }^{2}}\) against the equation satisfied by \(\eta_{\varepsilon}\) and we obtain (2.8)
\[\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left\langle \nabla\eta_{\varepsilon},\nabla\frac{\eta_{\varepsilon}-\psi_{\varepsilon}}{ \widetilde{\omega_{\varepsilon}}^{2}}\right\rangle=\int_{\mathbb{D}_{r_{ \varepsilon}}(p)\cap\mathbb{S}^{1}}e^{u_{\varepsilon}}\left\langle\sigma_{2}^ {\varepsilon}\frac{\eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}, \frac{\eta_{\varepsilon}-\psi_{\varepsilon}}{\widetilde{\omega_{\varepsilon} }}\right\rangle\] \[= \frac{1}{2}\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}e ^{u_{\varepsilon}}\sigma_{2}^{\varepsilon}\left|\nabla\left(\frac{\eta_{ \varepsilon}-\psi_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}\right) \right|^{2}-\frac{1}{2}\sigma_{1}^{\varepsilon}\int_{\mathbb{D}_{r_{ \varepsilon}}(p)\cap\mathbb{S}^{1}}\frac{e^{u_{\varepsilon}}}{\widetilde{ \omega_{\varepsilon}}^{2}}\left((\phi_{0,\varepsilon})^{2}-(\varphi_{ \varepsilon})^{2}\right)\] \[\leq \frac{1}{2}\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}} \left|\nabla\left(\frac{\eta_{\varepsilon}-\psi_{\varepsilon}}{\widetilde{ \omega_{\varepsilon}}}\right)\right|^{2}\] \[+\frac{1}{c_{0}}\sigma_{1}^{\varepsilon}\left(\int_{\mathbb{D}_{ r_{\varepsilon}}(p)\cap\mathbb{S}^{1}}\left(\phi_{0,\varepsilon}+\varphi_{ \varepsilon}\right)^{2}e^{u_{\varepsilon}}\right)^{\frac{1}{2}}\left(\frac{1}{ \sigma_{2}^{\varepsilon}}\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D} }\left|\nabla\left(\phi_{0,\varepsilon}-\varphi_{\varepsilon}\right)\right|^ {2}\right)^{\frac{1}{2}}\] \[\leq \frac{1}{2}\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}} \left|\nabla\left(\frac{\eta_{\varepsilon}-\psi_{\varepsilon}}{\widetilde{ \omega_{\varepsilon}}}\right)\right|^{2}+O\left((\sigma_{1}^{\varepsilon})^{ \frac{1}{2}}\right)\]
which gives
\[\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla \frac{\eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}\right|^{2}- \int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}|\nabla \tilde{\eta}_{\varepsilon}|^{2}\leq O\left((\sigma_{1}^{\varepsilon})^{\frac{1 }{2}}\right)\] \[+\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\frac{ \nabla\widetilde{\omega_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}} \left(\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}} \nabla\left(\frac{\eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}- \tilde{\eta_{\varepsilon}}\right)-\left(\frac{\eta_{\varepsilon}}{\widetilde{ \omega_{\varepsilon}}}-\tilde{\eta_{\varepsilon}}\right)\frac{\nabla\eta_{ \varepsilon}}{\widetilde{\omega_{\varepsilon}}}\right)\] \[\leq O\left((\sigma_{1}^{\varepsilon})^{\frac{1}{2}}+\delta_{ \varepsilon}^{\frac{1}{2}}\right) \tag{2.8}\]
**Step 3**: We prove the \(H^{1}_{loc}\left(\mathbb{S}^{1}\setminus\{p\}\right)\) convergence of \(\eta_{\varepsilon}\) to a free boundary harmonic map in \(\mathbb{S}^{1}\)
Now, we test \(\widetilde{\eta_{\varepsilon}}-\frac{\eta_{\varepsilon}}{\widetilde{\omega_{ \varepsilon}}}\) against the harmonic map equation satisfied by \(\widetilde{\eta_{\varepsilon}}\). We obtain in a similar way
\[I:=2\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left\langle \nabla\left(\widetilde{\eta_{\varepsilon}}-\frac{\eta_{\varepsilon}}{ \widetilde{\omega_{\varepsilon}}}\right),\nabla\widetilde{\eta_{\varepsilon}} \right\rangle=2\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{S}^{1}} \widetilde{\eta_{\varepsilon}}.\partial_{r}\widetilde{\eta_{\varepsilon}}\left \langle\sigma_{2}^{\varepsilon}\widetilde{\eta_{\varepsilon}},\widetilde{\eta_ {\varepsilon}}-\frac{\eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}\right\rangle\]
and using the consequence of the \(\varepsilon\)-regularity result on free boundary harmonic maps \(|\nabla\widetilde{\eta_{\varepsilon}}|^{2}\leq\frac{C\int_{\mathbb{D}_{r_{ \varepsilon}}(p)\cap\mathbb{D}}|\nabla\widetilde{\eta_{\varepsilon}}|^{2}}{(1-| x|)^{2}}\) and a trace Hardy inequality (this is the way we prove energy convexity result (2.7) in [19]), we obtain:
\[I\leq C\delta\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left| \nabla\left(\frac{\eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}- \widetilde{\eta_{\varepsilon}}\right)\right|^{2}+C\frac{1}{c_{0}}\sigma_{1}^{ \varepsilon}\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla \frac{\left(\varphi_{\varepsilon}\right)^{2}-\left(\phi_{0,\varepsilon} \right)^{2}}{\widetilde{\omega_{\varepsilon}}^{2}}\right|^{2}\] \[=: C\delta\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}} \left|\nabla\left(\eta_{\varepsilon}-\frac{\widetilde{\eta_{\varepsilon}}}{ \widetilde{\omega_{\varepsilon}}}\right)\right|^{2}+\frac{C}{c_{0}}II\]
and we prove that \(II=o(1)\) as \(\varepsilon\to 0\), using that \(\sigma_{1}^{\varepsilon}\left(\varphi_{\varepsilon}^{2}+\phi_{0,\varepsilon} ^{2}\right)\leq C\tilde{\omega_{\varepsilon}}^{2}\) and several integrations by parts
\[II\leq 2\left(\sigma_{1}^{\varepsilon}\int_{\mathbb{D}_{r_{\varepsilon }}(p)\cap\mathbb{D}}\frac{\left|\nabla\left(\varphi_{\varepsilon}\right)^{2}- \left(\phi_{0,\varepsilon}\right)^{2}\right|^{2}}{\widetilde{\omega_{ \varepsilon}}^{2}}+C\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}| \nabla\widetilde{\omega_{\varepsilon}}|^{2}\right)\] \[\leq 2\left(C\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}} \left|\Delta\left(\left(\varphi_{\varepsilon}\right)^{2}-\left(\phi_{0, \varepsilon}\right)^{2}\right)\right|+\sigma_{1}^{\varepsilon}\int_{\mathbb{D }_{r_{\varepsilon}}(p)\cap\mathbb{D}}|\nabla\widetilde{\omega_{\varepsilon}}| \left|\nabla\left(\varphi_{\varepsilon}-\phi_{0}^{\varepsilon}\right)\right| \frac{\left|\varphi_{\varepsilon}-\phi_{0}^{\varepsilon}\right|}{\widetilde{ \omega_{\varepsilon}}}\right)\] \[+2C\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}|\nabla \widetilde{\omega_{\varepsilon}}|^{2}\] \[= 4C\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left| \nabla\varphi_{\varepsilon}|^{2}-|\nabla\phi_{0}^{\varepsilon}|^{2}\right|+O \left(\left(\sigma_{1}^{\varepsilon}\right)^{\frac{1}{2}}+\delta_{ \varepsilon}^{\frac{1}{2}}\right)=o(1)\]
as \(\varepsilon\to 0\) since \(\varphi_{\varepsilon}-\phi_{0}^{\varepsilon}\) converges to \(0\) in \(H^{1}\). Letting \(\delta\) be small enough such that \(1-C\delta\geq\frac{1}{2}\) we obtain that
\[\frac{1}{2}\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}} \left|\nabla\left(\eta_{\varepsilon}-\frac{\widetilde{\eta_{\varepsilon}}}{ \widetilde{\omega_{\varepsilon}}}\right)\right|^{2}\leq(1-C\delta)\int_{ \mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla\left(\eta_{ \varepsilon}-\frac{\widetilde{\eta_{\varepsilon}}}{\widetilde{\omega_{ \varepsilon}}}\right)\right|^{2}\] \[\leq \int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla \left(\eta_{\varepsilon}-\frac{\widetilde{\eta_{\varepsilon}}}{\widetilde{ \omega_{\varepsilon}}}\right)\right|^{2}-2\int_{\mathbb{D}_{r_{\varepsilon}}(p) \cap\mathbb{D}}\left\langle\nabla\left(\widetilde{\eta_{\varepsilon}}-\frac{ \eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}\right),\nabla\widetilde{ \eta_{\varepsilon}}\right\rangle+o(1)\] \[\leq \int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla \frac{\eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}\right|^{2}-\int_{ \mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla\widetilde{\eta_{ \varepsilon}}\right|^{2}+o\left(1\right).\]
From (2.8) proved in Step 2, we obtain
\[\int_{\mathbb{D}_{r_{\varepsilon}}(p)\cap\mathbb{D}}\left|\nabla\left(\widetilde{ \eta_{\varepsilon}}-\frac{\eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}} \right)\right|^{2}\to 0\]
as \(\varepsilon\to 0\). We obtain the expected local \(H^{1}\) comparison of \(\frac{\eta_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}\) to a free boundary harmonic map into \(\mathbb{S}^{1}\). Since \(\tilde{\eta}_{\varepsilon}\) converges in \(\mathcal{C}^{1}\left(\mathbb{D}_{\frac{r}{2}}(p)\cap\mathbb{D}\right)\), we obtain the expected local \(H^{1}\) convergence of \(\eta_{\varepsilon}\) to a free boundary harmonic map \(\eta:\mathbb{D}\setminus\{(-1,0)\}\to\mathbb{D}\) in a neighborhood of any point \(p\in\mathbb{S}^{1}\setminus\{(-1,0)\}\). By a standard regularity and point removability argument [19], we have a smooth free boundary harmonic map \(\eta:\mathbb{D}\to\mathbb{D}\) at the limit.
**Conclusion**
By symmetries, we obtain an \(H^{1}\) bubble convergence to two free boundary harmonic maps \(\eta:\mathbb{D}\to\mathbb{D}\). Up to a reparametrization by the Mobius group, they have to be \(z\mapsto z^{n}\). Since they come from second eigenfunctions, their energy is less than \(2\pi\): we must have \(n=1\), and this completes the proof of the Proposition. \(\diamondsuit\)
In order to complete the proof of Proposition 2.3 (in our special case), (again up to a Mobius reparametrization), we need to prove that the weak-\(\star\) limit \(\nu\) of \(e^{u_{\varepsilon}}d\theta\) is absolutely continuous with respect to \(d\theta\) with a smooth positive density. Let \(\zeta\in\mathcal{C}_{c}^{\infty}\left(\mathbb{D}_{r}(p)\cap\overline{\mathbb{D }}\right)\) where \(p\in\mathbb{S}^{1}\setminus\{(-1,0)\}\). We have that
\[\int_{\mathbb{S}^{1}}\zeta\left(\sigma_{2}^{\varepsilon}\eta_{ \varepsilon}e^{u_{\varepsilon}}d\theta-\sigma\eta d\nu\right) = \int_{\mathbb{S}^{1}}\zeta\left(\sigma_{2}^{\varepsilon}\eta_{ \varepsilon}-\sigma\eta\right)e^{u_{\varepsilon}}d\theta\] \[\quad+\int_{\mathbb{S}^{1}}\zeta\sigma\eta\left(e^{u_{ \varepsilon}}d\theta-d\nu\right)\;.\]
Then on the first right-hand term, we have that
\[\int_{\mathbb{S}^{1}}\zeta\left(\sigma_{2}^{\varepsilon}\eta_{ \varepsilon}-\sigma\eta\right)e^{u_{\varepsilon}}d\theta \leq \left(\int_{\mathbb{D}_{r}(p)\cap\mathbb{S}^{1}}\zeta^{2}\left|\sigma_{2}^ {\varepsilon}\eta_{\varepsilon}-\sigma\eta\right|^{2}e^{u_{\varepsilon}} d\theta\right)^{\frac{1}{2}}\] \[\leq \left(\frac{1}{\sigma_{\star}(p,r,e^{u_{\varepsilon}}d\theta)} \int_{\mathbb{D}_{r}(p)\cap\mathbb{D}}\left|\nabla\left(\zeta\left|\sigma_{2} ^{\varepsilon}\eta_{\varepsilon}-\sigma\eta\right|\right)\right|^{2}\right)^{ \frac{1}{2}}\] \[\leq C\left(\int_{\mathbb{D}_{r}(p)\cap\mathbb{D}}\left|\nabla\left( \eta_{\varepsilon}-\eta\right)\right|^{2}\right)^{\frac{1}{2}}\]
for some constant \(C\) independent of \(\varepsilon\). Letting \(\varepsilon\to 0\) in a weak sense in the eigenvalue equation \(\Delta\eta_{\varepsilon}=0\) in \(\mathbb{D}\) and \(\partial_{r}\eta_{\varepsilon}=\sigma_{2}^{\varepsilon}\eta_{\varepsilon}e^{u _{\varepsilon}}\) on \(\mathbb{S}^{1}\), we get by Proposition 2.4:
\[\Delta\eta=0\mbox{ in }\mathbb{D}\mbox{ and }\partial_{r}\eta=\sigma\eta\,\nu\mbox{ on } \mathbb{S}^{1}\]
and since \(\eta\) is the identity by Proposition 2.4, we obtain that \(\nu=d\theta\): we obtain a smooth density for \(\nu\).
### Proof of Theorem 0.1
Thanks to the previous section, we have the existence of a possibly disconnected minimizer for \(g\mapsto F_{s,t}(g)\) under symmetry constraints. Since the second eigenvalue cannot converge to \(0\), there are at most two connected components at the limit. Thanks to Theorem 0.2, involving symmetric test functions, there is in fact at most one connected component at the limit, by comparison of the energy between one disk and two disjoint disks. By the symmetry assumptions, if there is one concentration point, we must have that \(e^{u_{\varepsilon}}d\theta\) weak-\(\star\) converges to a sum \(\frac{\delta_{1}+\delta_{-1}}{2}\) or \(\frac{\delta_{i}+\delta_{-i}}{2}\), which is impossible. Then the \(H^{1}\) bubble convergence of Proposition 2.3 is in fact a strong \(H^{1}\) convergence. We obtain the existence of smooth minimizers on the disk for the symmetric problem. By Theorem 0.3, we obtain embedded free boundary minimal disks into ellipsoids. The remaining arguments to obtain non planar ones are in the introduction. Let us verify the remaining properties of Theorem 0.1.
#### 2.4.1. Monotonicity
We prove that \(L_{s,t}p_{s,t}\) is decreasing in \(t\) and that \(L_{s,t}\) is increasing in \(t\). Let \(t_{1}<t_{2}\). For \(i=1,2\), we set \(a_{i}=L_{s,t_{i}}p_{s,t_{i}}\) and \(b_{i}=L_{s,t_{i}}\), the first and second Steklov eigenvalues of the spectral problem. We have
\[h_{s,t_{1}}(a_{1},b_{1})\leq h_{s,t_{1}}(a_{2},b_{2})\text{ and }h_{s,t_{2}}(a_{ 2},b_{2})\leq h_{s,t_{2}}(a_{1},b_{1})\]
so that if \(s<0\),
\[a_{1}^{-s}+t_{1}b_{1}^{-s}\geq a_{2}^{-s}+t_{1}b_{2}^{-s}\text{ and }a_{2}^{-s}+t_{2}b_{2}^{-s}\geq a_{1}^{-s}+t_{2}b_{1}^{-s}\]
and by sum,
\[(t_{2}-t_{1})(b_{2}^{-s}-b_{1}^{-s})\geq 0\]
and, dividing these inequalities by \(t_{i}\) and summing,
\[\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right)\left(a_{1}^{-s}-a_{2}^{-s}\right)\geq 0\]
Since \(t_{2}>t_{1}\) and \(x\mapsto x^{-s}\) is increasing for \(s<0\), we deduce that \(b_{2}\geq b_{1}\) and \(a_{1}\geq a_{2}\), which gives the claimed monotonicity. We argue the same way if \(s>0\).
#### 2.4.2. Energy estimates
By Theorem 0.2, we have that for any \(s\neq 0\), \(t>0\) and \(\varepsilon>0\), \(h_{s,t}(D_{s,t},g_{s,t})\leq h_{s,t}(\mathbb{D},h_{\varepsilon})\), and a straightforward computation gives that
\[\bar{\sigma}_{1}\left(D_{s,t},g_{s,t}\right)\geq\left(a_{\varepsilon}^{-s}+t \left(b_{\varepsilon}^{-s}-\bar{\sigma}_{2}\left(D_{s,t},g_{s,t}\right)^{-s} \right)\right)^{-\frac{1}{s}}\geq\left(a_{\varepsilon}^{-s}+t\left(b_{ \varepsilon}^{-s}-(4\pi)^{-s}\right)\right)^{-\frac{1}{s}},\]
where \(a_{\varepsilon}=\frac{2\pi}{\ln\left(\frac{1}{\varepsilon}\right)}+O\left( \frac{1}{\ln\left(\frac{1}{\varepsilon}\right)^{2}}\right)\) and \(b_{\varepsilon}=4\pi-16\pi\varepsilon+o(\varepsilon)\) as \(\varepsilon\to 0\), so that
\[\frac{\bar{\sigma}_{1}\left(D_{s,t},g_{s,t}\right)}{2\pi}\geq\ln\left(\frac{1 }{\varepsilon}\right)^{-1}\left(1+O\left(\frac{1}{\ln\left(\frac{1}{ \varepsilon}\right)}\right)+t\varepsilon\ln\left(\frac{1}{\varepsilon} \right)^{-s}s(4\pi)^{-s}16\pi(1+o(1))\right)^{-\frac{1}{s}}\]
as \(\varepsilon\to 0\), so that choosing \(t=\varepsilon^{-1}\ln\left(\frac{1}{\varepsilon}\right)^{s-1}\) we have
\[\bar{\sigma}_{1}\left(D_{s,t},g_{s,t}\right)\geq\frac{2\pi}{\ln t}\left(1+O \left(\frac{1}{\ln t}\right)\right) \tag{2.9}\]
as \(t\to+\infty\). In addition, we have that for any \(s\neq 0\)
\[\bar{\sigma}_{2}\left(D_{s,t},g_{s,t}\right)\geq\begin{cases}\left(\frac{- \bar{\sigma}_{1}(D_{s,t},g_{s,t})^{-s}}{t}+(4\pi)^{-s}\right)^{-\frac{1}{s}} \text{ if }s<0\\ \left(\frac{a_{\varepsilon}^{-s}}{t}+b_{\varepsilon}^{-s}\right)^{-\frac{1}{s }}\text{ if }s>0\end{cases},\]
so that if \(s<0\), we obtain \(\bar{\sigma}_{2}\left(D_{s,t},g_{s,t}\right)\geq 4\pi-O\left(\frac{1}{t}\right)\) as \(t\to+\infty\) and if \(s>0\),
\[\bar{\sigma}_{2}\left(D_{s,t},g_{s,t}\right)\geq 4\pi\left(\frac{\ln\left(\frac{1 }{\varepsilon}\right)^{s}2^{s}}{t}\left(1+O\left(\frac{1}{\ln\left(\frac{1}{ \varepsilon}\right)}\right)\right)+1+4s\varepsilon(1+o(1))\right)^{-\frac{1}{ s}}\]
as \(\varepsilon\to 0\), so that setting \(t=\varepsilon^{-1}\),
\[\bar{\sigma}_{2}\left(D_{s,t},g_{s,t}\right)\geq 4\pi-O\left(\frac{\left(\ln t \right)^{\max\left\{s,0\right\}}}{t}\right) \tag{2.10}\]
as \(t\to+\infty\).
Knowing that the critical metrics of \(F_{s,t}\) satisfy the mass conditions of Proposition 2.1 coming from the choice of the combination \(h_{s,t}\), we obtain
\[t=\frac{\int_{\partial D_{s,t}}(x_{1}^{2}+x_{2}^{2})dL_{g_{s,t}}}{\int_{\partial D _{s,t}}x_{0}^{2}dL_{g_{s,t}}}\frac{1}{p_{s,t}^{s+1}}\]
for any \(s\), and since \(L_{s,t}=p_{s,t}\int_{\partial D_{s,t}}x_{0}^{2}dL_{g_{s,t}}+\int_{\partial D_{s,t}}(x_{1}^{2}+x_{2}^{2})dL_{g_{s,t}}\), we obtain
\[\int_{\partial D_{s,t}}x_{0}^{2}dL_{g_{s,t}}=\frac{L_{s,t}}{tp_{s,t}^{s+1}+p_{ s,t}}\to 0\text{ as }t\to+\infty,\]
since \(\bar{\sigma}_{1}\left(D_{s,t},g_{s,t}\right)=p_{s,t}L_{s,t}\) satisfies (2.9) and \(\bar{\sigma}_{2}\left(D_{s,t},g_{s,t}\right)=L_{s,t}\to 4\pi\) by (2.10). Then \(x_{0}\) converges to \(0\) in \(H^{1}\left(D_{s,t}\right)\). Then using again the arguments of Proposition 2.4, we obtain the remaining convergence results as \(t\to+\infty\) since \(H^{1}\) bubble tree convergence implies varifold convergence.
## 3. Spectral gaps
### Notations and preliminaries
On the Euclidean disk \((\mathbb{D},\xi)\), we set the following competitor \(g_{\varepsilon}=e^{2\omega_{\varepsilon}}\xi\), where for \(z=(z_{1},z_{2})\in\mathbb{D}\) we set
\[e^{\omega_{\varepsilon}(z)}=\frac{\beta_{\varepsilon}^{2}-1}{\left|\beta_{ \varepsilon}-z\right|^{2}}+\frac{\beta_{\varepsilon}^{2}-1}{\left|\beta_{ \varepsilon}+z\right|^{2}}\;,\]
for
\[\beta_{\varepsilon}=\frac{1+\varepsilon}{1-\varepsilon}>1\;.\]
Let us describe the surface \((\mathbb{D},g_{\varepsilon})\) in conformal coordinates. We often prefer to work in the chart given by the half plane \(\mathbb{H}_{+}=\{Im(x)\geq 0\}\), via the holomorphic maps \(f_{\pm}:\mathbb{D}\to\mathbb{H}_{+}\) defined by
\[f_{\pm}(z)=i\frac{1\mp z}{1\pm z}\text{ and }f_{\pm}^{-1}(x)=\pm\frac{1+ix}{1- ix}\;.\]
Notice that
\[\left(f_{\pm}^{-1}\right)^{\star}\xi=\frac{4}{\left|1-ix\right|^{4}}\xi=\frac{ 4}{\left(\left(1+x_{2}\right)^{2}+x_{1}^{2}\right)^{2}}\xi=e^{2u}\xi\]
where the function
\[u(x_{1},x_{2})=\ln\left(\frac{2}{\left(1+x_{2}\right)^{2}+x_{1}^{2}}\right)\]
satisfies the Liouville equation
\[\begin{cases}\Delta u=0\text{ in }\mathbb{H}_{+}\\ -\partial_{y}u=e^{u}\text{ on }\mathbb{R}\times\{0\}\;.\end{cases}\]
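This can be checked directly with the explicit formula for \(u\): writing \(u=\ln 2-\ln\left((1+x_{2})^{2}+x_{1}^{2}\right)\), \(u\) is harmonic in \(\mathbb{H}_{+}\) (it is, up to a constant, \(-2\ln\left|x-(0,-1)\right|\), and \((0,-1)\notin\overline{\mathbb{H}_{+}}\)), and on \(\mathbb{R}\times\{0\}\),

\[-\partial_{x_{2}}u(x_{1},0)=\frac{2(1+x_{2})}{(1+x_{2})^{2}+x_{1}^{2}}\Big|_{x_{2}=0}=\frac{2}{1+x_{1}^{2}}=e^{u(x_{1},0)}.\]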
Notice that the first Steklov eigenfunctions on the flat disk \(z_{1}\) and \(z_{2}\) become on \(\mathbb{H}_{+}\) endowed with the metric \(e^{2u}\xi\):
\[Z_{1}(x)=Re(f_{+}^{-1}(x))=\frac{1-\left|x\right|^{2}}{(1+x_{2})^{2}+(x_{1})^{ 2}}\text{ and }Z_{2}(x)=Im(f_{+}^{-1}(x))=\frac{2x_{1}}{(1+x_{2})^{2}+(x_{1})^{2}}\;. \tag{3.1}\]
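For instance, the formula (3.1) for \(Z_{1}\) and \(Z_{2}\) is a one-line computation: writing \(x=x_{1}+ix_{2}\),

\[f_{+}^{-1}(x)=\frac{1+ix}{1-ix}=\frac{(1+ix)(1+i\bar{x})}{\left|1-ix\right|^{2}}=\frac{1-\left|x\right|^{2}+2ix_{1}}{(1+x_{2})^{2}+(x_{1})^{2}},\]

whose real and imaginary parts are \(Z_{1}\) and \(Z_{2}\) respectively.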
The function \(\omega_{\varepsilon}:\mathbb{D}\to\mathbb{R}\) was defined such that \(\omega_{\varepsilon}:=\tilde{\omega}_{\varepsilon}\circ f_{+}-u\circ f_{+}\) where
\[\tilde{\omega}_{\varepsilon}=\ln\left(e^{\tilde{u}_{\varepsilon}}+e^{\tilde{v}_ {\varepsilon}}\right),\ \ e^{\tilde{u}_{\varepsilon}(x)}=\frac{1}{\varepsilon}e^{u \left(\frac{x}{\varepsilon}\right)}\text{ and }e^{\tilde{v}_{\varepsilon}(x)}= \varepsilon e^{u(\varepsilon x)}\]
where we notice that \(u_{\varepsilon}:=\tilde{u}_{\varepsilon}\circ f_{+}-u\circ f_{+}\) and \(v_{\varepsilon}:=\tilde{v}_{\varepsilon}\circ f_{+}-u\circ f_{+}\) satisfy
\[e^{2u_{\varepsilon}}=\frac{\beta_{\varepsilon}^{2}-1}{\left|\beta_{ \varepsilon}-z\right|^{2}}\text{ and }e^{2v_{\varepsilon}}=\frac{\beta_{ \varepsilon}^{2}-1}{\left|\beta_{\varepsilon}+z\right|^{2}}\;.\]
We denote by
\[e^{\hat{\omega}_{\varepsilon}(x)}=\varepsilon e^{\tilde{\omega}_{\varepsilon} (\varepsilon x)}\]
the rescaled potential close to the point \((1,0)\) at the scale \(\varepsilon\). We see that the rescaled metric \(e^{2\hat{\omega}_{\varepsilon}}\xi\) converges to \(e^{2u}\xi\) in any compact set of \(\mathbb{H}_{+}\). By symmetry, the metric rescaled close to the point \((-1,0)\) at the scale \(\varepsilon\) also converges to \(e^{2u}\xi\) on any compact set of \(\mathbb{H}_{+}\). The surface \((\mathbb{D},g_{\varepsilon})\) then represents two flat disks attached by a thin strip.
Now, except for the potentials \(e^{\tilde{\omega}_{\varepsilon}}\), and \(e^{\hat{\omega}_{\varepsilon}(x)}\) just defined before, we denote for any function \(\varphi:\mathbb{D}\to\mathbb{R}\):
\[\tilde{\varphi}(x)=\varphi\circ f_{+}^{-1}(x)\text{ and }\hat{\varphi}(x)= \varphi\circ f_{+}^{-1}(\varepsilon x)=\tilde{\varphi}(\varepsilon x)\]
Notice that all the functions we consider in the following satisfy \(\varphi\circ f_{+}^{-1}(x)=\pm\varphi\circ f_{-}^{-1}(x)\) for any \(x\), so that the analysis occurring for eigenfunctions in a neighbourhood of \((1,0)\) is the same as the one occurring in a neighbourhood of \((-1,0)\).
We notice that \(L_{g_{\varepsilon}}(\partial\mathbb{D})=4\pi\), since \(e^{\omega_{\varepsilon}}\) is the sum of two potentials, each corresponding to a metric isometric to the flat disk of radius one. We set \(\sigma_{\varepsilon,1}:=\sigma_{1}(\mathbb{D},g_{\varepsilon})\) and \(\sigma_{\varepsilon,2}:=\sigma_{2}(\mathbb{D},g_{\varepsilon})\), the first and second eigenvalues associated to \((\mathbb{D},g_{\varepsilon})\). We aim at computing the asymptotic expansion of these quantities as \(\varepsilon\to 0\).
### Upper bounds for first and second eigenvalue
In this part, we aim at proving that
\[\sigma_{\varepsilon,1}\leq\frac{1}{2\ln\frac{1}{\varepsilon}}+O\left(\frac{1 }{\left(\ln\frac{1}{\varepsilon}\right)^{2}}\right) \tag{3.2}\]
\[\sigma_{\varepsilon,2}\leq 1+O(\varepsilon) \tag{3.3}\]
as \(\varepsilon\to 0\). We set \(f_{1}:\mathbb{D}\to\mathbb{R}\) and \(f_{2}:\mathbb{D}\to\mathbb{R}\) such that
\[f_{1}\circ f_{+}^{-1}(x)=\begin{cases}1&\text{ on }\mathbb{D}_{\varepsilon}^{+} \\ \frac{\ln(|x|)}{\ln\varepsilon}&\text{ on }\mathbb{D}_{\frac{1}{\varepsilon}}^{+} \setminus\mathbb{D}_{\varepsilon}^{+}\\ -1&\text{ on }\mathbb{R}^{2}\setminus\mathbb{D}_{\frac{1}{\varepsilon}}^{+} \;.\end{cases} \tag{3.4}\]
and
\[f_{2}(z)=\sqrt{\beta_{\varepsilon}^{2}-1}\frac{z_{2}}{\left|\beta_{\varepsilon }-z\right|^{2}}\;, \tag{3.5}\]
defined in order to have that
\[f_{2}\circ f_{+}^{-1}(\varepsilon x)=\frac{2x_{1}}{(1+x_{2})^{2}+x_{1}^{2}}\;, \tag{3.6}\]
represents the first Steklov eigenfunction \(z_{2}\) of the flat disk. By symmetry, we have that
\[\int_{\mathbb{S}^{1}}f_{1}d\theta=\int_{\mathbb{S}^{1}}f_{2}d\theta=\int_{ \mathbb{S}^{1}}f_{1}f_{2}d\theta=0\;.\]
By conformal invariance of the Dirichlet energy, we have that
\[\int_{\mathbb{D}}\left|\nabla f_{1}\right|_{g_{\varepsilon}}^{2}dA_{g_{ \varepsilon}}=\int_{\mathbb{D}}\left|\nabla f_{1}\right|^{2}=\int_{\mathbb{R}_{ +}^{2}}\left|\nabla f_{1}\circ f_{+}^{-1}\right|^{2}=\pi\int_{\varepsilon}^{ \frac{1}{\varepsilon}}\frac{dr}{r\left(\ln\varepsilon\right)^{2}}=\frac{2\pi}{ \ln\frac{1}{\varepsilon}}\]
and we compute the \(L^{2}\) norm on the boundary as
\[\int_{\mathbb{S}^{1}}\left(f_{1}\right)^{2}dL_{g_{\varepsilon}}=\int_{\mathbb{ R}\times\left\{0\right\}}\left(\tilde{f}_{1}\right)^{2}e^{\tilde{\omega}_{ \varepsilon}}dx=2\int_{[-1,1]\times\left\{0\right\}}\left(\tilde{f}_{1}\right) ^{2}e^{\tilde{\omega}_{\varepsilon}}=2\int_{-\frac{1}{\varepsilon}}^{\frac{1} {\varepsilon}}\left(\hat{f}_{1}\right)^{2}e^{\hat{\omega}_{\varepsilon}}\]
by symmetry and we get
\[\int_{\mathbb{S}^{1}}\left(f_{1}\right)^{2}dL_{g_{\varepsilon}}=2\int_{-\frac {1}{\varepsilon}}^{\frac{1}{\varepsilon}}\left(1-\frac{\ln\left|x_{1}\right|}{ \ln\varepsilon}\right)^{2}\frac{2}{1+\left(x_{1}\right)^{2}}dx_{1}+O( \varepsilon)=4\pi+O\left(\frac{1}{\ln\frac{1}{\varepsilon}}\right)\;.\]
Therefore the Rayleigh quotient gives (3.2).
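As a purely numerical illustration (again not part of the proof), the sketch below evaluates the boundary integral above for a few values of \(\varepsilon\) and compares the resulting Rayleigh quotient with \(\frac{1}{2\ln\frac{1}{\varepsilon}}\); the cut-off of the integration domain near \(0\) is an assumption of the sketch and only affects lower-order terms.

```python
# Numerical sanity check of (3.2): the Dirichlet energy 2*pi/ln(1/eps) divided
# by the boundary L^2 norm (which tends to 4*pi) behaves like 1/(2*ln(1/eps)).
import numpy as np
from scipy.integrate import quad

def boundary_norm(eps):
    # 2 * int_{-1/eps}^{1/eps} (1 - ln|x|/ln(eps))^2 * 2/(1+x^2) dx,
    # computed on (0, 1/eps) and doubled by symmetry; a tiny neighbourhood
    # of 0 is cut off, which only changes lower-order terms.
    g = lambda t: (1.0 - np.log(t) / np.log(eps)) ** 2 * 2.0 / (1.0 + t ** 2)
    inner, _ = quad(g, 1e-12, 1.0)
    outer, _ = quad(g, 1.0, 1.0 / eps, limit=400)
    return 4.0 * (inner + outer)

for eps in (1e-2, 1e-4, 1e-6):
    dirichlet = 2.0 * np.pi / np.log(1.0 / eps)
    quotient = dirichlet / boundary_norm(eps)
    print(f"eps={eps:.0e}  quotient={quotient:.5f}  1/(2 ln(1/eps))={1/(2*np.log(1/eps)):.5f}")
```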
Again, the conformal invariance of the Dirichlet energy gives that
\[\int_{\mathbb{D}}\left|\nabla f_{2}\right|_{g_{\varepsilon}}^{2}dA_{g_{ \varepsilon}}=\int_{\mathbb{D}}\left|\nabla x_{2}\right|^{2}=\pi\]
and we have by symmetry and neglecting small terms in \(e^{\hat{\omega}_{\varepsilon}}\):
\[\int_{\mathbb{S}^{1}}\left(f_{2}\right)^{2}dL_{g_{\varepsilon}}=2\int_{-\frac {1}{\varepsilon}}^{\frac{1}{\varepsilon}}\left(\hat{f}_{2}\right)^{2}e^{\hat{ \omega}_{\varepsilon}}=2\int_{-\frac{1}{\varepsilon}}^{\frac{1}{\varepsilon}} \left(\frac{2x_{1}}{1+\left(x_{1}\right)^{2}}\right)^{2}\frac{2}{1+\left(x_{ 1}\right)^{2}}dx_{1}+O\left(\varepsilon\right)\]
as \(\varepsilon\to 0\). Easy computations by integration by parts finally give
\[\int_{\mathbb{S}^{1}}\left(f_{2}\right)^{2}dL_{g_{\varepsilon}}=\pi+O\left( \varepsilon\right)\]
as \(\varepsilon\to 0\), so that taking the Rayleigh quotient, we get (3.3).
### Convergence of the first eigenfunction and the first eigenvalue
In this subsection, we aim at proving that
\[\sigma_{\varepsilon,1}=\frac{1}{2\ln\frac{1}{\varepsilon}}+O\left(\frac{1}{ \left(\ln\varepsilon\right)^{2}}\right) \tag{3.7}\]
as \(\varepsilon\to 0\). Let \(\varphi_{\varepsilon,1}\) be a first eigenfunction associated to \(\sigma_{\varepsilon,1}\). We have the following equation
\[\begin{cases}\Delta\varphi_{\varepsilon,1}=0&\text{in }\mathbb{D}\\ \partial_{r}\varphi_{\varepsilon,1}=\sigma_{\varepsilon,1}e^{\omega_{ \varepsilon}}\varphi_{\varepsilon,1}&\text{in }\mathbb{S}^{1}\end{cases}\text{ and }\int_{\mathbb{S}^{1}}\left(\varphi_{ \varepsilon,1}\right)^{2}dL_{g_{\varepsilon}}=4\pi\;. \tag{3.8}\]
Up to symmetrizing or antisymmetrizing in case of multiplicity (which a posteriori does not happen), we can assume that
\[\varphi_{\varepsilon,1}(z_{1},z_{2})^{2}=\varphi_{\varepsilon,1}(-z_{1},z_{2} )^{2}=\varphi_{\varepsilon,1}(z_{1},-z_{2})^{2} \tag{3.9}\]
for any \(z=(z_{1},z_{2})\in\mathbb{D}\). Then \(\hat{\varphi}_{\varepsilon,1}\) satisfies the following equation in \(\mathbb{R}_{+}^{2}\):
\[\begin{cases}\Delta\hat{\varphi}_{\varepsilon,1}=0&\text{in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\hat{\varphi}_{\varepsilon,1}=\sigma_{\varepsilon,1}\frac{2}{1+(x_{1})^{2}}\hat{\varphi}_{\varepsilon,1}+\sigma_{\varepsilon,1}\frac{2\varepsilon^{2}}{1+(x_{1})^{2}\varepsilon^{4}}\hat{\varphi}_{\varepsilon,1}&\text{on }\mathbb{R}\times\left\{0\right\}\end{cases} \tag{3.10}\]
and
\[\int_{\mathbb{R}\times\{0\}}\left(\hat{\varphi}_{\varepsilon,1}\right)^{2}\left( \frac{2}{1+(x_{1})^{2}}+\frac{2\varepsilon^{2}}{1+(x_{1})^{2}\varepsilon^{4}} \right)dx_{1}=4\pi\;.\]
In particular, \(\int_{\mathbb{R}\times\{0\}}\left(\hat{\varphi}_{\varepsilon,1}\right)^{2}\frac{2}{1+(x_{1})^{2}}dx_{1}\leq 4\pi\) and by standard elliptic theory, since \(\sigma_{\varepsilon,1}\to 0\) as \(\varepsilon\to 0\), we obtain that
\[\hat{\varphi}_{\varepsilon,1}\to\varphi_{\star,1}^{+}\text{ in }\mathcal{C}^{2}\left(\mathbb{D}_{\rho}^{+}\right)\]
as \(\varepsilon\to 0\), for any \(\rho>0\). Letting (3.10) pass to the limit as \(\varepsilon\to 0\), we obtain that
\[\begin{cases}\Delta\hat{\varphi}_{\star,1}^{+}=0&\text{ in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\hat{\varphi}_{\star,1}^{+}=0&\text{ on }\mathbb{R}\times\{0\} \end{cases} \tag{3.11}\]
Since \(\int_{\mathbb{R}\times\{0\}}\left(\hat{\varphi}_{\varepsilon,1}\right)^{2}\frac{2}{1+(x_{1})^{2}}dx_{1}\leq 4\pi\), we get that \(\varphi_{\star,1}^{+}\) is a constant function. Up to taking \(-\varphi_{\varepsilon,1}\) instead of \(\varphi_{\varepsilon,1}\), we can assume that \(\varphi_{\star,1}^{+}\geq 0\). Looking at \(\varphi_{\varepsilon,1}\circ\left(f_{-}\right)^{-1}(\varepsilon x)\) instead of \(\hat{\varphi}_{\varepsilon,1}(x):=\varphi_{\varepsilon,1}\circ\left(f_{+}\right)^{-1}(\varepsilon x)\), we also get a constant function \(\varphi_{\star,1}^{-}\) in the limit and by symmetry, \(\left(\varphi_{\star,1}^{+}\right)^{2}=\left(\varphi_{\star,1}^{-}\right)^{2}\).
We denote by
\[m_{\varepsilon,1}(r)=\frac{1}{\pi}\int_{0}^{\pi}\tilde{\varphi}_{\varepsilon, 1}(r\cos\theta,r\sin\theta)d\theta\]
the mean value of \(\tilde{\varphi}_{\varepsilon,1}\) on half circles (corresponding to the mean value of \(\varphi_{\varepsilon,1}\) on the lines \(\left|\frac{1-z}{1+z}\right|=\text{constant}\)). It is a radial function. We also set
\[\psi_{\varepsilon,1}(x)=\tilde{\varphi}_{\varepsilon,1}(x)-m_{\varepsilon, 1}(|x|)\;.\]
We have
\[\begin{cases}\Delta\psi_{\varepsilon,1}=-\Delta m_{\varepsilon,1}=-\frac{ \sigma_{\varepsilon,1}}{\pi r}e^{\tilde{\omega}_{\varepsilon}}\left(\tilde{ \varphi}_{\varepsilon,1}(r,0)+\tilde{\varphi}_{\varepsilon,1}(-r,0)\right)& \text{ in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\psi_{\varepsilon,1}=\sigma_{\varepsilon,1}e^{\tilde{\omega }_{\varepsilon}}\tilde{\varphi}_{\varepsilon,1}&\text{ on }\mathbb{R}\times\{0\}\;.\end{cases} \tag{3.12}\]
Let \(x_{\varepsilon}\in\mathbb{D}_{+}\) be such that \(\psi_{\varepsilon,1}(x_{\varepsilon})=\left\|\psi_{\varepsilon,1}\right\|_{\infty}\). We aim at proving that
\[\left\|\psi_{\varepsilon,1}\right\|_{\infty}\leq C\sigma_{\varepsilon,1} \tag{3.13}\]
for some positive constant \(C\). We have two cases:
If \(\left|x_{\varepsilon}\right|=O\left(\varepsilon\right)\), then \(\hat{\psi}_{\varepsilon,1}\) satisfies
\[\begin{cases}\Delta\hat{\psi}_{\varepsilon,1}=-\frac{\sigma_{\varepsilon,1}}{ \pi r}\left(\frac{2}{1+r^{2}}+\frac{2\varepsilon^{2}}{1+\varepsilon^{4}r^{2}} \right)(\hat{\varphi}_{\varepsilon,1}(r,0)+\hat{\varphi}_{\varepsilon,1}(-r,0 ))&\text{ in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\hat{\psi}_{\varepsilon,1}=\sigma_{\varepsilon,1}\frac{2}{1+( x_{1})^{2}}\hat{\varphi}_{\varepsilon,1}+\sigma_{\varepsilon,1}\frac{2\varepsilon^{2}}{1+ \varepsilon^{4}(x_{1})^{2}}\hat{\varphi}_{\varepsilon,1}&\text{ on }\mathbb{R}\times\{0\}\;.\end{cases} \tag{3.14}\]
We also know by construction that for any \(\rho>0\) we have \(\int_{\mathbb{D}_{\rho}^{+}}\psi_{\varepsilon,1}=0\), and that \(\psi_{\varepsilon,1}\) has bounded energy. By assumption, the right-hand side term of the second equation is bounded in \(L^{2}\left(\mathbb{R}_{+}^{2}\right)\). Moreover, by Hölder inequalities, the right-hand side term in the first equation is bounded in \(L^{p}\) for any \(p<2\). By standard elliptic theory, (3.13) holds true.
If \(\varepsilon=o\left(\left|x_{\varepsilon}\right|\right)\), then we set \(\phi_{\varepsilon}(x)=\psi_{\varepsilon,1}(\left|x_{\varepsilon}\right|x)\) and we have the equation
\[\begin{cases}\Delta\phi_{\varepsilon}=&-\frac{\sigma_{\varepsilon,1}}{\pi r} \left(\frac{2\frac{\varepsilon}{\left|x_{\varepsilon}\right|}}{\frac{ \varepsilon^{2}}{\left|x_{\varepsilon}\right|^{2}}+r^{2}}+\frac{2\varepsilon^ {2}\left|x_{\varepsilon}\right|}{1+\varepsilon^{4}\left|x_{\varepsilon}\right|^ {2}r^{2}}\right)\left(\tilde{\varphi}_{\varepsilon,1}(r\left|x_{\varepsilon} \right|,0)+\tilde{\varphi}_{\varepsilon,1}(-r\left|x_{\varepsilon}\right|,0) \right)\text{ in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\phi_{\varepsilon}=&\sigma_{\varepsilon,1}\frac{2\frac{ \varepsilon}{\left|x_{\varepsilon}\right|}}{\frac{\varepsilon^{2}}{\left|x_{ \varepsilon}\right|^{2}}+(x_{1})^{2}}\tilde{\varphi}_{\varepsilon,1}(-x_{1} \left|x_{\varepsilon}\right|,0)\\ &+\sigma_{\varepsilon,1}\frac{2\varepsilon^{2}\left|x_{\varepsilon}\right|}{ 1+\varepsilon^{4}\left|x_{\varepsilon}\right|^{2}(x_{1})^{2}}\tilde{\varphi} _{\varepsilon,1}(x_{1}\left|x_{\varepsilon}\right|,0)\text{ on }\mathbb{R} \times\{0\}\;.\end{cases} \tag{3.15}\]
so that by standard elliptic estimates in \(A=\mathbb{D}_{2}^{+}\setminus\mathbb{D}_{\frac{1}{2}}^{+}\), knowing that \(\int_{A}\phi_{\varepsilon}=0\), we obtain thanks to the \(L^{p}\) boundedness of the right-hand side terms of the equation for \(p<2\) that \(\left\|\psi_{\varepsilon,1}\right\|_{\infty}\leq C\sigma_{\varepsilon,1}\) for some positive constant \(C\) independent from \(\varepsilon\), and (3.13) holds true.
We deduce from (3.13) that
\[4\pi=\int_{\mathbb{S}^{1}}\left(\varphi_{\varepsilon,1}\right)^{2}dL_{g_{ \varepsilon}}=\int_{\mathbb{S}^{1}}\left(m_{\varepsilon,1}\right)^{2}dL_{g_{ \varepsilon}}+O\left(\sigma_{\varepsilon,1}\right) \tag{3.16}\]
as \(\varepsilon\to 0\).
By the Courant nodal theorem, and symmetry assumptions 3.9, the only possibilities for nodal sets are \(\{z_{1}=0\}\) or \(\{z_{2}=0\}\). But if \(\{z_{2}=0\}\) is a nodal set, we obtain that \(\varphi_{\varepsilon,1}(z_{1},z_{2})=\varphi_{\varepsilon,1}(z_{1},-z_{2})\) by the Courant nodal theorem and we would get \(m_{\varepsilon,1}=0\). However, since \(\sigma_{\varepsilon,1}\to 0\) as \(\varepsilon\to 0\), (3.16) gives a contradiction. Therefore, \(\{z_{1}=0\}\) is the nodal line of \(\varphi_{\varepsilon,1}\) and \(\{\left|x\right|=1\}\) is the zero set for \(m_{\varepsilon,1}\). Moreover \(m_{\varepsilon,1}\) and \(\tilde{\varphi}_{\varepsilon,1}\) are positive in \(\mathbb{D}\) and negative in \(\mathbb{R}^{2}\setminus\mathbb{D}\). \(m_{\varepsilon,1}\) satisfies:
\[\Delta m_{\varepsilon,1}:=-\frac{1}{r}\partial_{r}\left(rm^{\prime}_{ \varepsilon,1}\right)=\frac{\sigma_{\varepsilon,1}}{\pi r}e^{\tilde{\omega}_{ \varepsilon}}\left(\tilde{\varphi}_{\varepsilon,1}(r,0)+\tilde{\varphi}_{ \varepsilon,1}(-r,0)\right)\;.\]
Integrating between \(0\) and \(r\leq 1\), we get
\[m_{\varepsilon,1}(r)-m_{\varepsilon,1}(0)=-\int_{0}^{r}\frac{1}{s}\left(\int_ {[-s,s]\times\{0\}}\sigma_{\varepsilon,1}e^{\tilde{\omega}_{\varepsilon}} \tilde{\varphi}_{\varepsilon,1}\right)\leq 0 \tag{3.17}\]
Then, \(m_{\varepsilon,1}\) realizes its maximum at \(0\). Then, we have that for any \(\rho>0\),
\[\int_{\rho\varepsilon}^{1}\left(\tilde{\varphi}_{\varepsilon,1}\right)^{2}e^{ \tilde{\omega}_{\varepsilon}}dx_{1}\leq\left(\varphi_{\varepsilon,1}(0)\right) ^{2}\int_{\rho\varepsilon}^{1}e^{\tilde{\omega}_{\varepsilon}}dx_{1}\leq \frac{C}{\rho}\]
for a positive constant \(C\) independent from \(\rho\) and \(\varepsilon\). Letting \(\varepsilon\to 0\), and then \(\rho\to 0\) in (3.16), we obtain that \(\left(\varphi_{\star,1}^{+}\right)^{2}+\left(\varphi_{\star,1}^{-}\right)^{2}=2\). Then, since we know that \(\varphi_{\varepsilon,1}(z_{1},z_{2})=-\varphi_{\varepsilon,1}(-z_{1},z_{2})\), and \(\varphi_{\star,1}^{+}\geq 0\), we have that \(\varphi_{\star,1}^{+}=1\) and \(\varphi_{\star,1}^{-}=-1\).
Taking \(r=1\) in (3.17), we obtain
\[m_{\varepsilon,1}(0) = \frac{\sigma_{\varepsilon,1}}{\pi}\int_{0}^{1}\frac{1}{s}\left( \int_{[-s,s]\times\{0\}}e^{\tilde{\omega}_{\varepsilon}}\tilde{\varphi}_{ \varepsilon,1}\right)\] \[\leq \frac{\sigma_{\varepsilon,1}}{\pi}\tilde{\varphi}_{\varepsilon,1} (0)\int_{0}^{1}\frac{1}{s}\int_{-s}^{s}\frac{2\varepsilon}{\varepsilon^{2}+u^ {2}}du+O\left(\varepsilon\right)\] \[\leq \frac{\sigma_{\varepsilon,1}}{\pi}\tilde{\varphi}_{\varepsilon,1} (0)\int_{0}^{\frac{1}{\varepsilon}}\frac{2}{1+r^{2}}\left(\ln\frac{1}{r \varepsilon}\right)dr+O\left(\varepsilon\right)\] \[\leq \frac{\sigma_{\varepsilon,1}}{\pi}\tilde{\varphi}_{\varepsilon,1} (0)\left(\ln\frac{1}{\varepsilon}\right)\int_{0}^{+\infty}\frac{2}{1+r^{2}} dr+O\left(\sigma_{\varepsilon,1}\right)+O\left(\varepsilon\right)\]
and knowing that \(\tilde{\varphi}_{\varepsilon,1}(0)\to 1\) as \(\varepsilon\to 0\), we get
\[\sigma_{\varepsilon,1}\geq\frac{1}{2\ln\frac{1}{\varepsilon}}+O\left(\frac{ \sigma_{\varepsilon,1}}{\ln\frac{1}{\varepsilon}}\right)\]
so that with (3.2), we obtain (3.7).
### The second eigenfunction and second eigenvalue
In this section, we aim at proving that
\[\sigma_{\varepsilon,2}=1-4\varepsilon+o(\varepsilon) \tag{3.18}\]
as \(\varepsilon\to 0\).
We focus on the equation satisfied by \(\varphi_{\varepsilon,2}\), an eigenfunction associated to the second non-zero eigenvalue \(\sigma_{\varepsilon,2}\), satisfying
\[\int_{\mathbb{S}^{1}}\left(\varphi_{\varepsilon,2}\right)^{2}e^{\omega_{ \varepsilon}}dz=\pi\text{ and }\int_{\mathbb{S}^{1}}\varphi_{\varepsilon,2}e^{\omega_{ \varepsilon}}dz=\int_{\mathbb{S}^{1}}\varphi_{\varepsilon,1}\varphi_{ \varepsilon,2}e^{\omega_{\varepsilon}}dz=0\;, \tag{3.19}\]
where \(\varphi_{\varepsilon,1}\) is the radial first eigenfunction of the previous section. Up to symmetrization again, in case of multiplicity, we assume in addition that
\[\varphi_{\varepsilon,2}(z_{1},z_{2})^{2}=\varphi_{\varepsilon,2}(-z_{1},z_{2 })^{2}=\varphi_{\varepsilon,2}(z_{1},-z_{2})^{2} \tag{3.20}\]
In fact, we must have
\[\varphi_{\varepsilon,2}(z_{1},z_{2})=\varphi_{\varepsilon,2}(-z_{1},z_{2})\;. \tag{3.21}\]
Indeed, if not, we would have antisymmetry and in particular that \(\varphi_{\varepsilon,2}(0,z_{2})=0\). By orthogonality with \(\varphi_{\varepsilon,1}\), which is antisymmetric, \(\varphi_{\varepsilon,2}\) must vanish elsewhere: \(\varphi_{\varepsilon,2}(z)=0\) for some \(z=(z_{1},z_{2})\) with \(z_{1}>0\). The nodal line containing \(z\) is a line connecting two points of \(\left(\mathbb{S}^{1}\cap\{z_{1}>0\}\right)\cup\{z_{1}=0\}\). Then \(\varphi_{\varepsilon,2}\) has at least two nodal domains in \(\{z_{1}>0\}\). By symmetry, \(\varphi_{\varepsilon,2}\) has at least four nodal domains, contradicting the Courant nodal theorem. We also deduce from the orthogonality with the constants that
\[\int_{\mathbb{S}^{1}\cap\{z_{1}>0\}}\varphi_{\varepsilon,2}e^{\omega_{ \varepsilon}}dz=\int_{\mathbb{S}^{1}\cap\{z_{1}<0\}}\varphi_{\varepsilon,2}e^ {\omega_{\varepsilon}}dz=0\;. \tag{3.22}\]
Rescaling at the neighbourhood of \((1,0)\), \(\hat{\varphi}_{\varepsilon,2}\) satisfies the equation
\[\begin{cases}\Delta\hat{\varphi}_{\varepsilon,2}=0&\text{ in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\hat{\varphi}_{\varepsilon,2}=\sigma_{\varepsilon,2}\frac{2}{1+(x_{1})^{2}}\hat{\varphi}_{\varepsilon,2}+\sigma_{\varepsilon,2}\frac{2\varepsilon^{2}}{1+(x_{1})^{2}\varepsilon^{4}}\hat{\varphi}_{\varepsilon,2}&\text{ on }\mathbb{R}\times\{0\}\end{cases} \tag{3.23}\]
with
\[\int_{\mathbb{R}\times\{0\}}\left(\frac{2}{\left(1+(x_{1})^{2}\right)^{2}}+\frac{2 \varepsilon^{2}}{1+\varepsilon^{4}(x_{1})^{2}}\right)(\hat{\varphi}_{\varepsilon,2})^{2}\,dx=\pi\;.\]
In particular, \(\int_{\mathbb{R}\times\{0\}}\frac{2}{\left(1+(x_{1})^{2}\right)^{2}}\left(\hat{ \varphi}_{\varepsilon,2}\right)^{2}dx\leq\pi\). \(\sigma_{\varepsilon,2}\) is bounded by (3.3) and converges up to the extraction of a subsequence to \(\sigma_{\star,2}\). By standard elliptic estimates, up to the extraction of a subsequence,
\[\hat{\varphi}_{\varepsilon,2}\to\varphi_{\star,2}^{+}\text{ in }\mathcal{C}^{2} \left(\mathbb{D}_{\rho}^{+}\right) \tag{3.24}\]
for any \(\rho>0\) and
\[\begin{cases}\Delta\varphi_{\star,2}^{+}=0&\text{ in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\varphi_{\star,2}^{+}=\sigma_{\star,2}e^{u}\varphi_{\star,2}^ {+}&\text{ on }\mathbb{R}\times\{0\}\end{cases} \tag{3.25}\]
By boundedness of the energy, \(\varphi_{\star,2}^{+}\circ f_{+}\) has to be a Steklov eigenfunction associated to \(\sigma_{\star,2}\) on the disk. Since \(\sigma_{\star,2}\leq 1\) by (3.3), we have either \(\sigma_{\star,2}=0\) or \(\sigma_{\star,2}=1\). We aim at proving that \(\sigma_{\star,2}=1\) and at getting an estimate on \(\delta_{\varepsilon}:=\sigma_{\varepsilon,2}-1\) as \(\varepsilon\to 0\).
Let's deal with the second symmetry in (3.20). In the following subsections we separate two cases:
\[\varphi_{\varepsilon,2}(z_{1},z_{2})=-\varphi_{\varepsilon,2}(z_{1},-z_{2}) \tag{3.26}\]
is called the antisymmetric case and is handled in subsection 3.4.1; in this case, we assume in addition that \(\varphi_{\varepsilon,2}(0,1)\geq 0\), up to taking \(-\varphi_{\varepsilon,2}\).
\[\varphi_{\varepsilon,2}(z_{1},z_{2})=\varphi_{\varepsilon,2}(z_{1},-z_{2}) \tag{3.27}\]
is called the symmetric case and is handled in subsection 3.4.2; in this case, we assume in addition that \(\varphi_{\varepsilon,2}(1,0)\geq 0\), up to taking \(-\varphi_{\varepsilon,2}\). We prove a posteriori that this case cannot occur for the second eigenfunction.
#### 3.4.1. The antisymmetric case
By antisymmetry (3.26), we must have that for any \(r>0\),
\[\int_{0}^{\pi}\tilde{\varphi}_{\varepsilon,2}(r\cos\theta,r\sin\theta)d\theta=0\]
so that we can handle elliptic estimates in \(\tilde{\varphi}_{\varepsilon,2}\) at any scales. In particular we have that \(\tilde{\varphi}_{\varepsilon,2}\) is uniformly bounded. We then obtain that
\[\int_{\rho\varepsilon}^{1}\tilde{\varphi}_{\varepsilon,2}e^{\tilde{\omega}_{ \varepsilon}}dx_{1}\leq\frac{C}{\rho} \tag{3.28}\]
for a constant \(C>0\) independent from \(\rho>0\) and \(\varepsilon>0\). By (3.19), (3.22) and (3.24) we obtain that
\[\int_{\mathbb{R}\times\{0\}}\left(\varphi_{\star,2}^{+}\right)^{2}e^{u}dx_{1} =\frac{\pi}{2}\text{ and }\int_{\mathbb{R}\times\{0\}}\varphi_{\star,2}^{+}e^{u}dx_{1}=0\]
By (3.25), and remarks below, we must have \(\sigma_{\star,2}=1\) and \(\varphi_{\star,2}^{+}\circ f_{+}\) has to be a first Steklov eigenfunction on the disk, that is
\[\varphi_{\star,2}^{+}=a_{\star,1}Z_{1}+a_{\star,2}Z_{2}\;,\]
where \(Z_{1}=Re(f_{+}^{-1})\) and \(Z_{2}=Im(f_{+}^{-1})\) and \(\left(a_{\star,1}\right)^{2}+\left(a_{\star,2}\right)^{2}=1\). By antisymmetry (3.26), \(a_{\star,1}=0\) and \(a_{\star,2}=1\).
Now, let's work on \(\delta_{\varepsilon}=\sigma_{\varepsilon,2}-1\).
We define \(R_{\varepsilon}:\mathbb{D}\to\mathbb{R}\) such that the function \(\tilde{R}_{\varepsilon}=R_{\varepsilon}\circ f_{+}^{-1}\) satisfies
\[\tilde{R}_{\varepsilon}(x)=\varphi_{\varepsilon,2}(x)-\eta_{\varepsilon}\left(Z _{2}\left(\frac{x}{\varepsilon}\right)+Z_{2}\left(x\varepsilon\right)\right) \tag{3.29}\]
and such that \(R_{\varepsilon}\circ f_{-}^{-1}\) satisfies the same equation, where \(\eta_{\varepsilon}\) is defined in order to have \(\nabla R_{\varepsilon}(-1,0)=\nabla R_{\varepsilon}(1,0)=0\). Such an \(\eta_{\varepsilon}\) exists because of the antisymmetry of the involved functions. Notice that \(\eta_{\varepsilon}=1+o(1)\) as \(\varepsilon\to 0\). We obtain the following equation on \(R_{\varepsilon}\)
\[\begin{cases}\Delta R_{\varepsilon}=0&\text{in }\mathbb{D}\\ \partial_{r}R_{\varepsilon}-e^{\omega_{\varepsilon}}R_{\varepsilon}=(\sigma_ {\varepsilon,2}-1)e^{\omega_{\varepsilon}}\varphi_{\varepsilon,2}+\eta_{ \varepsilon}\left(e^{u_{\varepsilon}}Z_{2,\varepsilon}^{+}+e^{v_{\varepsilon }}Z_{2,\varepsilon}^{-}\right)&\text{on }\mathbb{S}^{1}\end{cases} \tag{3.30}\]
where \(Z_{2,\varepsilon}^{\pm}=Z_{2}\left(\frac{f_{+}^{-1}(x)}{\varepsilon}\right)\) and then integrating against \(R_{\varepsilon}\) in \(\mathbb{D}\),
\[\int_{\mathbb{D}}|\nabla R_{\varepsilon}|^{2}= 2\int_{\mathbb{S}^{1}}\left(R_{\varepsilon}\right)^{2}e^{u_{ \varepsilon}}+(\sigma_{\varepsilon,2}-1)\int_{\mathbb{S}^{1}}e^{\omega_{ \varepsilon}}\varphi_{\varepsilon,2}R_{\varepsilon}+\eta_{\varepsilon}\int_{ \mathbb{S}^{1}}\left(e^{u_{\varepsilon}}Z_{2,\varepsilon}^{+}+e^{v_{ \varepsilon}}Z_{2,\varepsilon}^{-}\right)R_{\varepsilon}\] \[\leq \left\|R_{\varepsilon}\right\|_{\infty}\left(\left\|R_{ \varepsilon}\right\|_{\infty}+2\pi\left\|\varphi_{\varepsilon,2}\right\|_{ \infty}\left|\sigma_{\varepsilon,2}-1\right|+O\left(\varepsilon\right)\right)\] \[\leq C\left\|R_{\varepsilon}\right\|_{\infty}\left(\left\|R_{ \varepsilon}\right\|_{\infty}+\delta_{\varepsilon}+\varepsilon\right)\;, \tag{3.31}\]
where we computed \(I:=\int_{\mathbb{S}^{1}}\left(e^{u_{\varepsilon}}Z_{2,\varepsilon}^{+}+e^{v_{ \varepsilon}}Z_{2,\varepsilon}^{-}\right)R_{\varepsilon}\) as
\[I= 2\int_{\mathbb{S}^{1}_{-}}\left(e^{u_{\varepsilon}}Z_{2,\varepsilon }^{+}+e^{v_{\varepsilon}}Z_{2,\varepsilon}^{-}\right)R_{\varepsilon}\] \[= 2\int_{-1}^{1}\left(\frac{2\varepsilon}{1+\varepsilon^{2}(x_{1} )^{2}}\frac{2x_{1}\varepsilon}{\varepsilon^{2}+(x_{1})^{2}}+\frac{2 \varepsilon}{\varepsilon^{2}+(x_{1})^{2}}\frac{2x_{1}\varepsilon}{1+ \varepsilon^{2}(x_{1})^{2}}\right)\tilde{R}_{\varepsilon}dx_{1}\] \[\leq 4\left\|R_{\varepsilon}\right\|_{\infty}\varepsilon\int_{0}^{ \frac{1}{\varepsilon}}\left(\frac{2\varepsilon}{1+\varepsilon^{4}u^{2}}\frac{2 u}{1+u^{2}}+\frac{2}{1+u^{2}}\frac{2u}{1+\varepsilon^{4}u^{2}}\right)du\] \[\leq O\left(\left\|R_{\varepsilon}\right\|_{\infty}\varepsilon\right)\]
as \(\varepsilon\to 0\). We set
\[\alpha_{\varepsilon}=\left\|R_{\varepsilon}\right\|_{L^{\infty}(\mathbb{S}^{1 })}+\left|\delta_{\varepsilon}\right|+\varepsilon\;, \tag{3.32}\]
and we get that
\[\int_{\mathbb{D}}|\nabla R_{\varepsilon}|^{2}\leq O\left(\alpha_{\varepsilon}^{2}\right) \tag{3.33}\]
as \(\varepsilon\to 0\). Letting \(\hat{R}_{\varepsilon}(x)=\tilde{R}_{\varepsilon}(\varepsilon x)\), we get from (3.30) that
\[\begin{cases}\Delta\hat{R}_{\varepsilon}=0&\text{in }\mathbb{R}^{2}_{+}\\ -\partial_{x_{2}}\hat{R}_{\varepsilon}-e^{u}\hat{R}_{\varepsilon}=&(\sigma_{ \varepsilon,2}-1)e^{u}\hat{\varphi}_{\varepsilon,2}+\eta_{\varepsilon}e^{u}Z_{ 2}(\varepsilon^{2}x)&\\ &+\frac{2\varepsilon^{2}}{1+\varepsilon^{4}(x_{1})^{2}}\left(\sigma_{ \varepsilon,2}\hat{\varphi}_{\varepsilon,2}-\eta_{\varepsilon}Z_{2}( \varepsilon^{2}x)\right)&\text{on }\mathbb{R}\times\{0\}\;.\end{cases} \tag{3.34}\]
Dividing this equation by \(\alpha_{\varepsilon}\), by standard elliptic theory, we obtain up to the extraction of a subsequence that
\[\frac{\hat{R}_{\varepsilon}}{\alpha_{\varepsilon}}\to R_{\star}^{+}\text{ in }\mathcal{C}^{2}\left(\mathbb{D}_{\rho}^{+}\right)\]
as \(\varepsilon\to 0\) for any \(\rho>0\). We also have that up to the extraction of a subsequence,
\[\frac{\delta_{\varepsilon}}{\alpha_{\varepsilon}}\to\delta_{\star}\text{ and }\frac{\varepsilon}{\alpha_{\varepsilon}}\to e_{\star}\text{ as }\varepsilon\to 0\;.\]
Passing to the limit in (3.34) divided by \(\alpha_{\varepsilon}\), \(R_{\star}^{+}\) satisfies the equation
\[\begin{cases}\Delta R_{\star}^{+}=0&\text{ in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}R_{\star}^{+}-e^{u}R_{\star}^{+}=\delta_{\star}e^{u}Z_{2}+2e^ {u}x_{1}e_{\star}&\text{ on }\mathbb{R}\times\{0\}\;.\end{cases} \tag{3.35}\]
We set \(R_{\star}:=R_{\star}^{+}\circ f_{+}\) in \(\mathbb{D}\setminus\{(-1,0)\}\) and we obtain
\[\begin{cases}\Delta R_{\star}=0&\text{ in }\mathbb{D}\\ -\partial_{x_{2}}R_{\star}-e^{u}R_{\star}=\left(\delta_{\star}+e_{\star}\frac{ 2}{|1+z|^{2}}\right)z_{2}&\text{ on }\mathbb{S}^{1}\setminus\{(-1,0)\}\;.\end{cases} \tag{3.36}\]
Notice that the left-hand side in the second equation is uniformly bounded and that by (3.33), \(R_{\star}\in W^{1,2}\left(\mathbb{D}\right)\) so that \(R_{\star}\) can be extended in \(\mathbb{D}\) such that the equation holds in \(\mathbb{D}\). Integrating this equation against \(z_{2}\), we obtain that
\[\delta_{\star}=-\frac{2\int_{\mathbb{S}^{1}}\frac{(2z)^{2}}{|1+z|^{2}}d\theta }{\int_{\mathbb{S}^{1}}(z_{2})^{2}d\theta}e_{\star}=-\frac{2\pi}{\frac{\pi}{2 }}e_{\star}\;.\]
Notice that if \(e_{\star}\neq 0\), then
\[\frac{\delta_{\varepsilon}}{\varepsilon}=\frac{\frac{\delta_{\varepsilon}}{ \varepsilon}}{\frac{\varepsilon}{\alpha_{\varepsilon}}}\to\frac{\delta_{\star }}{e_{\star}}=-4\;, \tag{3.37}\]
and (3.18) would be proved in the antisymmetric case.
From now until the end of subsection 3.4.1, we assume by contradiction that \(e_{\star}=0\). This implies that \(\delta_{\star}=0\). By (3.32) and the definition of \(\delta_{\star}\) and \(e_{\star}\), we get
\[\delta_{\varepsilon}=o\left(\left\|R_{\varepsilon}\right\|_{\infty}\right) \text{ and }\varepsilon=o\left(\left\|R_{\varepsilon}\right\|_{\infty}\right) \tag{3.38}\]
as \(\varepsilon\to 0\). Moreover, (3.35) becomes \(\Delta R_{\star}=0\) and \(-\partial_{x_{2}}R_{\star}-R_{\star}=0\). Then \(R_{\star}\) is a first Steklov eigenfunction in \(\mathbb{D}\) satisfying \(R_{\star}(-1,0)=0\) and \(\nabla R_{\star}(-1,0)=0\), since \(\tilde{R}_{\varepsilon}(0)=0\) and \(\nabla\tilde{R}_{\varepsilon}(0)=0\) hold for any \(\varepsilon>0\). Therefore, \(R_{\star}=0\) and we obtain
\[R_{\varepsilon}(\rho\varepsilon)=o\left(\left\|R_{\varepsilon}\right\|_{ \infty}\right) \tag{3.39}\]
as \(\varepsilon\to 0\), for any \(\rho>0\).
Let \(x_{\varepsilon}\in\mathbb{D}\cap\left(\mathbb{R}\times\{0\}\right)\) be such that \(R_{\varepsilon}(x_{\varepsilon})=\left\|R_{\varepsilon}\right\|_{\infty}\). We obtain from (3.39) that \(\varepsilon=o\left(r_{\varepsilon}\right)\), letting \(r_{\varepsilon}=|x_{\varepsilon}|\). We set \(\psi_{\varepsilon}(x)=R_{\varepsilon}(r_{\varepsilon}x)\) and \(\phi_{\varepsilon}(x)=\tilde{\varphi}_{\varepsilon,2}(r_{\varepsilon}x)\) and we obtain from (3.34)
\[\begin{cases}\Delta\psi_{\varepsilon}=0&\text{ in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\psi_{\varepsilon}=&\left(\psi_{\varepsilon}+\left(\sigma_{ \varepsilon,2}-1\right)\phi_{\varepsilon}+\eta_{\varepsilon}Z_{2}(r_{ \varepsilon}\varepsilon x)\right)\frac{2\frac{\varepsilon}{r_{\varepsilon}}}{ \frac{\varepsilon^{2}}{r_{\varepsilon}^{2}}+(x_{1})^{2}}&\text{ on }\mathbb{R}\times\{0\}\;.\\ &+\left(\sigma_{\varepsilon,2}\phi_{\varepsilon}-\eta_{\varepsilon}Z_{2}(r_{ \varepsilon}\varepsilon x)\right)\frac{2\varepsilon r_{\varepsilon}}{1+ \varepsilon^{2}r_{\varepsilon}^{2}(x_{1})^{2}}&\end{cases}\text{ on }\mathbb{R} \times\{0\}\;. \tag{3.40}\]
so that dividing by \(\alpha_{\varepsilon}\), we get that for any \(\rho>1\),
\[\left\|\partial_{x_{2}}\psi_{\varepsilon}\right\|_{L^{\infty}\left(\left([- \rho,\rho]\setminus[-\frac{1}{\rho},\frac{1}{\rho}]\right)\times\{0\}\right)}= O\left(\left(\left\|\psi_{\varepsilon}\right\|_{\infty}+\left|\delta_{ \varepsilon}\right|\right)\frac{\varepsilon}{r_{\varepsilon}}\right)+o(\varepsilon)= O\left(\left\|\psi_{\varepsilon}\right\|_{\infty}\frac{\varepsilon}{r_{ \varepsilon}}\right)+o(\varepsilon)\;.\]
By standard elliptic theory, we obtain up to the extraction of a subsequence that
\[\frac{\psi_{\varepsilon}}{\alpha_{\varepsilon}}\to\psi_{\star}\text{ in }\mathcal{C}^{2}\left(\mathbb{D}_{\rho}^{+}\setminus\mathbb{D}_{\frac{1}{ \rho}}^{+}\right)\]
as \(\varepsilon\to 0\) for any \(\rho>0\). We also define up to the extraction of a subsequence
\[\frac{x_{\varepsilon}}{r_{\varepsilon}}\to x_{\star}\in\{(\pm 1,0)\}\text{ as } \varepsilon\to 0\;.\]
Passing to the limit in (3.40) divided by \(\alpha_{\varepsilon}\), \(\psi_{\star}\) is a harmonic function on \(\mathbb{R}_{+}^{2}\setminus\{0\}\) satisfying \(\partial_{x_{2}}\psi_{\star}=0\). Since \(\psi_{\star}\) is bounded by \(1\), \(\psi_{\star}\) is a constant function. But since the mean value of \(\psi_{\star}\) is equal to \(0\) on any circle centered at \(0\), we obtain that \(\psi_{\star}(x_{\star})=0\). Therefore \(\left\|R_{\varepsilon}\right\|_{L^{\infty}(\mathbb{S}_{1})}=o\left(\alpha_{ \varepsilon}\right)\) and the definition of \(\alpha_{\varepsilon}\) (3.32) and (3.38) give a contradiction.
Therefore, \(e_{\star}\neq 0\) and we proved (3.18) thanks to (3.37) in the anti-symmetric case.
#### 3.4.2. The symmetric case
We assume (3.27). We aim at proving that \(\delta_{\varepsilon,2}:=\sigma_{\varepsilon,2}-1=o(\varepsilon)\) as \(\varepsilon\to 0\), so that, by the previous case, \(\varphi_{\varepsilon,2}\) cannot be a second eigenfunction; this gives a contradiction, and then only (3.37) occurs.
Since \(\varphi_{\varepsilon,2}\) has to vanish somewhere but has at most three nodal domains, the symmetries (3.21) and (3.27) and the orthogonality with the first eigenfunction imply that there is a unique value \(r_{\varepsilon}\in(0,1)\) such that \(\tilde{\varphi}_{\varepsilon,2}(r_{\varepsilon},0)=\tilde{\varphi}_{\varepsilon,2}(-r_{\varepsilon},0)=0\). Moreover, \(\tilde{\varphi}_{\varepsilon,2}\) is positive on \((-r_{\varepsilon},r_{\varepsilon})\times\{0\}\) and negative on \(([-1,1]\setminus[-r_{\varepsilon},r_{\varepsilon}])\times\{0\}\).
As for the first eigenfunction, we set
\[m_{\varepsilon,2}(r)=\frac{1}{\pi}\int_{0}^{\pi}\tilde{\varphi}_{\varepsilon, 2}(r\cos\theta,r\sin\theta)d\theta\]
the mean value of \(\tilde{\varphi}_{\varepsilon,2}\) on half circles. It is a radial function. We also set
\[\psi_{\varepsilon,2}(x)=\tilde{\varphi}_{\varepsilon,2}(x)-m_{\varepsilon,2 }(|x|)\;.\]
We have
\[\begin{cases}\Delta\psi_{\varepsilon,2}=-\Delta m_{\varepsilon,2}=-\frac{ \sigma_{\varepsilon,2}}{\pi r}e^{\tilde{\omega}_{\varepsilon}}\left(\tilde{ \varphi}_{\varepsilon,2}(r,0)+\tilde{\varphi}_{\varepsilon,2}(-r,0)\right)& \text{ in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\psi_{\varepsilon,2}=\sigma_{\varepsilon,2}e^{\tilde{\omega} _{\varepsilon}}\tilde{\varphi}_{\varepsilon,2}&\text{ on }\mathbb{R}\times\{0\}\;.\end{cases} \tag{3.41}\]
As for the first eigenfunction, we easily prove that \(\psi_{\varepsilon,2}\) is uniformly bounded so that we have for any \(\rho>0\)
\[\int_{\rho\varepsilon}^{1}\left(\psi_{\varepsilon,2}\right)^{2}e^{\tilde{ \omega}_{\varepsilon}}\leq\frac{C}{\rho}\]
as \(\varepsilon\to 0\). Integrating once the equation on \(m_{\varepsilon,2}\) (3.41), we have
\[m^{\prime}_{\varepsilon,2}(r)=+\frac{\sigma_{\varepsilon,2}}{r}\int_{r}^{1}e^ {\tilde{\omega}_{\varepsilon}}\left(\tilde{\varphi}_{\varepsilon,2}(s,0)+ \tilde{\varphi}_{\varepsilon,2}(-s,0)\right)=-\frac{\sigma_{\varepsilon,2}}{r }\int_{0}^{r}e^{\tilde{\omega}_{\varepsilon}}\left(\tilde{\varphi}_{\varepsilon,2}(s,0)+\tilde{\varphi}_{\varepsilon,2}(-s,0)\right)\]
so that by the sign properties of \(\tilde{\varphi}_{\varepsilon,2}\), \(m_{\varepsilon,2}\) is a decreasing function. It realizes its maximum for \(r=0\) and its minimum for \(r=1\).
Now, we prove that \(m_{\varepsilon,2}\) is uniformly bounded. We have that
\[\begin{split} m_{\varepsilon,2}(r)&-m_{\varepsilon,2}(1)=2\int_{r}^{1}\frac{1}{s}\left(\int_{s}^{1}\sigma_{\varepsilon,2}e^{\tilde{\omega}_{\varepsilon}}(-\tilde{\varphi}_{\varepsilon,2})dt\right)ds\\ &\leq-2\tilde{\varphi}_{\varepsilon,2}(1,0)\sigma_{\varepsilon,2}\int_{r}^{1}\frac{1}{s}\left(\int_{s}^{1}e^{\tilde{\omega}_{\varepsilon}}dt\right)ds\\ &\leq-2\tilde{\varphi}_{\varepsilon,2}(1,0)\sigma_{\varepsilon,2}\left(\ln\frac{\varepsilon}{r}\left(\arctan\frac{1}{\varepsilon}-\arctan\frac{r}{\varepsilon}\right)+\int_{\frac{r}{\varepsilon}}^{\frac{1}{\varepsilon}}\frac{2\ln u}{1+u^{2}}du+O(\varepsilon)\right)\end{split} \tag{3.42}\]
by a straightforward integral computation. Notice also that elliptic estimates on (3.41) imply that
\[\tilde{\varphi}_{\varepsilon,2}(1,0)-m_{\varepsilon,2}(1)=\psi_{\varepsilon,2 }(1,0)=O\left(\varepsilon\right)\]
as \(\varepsilon\to 0\). Then, choosing \(r=\rho\varepsilon\) in (3.42), we obtain
\[m_{\varepsilon,2}(r)-m_{\varepsilon,2}(1)\leq-2m_{\varepsilon,2}(1)\sigma_{ \varepsilon,2}\left(\ln\frac{1}{\rho}\left(\frac{\pi}{2}-\arctan\rho\right)\right) \tag{3.43}\]
for any \(\varepsilon>0\) small enough. Then, for \(\rho\) large enough too,
\[-m_{\varepsilon,2}(1)\left(1-2\sigma_{\varepsilon,2}\left(\ln\frac{1}{\rho} \left(\frac{\pi}{2}-\arctan\rho\right)\right)\right)\leq-m_{\varepsilon,2}(\rho) \tag{3.44}\]
proves that \(m_{\varepsilon,2}\) is uniformly bounded. We then have for any \(\rho>0\) that
\[\int_{\rho\varepsilon}^{1}\left(m_{\varepsilon,2}\right)^{2}e^{\tilde{\omega }_{\varepsilon}}\leq\frac{C}{\rho}\]
as \(\varepsilon\to 0\). We deduce from this estimate on \(m_{\varepsilon,2}\) and the similar one on \(\psi_{\varepsilon,2}\) that for any \(\rho>0\),
\[\int_{\rho\varepsilon}^{1}\left(\tilde{\varphi}_{\varepsilon,2}\right)^{2}e^ {\tilde{\omega}_{\varepsilon}}\leq\frac{C}{\rho}\]
as \(\varepsilon\to 0\). Letting \(\varepsilon\to 0\) and then \(\rho\to 0\), remembering (3.19), (3.22) and (3.24), we get that
\[\int_{\mathbb{R}\times\{0\}}\left(\varphi_{\star,2}^{+}\right)^{2}e^{u}dx= \frac{\pi}{2}\text{ and }\int_{\mathbb{R}\times\{0\}}\varphi_{\star,2}^{+}e^{u}dx=0\;.\]
By orthogonality with the constant function, we must have \(\sigma_{\star,2}=1\) and \(\varphi_{\star,2}^{+}\circ f_{+}\) is a first Steklov eigenfunction of the disk, that is
\[\varphi_{\star,2}^{+}=a_{\star,1}Z_{1}+a_{\star,2}Z_{2}\;,\]
where \(Z_{1}=Re(f_{+}^{-1})\) and \(Z_{2}=Im(f_{+}^{-1})\) and \((a_{\star,1})^{2}+(a_{\star,2})^{2}=1\). By symmetry (3.27), \(a_{\star,1}=1\) and \(a_{\star,2}=0\).
Now, we aim at estimating \(\delta_{\varepsilon}=|\sigma_{\varepsilon,2}-1|\).
We set
\[R_{\varepsilon}(x)=\tilde{\varphi}_{\varepsilon,2}(x)-\tilde{\varphi}_{ \varepsilon,2}(0)Z_{1}\left(\frac{x}{\varepsilon}\right)\]
so that \(R_{\varepsilon}\) satisfies \(R_{\varepsilon}(0)=0\) and the symmetry \(R_{\varepsilon}(x_{1},x_{2})=R_{\varepsilon}(-x_{1},x_{2})\) for any \((x_{1},x_{2})\in\mathbb{D}_{+}\). We have
\[\begin{cases}\Delta R_{\varepsilon}=0&\text{ in }\mathbb{D}_{+}\\ -\partial_{x_{2}}R_{\varepsilon}-e^{\tilde{u}_{\varepsilon}}R_{\varepsilon}= \left(\sigma_{\varepsilon,2}-1\right)e^{\tilde{u}_{\varepsilon}}\tilde{ \varphi}_{\varepsilon,2}+\sigma_{\varepsilon,2}e^{\tilde{v}_{\varepsilon}} \tilde{\varphi}_{\varepsilon,2}\text{ on }[-1,1]\times\{0\}\;.\end{cases} \tag{3.45}\]
Integrating this equation against \(R_{\varepsilon}\), we obtain
\[\int_{\mathbb{D}_{+}}\left|\nabla R_{\varepsilon}\right|^{2}= \int_{\mathbb{S}_{+}^{1}}R_{\varepsilon}\partial_{\nu}R_{ \varepsilon}+\left(\sigma_{\varepsilon,2}-1\right)\int_{[-1,1]\times\{0\}}e^{ \tilde{u}_{\varepsilon}}\tilde{\varphi}_{\varepsilon,2}R_{\varepsilon}+\sigma _{\varepsilon}^{2}\int_{[-1,1]\times\{0\}}e^{\tilde{v}_{\varepsilon}}\tilde{ \varphi}_{\varepsilon,2}R_{\varepsilon}\] \[\leq \left\|\varphi_{\varepsilon,2}\right\|_{\infty}\left\|R_{ \varepsilon}\right\|_{\infty}\left(\int_{\mathbb{S}_{+}^{1}}\left|\left( \partial_{\nu}Z_{1}\left(\frac{x}{\varepsilon}\right)\right)\right|+\left\|R_ {\varepsilon}\right\|_{\infty}+2\left|\sigma_{\varepsilon,2}-1\right|+O\left( \varepsilon\right)\right)\] \[\leq C\left\|R_{\varepsilon}\right\|_{\infty}\left(\left\|R_{ \varepsilon}\right\|_{\infty}+\delta_{\varepsilon}+\varepsilon\right) \tag{3.46}\]
for some constant \(C\) independent from \(\varepsilon\), where \(\left\|.\right\|_{\infty}\) denotes the uniform norm in \(\mathbb{S}_{+}^{1}\). Here we easily computed that \(\left|\partial_{\nu}\left(Z_{1}\left(\frac{x}{\varepsilon}\right)\right)\right|=O(\varepsilon)\) uniformly on \(\mathbb{S}_{+}^{1}\). Letting
\[\alpha_{\varepsilon}=\left\|R_{\varepsilon}\right\|_{\infty}+\delta_{ \varepsilon}+\varepsilon^{2}\;, \tag{3.47}\]
we get that
\[\int_{\mathbb{D}_{+}}\left|\nabla R_{\varepsilon}\right|^{2}\leq O\left( \alpha_{\varepsilon}^{2}\right) \tag{3.48}\]
as \(\varepsilon\to 0\). Letting \(\hat{R_{\varepsilon}}(x)=R_{\varepsilon}(\varepsilon x)\), we get from (3.45) that
\[\begin{cases}\Delta\hat{R}_{\varepsilon}=0&\text{in }\mathbb{D}_{\frac{1}{ \varepsilon}}^{+}\\ -\partial_{x_{2}}\hat{R}_{\varepsilon}-e^{u}\hat{R}_{\varepsilon}=\left( \sigma_{\varepsilon,2}-1\right)e^{u}\hat{\varphi}_{\varepsilon,2}+\sigma_{ \varepsilon,2}\frac{2\varepsilon^{2}}{1+\varepsilon^{4}|x|^{2}}\hat{\varphi} _{\varepsilon,2}&\text{on }[-\frac{1}{\varepsilon},\frac{1}{\varepsilon}]\times\{0\}\;.\end{cases} \tag{3.49}\]
Dividing by \(\alpha_{\varepsilon}\), by standard elliptic theory, we obtain up to the extraction of a subsequence that for any \(\rho>0\),
\[\frac{\hat{R}_{\varepsilon}}{\alpha_{\varepsilon}}\to\hat{R}_{\star}\text{ in }\mathcal{C}^{2}\left(\mathbb{D}_{\rho}^{+}\right) \tag{3.50}\]
as \(\varepsilon\to 0\). We also have up to the extraction of a subsequence that \(\frac{\delta_{\varepsilon}}{\alpha_{\varepsilon}}\to\delta_{\star}\) as \(\varepsilon\to 0\) and \(\frac{\varepsilon^{2}}{\alpha_{\varepsilon}}\to 0\) as \(\varepsilon\to 0\). Letting \(\varepsilon\to 0\) in (3.49), we deduce
\[\begin{cases}\Delta\hat{R}_{\star}=0&\text{in }\mathbb{R}_{+}^{2}\\ -\partial_{x_{2}}\hat{R}_{\star}-e^{u}\hat{R}_{\star}=\delta_{\star}e^{u}Z_{1 }&\text{on }\mathbb{R}\times\{0\}\;.\end{cases} \tag{3.51}\]
Setting \(R_{\star}=\hat{R}_{\star}\circ f_{+}\) in \(\mathbb{D}\setminus\{(-1,0)\}\), we obtain
\[\begin{cases}\Delta R_{\star}=0&\text{in }\mathbb{D}\\ -\partial_{r}R_{\star}-R_{\star}=\delta_{\star}z_{1}&\text{on }\mathbb{S}^{1} \setminus\{(-1,0)\}\;,\end{cases} \tag{3.52}\]
By (3.48), we can extend \(R_{\star}\) in \(\mathbb{D}\) so that (3.52) is satisfied in \(\mathbb{D}\). We integrate (3.51) against \(z_{1}\) and we immediately get \(\delta_{\star}=0\). By definition of \(\delta_{\star}\), \(\delta_{\varepsilon}=o(\alpha_{\varepsilon})\) as \(\varepsilon\to 0\) and we obtain by (3.47):
\[\delta_{\varepsilon}=o\left(\left\|R_{\varepsilon}\right\|_{\infty}+\varepsilon\right) \tag{3.53}\]
as \(\varepsilon\to 0\). Moreover, we obtain that \(\Delta R_{\star}=0\) and \(\partial_{r}R_{\star}=R_{\star}\). This means that \(R_{\star}\) is a first eigenfunction in \(\mathbb{D}\). Since \(R_{\star}\) satisfies
\[R_{\star}(z_{1},z_{2})=-R_{\star}(-z_{1},z_{2})=R_{\star}(z_{1},-z_{2})\text{ and }R_{\star}(-1,0)=0\;,\]
we obtain that \(R_{\star}=0\). We obtain that for any \(\rho>0\)
\[R_{\varepsilon}(\rho\varepsilon)=o\left(\left\|R_{\varepsilon}\right\|_{\infty} +\varepsilon\right) \tag{3.54}\]
as \(\varepsilon\to 0\).
We prove from now to the end of the subsection that
\[\left\|R_{\varepsilon}\right\|_{\infty}=O\left(\varepsilon\right) \tag{3.55}\]
as \(\varepsilon\to 0\). We split \(R_{\varepsilon}(x)=A_{\varepsilon}(x)+M_{\varepsilon}(\left|x\right|)\) such that
\[M_{\varepsilon}(r)=\frac{1}{\pi}\int_{0}^{\pi}R_{\varepsilon}(r\cos\theta,r\sin\theta)d\theta=m_{\varepsilon}(r)-\frac{1}{\pi}\int_{0}^{\pi}Z_{1}\left(\frac{\left(r\cos\theta,r\sin\theta\right)}{\varepsilon}\right)d\theta\]
is the mean value of \(R_{\varepsilon}\) on circles. From (3.54), we know that
\[\psi_{\varepsilon,2}(\rho\varepsilon)=o\left(\left\|R_{\varepsilon}\right\|_ {\infty}+\varepsilon\right)\text{ and }M_{\varepsilon}(\rho\varepsilon)=o\left(\left\|R_{ \varepsilon}\right\|_{\infty}+\varepsilon\right) \tag{3.56}\]
as \(\varepsilon\to 0\) for any \(\rho>0\).
Let \(0\leq r_{\varepsilon}\leq 1\) be such that \(R_{\varepsilon}(r_{\varepsilon})=\left\|R_{\varepsilon}\right\|_{\infty}\). If \(r_{\varepsilon}=O(\varepsilon)\), we deduce (3.55) from (3.54).
Let's prove that uniformly on \(x\in\mathbb{D}_{2}\setminus\mathbb{D}_{\frac{1}{2}}\),
\[A_{\varepsilon}(r_{\varepsilon}x)=O\left(\frac{\varepsilon}{r_{\varepsilon}} \left\|R_{\varepsilon}\right\|_{\infty}+r_{\varepsilon}\varepsilon\right) \text{ and }A_{\varepsilon}(\varepsilon x)=O\left(\varepsilon\right) \tag{3.57}\]
as \(\varepsilon\to 0\).
We have that \(A_{\varepsilon}=\psi_{\varepsilon,2}-\mu_{\varepsilon}\). Similarly to equation (3.41) on \(\psi_{\varepsilon,2}\), we obtain
\[\begin{cases}\Delta A_{\varepsilon}=-\Delta M_{\varepsilon}=&-\frac{\sigma_{ \varepsilon,2}}{\pi r}e^{\tilde{\omega}_{\varepsilon}}\left(\tilde{\varphi}_{ \varepsilon}(r,0)+\tilde{\varphi}_{\varepsilon}(-r,0)\right)\\ &+\frac{1}{\pi r}e^{\tilde{u}_{\varepsilon}}\left(Z_{1}\left(\frac{(r,0)}{ \varepsilon}\right)+Z_{1}\left(\frac{(-r,0)}{\varepsilon}\right)\right)\\ -\partial_{x_{2}}A_{\varepsilon}=\sigma_{\varepsilon,2}e^{\tilde{\omega}_{ \varepsilon}}\tilde{\varphi}_{\varepsilon}-e^{\tilde{u}_{\varepsilon}}Z_{1} \left(\frac{x}{\varepsilon}\right)\end{cases}\text{ in }\mathbb{R}_{+}^{2} \tag{3.58}\]
we rewrite as
\[\begin{cases}\Delta A_{\varepsilon}=&-\frac{\sigma_{\varepsilon,2}}{\pi r}e ^{\tilde{\omega}_{\varepsilon}}\left(R_{\varepsilon}(r,0)+R_{\varepsilon}(-r,0 )\right)\\ &+\frac{1}{\pi r}\left((\sigma_{\varepsilon,2}-1)e^{\tilde{u}_{ \varepsilon}}+\sigma_{\varepsilon,2}e^{\tilde{v}_{\varepsilon}}\right)\left(Z _{1}\left(\frac{(r,0)}{\varepsilon}\right)+Z_{1}\left(\frac{(-r,0)}{ \varepsilon}\right)\right)\\ -\partial_{x_{2}}A_{\varepsilon}=\sigma_{\varepsilon,2}e^{\tilde{\omega}_{ \varepsilon}}R_{\varepsilon}+(\sigma_{\varepsilon,2}-1)e^{\tilde{u}_{ \varepsilon}}Z_{1}\left(\frac{x}{\varepsilon}\right)+\sigma_{\varepsilon,2}e^{ \tilde{v}_{\varepsilon}}Z_{1}\left(\frac{x}{\varepsilon}\right)&\text{ on }\mathbb{R}\times\{0\}\;.\end{cases} \tag{3.59}\]
Similarly to arguments on (3.40) at the end of subsection 3.4.1, working at the scales \(r_{\varepsilon}\) and \(\varepsilon\) give (3.57).
Now, from the equation (3.45), we deduce
\[\Delta M_{\varepsilon}=\frac{e^{\tilde{u}_{\varepsilon}}}{r}\left(R_{ \varepsilon}(r,0)+R_{\varepsilon}(-r,0)\right)+\left(\sigma_{\varepsilon,2}-1 \right)\Delta m_{\varepsilon}+\frac{e^{\tilde{v}_{\varepsilon}}}{r}\left( \tilde{\varphi}_{\varepsilon,2}(r,0)+\tilde{\varphi}_{\varepsilon,2}(-r,0) \right)\;. \tag{3.60}\]
Integrating on this equation, we have for \(r\leq 1\):
\[\begin{split} M_{\varepsilon}(r)&-M_{\varepsilon}(1)=- \int_{1}^{r}\frac{1}{s}\left(\int_{1}^{s}\Delta M_{\varepsilon}tdt\right)ds\\ =&-\int_{1}^{r}\frac{1}{s}\left(\int_{1}^{s}e^{ \tilde{u}_{\varepsilon}}\left(R_{\varepsilon}(t,0)+R_{\varepsilon}(-t,0) \right)dt\right)ds+\left(\sigma_{\varepsilon,2}-1\right)\left(m_{\varepsilon} (r)-m_{\varepsilon}(1)\right)\\ &-\int_{1}^{r}\frac{1}{s}\left(\int_{1}^{s}e^{\tilde{v}_{ \varepsilon}}\left(\tilde{\varphi}_{\varepsilon,2}(t,0)+\tilde{\varphi}_{ \varepsilon,2}(-t,0)\right)dt\right)ds+\left(1-r\right)\left(R_{\varepsilon} \right)^{\prime}(1)\end{split} \tag{3.61}\]
Then, using that \(\left(R_{\varepsilon}\right)^{\prime}(1)=O\left(\varepsilon\right)\), that
\[\int_{1}^{r}\frac{1}{s}\left(\int_{1}^{s}e^{\tilde{u}_{\varepsilon}}dt\right) ds=\ln\frac{\varepsilon}{r}\left(\arctan\frac{1}{\varepsilon}-\arctan\frac{r}{ \varepsilon}\right)+\int_{\frac{r}{\varepsilon}}^{\frac{1}{\varepsilon}} \frac{2\ln u}{1+u^{2}}du\]
and that \(m_{\varepsilon}\) is uniformly bounded, by definition of \(\delta_{\varepsilon}\) we obtain
\[\left|M_{\varepsilon}(r)-M_{\varepsilon}(1)\right|\leq\left\|R_{\varepsilon} \right\|_{\infty}\left(\ln\frac{\varepsilon}{r}\left(\arctan\frac{1}{ \varepsilon}-\arctan\frac{r}{\varepsilon}\right)+\int_{\frac{r}{\varepsilon} }^{\frac{1}{\varepsilon}}\frac{2\ln u}{1+u^{2}}du\right)+O(\delta_{\varepsilon }+\varepsilon)\;. \tag{3.62}\]
In particular, for \(r=r_{\varepsilon}\), with (3.53), \(R_{\varepsilon}(r_{\varepsilon})=\left\|R_{\varepsilon}\right\|_{\infty}\), \(\varepsilon=o(r_{\varepsilon})\) and (3.57), we obtain that
\[\left\|R_{\varepsilon}\right\|_{\infty}\leq\left|R_{\varepsilon}(1)\right|+O(\varepsilon) \tag{3.63}\]
and applying again (3.62) for \(r=\rho\varepsilon\) with \(\rho\) large enough, we obtain that
\[\left|R_{\varepsilon}(1)\right|\leq 2\left|R_{\varepsilon}(\rho\varepsilon) \right|+O\left(\varepsilon\right) \tag{3.64}\]
as \(\varepsilon\to 0\). By (3.54), we obtain (3.55).
We conclude from (3.53) and (3.55) that \(\delta_{\varepsilon}=o(\varepsilon)\), which completes the argument of subsection 3.4.2.
|
2306.03665 | A New Approach to Measure Fundamental Microstructural Influences on the
Magnetic Properties of Electrical Steel using a Miniaturized Single Sheet
Tester | Magnetic properties of electrical steel are usually measured on Single Sheet
Testers, Epstein frames or ring cores. Due to the geometric dimensions and
measurement principles of these standardized setups, the fundamental
microstructural influences on the magnetic behavior, e.g., deformation
structures, crystal orientation or grain boundaries, are difficult to separate
and quantify. In this paper, a miniaturized Single Sheet Tester is presented
that allows the characterization of industrial steel sheets as well as from in
size limited single, bi- and oligocrystals starting from samples with
dimensions of 10x22 mm. Thereby, the measurement of global magnetic properties
is coupled with microstructural analysis methods to allow the investigation of
micro scale magnetic effects. An effect of grain orientation, grain boundaries
and deformation structures has already been identified with the presented
experimental setup. In addition, a correction function is introduced to allow
quantitative comparisons between differently sized Single Sheet Testers. This
approach is not limited to the presented Single Sheet Tester geometry, but
applicable for the comparison of results of differently sized Single Sheet
Testers. The results of the miniaturized Single Sheet Tester were validated on
five industrial electrical steel grades. Furthermore, first results of
differently oriented single crystals as well as measurements on grain-oriented
electrical steel are shown to prove the additional value of the miniaturized
Single Sheet Tester geometry. | Nora Leuning, Martin Heller, Markus Jaeger, Sandra Korte-Kerzel, Kay Hameyer | 2023-06-06T13:26:58Z | http://arxiv.org/abs/2306.03665v1 | A New Approach to Measure Fundamental Microstructural Influences on the Magnetic Properties of Electrical Steel using a Miniaturized Single Sheet Tester
###### Abstract
Magnetic properties of electrical steel are usually measured on Single Sheet Testers, Epstein frames or ring cores. Due to the geometric dimensions and measurement principles of these standardized setups, the fundamental microstructural influences on the magnetic behavior, e.g., deformation structures, crystal orientation or grain boundaries, are difficult to separate and quantify. In this paper, a miniaturized Single Sheet Tester is presented that allows the characterization of industrial steel sheets as well as of size-limited single, bi- and oligocrystals starting from samples with dimensions of \(10\times 22\) mm. Thereby, the measurement of global magnetic properties is coupled with microstructural analysis methods to allow the investigation of micro scale magnetic effects. An effect of grain orientation, grain boundaries and deformation structures has already been identified with the presented experimental setup. In addition, a correction function is introduced to allow quantitative comparisons between differently sized Single Sheet Testers. This approach is not limited to the presented Single Sheet Tester geometry, but is applicable to the comparison of results of differently sized Single Sheet Testers. The results of the miniaturized Single Sheet Tester were validated on five industrial electrical steel grades. Furthermore, first results of differently oriented single crystals as well as measurements on grain-oriented electrical steel are shown to prove the additional value of the miniaturized Single Sheet Tester geometry.
keywords: Electrical steel, Miniaturized Single Sheet Tester, Single Crystals, Deformation, Grain boundaries, Electrical steel, FeSi +
Footnote †: journal: Journal of Magnetism and Magnetic Materials
## 1 Introduction
Magnetic properties of non-grain oriented (NO) and grain oriented (GO) electrical steels are usually obtained by measurements on standardized measurement sensors according to international standards, such as IEC-60404. For the macroscopic evaluation of the magnetic properties of electrical steel sheet this is sufficient. However, for a more detailed consideration of microstructural effects on the magnetic properties these standardized sensors are not suitable as the results are insufficiently spatially resolved. In order to improve the understanding of the interrelations between grain orientation, grain boundaries and deformation mechanisms on the one hand and magnetic properties on the other hand, a miniaturized Single Sheet Tester (SST) was constructed and initial results are presented in this paper.
Electrical steel sheet is used as magnetic core material for electrical machines, thus the magnetic properties are of main concern in the material selection [1]. Electromagnetic simulations are performed during the design stage of electrical machines to determine the relation between design, material choice and operational behavior of the machine. For such electromagnetic simulations, the magnetic properties of the electrical steel sheet material in question have to be modeled based on magnetic measurements of the magnetic permeability and iron loss [2]. These characteristic values are used to compare different electrical steel grades. With standardized measurement setups, the non-linear material behavior can be analyzed, different grades can be compared and iron loss models can be parameterized. Consequently, standardized measurements are crucial for the general application of electrical steel in electrical machines. For the development of improved electrical steel grades and advanced material models, advanced magnetic characterization approaches have to be utilized that go beyond standardized characterization techniques. Advanced methods that are used today include the consideration of vector characteristics of magnetic flux \(B\) and magnetic field \(H\), two-dimensional excitation conditions, rotating magnetic fields or local magnetic properties [3; 4; 5]. However, these techniques are mainly designed for polycrystalline sheet materials. In order to further study fundamental magnetization mechanisms of NO and GO steel another approach has to be developed that enables the quantification of effects on a grain scale.
In order to study individual fundamental microstructure influences, we developed a Miniature Single Sheet Tester (Mini-SST). The minimum sample size is \(10\times 23\) mm. These sample dimensions allow the investigation of grown single crystals with specific orientations, grown bi-crystals with defined grain boundaries or oligocrystals with specifically adjusted deformation structures [6]. The measurements of the magnetic properties are not locally resolved within the \(10\times 17\) mm measurement area of the sample, but the Mini-SST results are coupled
with materials science microstructure investigation methods, i.e., hardness measurements, optical microscopy, X-ray diffraction, or electron backscatter diffraction (EBSD) [6]. This approach allows a correlation of crystallographic texture [7], grain boundaries [8] and deformation mechanisms [9] with the magnetic properties. In addition, due to the small sample size, characterization of polycrystalline materials on a laboratory or industrial scale becomes possible even if the sample volume is small, e.g. during sample preparation of a manufactured motor.
## 2 Miniaturization of a SST
### Standardized Measurement Setups
In general, three methods are used for the standardized magnetic characterization of electrical steel, namely Epstein frames, SST and ring core measurements. In this study, the SST was miniaturized to allow the investigation of fundamental microstructural effects. In practice, none of the three standardized characterization methods outperformed the others, as they all have different advantages and disadvantages. Detailed information on these methods can be found in DIN EN 60404 [10], nevertheless a brief comparison is made here to illustrate the idea of the Mini-SST.
The measurement principle of all three methods can be summarized as follows: A magnetic field is generated by a current running through a copper winding (magnetization coil). The rectangular or ring shaped electrical steel sample is placed in this magnetic field and a secondary copper winding (induction coil) is placed as close as possible to the sample. The voltage, which is induced in this secondary induction winding is proportional to the flux density within the sample. Differences between the setups stem from the sample geometry on the one hand and the magnetic flux path on the other hand. In a ring shaped sample, the magnetic flux is closed entirely by the sample. In an Epstein frame, four electrical steel strip legs are positioned in a rectangle with overlapping edges to close the magnetic flux path entirely by the sample material. In a SST, there is only one sheet sample, which makes it impossible to close the magnetic path over the sample, thus, a double c-yoke is required. These macroscopically different magnetic flux paths lead to differences in the magnetic flux distribution across the sample cross sections. For example, due to the different magnetic path length at the outer and inner circumference of the ring cores and on the four legs of an Epstein frame, a flux concentration at the inner diameter occurs before the material is fully saturated. The resulting flux density distribution in the sample cross sections is inhomogeneous and thus, describes an averaged flux density within the sample. For the purpose of this study, it is necessary to have a homogenous flux density condition within the sample. In a SST, the flux density distribution is homogeneous over the cross section. However, the yoke that is needed to close the magnetic flux can lead to additional losses and are determined by the yokes geometry. This is an effect that needs to be accounted for during the validation of the Mini-SST setup and is discussed in the following sections. Sample sizes for a SST can be as large as \(500\times 500\) mm, whereas samples for the Epstein frame are \(280\times 30\) mm. The newly designed Mini-SST allows a minimum size of \(10\times 22\) mm.
### Mini SST - Geometry
The purpose of the Mini-SST is to enable a detailed characterization of the magnetic properties of single, bi- and oligocrystals with dimensions achievable by crystal growth [6]. Since the size of samples produced by crystal growth is in the range of \(2\,\mathrm{cm}^{2}\), a correspondingly small SST was developed in cooperation with _Brockhaus Measurements_. The outer distance between the magnet yoke legs of the Mini-SST is about \(22\,\mathrm{mm}\), defining the minimum sample length; however, the free magnetic path length \(l_{\mathrm{m}}\) between the yoke poles is about \(16\,\mathrm{mm}\). Consequently, the pole thickness is \(3\,\mathrm{mm}\) on each side. Both the primary winding \(N_{1}\) and the secondary winding \(N_{2}\) have \(60\) turns. The Mini-SST is controlled by an _MPG 200_ test bench from _Brockhaus Measurements_. A picture of the Mini-SST is displayed in Fig. 1.
Magnetic properties are measured by means of electric measurements, i.e., electric current and voltage. The required current is supplied by a power amplifier. Thereby, a magnetic field is created by the primary magnetization winding. The current is measured by means of a temperature-stable, low-inductivity precision resistor. Polarization is determined by measuring the induced voltage in the secondary induction winding. The parallel recording of magnetic field \(H\) and magnetic polarization \(J\) with separate analogue-digital converters enables simultaneous measurement. A control algorithm is used to ensure sinusoidal excitation, where the secondary voltage can be checked and constantly regulated in accordance with the nominal value. The nominal voltage is supplied by a highly stable, digital frequency generator. Amplitude and frequency are set by the _MPG 200_ and _MPG Expert_ software according to the entered sample data and the default values.
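A schematic sketch of this measurement chain is given below. It assumes the standard SST relations, namely field strength from the primary current via \(H=N_{1}i_{1}/l_{\mathrm{m}}\) and flux density from the integrated secondary voltage via \(B=\frac{1}{N_{2}A_{\mathrm{sample}}}\int u_{2}\,dt\), together with the winding data stated above; the waveforms, the sample cross section and any software-internal corrections of the _MPG 200_ are placeholders and not reproduced from the actual system.

```python
# Schematic post-processing of sampled SST waveforms (assumed standard
# relations, not the vendor implementation): H from the primary current and
# B from the integrated secondary voltage.
import numpy as np

N1, N2 = 60, 60            # turns of the Mini-SST windings (from the text)
l_m = 16e-3                # free magnetic path length in m (from the text)
A_sample = 10e-3 * 0.5e-3  # sample cross section in m^2 (assumed 10 mm x 0.5 mm)

f = 50.0                   # excitation frequency in Hz
t = np.linspace(0.0, 1.0 / f, 1000, endpoint=False)
i1 = 0.5 * np.sin(2 * np.pi * f * t)       # assumed primary current in A
u2 = 0.14 * np.cos(2 * np.pi * f * t)      # assumed secondary voltage in V

H = N1 * i1 / l_m                                    # field strength in A/m
B = np.cumsum(u2) * (t[1] - t[0]) / (N2 * A_sample)  # flux density in T
B -= B.mean()                                        # remove the integration offset

print(f"peak H: {H.max():.1f} A/m, peak B: {B.max():.3f} T")
```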
Figure 1: Photograph of the \(10\times 22\) mm Mini-SST.
## 3 Description of the Correction Function
It is well known that the characterization of magnetic properties strongly depends on the geometric conditions of the measurement setup, e.g., Epstein frame to SST as well as differently sized SST, which are summarized in [11]. Even different laboratories with similar geometric conditions, as presented in [12] for Epstein measurements, come to different results. Therefore, the comparability between measurement results becomes a matter of reference samples as well as measurement and correction techniques. To ensure comparability between the Mini-SST measurements and previous measurements of the authors, a general correction function is developed and parameterized to the reference SST of the Institute of Electrical Machines (IEM).
In Fig. 2, magnetization curves tested at \(50\,\mathrm{Hz}\) are displayed for three differently sized SSTs of \(120\times 120\) mm, \(60\times 60\) mm and \(10\times 20\) mm. To ensure comparability, the same sample was measured on all SSTs. A strip of \(d_{\mathrm{strip}}=10\,\mathrm{mm}\) and \(l_{\mathrm{strip}}=120\,\mathrm{mm}\) was utilized for this. To account for the cut-edge effect, the sample for the filled \(120\times 120\) mm SST consisted of \(12\) sample strips, each with a width of \(10\,\mathrm{mm}\) and a length of \(120\,\mathrm{mm}\), that are taped together according to [13]. When looking at the magnetization curves, two effects can be observed. At low magnetic fields \(H\), as can be seen in Fig. 2 (a), the smaller SSTs generally show lower magnetization compared to larger SSTs. Furthermore, the curves for one \(10\,\mathrm{mm}\) sample strip are identical to those for a fully filled SST (\(12\times 10\,\mathrm{mm}\)) at low magnetic fields. At high magnetic fields \(H\), the fully filled reference SST shows the hardest magnetization behavior right before the \(10\times 20\) mm SST. Both the \(60\times 60\) mm SST and the reference SST with just one sample strip need much lower field strengths to seemingly magnetize the sample to \(1.8\,\mathrm{T}\).
The observed behavior is attributed to two separate effects. The first one is linked to the magnetic resistance of the yoke, whereas the second effect is linked to the influence of stray flux within the unfilled coil cross section. Due to the required size of the yoke's pole width, non-ideal conditions in the ratio of the free magnetic path length and the yoke height occur when the SST is downscaled. Thereby, the permeability of the yoke becomes an important factor. Moreover, at high magnetic fields, the air flux needs to be considered. The free space in the coil depends on the solenoid housing, which is fixed, and on the respective sample cross section (i.e., sample thickness and width). This explains the strong difference between the results for the filled and non-filled reference SST, as well as for the differently sized SSTs with different solenoid housing.
To account for these differences and ensure comparability, a correction function for the \(B(H)\) measurement results has been developed and parametrized to the \(120\times 120\) mm IEM SST, which, as previously stated, serves as the reference SST within this study. Again, the correction function for the Mini-SST is necessary to account for the magnetic resistance of the yoke and the air flux in the solenoid, which enables a quantitative comparison to the reference SST. The proposed method describes a general approach, that can be transferred to other research facilities and SST sizes to improve comparability between measurement setups and research facilities.
For a designated reference SST, certain assumptions have to be made. Firstly, it is assumed that the reference SST is ideal and only measures the magnetic resistance of the sample. Hence, there is no stray flux outside the yoke, which has a permeability \(\mu_{\mathrm{r}}=\infty\). Therefore, the magnetic resistance is only composed of the resistance of the sample and the air in the coil. As a framework for the correction function, a magnetic equivalent circuit is used (Fig. 3), where:
* Mini-SST geometry is known: \(A_{\mathrm{yoke}}\), \(l_{\mathrm{yoke}}\),
* Sample geometry is known: \(A_{\mathrm{sample}}\), \(l_{\mathrm{sample}}\),
* Magnetic flux is known: \(\Phi=I\cdot N\) and
* Total flux linkage \(\Psi=\Psi_{\mathrm{Air}}+\Psi_{\mathrm{Sample}}\)
with \(A\) representing a cross section, \(l\) representing a length, \(I\) describing an electric current and \(N\) the number of turns. The magnetic resistance can generally be calculated using the following equation:
\[R_{\mathrm{mag}}=\frac{l}{\mu_{0}\mu_{\mathrm{r}}A}. \tag{1}\]
Figure 2: Measurements of an M330-50A (M1) reference sample of \(d_{\mathrm{strip}}=1\,\mathrm{cm}\) on differently sized SSTs in the low (a) and high (b) magnetic field region, together with a measurement of \(12\) strips of \(d_{\mathrm{strip}}=1\,\mathrm{cm}\) each.
### Correction of the Magnetic Field Strength
The measured magnetization curves \(B_{\rm meas.}(H_{\rm meas.})\) of the Mini-SST cannot be directly compared to the reference, as previously stated; thus a corrected \(B_{\rm meas.\ corr.}(H_{\rm meas.\ corr.})\) has to be calculated. Inputs for the calculation of the corrected magnetic field \(H_{\rm meas.\ corr.}\) are the measured flux density \(B_{\rm meas.}\) in T and the measured magnetic field strength \(H_{\rm meas.}\) in A/m. With the following equations the corrected field strength can be determined.
The total flux linkage \(\Psi\) can be calculated from the flux density \(B_{\rm meas.}\) in the sample, as given by the measurement system, and the cross section of the sample \(A_{\rm sample}\), which is determined geometrically (thickness \(\times\) width).
\[\Psi=B_{\rm meas.\ }\cdot A_{\rm sample} \tag{2}\]
The magnetic flux \(\Phi\) is the product of the measured magnetic field strength \(H_{\rm meas.}\) and the magnetic path length \(l_{\rm m}\) between the yoke legs of the Mini-SST according to
\[\Phi=H_{\rm meas.\ }\cdot l_{\rm m}. \tag{3}\]
With the information of \(\Psi\) (eq. (2)) and the cross section of the yoke pole surfaces \(A_{\rm yoke}\), the magnetic flux density in the yoke \(B_{\rm yoke}\) can be calculated
\[B_{\rm yoke}=\frac{\Psi}{A_{\rm yoke}} \tag{4}\]
For the calculation of the magnetic field in the yoke \(H_{\rm yoke}\), the permeability of the yoke needs to be determined. As this value cannot be directly measured, a fitting has been performed. As depicted in Fig. 4, the permeability of the yoke is fitted to \(B_{\rm yoke}\) measurement values below \(0.07\,\rm T\) of two materials and can be described by the following empirical equation:
\[\mu_{\rm r,\ yoke}=\frac{7750}{1+2^{-160\,B_{\rm yoke}}}-2000 \tag{5}\]
\[H_{\rm yoke}=\frac{B_{\rm yoke}}{\mu_{\rm r,\ yoke}\cdot\mu_{0}} \tag{6}\]
With equations (2) to (6), all the variables needed to calculate the corrected magnetic field taking the yoke permeability into account are known, resulting in the following equation:
\[H_{\rm meas.\ corr.}=\frac{1}{l_{\rm m}}\cdot(\Phi-H_{\rm yoke}l_{\rm yoke}). \tag{7}\]
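The field-strength correction of equations (2) to (7) can be summarized in a few lines of code. The following Python sketch is our own illustration and is not part of the _MPG 200_ / _MPG Expert_ software; the geometric inputs (\(A_{\rm sample}\), \(A_{\rm yoke}\), \(l_{\rm m}\), \(l_{\rm yoke}\)) are assumed to be known in SI units, and the function names are chosen only for readability.

```python
import numpy as np

MU_0 = 4 * np.pi * 1e-7  # vacuum permeability in H/m

def mu_r_yoke(B_yoke):
    """Empirical yoke permeability fit, eq. (5)."""
    return 7750.0 / (1.0 + 2.0 ** (-160.0 * B_yoke)) - 2000.0

def correct_H(B_meas, H_meas, A_sample, A_yoke, l_m, l_yoke):
    """Field-strength correction, eqs. (2)-(7).

    B_meas -- measured flux density in T (scalar or array)
    H_meas -- measured field strength in A/m
    A_*    -- cross sections in m^2, l_* -- lengths in m
    """
    psi = B_meas * A_sample                         # eq. (2): total flux linkage
    phi = H_meas * l_m                              # eq. (3): Phi = H_meas * l_m
    B_yoke = psi / A_yoke                           # eq. (4): flux density in the yoke
    H_yoke = B_yoke / (mu_r_yoke(B_yoke) * MU_0)    # eq. (6): field in the yoke
    return (phi - H_yoke * l_yoke) / l_m            # eq. (7): corrected field strength
```

For the Mini-SST described above, \(l_{\rm m}\approx 0.016\,\mathrm{m}\); \(A_{\rm yoke}\), \(l_{\rm yoke}\) and \(A_{\rm sample}\) follow from the geometry of the specific setup and sample.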
### Correction of the Magnetic Flux Density
To account for the influence of stray flux, \(H_{\rm stray}\) (8) and \(B_{\rm stray}\) (9) are calculated with the help of the magnetic flux \(\Phi\) and the properties of the yoke. These parameters are used to subsequently determine the flux \(\Psi_{\rm stray}\) and \(\Psi_{\rm sample}\) in equations (10) and (11).
\[H_{\rm stray}=\frac{1}{l_{\rm stray}}\cdot(\Phi-H_{\rm yoke}l_{\rm yoke}) \tag{8}\]
\[B_{\rm stray}=H_{\rm stray}\cdot\mu_{0} \tag{9}\]
\[\Psi_{\rm stray}=B_{\rm stray}\cdot A_{\rm coil} \tag{10}\]
\[\Psi_{\rm sample}=\Psi-\Psi_{\rm stray} \tag{11}\]
The value for the length of the magnetic stray lines \(l_{\rm stray}\) is based on the assumption that the flux lines are longer than the direct connection between the poles of the magnetic yoke \(l_{\rm m}\), which is \(16\,\rm mm\). A value of \(+12.5\%\) is assumed, which, in our case, corresponds to \(18\,\rm mm\). The free cross section of the solenoid \(A_{\rm coil}\) is approximated with the help of geometrical measurements to be \(42\,\rm mm^{2}\). Finally, the corrected flux density \(B_{\rm meas.\ corr}\) in the sample can be determined according to (12):
\[B_{\rm meas.\ corr.}=\frac{\Psi_{\rm sample}}{A_{\rm sample}}. \tag{12}\]
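A corresponding sketch of the flux-density correction of equations (8) to (12), again our own illustration rather than the measurement software: it takes the yoke field \(H_{\rm yoke}\) from equation (6) as an input and uses the stray path length of 18 mm and coil cross section of 42 mm² assumed above as defaults.

```python
import numpy as np

MU_0 = 4 * np.pi * 1e-7  # vacuum permeability in H/m

def correct_B(B_meas, H_meas, H_yoke, A_sample, l_m, l_yoke,
              l_stray=0.018, A_coil=42e-6):
    """Flux-density correction, eqs. (8)-(12); lengths in m, areas in m^2."""
    psi = B_meas * A_sample                        # eq. (2): total flux linkage
    phi = H_meas * l_m                             # eq. (3)
    H_stray = (phi - H_yoke * l_yoke) / l_stray    # eq. (8): stray field
    B_stray = MU_0 * H_stray                       # eq. (9)
    psi_stray = B_stray * A_coil                   # eq. (10): stray flux linkage
    psi_sample = psi - psi_stray                   # eq. (11): flux linkage in the sample
    return psi_sample / A_sample                   # eq. (12): corrected flux density
```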
In Fig. 5 results of the corrected \(B(H)\)-curves are displayed in comparison to the uncorrected measurements on the Mini-SST and measurements of the same material on the reference SST. It is evident that the identified and fitted parameters of the correction function lead to a measurement data correction that enables a quantitative comparison between the differently sized SSTs. In both the low and high \(H\) region, the curves for the reference and corrected \(B(H)\) characteristics are virtually congruent.
Figure 4: Determination of the fitting function to account for the permeability of the Mini-SST yoke.
Figure 3: Magnetic equivalent circuit framework to calculate an SST correction function.
## 4 Validation of the Correction Function
In order to validate the correction function, the approach is tested on a total of five different materials with different chemical compositions, grain sizes, sheet thicknesses and resulting magnetic properties. An overview of the tested materials is given in Table 1.
In Fig. 6 to Fig. 9 the results are displayed for all materials. In general, the correction function improves the comparability for all materials; however, the best agreement is obtained for M1. This could be due to the small grain size, together with the effect of crystal orientation and resulting stray fields. This is a topic
Figure 8: Measurements and corrected \(B(H)\)-curves of Material 4.
Figure 6: Measurements and corrected \(B(H)\)-curves of Material 2.
Figure 7: Measurements and corrected \(B(H)\)-curves of Material 3.
Figure 9: Measurements and corrected \(B(H)\)-curves of Material 5.
that can further be studied with the validated Mini-SST. The correction function can also be applied to GO material, which helps the interpretation of results for single crystals due to the large grains and variable sample orientation. As the purpose of the Mini-SST is the characterization of samples that cannot be tested on a reference SST due to their size, the function cannot be parametrized on each material but must be generally applicable. The results presented suggest that a quantitative comparison is enabled with sufficient accuracy.
## 5 Measurements of undeformed single crystals and GO material
In this section, preliminary results of Mini-SST measurements of single crystal as well as GO material coupled with EBSD measurements are presented.
The single crystals were produced in a self-built induction furnace that works according to the Bridgman-Stockbarger method. Subsequently, the sheet geometry necessary for the Mini-SST was cut out of the cylindrical crystal growth geometry with the help of an electrical discharge machine. To make sure that no heat-affected or deformation layer remains, the sample was etched with nitric acid (100 mL HNO3, 150 mL H2O), mechanically ground and polished, and the final surface finish for EBSD was achieved with electropolishing (A2 without water for 15 s at 24 V). A detailed description can be found in [6; 15]. The GO material was cut from industrial transformer sheets with a strong Goss texture along different orientations relative to the rolling direction (RD).
The single crystals were characterized on the Mini-SST at 50 Hz with peak inductions between 0.1 T and 1.5 T in 0.1 T-steps. The magnetization curves \(J(H)\) are shown in Fig. 10 (a). The magnetic anisotropy of the three common axes is clearly visible. Magnetization of the [100] single crystal is easiest, as expected. At first sight, the curves for the [110] and [111] single crystals show an unexpected behavior as the magnetization in [110] seems to be harder compared to the [111] direction. In order to examine this behavior further, the results have been compared to the data of Honda et al. [14] and their work on the magnetization behavior of single crystalline iron, which is shown in Fig. 10 (b). It can be seen that the results actually show similar behavior when looking at higher field strengths. Due to the different measurement setup, Honda was able to measure higher field strength up to saturation. The slight differences of the curves can stem from the difference in chemical composition as well as the manufacturing and preparation of samples. Looking at the low magnetic field region, as displayed
| | \(d_{\text{sheet}}\) | orientation | \(d_{\text{GS}}\) |
| --- | --- | --- | --- |
| Material M1 | 0.50 mm | NO, RD | 52 µm |
| Material M2 | 0.35 mm | NO, RD | 69 µm |
| Material M3 | 0.27 mm | NO, RD | 134 µm |
| Material M4 | 0.20 mm | NO, RD | 95 µm |
| Material M5 | 0.18 mm | GO, TD | 1 cm |

Table 1: Nominal thickness \(d_{\text{sheet}}\), orientation and mean grain diameter \(d_{\text{GS}}\) of the studied materials.
Figure 10: Results of single crystal measurements and comparison to literature data [14].
in Fig. 10 (c), the same initial crossing of the [110] and [111] curves can be observed; in Honda's results this could have been due to measurement scatter, whereas the Mini-SST measurements show a smooth transition. In ongoing work, the single crystals from which the samples analyzed here have been cut are subsequently plastically deformed in several steps. After each step, one sheet sample is cut for Mini-SST measurements. This is an additional advantage of the Mini-SST setup, as all deformed samples can be cut out of the same single crystal [15].
In a second example, results of industrial GO Mini-SST measurements are displayed in Fig. 11. Six samples per sheet direction in RD (0\({}^{\circ}\)), transverse direction (TD) (90\({}^{\circ}\)) and diagonal sheet plane direction (45\({}^{\circ}\)) have been cut from industrial GO sheet. Due to the crystallographic orientation of the present Goss grains, the sample directions along the magnetization direction correspond to the [100], [110] and [111] directions of the unit cells. EBSD measurements were performed on the samples to validate the crystallographic orientation. Inverse pole figures, as depicted in Fig. 11 (a), show which crystallographic plane normal is parallel to a particular sample axis, in this case the rolling direction. Therefore, points in the triangle show the exact orientation of these crystallographic plane normals relative to the rolling direction, and the colour groups them into near 100 (red), 110 (green) and 111 (blue). The corresponding magnetization curves are shown in Fig. 11 (b). These results correspond very well to the single crystal measurements in Fig. 10 (a). Both GO and the single crystals have pronounced crystal orientations. With an SST, magnetic field \(H\) and magnetic polarization \(J\) are treated as scalar properties, although they are actually vectors. For the [100] single crystal and GO cut in RD, the vector and scalar values are expected to be equal, as the easy magnetization direction, and thus the orientation of the magnetic domains, is aligned parallel with the magnetic field generated perpendicular to the magnetizing coil. For differently oriented single crystals and GO cut in unfavourable directions, this is not the case and the domains want to align in the easy directions, which are not parallel to the induced magnetic field. Only the component of the polarization vector parallel to the magnetic field is obtained. When the sample approaches saturation, the domains are forced out of the easy directions into the direction of the applied magnetic field. As a result, the mismatch of vector and scalar properties decreases at high polarizations. The systematic error of neglecting vector properties needs to be accounted for in the evaluation and interpretation of results of SST measurements in general. However, this systematic error is inherent to SST setups and their scalar treatment of vector properties. As the approach aims to enable a comparison between different SSTs, the fundamental measurement principle is not changed.
In the examples given, the chemical composition and sample preparation are different; nevertheless, both examples show how the influence of orientation can be analyzed without the influence of high-angle grain boundaries. In ongoing work, bi-crystals have been successfully grown and subsequently deformed to study the effect of high-angle grain boundaries as well as their deformation behavior. In the future, these measurements may be correlated in more detail with microstructural parameters that can be controlled and quantified in such small samples, such as dislocation density and domain distribution. The measurements and validations shown here highlight that small-scale characterization is a promising method to better understand the influence of microstructural parameters on electrical steel.
## 6 Conclusions
In this paper a miniaturized SST is presented that has been designed to study fundamental microstructural effects on the magnetic properties of electrical steels. A correction function is developed to account for the non-ideal geometric conditions of the setup, i.e., the air flux in the solenoid and the yoke pole to sample ratio, to allow a comparison to a reference SST. A validation of the correction function is performed on five industrial materials. The results of this paper can be summarized in the following points:
* The validation shows that measurement results of the Mini-SST can be quantitatively compared to those of established reference SST setups, after a one-time parametrization on industrial steel sheet.
* The correction function approach can be transferred to other SST setups, as it mainly depends on geometric conditions and a measurement at low magnetic fields to determine a fitting function for the permeability of the yoke.
Figure 11: Results of industrial GO sheet measurements.
* Fundamental micromagnetic effects of orientation, deformation and grain boundaries can be assessed, as the minimum sample size allows the analysis of grown single-, bi- and oligocrystals. Additional microstructural analysis is necessary to link the effects to the magnetic results.
* For industrial NO and GO material, the Mini-SST can be useful in cases where sample material is scarce, e.g., experimentally produced laboratory grades or materials of manufactured machines.
With the Mini-SST and the parametrized correction function, studies of microstructural influences and of industrial NO material can be performed analogously to conventionally sized SSTs, and a quantitative comparison is enabled. The challenge in transferring the correction function to variously sized SSTs lies in the determination of the geometric parameters and especially in the assumption of the stray field length. This value cannot be measured directly, so it needs to be fitted empirically or possibly simulated.
## Acknowledgement
The Mini-SST was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) as part of the DFG-research group - "FOR 1897 - Low-Loss Electrical Steel for Energy-Efficient Electrical Drives".
|
2302.00843 | Computational Dualism and Objective Superintelligence | The concept of intelligent software is flawed. The behaviour of software is
determined by the hardware that "interprets" it. This undermines claims
regarding the behaviour of theorised, software superintelligence. Here we
characterise this problem as "computational dualism", where instead of mental
and physical substance, we have software and hardware. We argue that to make
objective claims regarding performance we must avoid computational dualism. We
propose a pancomputational alternative wherein every aspect of the environment
is a relation between irreducible states. We formalise systems as behaviour
(inputs and outputs), and cognition as embodied, embedded, extended and
enactive. The result is cognition formalised as a part of the environment,
rather than as a disembodied policy interacting with the environment through an
interpreter. This allows us to make objective claims regarding intelligence,
which we argue is the ability to "generalise", identify causes and adapt. We
then establish objective upper bounds for intelligent behaviour. This suggests
AGI will be safer, but more limited, than theorised. | Michael Timothy Bennett | 2023-02-02T03:02:16Z | http://arxiv.org/abs/2302.00843v6 | # Enactivism & Objectively Optimal
###### Abstract
Software's effect upon the world hinges upon the hardware that interprets it. This tends not to be an issue, because we standardise hardware. AI is typically conceived of as a software "mind" running on such interchangeable hardware. This formalises mind-body dualism, in that a software "mind" can be run on any number of standardised bodies. While this works well for simple applications, we argue that this approach is less than ideal for the purposes of formalising artificial general intelligence (AGI) or artificial super-intelligence (ASI). The general reinforcement learning agent AIXI is pareto optimal. However, this claim regarding AIXI's performance is highly subjective, because that performance depends upon the choice of interpreter. We examine this problem and formulate an approach based upon enactive cognition and pancomputationalism to address the issue. Weakness is a measure of plausibility, a "proxy for intelligence" unrelated to compression or simplicity. If hypotheses are evaluated in terms of weakness rather than length, then we are able to make objective claims regarding performance (how effectively one adapts, or "generalises" from limited information). Subsequently, we propose a definition of AGI which is objectively optimal given a "vocabulary" (body etc) in which cognition is enacted, and of ASI as that which finds the optimal vocabulary for a purpose and then constructs an AGI1.
Footnote 1: Technical appendices are available on GitHub [1].
## 1 Introduction
AIXI [2] provides us with a mathematically precise notion of AGI. Its performance is measured according to Legg-Hutter intelligence [3], a proxy for "the ability to satisfy goals in a wide range of environments" [4]. It employs Solomonoff Induction [5, 6] to make accurate inferences from minimal data. Because of this it is pareto optimal, meaning there is no agent which outperforms AIXI in one environment and equals its performance in all others. Unfortunately, this claim is highly subjective, because it depends upon the choice of Universal Turing Machine (UTM) [7]. We explore this problem, and formulate an approach that combines enactive cognition [8], pancomputationalism [9] and weakness as a proxy for intelligence [10, 1].
### An informal explanation of AIXI
Our purpose is to explain the aforementioned subjectivity and how it might be addressed, rather than every detail of how AIXI functions. This paper is as philosophical as it is mathematical. As such, the following explanation of AIXI is informal and involves significant abuse of notation.
#### 1.1.1 Models:
A model can be understood as a program [2; 11] or set of rules [10] describing how aspects of the world relate to one another. A model can be used as a hypothesis, to _explain_ aspects of the present by pointing out which aspects of the past caused the present [12]. Likewise, the more distant past can explain the more recent past, and the present can explain the future. Of course, a model of the world is not the world itself. Some models will more accurately represent the world than others. AIXI is not a comment on desirable behaviour, values or goals, but is built upon the assumption that such things are measured by a reward function that is given [13]. To satisfy goals, AIXI must predict2 the consequences of its actions. To make predictions, an agent requires a model. If a model approximates the environment well enough, then the agent can accurately predict the consequences of its actions, and so form a plan that will cause its goals to become satisfied. The more accurate a model is, the more likely an agent will be able to satisfy its goals. AIXI is able to satisfy goals because it has a means of discerning which models will be most accurate.
Footnote 2: To accurately predict the future means to infer which future among possible futures has the highest probability of occurring.
**Universal priors:** How AIXI obtains an accurate representation of the world can be informally understood in two parts3. First, AIXI considers only models that explain the past and present precisely (by which we mean that each model is a lossless archive of past and present). Any model that would predict a different outcome to past events than what actually took place is discarded, leaving AIXI only with models consistent with what it knows to be true. While these models are equivalent with respect to the past, they may differ in what future they predict. AIXI must identify which of those models most accurately predicts the future. For this purpose it is assumed that simpler models are more plausible representations of the world (in line with Ockham's Razor [14]). Simplicity is measured in terms of Kolmogorov Complexity (KC) [15]. The KC of an object is the length of the shortest self extracting archive of that object. To give some intuition as to what this means, there may exist many models that behave in exactly the same manner in all circumstances. Those models are really the same model represented in different ways, and KC is the length of its shortest representation in a language. Models with smaller KC tend to make more accurate predictions, formalising Ockham's Razor. This is why some believe that compression and intelligence are closely related [16], because compression can be
used to measure simplicity and so identify explanations that are more likely to be true. AIXI prefers models that have smaller KC, and in doing so maximises the accuracy of its predictions4. AIXI estimates one thing (model accuracy), by measuring another seemingly unrelated thing (KC). In other words, it uses compression as a proxy. This proxy for intelligence (defined in terms of the ability to satisfy goals across a wide range of environments) gives AIXI what is called "a universal prior" [5, 6], a means of deciding which among valid models are best. This is also why AIXI is also called a _universal_ artificial intelligence [17]. So to reiterate, AIXI's intelligent behaviour stems from an accurate model. How AIXI obtains an accurate model can be understood (very informally) in two steps:
Footnote 4: This is a simplification. More formally, if the model which generated past data is indeed computable, then the simplest model will dominate the Bayesian posterior as more and more data is observed. Eventually, you will have identified the correct model and can use that model to generate the next sample (predict the future).
1. Collect models whose predictions are consistent with what we've observed5.
Footnote 5: Meaning they all “predict” the exact same past.
2. Use a proxy for intelligence (Kolmogorov Complexity) to decide which among those models will most accurately predict the future.
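To make the two steps concrete, here is a toy Python sketch of the same idea (our illustration, not AIXI, and with invented "program lengths" in place of true Kolmogorov Complexity): candidate models of a binary sequence that contradict the observed past are discarded, and the survivors are weighted by \(2^{-\text{length}}\), so that the simplest consistent model dominates the prediction of the next bit.

```python
from typing import Callable, List, Tuple

# Each toy "model" is (description, invented length in bits, prediction function).
Model = Tuple[str, int, Callable[[int], int]]

MODELS: List[Model] = [
    ("all zeros",              2, lambda t: 0),
    ("all ones",               2, lambda t: 1),
    ("alternate 0,1",          3, lambda t: t % 2),
    ("alternate 0,1 then 1s",  6, lambda t: t % 2 if t < 4 else 1),
]

def predict_next(past: List[int]) -> float:
    # Step 1: keep only models that reproduce the observed past exactly.
    consistent = [m for m in MODELS
                  if all(m[2](t) == bit for t, bit in enumerate(past))]
    # Step 2: weight survivors by 2^(-length), a crude stand-in for the
    # universal prior, and return the weighted probability that the next bit is 1.
    weights = [2.0 ** -length for _, length, _ in consistent]
    total = sum(weights)
    return sum(w for (_, _, f), w in zip(consistent, weights)
               if f(len(past)) == 1) / total

print(predict_next([0, 1, 0, 1]))  # ~0.11: the shorter alternating model dominates
```

With the past \(0,1,0,1\), only the two alternating models survive, and the shorter one dominates, so the predicted probability of a \(1\) is low.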
### Subjectivity
KC is measured in the context of a UTM [7]. By itself, changing the UTM would not meaningfully affect performance. When used in a universal prior to predict deterministic binary sequences, the number of incorrect predictions a model will make is bounded by a multiple of the KC of that model [18]. If the UTM is changed the number of errors only changes by a constant [19, pp. 2.1.1 & 3.1.1], so changing the UTM doesn't change which model is considered most plausible. However, when AIXI employs this prior in an _interactive_ setting, a problem occurs [7]. To explain in simplified terms (with abuse of notation), assume a program \(f_{1}\) is software, \(f_{2}\) is an interpreter and \(f_{3}\) is the reality (an environment, body etc) within which goals are pursued. AIXI is the optimal choice of \(f_{1}\) to maximise the performance of \(f_{3}(f_{2}(f_{1}))\). However, in an interactive setting one's perception of success may not match reality.
"Legg-Hutter intelligence [3] is measured with respect to a fixed UTM.
AIXI is the most intelligent policy if it uses the same UTM." [7, p.10]
If intelligence is measured with respect to one UTM while AIXI runs on another, then this is like AIXI being engaged in one reality, while success is determined by another, entirely different reality. Using our analogy of functions, performance in terms of \(f_{3}(f_{2}(f_{1}))\) depends upon \(f_{2}(f_{1})\), not \(f_{1}\) alone. Thus a claim regarding the performance of \(f_{1}\) alone would be _subjective_, in that it depends upon \(f_{2}\).
"This undermines all existing optimality properties for AIXI." [7, p.1]
A UTM is an interpreter. As Leike and Hutter pointed out, Legg-Hutter intelligence is measured with respect to a fixed interpreter. The problem disappears if AIXI uses that same interpreter, which is easier said than done. This paper explores how we might formalise cognition in a different manner, so that performance is independent of the choice of interpreter. To do so we need to formalise the mind as part of the environment, and the environment as software. Using the analogy from earlier, this would give us \(f_{2}(f_{3}(f_{1}))\) instead of \(f_{3}(f_{2}(f_{1}))\). In that case, performance would then be measured in terms of \(f_{3}(f_{1})\), and would be unaffected by interpreter \(f_{2}\).
## 2 Formalising Enactivism
AI is typically conceived of as a software "mind" running on an interchangeable hardware body. The hardware interacts with an environment, and the software interacts with the hardware. This formalises mind-body dualism, in that we could take the software "mind" and run it on any number of different bodies. However, this portrayal of cognition is flawed. What computer code does depends on the interpreter, we just tend to standardise system architectures. An alternative to dualism is enactivism [8] which holds that mind and body are inseparable, embedded in time and place. Cognitive activity extends into the environment, and is enacted through what the organism does. For example, if someone uses pen and paper to compute the solution to a math problem, then their cognition is extending into and enacted within the environment [20]. Formalising enactivism can address problems associated with dualism. However it is unclear how enactive cognition might work computationally, because it blurs the boundary between the agent and environment. To address this, we look to pancomputationalism [9]. Pancomputationalism holds that everything is a computational system. It follows that we may regard the interpreter \(f_{2}\) as the universe, and the environment \(f_{3}\) as software that runs on \(f_{2}\). Consequently we have \(f_{2}(f_{3}...)\) rather than \(f_{3}(f_{2}...)\). The distinction between mental (software) and physical (hardware) can be discarded. This means we need to represent the model \(f_{1}\) as a part of the environment \(f_{3}\). We do so by merging agent, body and environment into a task [10], formalising instances of intent in such a way as may bear resemblance to Heidegger's Dasein (Being-in-the-world and bound by context) [21].
### A model of the environment within the environment
There exists an isomorphism between declarative and imperative programs (the Curry-Howard isomorphism [22]). As such, we may treat both the model \(f_{1}\) and the environment \(f_{3}\) as declarative programs. Assume a set of declarative programs represents the logical conjunction of its members. Then, for every set of declarative programs there exists a declarative program which is equivalent. If \(f_{1}\) and \(f_{3}\) are sets, we can define \(f_{1}\) as a subset of \(f_{3}\) to represent the model as part of the environment. Because \(f_{1}\subset f_{3}\), the ability to satisfy goals is now measured in terms of \(f_{2}(f_{3})\), and we can reason about the model in objective terms. Going
forward we'll discard \(f_{3},f_{2}\) and \(f_{1}\) in favour of more formal notation, and will refer to the UTM \(f_{2}\) as the pancomputationalist's universe.
Definition 1 (environment): \(\bullet\) We assume a set \(\Phi\) whose elements we call **states**, one of which we single out as the **present state6**. \(\bullet\) A **declarative program** is a function \(f:\Phi\rightarrow\{true,false\}\), and we write \(P\) for the set of all declarative programs. By an **objective truth** about a state \(\phi\), we mean a declarative program \(f\) such that \(f(\phi)=true\).
Footnote 6: Each state is just reality from the perspective of a point along one or more dimensions. States of reality must be separated by something, or there would be only one state. E.G. two different states may be reality at two different points in time.
### We need only model the task, not all of the environment
Enactivism blurs the line between agent and environment, making the distinction unclear. As such, we abandon these separate notions entirely. The distinction is a convenient but unnecessary abstraction [10]. As Heidegger maintained, Being is bound by context [21]. There is no need to define an agent that has no environment, and so there seems to be little point in preserving the distinction. Furthermore, we do not need a model of the environment.
"The best model of the world is the world itself." - Rodney Brooks [23]
The only aspects of the environment that we might actually need to model are those necessary to satisfy goals [24]. What is needed is not a model of the environment but a model describing how to satisfy a goal _while_ embodied and embedded in a particular local environment. Rather than the environment, we model a task. Intuitively, a task might be seen as the instantiation of intent. To avoid confusion going forward we will refer to "the mechanism" by which decisions are made, instead of "the agent". Where a model of an environment may include details needed to predict the environment but not satisfy goals, a model of a task can ignore anything which is not necessary to satisfy the goal. As a result, a separate description of a goal is unnecessary because it is implied by which aspects of the environment are modelled. If we only need to model those aspects of the environment necessary to complete a task, then we are dealing with the necessarily finite physical circuitry with which cognition is enacted. We can represent that circuitry using a finite subset of \(P\) (the set of all declarative programs as per definition 1). This finite circuitry is a language, albeit one whose meanings are implemented in the pancomputationalist's universe rather than interpreted by a human mind. It will be used to formally describe tasks.
Definition 2 (implementable language): \(\bullet\)\(\mathfrak{V}=\{V\subset P:V\ is\ finite\}\) is a set whose elements we call **vocabularies**, one of which7 we single out as **the vocabulary \(\mathfrak{v}\)**.
* \(L_{\mathfrak{v}}=\{l\subseteq\mathfrak{v}:\exists\phi\in\Phi\ (\forall p\in l:p(\phi)=true)\}\) _is a set whose elements we call_ _statements_. \(L_{\mathfrak{v}}\) _follows_ \(\Phi\) _and_ \(\mathfrak{v}\)_, and is called_ _implementable language_.
* \(l\in L_{\mathfrak{v}}\) _is_ _true_ _iff the present state is_ \(\phi\) _and_ \(\forall p\in l:p(\phi)=true\)_._
* _The_ _extension of a statement_ \(a\in L_{\mathfrak{v}}\) _is_ \(Z_{a}=\{b\in L_{\mathfrak{v}}:a\subseteq b\}\)_._
* _The_ _extension of a set of statements_ \(A\subseteq L_{\mathfrak{v}}\) _is_ \(Z_{A}=\bigcup\limits_{a\in A}Z_{a}\)_._
(Notation) \(Z\) _with a subscript is the extension of the subscript_8_._
Footnote 8: E.G. \(Z_{s}\) is the extension of \(s\).
The programs in \(\mathfrak{v}\) are the circuitry with which cognition is enacted, and only programs in \(\mathfrak{v}\) affect decision making. We assume cognition always takes place in the context of a physical machine or sensorimotor system, represented by the implementable language. With these, we can define a task.
Definition 3 (\(\mathfrak{v}\)-task): For a chosen \(\mathfrak{v}\), a task9\(\alpha\) is \(\langle S_{\alpha},D_{\alpha},M_{\alpha}\rangle\) where:
Footnote 9: E.G. this could represent chess as a supervised learning problem where \(s\in S_{\alpha}\) is the state of a chessboard, \(z\in Z_{s}\) is a sequence of moves by two players that begins in \(s\), and \(d\in D_{\alpha}\cap Z_{s}\) is such a sequence of moves that resulted in victory for one player in particular (the one undertaking the task).
* \(S_{\alpha}\subset L_{\mathfrak{v}}\) is a set whose elements we call _situations_ of \(\alpha\).
* \(S_{\alpha}\) has the extension \(Z_{S_{\alpha}}\), whose elements we call _decisions_ of \(\alpha\).
* \(D_{\alpha}=\{z\in Z_{S_{\alpha}}:z\ is\ correct\}\) is the set of all decisions which complete \(\alpha\).
* \(M_{\alpha}=\{l\in L_{\mathfrak{v}}:Z_{S_{\alpha}}\cap Z_{l}=D_{\alpha}\}\) whose elements we call _models_ of \(\alpha\).
\(\Gamma_{\mathfrak{v}}\) is the set of all tasks for our chosen \(\mathfrak{v}\in\mathfrak{V}\).
(Notation) _If \(\omega\in\Gamma_{\mathfrak{v}}\), then we will use subscript \(\omega\) to signify parts of \(\omega\), meaning one should assume \(\omega=\langle S_{\omega},D_{\omega},M_{\omega}\rangle\) even if that isn't written_.
(How a task is completed) _Assume we've a \(\mathfrak{v}\)-task \(\omega\) and a hypothesis \(\vec{h}\in L_{\mathfrak{v}}\) s.t._
1. _we are presented with a situation_ \(s\in S_{\omega}\)_, and_
2. _we must select a decision_ \(z\in Z_{s}\cap Z_{\vec{h}}\)_._
3. _If_ \(z\in D_{\omega}\)_, then_ \(z\) _is correct and the task is complete. This occurs if_ \(\vec{h}\in M_{\omega}\)_._
\(\omega\in\Gamma_{\mathfrak{v}}\) s.t. \(S_{\omega}\subset S_{\alpha}\), \(D_{\omega}\subset D_{\alpha}\) and \(D_{\omega}\subset Z_{S_{\omega}}\) can serve as an ostensive definition [25] of \(\alpha\) from which to infer \(\vec{h}\). Then, if \(\vec{h}\in M_{\alpha}\), then \(z\in D_{\alpha}\).
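To make Definitions 1 to 3 concrete, here is a small Python sketch of a toy instantiation. The states, vocabulary and the notion of "correct" are all invented for illustration; only the set constructions (statements \(L_{\mathfrak{v}}\), extensions \(Z\), and the triple \(\langle S_{\alpha},D_{\alpha},M_{\alpha}\rangle\)) follow the definitions above.

```python
from itertools import chain, combinations

# Toy states: all 3-bit worlds. A declarative program is a predicate on states.
STATES = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
VOCAB = {
    "a": lambda s: s[0] == 1,
    "b": lambda s: s[1] == 1,
    "c": lambda s: s[2] == 1,
    "a_or_c": lambda s: s[0] == 1 or s[2] == 1,
}

def powerset(names):
    names = list(names)
    return chain.from_iterable(combinations(names, r) for r in range(len(names) + 1))

# L_v: the subsets of the vocabulary that are jointly satisfiable in some state.
L_V = [frozenset(l) for l in powerset(VOCAB)
       if any(all(VOCAB[p](s) for p in l) for s in STATES)]

def extension(statement):
    """Z_statement: all statements in L_v that contain it."""
    return {b for b in L_V if statement <= b}

def extension_of_set(statements):
    return set().union(*(extension(s) for s in statements)) if statements else set()

# A toy v-task: situations S, correct decisions D, and the induced set of models M.
S = [frozenset({"a"})]
Z_S = extension_of_set(S)
D = {z for z in Z_S if "a_or_c" in z}            # invented notion of "correct"
M = [m for m in L_V if extension(m) & Z_S == D]  # models of the task
print([set(m) for m in M])                       # {'a_or_c'} and {'a', 'a_or_c'}
```

Running the sketch prints the two models of the toy task, i.e., exactly those statements whose extension, intersected with \(Z_{S_{\alpha}}\), reproduces \(D_{\alpha}\).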
#### 3.2.1 A solitary decision instead of a sequence:
Where AIXI deals in sequential decisions over time [2], a \(\mathfrak{v}\)-task is completed with only one. This is because:
1. For every sequence of decisions there exists an equivalent single decision, in much the same way as any planning problem can be represented as a boolean satisfiability problem [26]. Not all tasks involve a sequence, but all involve at least one decision. If a single decision will suffice, why complicate matters?
2. A single decision may set in motion continuous interactions. The preference for sequences may suit reinforcement learners using discrete, pre-defined actions, however in the enactive context such abstractions are not given.
3. Whether behaviour is the result of one decision or many does not matter. What matters is whether the task is completed as a result.
**Binary correctness:** To further simplify matters, correctness is binary. Given a task, a decision is considered to be either correct or incorrect. It may be that a decision is correct if it causes the task to become complete to some acceptable degree with some acceptable probability - what is otherwise known as satisficing [27]10. Degrees of completeness or correctness just reflect different task definitions. Preferences that determine what is considered complete, and methods of attributing task completion to past decisions, are beyond this paper's scope. Preferences and the emergence of identity are formalised in a companion to this paper [28; 29].
Footnote 10: Alternatively, feedback (feelings / reward signals) may be given through the implementable language (as declarative programs), and each situation expresses a threshold with respect to what is considered “good enough”.
**Representing the past to predict the future:** Earlier we described (very informally) how an accurate model can be obtained by discarding any model that "predicts" a different outcome from past events than what eventuated, and then using a proxy for intelligence to determine which among those that remain will most accurately predict the future. There exists a set of all decisions that _might_ ever be made which are correct, which we can use to specify a \(\mathfrak{v}\)-task \(\omega\). Likewise, the past can be represented as the set of all decisions that _have_ been made. From the past we can construct an ostensive definition of \(\omega\), by specifying a \(\mathfrak{v}\)-task \(\alpha\) where \(D_{\alpha}\) is the set of all decisions which _have_ been made _and_ were deemed correct, given the situations \(S_{\alpha}\) in which they were made (this assumes a means of attributing correctness to past decisions). For each \(m\in M_{\alpha}\) the past is \(Z_{m}\cap Z_{S_{\alpha}}=D_{\alpha}\) and the future decisions implied are \(Z_{m}\cap(Z_{S_{\omega}}-Z_{S_{\alpha}})\) (the decisions implied for all situations that have not yet been experienced). In other words, the models in \(M_{\alpha}\) are equivalent with respect to the past but may disagree about the future. We know \(D_{\alpha}\subset D_{\omega}\), so the larger \(|Z_{m}\cap D_{\omega}|\) is, the more accurate \(m\)'s predictions. We would use a proxy for intelligence to determine which \(m\in M_{\alpha}\) is most accurate.
## 3 The objectively optimal hypothesis
Having formulated cognition as a task, merging agent and environment, we have ensured that any claims regarding performance are now unaffected by the choice of interpreter. This addresses subjectivity as it pertained to AIXI. Unfortunately, it introduces other problems we must now address. First, Legg-Hutter intelligence is not well defined for a task. Second, we can no longer use Kolmogorov Complexity because everything must be represented in the same implementable language. We could use minimum description length [30] (compressing data written using a vocabulary to an archive written using that same vocabulary); however, selecting hypotheses by length would still render claims about performance subjective. Third, we must show not only that an optimal hypothesis is objectively so given \(\mathfrak{v}\), but also define the objectively optimal choice of \(\mathfrak{v}\). For this we require a measure of performance and an alternative proxy for intelligence. Both are addressed [31] by a companion to this paper concerning optimal hypotheses.
### Performance
AIXI is asymptotically optimal, meaning that given enough data on the past it will predict the future accurately (adapting to its environment). However, because only finitely many tasks can be expressed in an implementable language, that yardstick is no longer particularly meaningful (we can rote learn a finite set). Instead, we will define performance in terms of how _quickly_ a mechanism adapts. Thus we take performance to be the ability to generalise from limited information11, citing arguments to the effect that such a thing is intelligence [10; 11].
Footnote 11: The speed of adaptation, or how few examples one needs to understand a concept.
Definition 4 (generalisation): A statement \(l\) generalises to \(\alpha\in\Gamma_{\mathfrak{v}}\) iff \(l\in M_{\alpha}\), because then \(D_{\alpha}=Z_{l}\cap Z_{S_{\alpha}}\). We say \(l\) generalises from \(\alpha\) to \(\mathfrak{v}\)-task \(\omega\) if we first obtain \(l\) from \(M_{\alpha}\) and then find it generalises to \(\omega\).
We assume a uniform distribution over \(\Gamma_{\mathfrak{v}}\). The probability that \(l\in L_{\mathfrak{v}}\) generalises to a randomly sampled \(\mathfrak{v}\)-task \(\omega\) is \(p(l\in M_{\omega}\mid l\in L_{\mathfrak{v}})=\frac{2^{|Z_{l}|}}{2^{|L_{ \mathfrak{v}}|}}\). Assume \(\alpha\) and \(\omega\) are \(\mathfrak{v}\)-tasks s.t. \(S_{\alpha}\subset S_{\omega}\), \(D_{\alpha}\subset D_{\omega}\) and \(D_{\alpha}\subset Z_{S_{\alpha}}\). We wish to generalise from \(\alpha\) to \(\omega\)12. The mechanism selects a hypothesis \(\mathbf{h}\in M_{\alpha}\), and performance is measured as \(p(\mathbf{h}\in M_{\omega}\mid\mathbf{h}\in M_{\alpha})\).
Footnote 12: In the absence of knowledge \(\alpha\), \(p(\mathbf{h}\in M_{\omega}\mid\mathbf{h}\in L_{\mathfrak{v}})\) is maximised when \(\mathbf{h}=\emptyset\).
### Weakness as a proxy for intelligence
First, we must explain why description length is an unsuitable proxy. In the context of \(m\in M_{\alpha}\), description length [30] might be most faithfully translated as the cardinality \(|m|\) of \(m\). For every conceivable task \(\alpha\) there exists a program \(u\in P\) such that \(Z_{\{u\}}=D_{\alpha}\). If \(u\in\mathfrak{v}\) then the minimum description length model is \(\{u\}\) and \(p(m\in M_{\omega}\mid m\in M_{\alpha})=0\). Hence, minimising description length does not guarantee optimal performance. Any claim regarding the performance of a mechanism using length as a proxy would still be subjective. Instead of \(|m|\) we can use \(|Z_{m}|\) (the cardinality of \(m\)'s extension \(Z_{m}\)), called the "weakness" of \(m\). It is arguable that intelligence is a measure of the ability to generalise from one task to another, which amounts to a preference for weaker hypotheses [10]. If tasks are uniformly distributed then the probability of a statement \(l\) generalising to an unknown task \(\omega\) is proportional to \(l\)'s weakness. If we use weakness as our proxy (to choose between models) instead of description length, then optimal performance is attained by choosing \(\mathbf{h}\in\underset{m\in M_{\alpha}}{\arg\max}\ p(m\in M_{\omega}\mid m\in M_{\alpha})=\underset{m\in M_{\alpha}}{\arg\max}\ |Z_{m}|\). There is no choice of \(\mathfrak{v}\) which can make weaker models less likely to generalise, because one cannot increase \(|Z_{m}|\) without increasing \(\frac{2^{|Z_{m}|}}{2^{|L|}}\). In contrast, \(|m|\) need not bear any relationship to \(\frac{2^{|Z_{m}|}}{2^{|L|}}\). It follows that \(\mathbf{h}\in\underset{m\in M_{\alpha}}{\arg\max}\ |Z_{m}|\) is objectively optimal, in the sense that it is optimal given any choice of either \(\mathfrak{v}\) or \(\omega\).
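The following toy sketch (ours, with invented numbers) illustrates the point: three candidate models that all agree with the known decisions are compared once by description length and once by weakness, and the probability of generalising under a uniform distribution over tasks, \(2^{|Z_{m}|}/2^{|L_{\mathfrak{v}}|}\), tracks weakness rather than length.

```python
# Toy illustration of "weakness" |Z_m| as the model-selection criterion.
# Candidate models are given directly by an invented description length and
# an invented extension Z_m (a set of decision ids); all candidates are assumed
# to agree with the known correct decisions on the known situations.
candidates = {
    # name: (description_length, extension Z_m)
    "narrow_rule": (1, {0, 1}),
    "medium_rule": (3, {0, 1, 4, 5}),
    "weak_rule":   (5, {0, 1, 4, 5, 6, 7}),
}

L_SIZE = 16  # |L_v| in this toy

def generalisation_probability(extension_size):
    # p(m in M_omega | m in L_v) = 2^|Z_m| / 2^|L_v| under a uniform prior on tasks
    return 2.0 ** extension_size / 2.0 ** L_SIZE

by_weakness = max(candidates, key=lambda k: len(candidates[k][1]))
by_length = min(candidates, key=lambda k: candidates[k][0])

print("weakest model:", by_weakness,
      "p =", generalisation_probability(len(candidates[by_weakness][1])))
print("shortest model:", by_length,
      "p =", generalisation_probability(len(candidates[by_length][1])))
```

Here the weakest candidate, not the shortest one, maximises the probability of generalising, which is the point of using weakness as the proxy.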
### Objectively optimal AGI and ASI
Given the above and related arguments [17, 32], we propose defining AGI and ASI as follows. These are mathematical ideals we may aim to build, rather than an approach to doing so. An AGI is an agent that selects the optimal hypothesis for any given task. An ASI selects the optimal vocabulary to maximise the utility of intelligence for a task, and then implements an AGI with that vocabulary. If \(\mathbf{h}\in L_{\mathfrak{v}}\) is our AGI's hypothesis and \(\alpha\in\Gamma_{\mathfrak{v}}\) its knowledge13, then \(\mathbf{h}\in\underset{m\in M_{\alpha}}{\operatorname{arg\,max}}\;|Z_{m}|\). Let \(\Gamma_{\mathfrak{V}}=\underset{\mathfrak{t}\in\mathfrak{V}}{\bigcup}\;\Gamma_{\mathfrak{t}}\) be the set of all tasks across all vocabularies. Let \(\lambda\) be a function \(\lambda:\mathfrak{V}\to\Gamma_{\mathfrak{V}}\) that takes a vocabulary and returns a task in that vocabulary. \(\lambda\) lets us represent a version of the same task in different vocabularies. Every task \(\gamma\in\Gamma_{\mathfrak{V}}\) has a utility of intelligence value (how useful it is), computed by \(\epsilon:\Gamma_{\mathfrak{V}}\to\mathbb{N}\) s.t. \(\epsilon(\gamma)=\underset{m\in M_{\gamma}}{\max}\,(|Z_{m}|-|D_{\gamma}|)\). If \(\lambda\) is our ASI's knowledge and \(\mathbf{h}\) its hypothesis, then it uses \(\mathfrak{v}\) s.t.
Footnote 13: This assumes either that knowledge only consists of an ostensive definition of what “good enough” is, or that feedback is programs in \(\mathfrak{v}\), and that each situation expresses a threshold with respect to what is considered “good enough”.
\[\mathfrak{v}\in\underset{\mathfrak{v}\in\mathfrak{V}}{\operatorname{arg\, max}}\;\epsilon\left(\lambda(\mathfrak{v})\right)\;\text{and}\;\mathbf{h}\in \underset{m\in M_{\lambda(\mathfrak{v})}}{\operatorname{arg\,max}}\;|Z_{m}|\]
If \(\mathfrak{J}\subseteq\mathfrak{V}\) is the set of vocabularies for which \(\epsilon\) has been computed, then an anytime computable alternative is \(\mathfrak{v}\in\underset{\mathfrak{v}\in\mathfrak{J}}{\operatorname{arg\,max}}\; \epsilon(\lambda(\mathfrak{v}))\).
|
2304.02302 | Generic consistency and nondegeneracy of vertically parametrized systems | We determine the generic consistency, dimension and nondegeneracy of the zero
locus over $\mathbb{C}^*$, $\mathbb{R}^*$ and $\mathbb{R}_{>0}$ of vertically
parametrized systems: parametric polynomial systems consisting of linear
combinations of monomials scaled by free parameters. These systems generalize
sparse systems with fixed monomial support and freely varying parametric
coefficients. As our main result, we establish the equivalence among three key
properties: the existence of nondegenerate zeros, the zero set having
generically the expected dimension, and the system being generically
consistent. Importantly, we prove that checking whether a vertically
parametrized system has these properties amounts to an easily computed matrix
rank condition. | Elisenda Feliu, Oskar Henriksson, Beatriz Pascual-Escudero | 2023-04-05T08:50:15Z | http://arxiv.org/abs/2304.02302v3 | Dimension and degeneracy of solutions of parametric polynomial systems arising from reaction networks
###### Abstract.
We study the generic dimension of the solution set over \(\mathbb{C}^{*}\), \(\mathbb{R}^{*}\) and \(\mathbb{R}_{>0}\) of parametric polynomial systems that consist of linear combinations of monomials scaled by free parameters. We establish a relation between this dimension, Zariski denseness of the set of parameters for which the system has solutions, and the existence of nondegenerate solutions, which enables fast dimension computations. Systems of this form are used to describe the steady states of reaction networks modeled with mass-action kinetics, and as a corollary of our results, we prove that weakly reversible networks have finitely many steady states for generic reaction rate constants and total concentrations.
## 1. Introduction
In this work we study parametric (Laurent) polynomial systems of the form
\[g(\alpha,x)=0,\quad x\in\mathcal{X}\subseteq(\mathbb{C}^{*})^{n},\quad\alpha \in\mathcal{A}\subseteq\mathbb{C}^{\ell}, \tag{1.1}\]
where \(g(\alpha,x)\in\mathbb{C}[\alpha,x^{\pm}]^{s}\) is a tuple of \(s\) linearly independent polynomials in parameters \(\alpha=(\alpha_{1},\dots,\alpha_{\ell})\) restricted to some set \(\mathcal{A}\subseteq\mathbb{C}^{\ell}\), and variables \(x=(x_{1},\dots,x_{n})\) restricted to some set \(\mathcal{X}\subseteq(\mathbb{C}^{*})^{n}\).
Such systems arise naturally in many applications. Important examples include reaction network theory, where \(\mathcal{X}=\mathbb{R}_{>0}^{n}\) and \(x\) represents concentrations or abundances at steady states [11, 12], algebraic statistics, where one often takes \(\mathcal{X}\) to be the interior of the probability simplex and \(x\) to be parameters of discrete probability distributions [10], or robotics, where \(\mathcal{X}=(\mathbb{R}^{*})^{n}\) and \(x\) represents configurations of various mechanisms [10].
A fundamental problem is to determine the _generic dimension_ of the algebraic variety defined by the solutions to the system of interest, and, in particular, whether it agrees with the lower bound \(n-s\) derived from the number of linearly independent equations.
For sparse polynomial systems with fixed support and freely varying parametric coefficients, this lower bound is always the generic dimension. As an illustration of this, consider the simplest example, namely a system of linear equations with free coefficients:
\[\alpha_{1}x_{1}+\alpha_{2}x_{2}+\alpha_{3}=0,\quad\alpha_{4}x_{1}+\alpha_{5}x_{2}+\alpha_{6}=0,\quad x\in(\mathbb{C}^{*})^{2},\quad\alpha\in\mathbb{C}^{6}. \tag{1.2}\]
The generic dimension of the solution set is \(0\), as there will be a unique solution in \((\mathbb{C}^{*})^{2}\) whenever \((\alpha_{1}\alpha_{5}-\alpha_{2}\alpha_{4})(\alpha_{1}\alpha_{6}-\alpha_{3} \alpha_{4})(\alpha_{2}\alpha_{6}-\alpha_{3}\alpha_{5})\neq 0\). On the other hand, when there are algebraic dependencies between the coefficients, the situation can be more complicated. Consider for example the system
\[\alpha_{1}x_{1}-\alpha_{2}x_{2}=0,\quad\alpha_{1}^{2}x_{1}^{2}-\alpha_{2}^{2}x _{2}^{2}=0,\quad x\in(\mathbb{C}^{*})^{2},\quad\alpha\in\mathbb{C}^{2}. \tag{1.3}\]
As the first polynomial divides the second, the generic complex dimension is \(1\), despite the system having two (linearly independent) equations, and hence a lower dimension bound of \(0\).
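Both toy examples can be checked directly with a computer algebra system. The following sympy snippet is only a sanity check of the claims above; it is not part of the methods developed in this paper.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
a = sp.symbols("a1:7")  # a[0]..a[5] play the role of alpha_1..alpha_6

# System (1.2): for generic coefficients there is a unique solution.
sol = sp.solve([a[0]*x1 + a[1]*x2 + a[2],
                a[3]*x1 + a[4]*x2 + a[5]], [x1, x2], dict=True)
print(sol)
# x1 = (a2*a6 - a3*a5)/(a1*a5 - a2*a4), x2 = (a3*a4 - a1*a6)/(a1*a5 - a2*a4),
# so the solution lies in (C*)^2 exactly when all three 2x2 minors are nonzero.

# System (1.3): the second polynomial is a multiple of the first, so the
# solution set {a1*x1 = a2*x2} in (C*)^2 is 1-dimensional for generic a.
q, r = sp.div(a[0]**2*x1**2 - a[1]**2*x2**2, a[0]*x1 - a[1]*x2, x1)
print(q, r)  # remainder 0 confirms divisibility
```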
The motivation of this work comes from the theory of reaction networks, where the parametric systems of interest are of the form
\[f_{\kappa}(x)=N\operatorname{diag}(\kappa)x^{B},\quad x\in\mathbb{R}_{>0}^{n}, \quad\kappa\in\mathbb{R}_{>0}^{r}, \tag{1.4}\]
with \(N\in\mathbb{Z}^{s\times r}\), \(\operatorname{rk}(N)=s\), and \(B\in\mathbb{Z}^{n\times r}\). This system describes the steady states of a reaction network under the assumption of mass-action kinetics (more details are given in Section 4). Even though the study of reaction networks in the current mathematical formalism goes back at least to Feinberg, Horn and Jackson in the 70's [10, 11], many fundamental questions about the system (1.4) remain unclear, including the question about the generic dimension of the solution sets.
Since each parameter \(\kappa_{i}\) might appear as a coefficient in several equations, it is at first glance not obvious whether the generic dimension can deviate from \(n-s\), similarly to (1.3). There are well-known examples of networks where the solution set always has a higher dimension than \(n-s\) whenever it is nonempty (see e.g. [10, Appendix IV]), but for all such networks, the solution set has the additional pathology of being empty for almost all parameter values.
Analogous observations have been made for a second parametric system arising in reaction network theory, which describes the steady states constrained by the linear first integrals of the underlying system of differential equations:
\[N\operatorname{diag}(\kappa)x^{B}=0,\quad Wx=c,\quad x\in\mathbb{R}_{>0}^{n}, \quad(\kappa,c)\in\mathbb{R}_{>0}^{r}\times\mathbb{R}^{n-s}, \tag{1.5}\]
with \(W\in\mathbb{Z}^{(n-s)\times n}\) of full rank. Here, the empirical observation is that for all realistic networks, the solution sets are _finite_ (i.e., \(0\)-dimensional) for generic parameters \(\kappa\) and \(c\), and if not, then the solution sets are generically empty.
In this work, we settle these questions and confirm that the pathologies of higher dimension and generic emptiness always go hand in hand. If the complex solutions to (1.4) do not form a variety of dimension \(n-s\) for any \(\kappa\in\mathbb{R}_{>0}^{r}\), then at the same time, the variety is empty for all \(\kappa\) outside a proper Zariski closed subset. Likewise, if (1.5) has either infinitely many or no solutions for all \((\kappa,c)\), then it also has no solution for any \((\kappa,c)\) outside of a proper Zariski closed subset.
Our second main contribution is that deciding whether the set of parameters for which the system has solutions is Zariski dense reduces to checking whether the system has a nondegenerate solution (in the sense that the rank of the Jacobian agrees with the number of equations) for _at least one choice_ of parameters. This can be easily checked computationally by inspecting the generic rank of a family of matrices. In practice, this turns out to be a significantly cheaper computation than the standard methods for computing the generic dimension based on Gröbner bases (see e.g. [14, Section 9.3] for a discussion of such methods).
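As an illustration of why such a check is cheap, note that the Jacobian of \(x\mapsto N\operatorname{diag}(\kappa)x^{B}\) with respect to \(x\), evaluated at a zero, equals \(N\operatorname{diag}(v)B^{\top}\operatorname{diag}(1/x)\) with \(v=\operatorname{diag}(\kappa)x^{B}\in\ker N\), so, at least heuristically, checking for a nondegenerate zero amounts to a rank computation for matrices of the form \(N\operatorname{diag}(v)B^{\top}\). The following numerical sketch is our own reading of this reduction and uses invented matrices \(N\) and \(B\); the precise family of matrices and the set over which \(v\) ranges are made precise later in the paper.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)

def generic_rank_condition(N, B, trials=10):
    """Illustrative check: sample random vectors v in ker(N) and compute the
    rank of N @ diag(v) @ B.T, which (up to an invertible diagonal factor)
    is the Jacobian of x -> N diag(kappa) x^B at a zero with v = kappa * x^B."""
    s = np.linalg.matrix_rank(N)
    K = null_space(N)  # orthonormal basis of ker(N)
    best = 0
    for _ in range(trials):
        v = K @ rng.normal(size=K.shape[1])
        M = N @ np.diag(v) @ B.T
        best = max(best, np.linalg.matrix_rank(M))
    return best == s

# Toy data (invented, not from the paper): 2 equations, 3 variables, 4 monomials.
N = np.array([[1.0, -1.0, 0.0, 1.0],
              [0.0, 1.0, -1.0, -1.0]])
B = np.array([[1, 0, 2, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 2]])
print(generic_rank_condition(N, B))
```

For the invented matrices above, the rank of \(N\operatorname{diag}(v)B^{\top}\) at a random \(v\in\ker N\) already equals \(\operatorname{rk}(N)=2\), so the check passes.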
To obtain these results, we study first general families of parametric systems of the form (1.1). Background theory establishes that the generic dimension of the complex variety of solutions to \(g(\alpha,x)=0\) for a fixed \(\alpha\) can be determined by employing the Theorem on the Dimension of Fibers, after checking that the incidence variety of the parametric system is irreducible. Then, the generic dimension depends on the dimension of the incidence variety as well as on whether the set of parameters \(\mathcal{D}\) for which the variety is nonempty is Zariski dense in parameter space. This theory applies to the examples in (1.2) and (1.3) above. The underlying arguments are reviewed in Section 2, where we also compare the condition of \(\mathcal{D}\) being Zariski dense to having nonempty Euclidean interior in \(\mathcal{A}\). The latter can be studied by means of the concept of nondegeneracy.
With these tools in place, we proceed in Section 3 to study the systems of the form (1.4) and (1.5), where we allow the coefficient matrix to take complex values. We show that for these systems, the existence or lack of nondegenerate solutions has very strong consequences in terms of the generic dimension of the associated complex varieties. Our results are gathered in the main theorem of this work, Theorem 3.7, which gives several equivalent conditions guaranteeing that the generic dimension is \(n-s\). The theorem is stated for sets \(\mathcal{X}\) and \(\mathcal{A}\) satisfying certain mild algebraic and topological conditions (which are satisfied for the positive orthant, and the real and complex torus). Furthermore, the equivalent conditions of Theorem 3.7 hold for a pair of subsets \(\mathcal{A}\) and \(\mathcal{X}\) satisfying the conditions if and only if they hold for any other such pair, in particular for complex parameters and solutions in the complex torus.
In Section 4 we connect the results of Section 3 to the theory of reaction networks. This section is presented with fewer technicalities, to make it accessible to the broader reaction network community. Furthermore, we specialize our conclusions to show that for the well-known and well-studied class of so-called _weakly reversible_ networks, the generic dimension of the complex varieties of the two systems (1.4) and (1.5) is _always_ the lower bound, namely \(n-s\) and \(0\), respectively. This settles a question posed by Boros, Craciun and Yu in [1]. We illustrate further the simplicity of our conditions by exploring the database ODEbase of biologically relevant reaction networks [1], and are able to verify within minutes that for the vast majority of them, the systems of interest attain the lower bound on the generic dimension, whereas for a handful of exceptions, the steady state variety is generically empty. Given the size of the networks in the database, these computations are nontrivial with standard Gröbner basis techniques.
### Notation and conventions
We let \(\circ\colon\mathbb{C}^{n}\times\mathbb{C}^{n}\to\mathbb{C}^{n}\) denote the Hadamard product, given by \((x\circ y)_{i}=x_{i}y_{i}\). For a field \(\mathbb{F}\), we let \(\mathbb{F}^{*}=\mathbb{F}\setminus\{0\}\) be the group of units of \(\mathbb{F}\). For a matrix \(A=(a_{ij})\in\mathbb{Z}^{n\times m}\) and a vector \(x\in(\mathbb{F}^{*})^{n}\), we let \(x^{A}\in\mathbb{F}^{m}\) be defined by \((x^{A})_{j}=x_{1}^{a_{1j}}\cdots x_{n}^{a_{nj}}\) for \(j\in\{1,\ldots,m\}\).
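For instance, with \(n=m=2\) and \(A=\left(\begin{smallmatrix}1&3\\ 2&0\end{smallmatrix}\right)\), we have \(x^{A}=(x_{1}x_{2}^{2},\,x_{1}^{3})\).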
In this work we will consider both the Euclidean and Zariski topologies on \(\mathbb{C}^{n}\), and their restrictions to subsets. By default, we use the Zariski topology, and for a set \(S\subseteq\mathbb{C}^{n}\), we let \(\overline{S}\) denote the Zariski closure of \(S\) in \(\mathbb{C}^{n}\). When we say that a set \(U\subseteq X\) is open or has nonempty interior _in_ \(X\), for some \(X\subseteq\mathbb{C}^{n}\), we mean with respect to the subspace topology on \(X\). For example, if we say that \(U\) has nonempty Euclidean interior in \(X\), we mean that there exists an open ball \(B\subseteq\mathbb{C}^{n}\) such that \(\varnothing\neq B\cap X\subseteq U\).
When saying that a property holds _generically_ in a family indexed by some parameters \(\alpha\in\mathcal{A}\), we mean that it holds outside a proper Zariski closed subset of \(\mathcal{A}\).
### Acknowledgements
EF and OH have been funded by the Novo Nordisk project with grant reference number NNF20OC0065582. BP has been funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie IF grant agreement No 794627 and the Spanish Ministry of Economy project with reference number PGC2018-095392-B-I00.
## 2. The dimension of solution sets of parametric polynomial systems
In this section we study the generic dimension of the solution set of parametric systems for which the incidence variety is irreducible of known dimension. This sets the background theory for the application to the systems of polynomials indicated in the introduction. We start by stating the framework for solutions in the complex torus, and then proceed to restrict the set of solutions to subsets such as the real torus or the positive orthant.
We restrict from the beginning to solutions in a subset of the complex torus \((\mathbb{C}^{*})^{n}\) (and thus also allow Laurent polynomials with negative exponents). This will be required to apply the results in Section 3. However, most of this section also extends to \(\mathbb{C}^{n}\) if one restricts to polynomials with nonnegative exponents.
### Framework
We consider a parametric family of systems of (Laurent) polynomials of the form
\[g(\alpha,x)=0,\quad\alpha=(\alpha_{1},\dots,\alpha_{\ell})\in\mathbb{C}^{\ell},\quad x=(x_{1},\dots,x_{n})\in(\mathbb{C}^{*})^{n}, \tag{2.1}\]
where \(g\in\mathbb{C}[\alpha,x^{\pm}]^{s}\) for some \(s\in\mathbb{Z}_{>0}\), and where we view \(\alpha=(\alpha_{1},\dots,\alpha_{\ell})\) as parameters, and \(x=(x_{1},\dots,x_{n})\) as variables. We consider the _incidence variety_
\[\mathcal{E}_{g}:=\{(\alpha,x)\in\mathbb{C}^{\ell}\times(\mathbb{C}^{*})^{n}:g( \alpha,x)=0\}, \tag{2.2}\]
and the projection map to parameter space
\[\pi\colon\mathcal{E}_{g}\to\mathbb{C}^{\ell},\quad(\alpha,x)\mapsto\alpha. \tag{2.3}\]
For each choice of parameters \(\alpha\in\mathbb{C}^{\ell}\), we get a polynomial \(g_{\alpha}:=g(\alpha,\cdot)\in\mathbb{C}[x^{\pm}]^{s}\), which leads to the specialized system
\[g_{\alpha}(x)=0,\quad x\in(\mathbb{C}^{*})^{n}. \tag{2.4}\]
We will identify the algebraic variety \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) consisting of solutions to (2.4) with the fiber \(\pi^{-1}(\alpha)\) of the projection map. We will sometimes restrict the parameter space to a subset \(\mathcal{A}\subseteq\mathbb{C}^{\ell}\). (The main example will be \(\mathcal{A}=\mathbb{R}_{>0}^{\ell}\).) As there is no guarantee that system (2.4) has solutions, we introduce the following set:
\[\mathcal{D}_{g,\mathcal{A}}=\pi\big{(}\mathcal{E}_{g}\cap(\mathcal{A}\times( \mathbb{C}^{*})^{n})\big{)}=\mathrm{im}(\pi)\cap\mathcal{A}=\{\alpha\in \mathcal{A}:\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\neq\varnothing\}. \tag{2.5}\]
In what follows we will distinguish between the coefficient matrix \(N\) with entries in \(\mathbb{C}\) obtained by viewing \(g\) as a polynomial in \(\mathbb{C}[\alpha,x^{\pm}]\), and the coefficient matrix \(\Sigma_{\alpha}\) with entries in \(\mathbb{C}[\alpha]\), obtained by viewing \(g\) as a polynomial in \(\mathbb{C}[\alpha][x^{\pm}]\).
### Dimension over \(\mathbb{C}^{*}\)
The first concept of interest is the dimension of the family of complex varieties \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) for \(\alpha\in\mathcal{A}\). An immediate first observation is that for each fixed \(\alpha\in\mathcal{A}\), the principal ideal theorem [10, Thm. 10.2] gives that
\[\dim\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\geq n-s, \tag{2.6}\]
since \(g_{\alpha}\) is a tuple of \(s\) polynomials. Moreover, all irreducible components of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) have dimension at least \(n-s\). Hence, if \(\dim(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha}))=n-s\) for a given \(\alpha\in\mathcal{A}\), then all irreducible components have dimension \(n-s\), and we say that \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) is _equidimensional_ of dimension \(n-s\). With this in mind, it makes sense to remove redundancies in the entries of \(g(\alpha,x)\) such that the coefficient matrix \(N\) of \(g\) as a polynomial in \(\mathbb{C}[\alpha,x^{\pm}]^{s}\) has full rank.
For a fixed \(\alpha\), the codimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) is additionally bounded above by the rank of the coefficient matrix \(\Sigma_{\alpha}\). Therefore, if the generic rank of \(\Sigma_{\alpha}\) is strictly smaller than \(s\), the dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) is strictly larger than \(n-s\).
When the incidence variety \(\mathcal{E}_{g}\) is irreducible, the generic dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) is well understood. We will use the following version of the Theorem on the Dimension of Fibers. A proof of the result can be found in [12, Thm. 1.25], [21, Section 11.4].
**Theorem 2.1** (Dimension of fibers).: _Let \(\varphi\colon X\to Y\) be a dominant morphism of irreducible varieties. Then, for any \(y\in\varphi(X)\), it holds that_
\[\dim(\varphi^{-1}(y))\geqslant\dim(X)-\dim(Y). \tag{2.7}\]
_Moreover, there exists a nonempty Zariski open subset \(U\subseteq Y\) such that (2.7) holds with equality for all \(y\in U\cap\varphi(X)\)._
With this in place, we state the first theorem on the generic dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\).
**Theorem 2.2** (Generic dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\)).: _Let \(g\in\mathbb{C}[\alpha,x^{\pm}]^{s}\) and consider the specialized polynomials \(g_{\alpha}\) for parameters \(\alpha\in\mathcal{A}\), with \(\mathcal{A}\subseteq\mathbb{C}^{\ell}\) being Zariski dense. Assume that the incidence variety \(\mathcal{E}_{g}\) is irreducible, and let \(d=\dim(\mathcal{E}_{g})-\ell\). Then the following holds:_
1. _If_ \(\mathcal{D}_{g,\mathcal{A}}\) _is Zariski dense in_ \(\mathbb{C}^{\ell}\)_, then there exists a nonempty Zariski open subset_ \(\mathcal{U}\) _of_ \(\mathcal{D}_{g,\mathcal{A}}\) _such that for all_ \(\alpha\in\mathcal{U}\)_, it holds that_ \[\dim\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})=d.\] _Furthermore, if_ \(d=n-s\)_, then_ \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) _is equidimensional._
2. _If_ \(\mathcal{D}_{g,\mathcal{A}}\) _is not Zariski dense in_ \(\mathbb{C}^{\ell}\)_, then it holds for all_ \(\alpha\in\mathcal{D}_{g,\mathcal{A}}\) _that_ \[\dim\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})>d.\]
Proof.: The projection map \(\pi\) over parameter space defined in (2.3) is a regular map of irreducible varieties. Let \(\overline{\operatorname{im}(\pi)}\) denote the Zariski closure of the image of \(\pi\). We can now apply the Theorem on the Dimension of Fibers (Theorem 2.1), to conclude that nonempty fibers \(\pi^{-1}(\alpha)\) have dimension at least
\[e=\dim(\mathcal{E}_{g})-\dim(\,\overline{\operatorname{im}(\pi)}\,)=\ell+d- \dim(\,\overline{\operatorname{im}(\pi)}\,), \tag{2.8}\]
with generic dimension equal to \(e\). That is, there exists a nonempty Zariski open subset \(U\subseteq\overline{\operatorname{im}(\pi)}\) such that, for any \(\alpha\in U\), either \(\dim(\pi^{-1}(\alpha))=e\) or \(\pi^{-1}(\alpha)=\varnothing\).
Let us consider scenario (i). Note that \(\mathcal{D}_{g,\mathcal{A}}\subseteq\operatorname{im}(\pi)\). Hence, taking Zariski closures in \(\mathbb{C}^{\ell}\) gives \(\mathbb{C}^{\ell}=\overline{\mathcal{D}_{g,\mathcal{A}}}\subseteq\overline{ \operatorname{im}(\pi)}\), and hence \(\overline{\operatorname{im}(\pi)}=\mathbb{C}^{\ell}\). Therefore, the generic dimension of the fibers of \(\pi\) is \(e=\ell+d-\ell=d\). It follows that
\[\dim(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha}))=d\]
for all \(\alpha\in U\cap\mathcal{D}_{g,\mathcal{A}}\), since in this case \(\pi^{-1}(\alpha)=\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\neq\varnothing\). Noting that \(U\cap\mathcal{D}_{g,\mathcal{A}}\neq\varnothing\) as \(\mathcal{D}_{g,\mathcal{A}}\subseteq\mathbb{C}^{\ell}\) is Zariski dense, we obtain statement (i) by taking \(\mathcal{U}=U\cap\mathcal{D}_{g,\mathcal{A}}\).
Consider now scenario (ii). By (2.8), the result follows if \(\overline{\operatorname{im}(\pi)}\subseteq\mathbb{C}^{\ell}\) is proper, and hence of lower dimension than \(\ell\). By Chevalley's theorem [14, 00F5], the image of a constructible set under a projection map is constructible. Hence \(\operatorname{im}(\pi)\) is a constructible set, and it can be written as \(\operatorname{im}(\pi)=\bigcup_{i=1}^{m}Z_{i}\cap U_{i}\), for some \(m\in\mathbb{Z}_{>0}\), irreducible Zariski closed subsets \(Z_{i}\subseteq\mathbb{C}^{\ell}\) and nonempty Zariski open subsets \(U_{i}\subseteq\mathbb{C}^{\ell}\). It follows that
\[\overline{\operatorname{im}(\pi)}=\bigcup_{i=1}^{m}Z_{i}.\]
Assume for a contradiction that \(\overline{\operatorname{im}(\pi)}=\mathbb{C}^{\ell}\). Then \(Z_{i}=\mathbb{C}^{\ell}\) for some \(i\in\{1,\ldots,m\}\), and we have \(U_{i}\subseteq\operatorname{im}(\pi)\). Hence,
\[U_{i}\cap\mathcal{A}\subseteq\operatorname{im}(\pi)\cap\mathcal{A}=\mathcal{D }_{g,\mathcal{A}},\]
where \(U_{i}\cap\mathcal{A}\neq\varnothing\), since \(\mathcal{A}\) is Zariski dense. Taking Zariski closures, as \(\overline{U_{i}\cap\mathcal{A}}=\mathbb{C}^{\ell}\), we have \(\overline{\mathcal{D}_{g,\mathcal{A}}}=\mathbb{C}^{\ell}\), which is a contradiction. This shows that \(\overline{\mathrm{im}(\pi)}\) is proper and concludes the proof of (ii).
Combining cases (i) and (ii) of Theorem 2.2, we obtain that the dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) is \(d\) for some \(\alpha\) if and only if \(\mathcal{D}_{g,\mathcal{A}}\) is Zariski dense in \(\mathbb{C}^{\ell}\). Furthermore, if the dimension is \(d\) for one value of \(\alpha\), then it is generically \(d\) in \(\mathcal{D}_{g,\mathcal{A}}\).
**Example 2.3**.: Consider \(g(\alpha,x)\) with \(n=\ell=s=2\) giving rise to the system
\[\alpha_{1}x_{1}-\alpha_{2}x_{2}=0,\quad\alpha_{1}^{2}x_{1}^{2}-\alpha_{2}^{2} x_{2}^{2}=0,\]
which was briefly discussed in the Introduction in (1.3). Let \(\mathcal{A}=\mathbb{C}^{2}\). In this case, \(\mathcal{D}_{g,\mathcal{A}}=\mathbb{C}^{2}\). Furthermore, the incidence variety is defined by the equation \(\alpha_{1}x_{1}-\alpha_{2}x_{2}=0\) as \(\alpha_{1}^{2}x_{1}^{2}-\alpha_{2}^{2}x_{2}^{2}=(\alpha_{1}x_{1}-\alpha_{2}x_{ 2})(\alpha_{1}x_{1}+\alpha_{2}x_{2})\). Hence, \(\mathcal{E}_{g}\) is irreducible of dimension \(3\) and by Theorem 2.2(i), the generic dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) is \(d=3-2=1\).
In the previous example, the generic dimension is not the minimal one from (2.6), which would be \(n-s=0\). However, in the systems of interest in the coming sections, the goal will be to determine when the generic dimension is exactly \(n-s\), in which case the variety also is equidimensional.
**Remark 2.4**.: Recall that \(N\) denotes the coefficient matrix of \(g\), and \(\Sigma_{\alpha}\) the coefficient matrix of \(g_{\alpha}\). Let us assume for simplicity that \(N\) has rank \(s\). Then the dimension of \(\mathcal{E}_{g}\) cannot be smaller than \(n+\ell-s\). Under the setting of Theorem 2.2, if \(\mathcal{D}_{g,\mathcal{A}}\) is Zariski dense and additionally \(\dim(\mathcal{E}_{g})=n+\ell-s\), then the generic dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) is exactly \(n-s\). In this case, the generic rank of \(\Sigma_{\alpha}\) is necessarily \(s\), as the corank of \(\Sigma_{\alpha}\) is a lower bound for the dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\).
### Restricting the ambient space
Let \(\mathbb{F}\subseteq\mathbb{C}\) be a subfield, and consider \(\mathcal{A}\subseteq\mathbb{F}^{\ell}\). Then, for \(\alpha\in\mathcal{A}\), we let \(\mathbb{V}_{\mathbb{F}^{*}}(g_{\alpha})\) denote the set of solutions to system (2.4) in \((\mathbb{F}^{*})^{n}\). Additionally, for a set \(\mathcal{X}\subseteq(\mathbb{F}^{*})^{n}\) we let
\[\mathbb{V}_{\mathbb{F}^{*}}^{\mathcal{X}}(g_{\alpha})\]
denote the variety defined as the union of the irreducible components of \(\mathbb{V}_{\mathbb{F}^{*}}(g_{\alpha})\) that intersect \(\mathcal{X}\). We will also consider the set of solutions in \(\mathcal{X}\), that is, \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\cap\mathcal{X}\). The main example we have in mind, which will become relevant in the next section, is the case \(\mathbb{F}=\mathbb{R}\) and \(\mathcal{X}=\mathbb{R}_{>0}^{n}\), hence of real varieties. We generalize (2.5), and introduce the set
\[\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})=\pi\big{(}\mathcal{E}_{g}\cap( \mathcal{A}\times\mathcal{X})\big{)}=\{\alpha\in\mathcal{A}:\mathbb{V}_{ \mathbb{C}^{*}}(g_{\alpha})\cap\mathcal{X}\neq\varnothing\}. \tag{2.9}\]
With this notation, \(\mathcal{D}_{g,\mathcal{A}}((\mathbb{C}^{*})^{n})=\mathcal{D}_{g,\mathcal{A}}\) and we have the following inclusions:
\[\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\subseteq\mathcal{D}_{g,\mathcal{A}}, \quad\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\subseteq\mathcal{D}_{g,\mathbb{C }^{\ell}}(\mathcal{X}).\]
**Remark 2.5**.: The fact that \(\mathbb{V}_{\mathbb{C}^{*}}^{\mathcal{X}}(g_{\alpha})\) is a Zariski closed set gives that
\[\overline{\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\cap\mathcal{X}}\subseteq \mathbb{V}_{\mathbb{C}^{*}}^{\mathcal{X}}(g_{\alpha}) \tag{2.10}\]
but the reverse inclusion is not necessarily true. The inclusion (2.10) holds with equality if \(\mathcal{X}\) is a Euclidean open subset of \((\mathbb{C}^{*})^{n}\), since, in that case, \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\cap\mathcal{X}\) is, whenever nonempty, Zariski dense in \(\mathbb{V}_{\mathbb{C}^{*}}^{\mathcal{X}}(g_{\alpha})\). When \(\mathcal{X}\) is a Euclidean open subset of \((\mathbb{R}^{*})^{n}\), a sufficient condition for equality in (2.10) is that each irreducible component of \(\mathbb{V}_{\mathbb{C}^{*}}^{\mathcal{X}}(g_{\alpha})\) has a nonsingular point (see [1], [20, Thm. 6.5], [17] and [21, Thm. 12.6.1]).
Along the same lines, the lower bound for the complex dimension in (2.6) does not hold over \(\mathbb{R}\), as the next example illustrates.
**Example 2.6**.: A simple example that illustrates the previous remark is the polynomial
\[g(\alpha,x)=\alpha_{1}x_{1}^{2}-\alpha_{2}x_{1}+\alpha_{3}x_{2}^{2}-\alpha_{4}x_ {2}+\alpha_{5}\]
with \(\mathcal{X}=\mathbb{R}^{2}\) and \(\alpha\in\mathbb{R}^{5}\). For \(\alpha=(1,2,1,2,2)\), we get
\[\mathbb{V}^{\mathcal{X}}_{\mathbb{C}^{*}}(g_{\alpha})=\mathbb{V}_{\mathbb{C}^{*}}\big{(}(x_{1}-1)^{2}+(x_{2}-1)^{2}\big{)}\supsetneq\{(1,1)\}=\overline{\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\cap\mathcal{X}}.\]
Hence the real variety \(\mathbb{V}_{\mathbb{R}^{*}}(g_{\alpha})\) is zero-dimensional, while the lower bound (2.6) for \(\dim(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha}))\) is \(n-s=1\). Note that the point \((1,1)\) is singular in this case.
From Theorem 2.2 we obtain the following result on the generic dimension of \(\mathbb{V}^{\mathcal{X}}_{\mathbb{C}^{*}}(g_{\alpha})\), which is in general bounded above by the generic dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\).
**Corollary 2.7** (Generic dimension of \(\mathbb{V}^{\mathcal{X}}_{\mathbb{C}^{*}}(g_{\alpha})\)).: _Let \(g\in\mathbb{C}[\alpha,x^{\pm}]^{s}\) and \(g_{\alpha}\) as in (2.4), \(x\in(\mathbb{C}^{*})^{n}\), \(\alpha\in\mathcal{A}\subseteq\mathbb{C}^{\ell}\) with \(\mathcal{A}\) being Zariski dense. Assume that the incidence variety \(\mathcal{E}_{g}\) is irreducible and has dimension \(\ell+d\) for some \(d\geq 0\)._
_For \(\mathcal{X}\subseteq(\mathbb{C}^{*})^{n}\), if \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\subseteq\mathcal{A}\) is Zariski dense in \(\mathbb{C}^{\ell}\), then there exists a nonempty Zariski open set \(\mathcal{U}\subseteq\mathbb{C}^{\ell}\) such that \(\mathcal{U}\cap\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\neq\varnothing\) and_
\[\dim\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})=\dim\mathbb{V}^{\mathcal{X}}_{ \mathbb{C}^{*}}(g_{\alpha})=d,\quad\text{for all }\quad\alpha\in\mathcal{U}\cap\mathcal{D}_{g,\mathcal{A}}(\mathcal{X}).\]
_If \(d=n-s\), then \(\mathbb{V}^{\mathcal{X}}_{\mathbb{C}^{*}}(g_{\alpha})\) is equidimensional._
Proof.: This is an immediate consequence of Theorem 2.2(i) as \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\subseteq\mathcal{D}_{g,\mathcal{A}}\), with the set \(\mathcal{U}\) in the statement of the theorem, after noting that \(\mathcal{U}\cap\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\neq\varnothing\) as \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) is Zariski dense.
Observe that Corollary 2.7 extends only Theorem 2.2(i) to \(\mathcal{X}\). As discussed in Remark 2.5, some special care needs to be taken for certain instances of \(\mathcal{X}\).
### Zariski denseness and nonempty Euclidean interior
One of the hypotheses of Theorem 2.2 and Corollary 2.7 relies on checking that \(\mathcal{D}_{g,\mathcal{A}}\) or \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) are Zariski dense. In this subsection, we relate this hypothesis to the concept of nondegeneracy, under the assumption that \(\mathcal{A}\) has the following property.
**Definition 2.8**.: A subset \(X\subseteq\mathbb{C}^{n}\) is said to be _locally Zariski dense_ (in \(\mathbb{C}^{n}\)) if for any Euclidean open subset \(U\subseteq\mathbb{C}^{n}\) with \(U\cap X\neq\varnothing\), it holds that \(\overline{U\cap X}=\mathbb{C}^{n}\) with respect to the Zariski topology on \(\mathbb{C}^{n}\).
Simple examples of locally Zariski dense sets in \(\mathbb{C}^{n}\) include \(\mathbb{R}^{n}_{>0}\) and \(\mathbb{R}^{n}\). Note that any locally Zariski dense set is in particular Zariski dense, but the converse is not true: for instance, \(\mathbb{Z}^{n}\) is Zariski dense in \(\mathbb{C}^{n}\), but not locally Zariski dense.
**Lemma 2.9**.: _Let \(\mathcal{X}\subseteq(\mathbb{C}^{*})^{n}\) and suppose that \(\mathcal{A}\subseteq\mathbb{C}^{\ell}\) is locally Zariski dense. If \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) has nonempty Euclidean interior in \(\mathcal{A}\), then \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) is Zariski dense in \(\mathbb{C}^{\ell}\)._
Proof.: By hypothesis, there exists an open (Euclidean) ball \(B\subseteq\mathbb{C}^{\ell}\) such that \(\varnothing\neq B\cap\mathcal{A}\subseteq\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\). In particular, by local Zariski denseness of \(\mathcal{A}\), the Zariski closures satisfy \(\mathbb{C}^{\ell}=\overline{B\cap\mathcal{A}}\subseteq\overline{\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})}\). Hence \(\mathbb{C}^{\ell}=\overline{\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})}\).
**Example 2.10**.: The set \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) might be nonempty but not (Euclidean) dense in \(\mathcal{A}\). Consider the single parametric polynomial
\[g(\alpha,x)=\alpha_{1}x-\alpha_{2}x+\alpha_{3}\]
with \(n=1\), \(\ell=3\), \(\mathcal{X}=\mathbb{R}_{>0}\) and \(\mathcal{A}=\mathbb{R}_{>0}^{3}\). Then
\[\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})=\{\alpha\in\mathbb{R}_{>0}^{3}:\alpha _{1}<\alpha_{2}\}.\]
Indeed, for \(\alpha\in\mathbb{R}_{>0}^{3}\) with \(\alpha_{1}\neq\alpha_{2}\), the equation \(g_{\alpha}(x)=0\) has the unique solution \(x=\alpha_{3}/(\alpha_{2}-\alpha_{1})\), which is positive precisely when \(\alpha_{1}<\alpha_{2}\). We have that \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) is full dimensional in \(\mathbb{R}_{>0}^{3}\) and so is its complement. As \(\mathbb{R}_{>0}^{3}\) is locally Zariski dense in \(\mathbb{C}^{3}\) and \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) has nonempty Euclidean interior in \(\mathbb{R}_{>0}^{3}\), by Lemma 2.9 we obtain that \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) is Zariski dense and hence we are in the setting of Corollary 2.7.
The next results show that for most interesting sets, the converse of Lemma 2.9 also holds, that is, being Zariski dense and having nonempty Euclidean interior are equivalent.
**Lemma 2.11**.:
1. _A constructible set_ \(S\subseteq\mathbb{C}^{n}\) _is Zariski dense in_ \(\mathbb{C}^{n}\) _if and only if it has nonempty Euclidean interior in_ \(\mathbb{C}^{n}\)_._
2. _A semialgebraic set_ \(S\subseteq\mathbb{R}^{n}\) _is Zariski dense in_ \(\mathbb{C}^{n}\) _if and only if it has nonempty Euclidean interior in_ \(\mathbb{R}^{n}\)_._
Proof.: For part (i), note that \(S\) being constructible means that it can be written in the form \(S=\bigcup_{i=1}^{m}Z_{i}\cap U_{i}\), for irreducible Zariski closed sets \(Z_{i}\subseteq\mathbb{C}^{n}\) and Zariski open subsets \(U_{i}\subseteq\mathbb{C}^{n}\), with \(Z_{i}\cap U_{i}\neq\varnothing\) for each \(i=1,\ldots,m\). The statement now follows from the fact that the Zariski closure satisfies \(\overline{S}=\bigcup_{i=1}^{m}Z_{i}\). In case (ii), note that [1, Prop. 2.8.2] gives that \(S\) is Zariski dense in \(\mathbb{C}^{n}\) if and only if its semialgebraic dimension is \(n\), which in turn is equivalent to \(S\) having nonempty Euclidean interior in \(\mathbb{R}^{n}\).
We now apply Lemma 2.11 to \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\).
**Proposition 2.12**.: _Assume that \(\mathcal{A}\) and \(\mathcal{X}\) satisfy any of the following cases:_
1. \(\mathcal{A}\subseteq\mathbb{R}^{\ell}\) _and_ \(\mathcal{X}\subseteq(\mathbb{R}^{*})^{n}\) _are both semialgebraic._
2. \(\mathcal{A}\subseteq\mathbb{C}^{\ell}\) _and_ \(\mathcal{X}\subseteq(\mathbb{C}^{*})^{n}\) _are both constructible._
3. \(\mathcal{A}\subseteq\mathbb{R}^{\ell}\) _is semialgebraic and_ \(\mathcal{X}\subseteq(\mathbb{C}^{*})^{n}\) _is constructible._
_Assume additionally that \(\mathcal{A}\) is locally Zariski dense in \(\mathbb{C}^{\ell}\). Then \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) is Zariski dense in \(\mathbb{C}^{\ell}\) if and only if it has nonempty Euclidean interior in \(\mathcal{A}\)._
Proof.: The reverse implication is Lemma 2.9. The forward implication is a consequence of Lemma 2.11. Note that if \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) has nonempty Euclidean interior in \(\mathbb{R}^{\ell}\) in cases (i) and (iii) and in \(\mathbb{C}^{\ell}\) in case (ii), then it has nonempty Euclidean interior in \(\mathcal{A}\). In case (i), the Tarski-Seidenberg theorem [1, Thm. 2.2.1] gives that \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\subseteq\mathbb{R}^{\ell}\) is semialgebraic. In case (ii) the set \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\subseteq\mathbb{C}^{\ell}\) is constructible by Chevalley's theorem. Finally, in case (iii), Chevalley's theorem gives that \(\mathcal{D}_{g,\mathbb{C}^{\ell}}(\mathcal{X})\subseteq\mathbb{C}^{\ell}\) is constructible. This in turn implies that
\[\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})=\mathcal{D}_{g,\mathbb{C}^{\ell}}( \mathcal{X})\cap\mathcal{A}=(\mathcal{D}_{g,\mathbb{C}^{\ell}}(\mathcal{X}) \cap\mathbb{R}^{\ell})\cap\mathcal{A}\]
is an intersection of semialgebraic sets and therefore semialgebraic (note that the real points of a constructible set in \(\mathbb{C}^{\ell}\) form a semialgebraic set in \(\mathbb{R}^{\ell}\)).
### Nondegeneracy and real dimension
An approach to decide whether \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) has nonempty Euclidean interior is to use the concept of _nondegeneracy_. In turn, nondegenerate solutions will allow us to compare the real and complex dimension of the varieties in an easier way.
**Definition 2.13**.: Given a polynomial system \(g(x)=0\) with \(g=(g_{1},\ldots,g_{s})\in\mathbb{C}[x^{\pm}]^{s}\), and \(x=(x_{1},\ldots,x_{n})\), a solution \(x^{*}\) is called _nondegenerate_ if \(\operatorname{rk}(J_{g}(x^{*}))=s\). Otherwise, it is called _degenerate_.
Note that the concept of nondegeneracy refers to the system and not to the variety defined by the system. Note also that if the coefficient matrix of \(g\) does not have full rank \(s\), then all solutions to the system will be degenerate. Hence, a natural preprocessing step of any system \(g(x)=0\) is to remove redundant equations by keeping a maximal set of linearly independent equations. In particular, we could assume that the rank of the coefficient matrix of \(g\) is \(s\).
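As an illustration of Definition 2.13, the following is a minimal sketch (in Python with SymPy; the system and the candidate point are ad hoc toy choices, not taken from this paper) of how nondegeneracy of a given solution can be checked by computing the rank of the Jacobian at that point.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
g = sp.Matrix([x1*x2 - 1, x1 - x2])            # s = 2 polynomials in n = 2 variables

def is_nondegenerate(g, variables, point):
    """A solution is nondegenerate if the Jacobian of g has rank s there."""
    J = g.jacobian(sp.Matrix(variables))
    Jp = J.subs(dict(zip(variables, point)))
    return Jp.rank() == g.shape[0]

print(is_nondegenerate(g, [x1, x2], (1, 1)))   # True: J(1,1) = [[1, 1], [1, -1]] has rank 2
```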
The following proposition gathers well-known results about nondegenerate solutions, which will become relevant later on.
**Proposition 2.14**.: _Let \(g=(g_{1},\ldots,g_{s})\in\mathbb{C}[x_{1}^{\pm},\ldots,x_{n}^{\pm}]^{s}\)._
1. _If_ \(x^{*}\in(\mathbb{C}^{*})^{n}\) _is a nondegenerate solution to_ \(g(x)=0\)_, then_ \(x^{*}\) _is a nonsingular point of the variety_ \(\mathbb{V}_{\mathbb{C}^{*}}(g)\)_, and belongs to a unique irreducible component of dimension_ \(n-s\)_._
2. _The set of nondegenerate solutions of_ \(g(x)=0\) _is either empty or a Zariski open subset of_ \(\mathbb{V}_{\mathbb{C}^{*}}(g)\)_._
3. _If_ \(g\in\mathbb{R}[x_{1}^{\pm},\ldots,x_{n}^{\pm}]^{s}\) _and_ \(x^{*}\) _is a nondegenerate real solution to_ \(g(x)=0\)_, then the irreducible component of_ \(\mathbb{V}_{\mathbb{R}^{*}}(g)\) _containing_ \(x^{*}\) _has dimension_ \(n-s\)_._
Proof.: Statement (i) is Theorem 9 in [15, Section 9.6]. For statement (ii), note that the degenerate locus is given by the vanishing of all \(s\times s\) minors of \(J_{g}(x)\) for \(x\in\mathbb{V}_{\mathbb{C}^{*}}(g)\), and therefore is Zariski closed. For (iii), consider (i) and Remark 2.5.
We note that nonsingular points of a variety are not necessarily nondegenerate solutions of a defining polynomial set.
**Lemma 2.15**.: _Let \(g\in\mathbb{C}[\alpha,x^{\pm}]^{s}\), and consider the specialized polynomials \(g_{\alpha}\) for \(\alpha\in\mathcal{A}\) for some subset \(\mathcal{A}\subseteq\mathbb{C}^{\ell}\). Suppose that there is some parameter value \(\alpha^{*}\in\mathcal{A}\) such that \(g_{\alpha^{*}}(x)=0\) has a nondegenerate solution in \((\mathbb{C}^{*})^{n}\). Then the following holds:_
1. \(\mathcal{D}_{g,\mathcal{A}}\) _has nonempty Euclidean interior in_ \(\mathcal{A}\)_._
2. _If in addition_ \(\mathcal{A}\) _is locally Zariski dense in_ \(\mathbb{C}^{\ell}\)_, then_ \(\mathcal{D}_{g,\mathcal{A}}\) _is Zariski dense in_ \(\mathbb{C}^{\ell}\)_._
Proof.: For part (i), suppose that \((\alpha^{*},x^{*})\in\mathcal{A}\times(\mathbb{C}^{*})^{n}\) is such that \(\operatorname{rk}(J_{g_{\alpha^{*}}}(x^{*}))=s\). Let \(A\in\mathbb{C}^{(n-s)\times n}\) be a matrix whose rows extend the rows of \(J_{g_{\alpha^{*}}}(x^{*})\) to a basis for \(\mathbb{C}^{n}\). Then \(x^{*}\) is a nondegenerate solution of the square system \(\tilde{g}_{\alpha^{*}}(x)=0\), where
\[\tilde{g}(\alpha^{*},x)=\begin{bmatrix}g(\alpha^{*},x)\\ Ax-Ax^{*}\end{bmatrix}\in\mathbb{C}[\alpha,x^{\pm}]^{n}.\]
The complex implicit function theorem [10, Prop. 1.1.11] now gives that there exists an open Euclidean neighborhood of \(\alpha^{*}\) contained in \(\mathcal{D}_{g,\mathbb{C}^{\ell}}\). Intersecting this open set with \(\mathcal{A}\), we obtain the statement. Part (ii) is immediate from (i) and Lemma 2.9.
**Corollary 2.16**.: _Let \(g\in\mathbb{C}[\alpha,x^{\pm}]^{s}\) and \(g_{\alpha}\) be as in (2.4). Let \(\mathcal{A}\subseteq\mathbb{C}^{\ell}\) be locally Zariski dense. Assume that the incidence variety \(\mathcal{E}_{g}\) is irreducible of dimension \(n+\ell-s\), and that \(g_{\alpha^{*}}(x)=0\) has a nondegenerate solution in \((\mathbb{C}^{*})^{n}\) for some \(\alpha^{*}\in\mathcal{A}\)._
_Then \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) is generically equidimensional of dimension \(n-s\), that is, this holds for all \(\alpha\) in a nonempty Zariski open subset of \(\mathcal{D}_{g,\mathcal{A}}\)._
_If in addition \(\mathcal{A}\subseteq\mathbb{R}^{\ell}\), then \(\mathbb{V}_{\mathbb{R}^{*}}(g_{\alpha})\) is generically of dimension \(n-s\)._
Proof.: Lemma 2.15 gives that \(\mathcal{D}_{g,\mathcal{A}}\) has nonempty Euclidean interior in \(\mathcal{A}\) and by Lemma 2.9, it is Zariski dense in \(\mathbb{C}^{\ell}\). As \(\mathcal{E}_{g}\) is irreducible, Theorem 2.2 gives that \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) has dimension \(\dim(\mathcal{E}_{g})-\ell=n-s\) for generic \(\alpha\in\mathcal{D}_{g,\mathcal{A}}\). For the last statement, we use Proposition 2.14(iii).
Note that in the setting of Corollary 2.16, the real dimension being \(n-s\) does not directly imply equidimensionality.
Corollary 2.16 gives a condition for the generic dimension of \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) to be the smallest possible, namely \(n-s\). This condition is not satisfied in Example 2.3, where there are no nondegenerate solutions. Consistently, we saw that the generic dimension was strictly higher than \(n-s\) in that case.
In the next section we introduce three families of polynomial systems where \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) has nonempty Euclidean interior if and only if there exists a nondegenerate solution, and if and only if the generic dimension is the minimal, namely \(n-s\). This phenomenon does not need to happen for general families, as Example 2.3 shows.
We conclude this subsection by noting that in certain scenarios within the setting of Theorem 2.2, the system \(g_{\alpha}(x)=0\) has only nondegenerate solutions for all \(\alpha\) in a nonempty Zariski open subset of \(\mathcal{A}\).
**Theorem 2.17**.: _Let \(g\in\mathbb{C}[\alpha,x^{\pm}]^{s}\) and \(g_{\alpha}\) as in (2.4), for parameters \(\alpha\in\mathcal{A}\), with \(\mathcal{A}\subseteq\mathbb{C}^{\ell}\) being Zariski dense. Assume that the incidence variety \(\mathcal{E}_{g}\) is irreducible of dimension \(\ell\) and that there exists a nondegenerate solution to \(g_{\alpha}(x)=0\) for some \(\alpha\in\mathcal{A}\). Then there exists a nonempty Zariski open subset \(\mathcal{U}\subseteq\mathcal{A}\) such that for all \(\alpha\in\mathcal{U}\), any solution to \(g_{\alpha}(x)=0\) is nondegenerate._
Proof.: The points \((\alpha,x)\in\mathcal{E}_{g}\) for which \(x\) is a degenerate solution of \(g_{\alpha}\) form a proper Zariski closed subset \(Z\) of \(\mathcal{E}_{g}\), as they are the zeroes of a collection of polynomial equations and there exists a nondegenerate solution. As \(\dim(\mathcal{E}_{g})=\ell\), \(\dim(Z)<\ell\), and hence \(\overline{\pi(Z)}\subseteq\mathbb{C}^{\ell}\) has dimension strictly smaller than \(\ell\). For any \(\alpha\) in the nonempty Zariski open set \(U:=\mathbb{C}^{\ell}\setminus\overline{\pi(Z)}\), all solutions to \(g_{\alpha}(x)=0\) are nondegenerate. As \(\mathcal{A}\) is Zariski dense, the Zariski open set \(\mathcal{U}\) in the statement is \(U\cap\mathcal{A}\neq\varnothing\).
## 3. Generic dimension for the three parametric systems
We consider now three polynomial systems of a specific form, motivated by the application to the study of steady states of reaction networks. These systems are built from parametric systems that are linear in the parameters, and, in addition, each parameter always accompanies the same monomial in \(x\).
### The parametric systems
The first parametric system we consider takes the form
\[f(\kappa,x)=N(\kappa\circ x^{B}),\quad x\in\mathcal{X}\subseteq(\mathbb{C}^{ *})^{n},\quad\kappa\in\mathcal{K}\subseteq\mathbb{C}^{r}, \tag{3.1}\]
with \(N\in\mathbb{C}^{s\times r}\) of full rank \(\operatorname{rk}(N)=s\), and \(B\in\mathbb{Z}^{n\times r}\). We let \(f_{\kappa}\) denote the specialization of \(f\) to a chosen value of \(\kappa\). In the notation of the previous section, \(\mathcal{A}=\mathcal{K}\) and \(\ell=r\).
For a vector subspace \(S\subseteq\mathbb{C}^{n}\) of dimension \(s\), we are also interested in the intersection of \(\mathbb{V}_{\mathbb{C}^{*}}(f_{\kappa})\cap\mathcal{X}\) with parallel translates of \(S\). To this end, consider any (full rank) matrix \(W\in\mathbb{C}^{d\times n}\) such that \(S=\ker(W)\) and \(d=n-s\). Then we consider the family of linear varieties parametrized by \(c=(c_{1},\ldots,c_{d})\in\mathcal{C}\subseteq\mathbb{C}^{d}\) as the solutions to the polynomial family
\[Wx-c=0.\]
The intersections of the parallel translates of \(S\) with \(\mathbb{V}_{\mathbb{C}^{*}}(f_{\kappa})\) are the solutions to the extended polynomial system \(F_{\kappa,c}(x)=0\) with
\[F(\kappa,c,x)=\begin{bmatrix}f(\kappa,x)\\ Wx-c\end{bmatrix},\quad x\in\mathcal{X}\subseteq(\mathbb{C}^{*})^{n},\quad\kappa \in\mathcal{K}\subseteq\mathbb{C}^{r},\quad c\in\mathcal{C}\subseteq\mathbb{ C}^{d}. \tag{3.2}\]
For this second parametrized system, the parameter vector is \(\alpha=(\kappa,c)\) and the parameter space becomes \(\mathcal{A}=\mathcal{K}\times\mathcal{C}\subseteq\mathbb{C}^{r}\times \mathbb{C}^{d}\) and \(\ell=r+d\).
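Continuing the illustrative sketch from the Introduction (with the same caveat that all matrices are hypothetical placeholders), the extended system \(F_{\kappa,c}\) is obtained by simply stacking \(f_{\kappa}(x)\) on top of the affine-linear equations \(Wx-c=0\):

```python
import numpy as np

def F_kappa_c(N, B, W, kappa, c, x):
    """Evaluate the extended system F_{kappa,c}(x) = (f_kappa(x), W x - c)."""
    xB = np.prod(x[:, None] ** B, axis=0)              # vector of monomials x^B
    return np.concatenate([N @ (kappa * xB), W @ x - c])
```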
Note that if \(\mathcal{C}\cap W(\mathcal{X})=\varnothing\), then the system \(F_{\kappa,c}(x)=0\) does not have a solution in \(\mathcal{X}\subseteq\mathbb{C}^{n}\). Since \(W\) has full rank \(d\), it defines a continuous and surjective map. If \(\mathcal{X}\subseteq\mathbb{C}^{n}\) is locally Zariski dense, then so is \(W(\mathcal{X})\), and in particular,
\[\overline{W(\mathcal{X})}=\mathbb{C}^{d}.\]
Finally, the third parametric system we consider is given by the restriction of \(F\) to a value of \(c\):
\[F_{\cdot,c}(\kappa,x)=F(\kappa,c,x). \tag{3.3}\]
### The incidence varieties
We now consider the incidence varieties (2.2) for the polynomial functions \(f\), \(F\) and \(F_{\cdot,c}\), and derive some basic facts about their geometry.
**Proposition 3.1**.: _Let \(N\in\mathbb{C}^{s\times r}\) of rank \(s\), \(W\in\mathbb{C}^{(n-s)\times n}\) of rank \(d=n-s\), and \(B\in\mathbb{Z}^{n\times r}\). Construct \(f\), \(F\) and \(F_{\cdot,c}\) as in (3.1), (3.2) and (3.3) for \(\kappa\in\mathbb{C}^{r}\) and \(c\in\mathbb{C}^{d}\)._
1. _The incidence varieties_ \(\mathcal{E}_{f}\subseteq\mathbb{C}^{r}\times(\mathbb{C}^{*})^{n}\) _and_ \(\mathcal{E}_{F}\subseteq\mathbb{C}^{r+d}\times(\mathbb{C}^{*})^{n}\) _admit injective rational parametrizations_ \[\mathbb{C}^{r-s}\times(\mathbb{C}^{*})^{n}\to\mathcal{E}_{f},\quad\mathbb{C}^ {r-s}\times(\mathbb{C}^{*})^{n}\to\mathcal{E}_{F}.\] _Hence,_ \(\mathcal{E}_{f}\) _and_ \(\mathcal{E}_{F}\) _are irreducible varieties of dimension_ \(r+d\)_._
2. _For any_ \(c\in\mathbb{C}^{d}\)_, the incidence variety_ \(\mathcal{E}_{F_{\cdot,c}}\subseteq\mathbb{C}^{r}\times(\mathbb{C}^{*})^{n}\) _admits an injective rational parametrization_ \[\mathbb{C}^{r-n}\times(\mathbb{C}^{*})^{n}\to\mathcal{E}_{F_{\cdot,c}}.\] _Hence,_ \(\mathcal{E}_{F_{\cdot,c}}\) _is an irreducible variety of dimension_ \(r\)_._
3. _The incidence varieties_ \(\mathcal{E}_{f}\)_,_ \(\mathcal{E}_{F}\) _and_ \(\mathcal{E}_{F_{\cdot,c}}\) _(for any_ \(c\in\mathbb{C}^{d}\)_) have no singular points._
Proof.: Viewing \(f\) as a linear function in \(\kappa_{1},\ldots,\kappa_{r}\), the coefficient matrix is \(N\operatorname{diag}(x^{B})\). As \(x\in(\mathbb{C}^{*})^{n}\), the rank of the coefficient matrix is \(\operatorname{rk}(N)=s\). Hence, the equation \(f(\kappa,x)=0\) can be solved for \(s\) of the \(\kappa_{i}\)'s, giving a parametrization \(\varphi(\kappa^{\prime},x)\) of the solution set in terms of the \(r-s\) remaining \(\kappa_{i}\)'s (forming the vector \(\kappa^{\prime}\in\mathbb{C}^{r-s}\)) and the entries of \(x\).
Similarly, as the first \(s\) components of \(F(\kappa,c,x)\) agree with \(f\), we consider the parametrization \(\varphi(\kappa^{\prime},x)\) above. The last \(d\) components are linear in \(x\), with coefficient matrix \(W\) of rank \(d\). Hence, \(d\) of the \(x_{i}\)'s can be solved in terms of the rest of the \(x_{i}\)'s and \(c\), yielding a function \(\psi(x^{\prime},c)\). The desired parametrization is \(\varphi(\kappa^{\prime},\psi(x^{\prime},c))\), which has \(r-s+n-d+d=r+d\) free parameters.
Finally, the parametrization of \(\mathcal{E}_{F_{,c}}\) arises by specializing \(\varphi(\kappa^{\prime},\psi(x^{\prime},c))\) to the chosen \(c\), and hence has \(r+d-d=r\) free parameters.
To prove (iii) it suffices to show that all points in the variety are nondegenerate solutions of the equations defining the variety as in (2.2). To this end, we find a minor of each of the Jacobian matrices \(J_{f}(\kappa,x)\), \(J_{F}(\kappa,c,x)\), \(J_{F_{,c}}(\kappa,x)\) with maximal rank. For \(f\), the submatrix arising from the partial derivatives of \(f\) with respect to \(\kappa\) is \(N\operatorname{diag}(x^{B})\), which has maximal rank \(s\). For \(F\) and \(F_{,c}\), the Jacobian matrix has block form
\[\begin{bmatrix}N\operatorname{diag}(x^{B})&*\\ 0&W\end{bmatrix},\]
where \(0\) denotes the zero matrix of size \(d\times r\). As \(N\operatorname{diag}(x^{B})\) and \(W\) have maximal rank, so has the Jacobian matrix.
Proposition 3.1 tells us that the hypothesis of Theorem 2.2 on the incidence variety being irreducible holds for the three parametrized systems considered here. Therefore, for each of the varieties, the minimal dimension is attained if scenario (i) of Theorem 2.2 holds.
**Corollary 3.2**.: _Let \(g=F\) as in (3.2), \(g=F_{\cdot,c}\) as in (3.3), or \(g=f\) as in (3.1) with additionally \(s=n\). If there exists a nondegenerate solution to \(g_{\alpha^{*}}(x)=0\) for some parameter value \(\alpha^{*}\), then, for \(\alpha\) in a nonempty Zariski open subset of the respective parameter spaces, all solutions to \(g_{\alpha}(x)=0\) are nondegenerate._
Proof.: This is a consequence of Theorem 2.17 and Proposition 3.1.
### Nondegenerate solutions and nonempty Euclidean interior
By Corollary 2.16, if any of the polynomial functions \(g\) in play have a nondegenerate solution, then \(\mathcal{D}_{g,\mathcal{A}}\) has nonempty Euclidean interior and the complex algebraic varieties have the minimal dimension given by the number of equations. We will see next that the converse also holds for the three families \(g\) under consideration, that is, if \(\mathcal{D}_{g,\mathcal{A}}\) has nonempty Euclidean interior for \(\mathcal{A}\) the largest possible parameter space (in each case), then necessarily the system has a nondegenerate solution.
We start with a description of the sets \(\mathcal{D}_{f,\mathcal{K}}(\mathcal{X})\), \(\mathcal{D}_{F_{\cdot,c},\mathcal{K}}(\mathcal{X})\) and of \(\mathcal{D}_{F,\mathcal{A}}(\mathcal{X})\) with \(\mathcal{A}\subseteq\mathbb{C}^{r}\times\mathbb{C}^{d}\). We then proceed to study nondegenerate solutions of the systems given by these families and conclude with the main theorem relating nondegenerate solutions to the Euclidean interior of \(\mathcal{D}_{g,\mathcal{A}}\), for \(\mathcal{A}\) being the maximal complex subspace of parameters under consideration.
**Proposition 3.3**.: _With \(f,F,F_{\cdot,c}\) as in (3.1), (3.2) and (3.3), let \(\mathbb{M}\) be a multiplicative subgroup of \(\mathbb{C}^{*}\), \(\mathcal{K}\in\{\mathbb{M}^{r},(\mathbb{M}\cup\{0\})^{r}\}\), and \(\mathcal{X}\subseteq\mathbb{M}^{n}\) a multiplicative subgroup. Let \(\mathcal{A}\subseteq\mathcal{K}\times\mathbb{C}^{d}\). Then it holds that_
\[\mathcal{D}_{f,\mathcal{K}}(\mathcal{X}) =\{w\circ h^{B}:w\in\ker(N)\cap\mathcal{K},\ h\in\mathcal{X}\},\] \[\mathcal{D}_{F_{\cdot,c},\mathcal{K}}(\mathcal{X}) =\{w\circ h^{B}:w\in\ker(N)\cap\mathcal{K},\ h\in\mathcal{X},\ c=Wh^{-1}\},\] \[\mathcal{D}_{F,\mathcal{A}}(\mathcal{X}) =\{(w\circ h^{B},Wh^{-1}):w\in\ker(N)\cap\mathcal{K},\ h\in\mathcal{X}\}.\]
Proof.: We note that \(\kappa\in\mathcal{D}_{f,\mathcal{K}}(\mathcal{X})\) if and only if \(\kappa\circ x^{B}\in\ker(N)\cap\mathcal{K}\) for some \(x\in\mathcal{X}\). This holds if and only if \(\kappa=w\circ(x^{-1})^{B}\) for some \(w\in\ker(N)\cap\mathcal{K}\) and \(x\in\mathcal{X}\). By letting \(h=x^{-1}\), which again lies in \(\mathcal{X}\) as \(\mathcal{X}\) is a multiplicative subgroup, we obtain the desired equality. Similarly, for \(F\) and \(F_{\cdot,c}\), all we need is to include the extra equation \(c=Wx\), which translates into \(c=Wh^{-1}\) as \(h=x^{-1}\).
**Proposition 3.4**.: _With \(f,F,F_{\cdot,c}\) as in (3.1), (3.2) and (3.3), let \(\mathbb{M}\) be a multiplicative subgroup of \(\mathbb{C}^{*}\), \(\mathcal{K}\in\{\mathbb{M}^{r},(\mathbb{M}\cup\{0\})^{r}\}\), and \(\mathcal{X}\subseteq\mathbb{M}^{n}\) a multiplicative subgroup. Let \(\mathcal{A}\subseteq\mathcal{K}\times\mathbb{C}^{d}\). Then the following holds:_
1. _There exists a nondegenerate solution of_ \(f_{\kappa}(x)=0\) _in_ \(\mathcal{X}\) _for some_ \(\kappa\in\mathcal{K}\) _if and only if_ \(N\operatorname{diag}(w)B^{\top}\in\mathbb{C}^{s\times n}\) _has rank_ \(s\) _for some_ \(w\in\ker(N)\cap\mathcal{K}\)_._
2. _There exists a nondegenerate solution of_ \(F_{\kappa,c}(x)=0\) _in_ \(\mathcal{X}\) _for some_ \((\kappa,c)\in\mathcal{A}\) _(resp. for some_ \(\kappa\in\mathcal{K}\)_,_ \(c\) _fixed) if and only if the matrix_ \[\left[\begin{array}{c}N\operatorname{diag}(w)B^{\top}\operatorname{diag}(h) \\ W\end{array}\right]\in\mathbb{C}^{n\times n}\] _has rank_ \(n\) _for some_ \(w\in\ker(N)\cap\mathcal{K}\) _and some_ \(h\in\mathcal{X}\) _(resp. some_ \(h\in\mathcal{X}\) _such that_ \(c=Wh^{-1}\)_)._
Proof.: An easy computation shows that
\[J_{f_{\kappa}}(x)=N\operatorname{diag}(\kappa\circ x^{B})B^{\top}\operatorname {diag}(x^{-1}).\]
Statement (i) follows directly from this, by noting that the set of vectors \(\kappa\circ x^{B}\) for which \(f(\kappa,x)=0\), \(\kappa\in\mathcal{K}\) and \(x\in\mathcal{X}\) is exactly \(\ker(N)\cap\mathcal{K}\) as \((1,\ldots,1)\in\mathcal{X}\). Statement (ii) follows similarly by noting that
\[J_{F_{\kappa,c}}(x)=\begin{bmatrix}N\operatorname{diag}(\kappa\circ x^{B})B ^{\top}\operatorname{diag}(x^{-1})\\ W\end{bmatrix}\]
and letting \(h=x^{-1}\).
**Remark 3.5**.: Suppose \(\mathbb{M}\subseteq\mathbb{C}^{*}\) is a multiplicative subgroup that is Zariski dense in \(\mathbb{C}\) (for instance \(\mathbb{R}_{>0}\) or \(\mathbb{R}^{*}\)). If in addition \(\ker(N)\cap\mathbb{M}^{r}\neq\varnothing\), then \(\ker(N)\cap\mathbb{M}^{r}\) is Zariski dense in \(\ker(N)\). As a consequence, (i) in Proposition 3.4 is equivalent to the existence of some \(w\in\ker(N)\) such that \(\operatorname{rk}(N\operatorname{diag}(w)B^{\top})=s\). Similarly, (ii) is equivalent to the existence of some \(w\in\ker(N)\) and \(h\in\mathcal{X}\) such that
\[\operatorname{rk}\left[\begin{array}{c}N\operatorname{diag}(w)B^{\top} \operatorname{diag}(h)\\ W\end{array}\right]=n.\]
This makes it significantly easier to verify the conditions in (i) and (ii) computationally in concrete examples, as we will see in Subsection 3.5.
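As a computational illustration of Remark 3.5, the following is a minimal sketch (in Python with NumPy/SciPy; the matrices \(N\), \(B\) and \(W\) are hypothetical placeholders, not taken from this paper) of how the rank conditions in Proposition 3.4 can be tested: draw a random \(w\in\ker(N)\) (and a random \(h\)) and compute the relevant rank numerically. Since the rank conditions are Zariski open in \(w\) and \(h\), a single random choice attains the generic rank with probability one, up to floating-point tolerance.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

def rank_condition(N, B, W=None):
    """Test condition (i) of Proposition 3.4 (rank s), or condition (ii) if W is given (rank n)."""
    s = N.shape[0]
    n = B.shape[0]
    G = null_space(N)                              # columns span ker(N)
    w = G @ rng.standard_normal(G.shape[1])        # a random element of ker(N)
    M = N @ np.diag(w) @ B.T                       # s x n matrix N diag(w) B^T
    if W is None:
        return np.linalg.matrix_rank(M) == s       # condition (i)
    h = rng.standard_normal(n)
    stacked = np.vstack([M @ np.diag(h), W])       # n x n matrix [N diag(w) B^T diag(h); W]
    return np.linalg.matrix_rank(stacked) == n     # condition (ii)

# Hypothetical data: s = 2, r = 4, n = 3, d = 1.
N = np.array([[1., -1., 0., 0.],
              [0., 1., -1., -1.]])
B = np.array([[1, 0, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
W = np.array([[1., 1., 1.]])
print(rank_condition(N, B), rank_condition(N, B, W))   # True True for this choice
```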
We now use these propositions to show the main result of this subsection. The statement is given in the full setting where \(\mathcal{X}=(\mathbb{C}^{*})^{n}\) and the parameter space is the maximal complex space considered in each situation. We will see later on that the statement also holds for common sets \(\mathcal{A},\mathcal{X}\) (Theorem 3.7).
**Theorem 3.6**.: _Consider \(f\) as in (3.1) and \(F\) as in (3.2). Let \((g,\mathcal{A})\) be either_
1. \((f,\mathbb{C}^{r})\)_,_
2. \((F,\mathbb{C}^{r}\times\mathbb{C}^{d})\)_, or_
3. \((F_{\cdot,c},\mathbb{C}^{r})\) _for some fixed_ \(c\)_._
_For each of these cases, it holds that, under the assumption that \(\mathcal{D}_{g,\mathcal{A}}\neq\varnothing\), all solutions \(x^{*}\in(\mathbb{C}^{*})^{n}\) of \(g_{\alpha}(x)=0\) for all \(\alpha\in\mathcal{D}_{g,\mathcal{A}}\) are degenerate if and only if \(\mathcal{D}_{g,\mathcal{A}}\) has empty Euclidean interior in \(\mathcal{A}\)._
Proof.: For the three polynomial systems, the reverse implication follows from Lemma 2.15.
For the forward implication, we will use Propositions 3.3 and 3.4 with \(\mathbb{M}=\mathbb{C}^{*}\) and \(\mathcal{X}=(\mathbb{C}^{*})^{n}\). Let us start with (i). We assume that all solutions \(x^{*}\in(\mathbb{C}^{*})^{n}\) of \(f_{\kappa}(x)=0\) are degenerate. Proposition 3.3 gives that \(\mathcal{D}_{f,\mathcal{K}}(\mathcal{X})\) is contained in the Zariski closure in \(\mathbb{C}^{r}\) of the image of the map
\[\varphi\colon\mathbb{C}^{r-s}\times(\mathbb{C}^{*})^{n}\to\mathbb{C}^{r}, \quad(u,h)\mapsto Gu\circ h^{B}, \tag{3.4}\]
where \(G\in\mathbb{C}^{r\times(r-s)}\) is any matrix whose columns form a basis for \(\ker(N)\) (recall we are assuming \(N\) has full rank). We show now that \(\overline{\operatorname{im}(\varphi)}\) has dimension strictly less than \(r\), by proving that the rank of the Jacobian of \(\varphi\),
\[J_{\varphi}(u,h)=\left[\ \operatorname{diag}(h^{B})G\mid\operatorname{diag}( Gu\circ h^{B})B^{\top}\operatorname{diag}(h^{-1})\,\right]\in\mathbb{C}^{r \times(r-s+n)}, \tag{3.5}\]
is strictly smaller than \(r\) for all \((u,h)\). Specifically, we will do this by exhibiting more than \(n-s\) linearly independent vectors in \(\ker(J_{\varphi}(u,h))\). The rank of \(J_{\varphi}(u,h)\) agrees with the rank of
\[C(u,h)=\left[\ \operatorname{diag}(h^{B})G\mid\operatorname{diag}(Gu\circ h^{B})B ^{\top}\,\right]\in\mathbb{C}^{r\times(r+n-s)}\]
so we focus on this matrix instead. The first block of \(C(u,h)\) has \(r-s\) columns and the second block has \(n\) columns.
We can immediately find \(n-\operatorname{rk}(B)\) linearly independent vectors \(\delta_{j}\in\mathbb{C}^{n}\) for \(j=1,\ldots,n-\operatorname{rk}(B)\) from \(\ker(B^{\top})\). Then for each \(j\), \((0,\delta_{j})\in\mathbb{C}^{r-s+n}\) belongs to \(\ker(C(u,h))\). If \(\operatorname{rk}(B)<s\), we are done. If not, then our assumption that all solutions are degenerate gives us that \(\operatorname{rk}(N\operatorname{diag}(w)B^{\top})<s\) for all \(w\in\ker(N)\cap(\mathbb{C}^{*})^{r}\) by Proposition 3.4(i). This in turn implies that \(\operatorname{rk}(N\operatorname{diag}(w)B^{\top})<s\) for all \(w\in\ker(N)\) as \((\mathbb{C}^{*})^{r}\) is Zariski dense in \(\mathbb{C}^{r}\). Hence,
\[\begin{aligned} n-s&<\dim(\ker(N\operatorname{diag}(Gu)B^{\top}))\\ &=\dim(\ker(B^{\top}))+\dim(\ker(N)\cap\operatorname{im}(\operatorname{diag}(Gu)B^{\top}))\\ &=n-\operatorname{rk}(B)+\dim(\operatorname{im}(G)\cap\operatorname{im}(\operatorname{diag}(Gu)B^{\top})),\end{aligned}\]
for any \(u\in\mathbb{C}^{r-s}\). From this we conclude that there are \(p>\operatorname{rk}(B)-s\geq 0\) linearly independent vectors
\[\gamma_{i}\in\operatorname{im}(G)\cap\operatorname{im}(\operatorname{diag}( Gu)B^{\top}),\quad\text{for $i=1,\ldots,p$}.\]
For each \(i\), \(\gamma_{i}=G\alpha_{i}=\operatorname{diag}(Gu)B^{\top}\beta_{i}\) for some \(\alpha_{i}\in\mathbb{C}^{r-s}\), \(\beta_{i}\in\mathbb{C}^{n}\). Now, for \(i=1,\ldots,p\), \((-\alpha_{i},\beta_{i})\in\ker(C(u,h))\) by construction.
All that is left is to see that the collection of vectors \((0,\delta_{j})\) and \((-\alpha_{i},\beta_{i})\) is linearly independent. Assume that we have a linear combination
\[\sum_{i=1}^{p}a_{i}(-\alpha_{i},\beta_{i})+\sum_{j=1}^{n-\operatorname{rk}(B) }b_{j}(0,\delta_{j})=0.\]
Then \(\sum a_{i}\alpha_{i}=0\), which after multiplication by \(G\) gives \(\sum a_{i}\gamma_{i}=0\), and we conclude that \(a_{i}=0\) for all \(i\). Then from \(0=\sum b_{j}\delta_{j}\) we conclude that \(b_{j}=0\) for all \(j\). All in all, we have now found \(p+n-\operatorname{rk}(B)>\operatorname{rk}(B)-s+n-\operatorname{rk}(B)=n-s\) linearly independent vectors in the kernel of \(C(u,h)\). This concludes the proof for case (i).
To show the forward implication in (ii), Proposition 3.3 gives that \(\mathcal{D}_{F,\mathcal{A}}(\mathcal{X})\) is contained in the Zariski closure in \(\mathbb{C}^{r}\times\mathbb{C}^{d}\) of the image of the map
\[\psi\colon\mathbb{C}^{r-s}\times(\mathbb{C}^{*})^{n}\to\mathbb{C}^{r}\times \mathbb{C}^{d},\quad(u,h)\mapsto(Gu\circ h^{B},Wh^{-1}), \tag{3.6}\]
where \(G\in\mathbb{C}^{r\times(r-s)}\) is any matrix whose columns form a basis for \(\ker(N)\). We proceed as in case (i). The Jacobian matrix of \(\psi\) is the square matrix
\[J_{\psi}(u,h)=\left[\begin{array}{cc}\operatorname{diag}(h^{B})G& \operatorname{diag}(Gu\circ h^{B})B^{\top}\operatorname{diag}(h^{-1})\\ 0&-W\operatorname{diag}(h^{-2})\end{array}\right]\in\mathbb{C}^{(r+d)\times(r- s+n)}.\]
Multiplying this matrix from the right by \(\operatorname{diag}((1,\dots,1),h^{2})\), where \((1,\dots,1)\in\mathbb{C}^{r-s}\) and \(h^{2}\in\mathbb{C}^{n}\) denotes the entrywise square of \(h\), we see that its rank agrees with the rank of
\[C(u,h):=\left[\begin{array}{cc}\operatorname{diag}(h^{B})G& \operatorname{diag}(Gu\circ h^{B})B^{\top}\operatorname{diag}(h)\\ 0&W\end{array}\right]\in\mathbb{C}^{(r+d)\times(r+d)}.\]
We now show that \(\overline{\operatorname{im}(\psi)}\) has dimension strictly less than \(r+d\), by finding a nonzero vector in the kernel of \(C(u,h)\) for all \((u,h)\in\mathbb{C}^{r-s}\times(\mathbb{C}^{*})^{n}\). Proceeding as above, by Proposition 3.4(ii), as all solutions of \(F_{\kappa,c}(x)=0\) are degenerate, the kernel of the matrix
\[\left[\begin{array}{cc}N\operatorname{diag}(Gu)B^{\top}\operatorname{diag}( h)\\ W\end{array}\right]\]
is nonzero for all \((u,h)\in\mathbb{C}^{r-s}\times(\mathbb{C}^{*})^{n}\). Let \(\beta\in\mathbb{C}^{n}\) be a nonzero vector in the kernel of this matrix. Hence \(W\beta=0\) and \(N\operatorname{diag}(Gu)B^{\top}\operatorname{diag}(h)(\beta)=0\) and it follows that \(\operatorname{diag}(Gu)B^{\top}(h\circ\beta)=G\alpha\) for some \(\alpha\in\mathbb{C}^{r-s}\). Then, the vector \((-\alpha,\beta)\), which is nonzero, belongs to \(\ker(C(u,h))\). This shows case (ii).
To show the forward implication for (iii), we note that the set \(\{h\in(\mathbb{C}^{*})^{n}:Wh^{-1}=c\}\) admits an injective rational parametrization
\[\Phi\colon U\to(\mathbb{C}^{*})^{n}\]
where \(U\) is a nonempty Zariski open subset of \(\mathbb{C}^{s}\). This parametrization is obtained as the composition of a parametrization of the linear variety \(\{x\in(\mathbb{C}^{*})^{n}:Wx=c\}\) with the inverse map from \((\mathbb{C}^{*})^{n}\) to \((\mathbb{C}^{*})^{n}\) sending \(x\) to \(x^{-1}\). It follows that \(J_{\Phi}(z)\) has full rank \(s\) for all \(z\in U\).
Using this and Proposition 3.3, we obtain that \(\mathcal{D}_{F_{\cdot,c},\mathcal{A}}(\mathcal{X})\) is contained in the Zariski closure in \(\mathbb{C}^{r}\) of the image of the composition
\[\widetilde{\varphi}:=\varphi\circ(\operatorname{Id}_{r-s}\times\Phi)\colon \,\mathbb{C}^{r-s}\times U\to\mathbb{C}^{r},\]
with \(\varphi\) as given in (3.4) and \(\operatorname{Id}_{r-s}\) the identity map on \(\mathbb{C}^{r-s}\), corresponding to the \(u\) variables. We show that the Jacobian of \(\widetilde{\varphi}\) has rank strictly smaller than \(r\), and from this the statement follows as in the previous cases.
By the multivariate chain rule, we have that
\[J_{\widetilde{\varphi}}(u,z)=J_{\varphi}(u,\Phi(z))\left[\begin{array}{cc} \operatorname{Id}_{(r-s)\times(r-s)}&0_{(r-s)\times s}\\ 0_{n\times(r-s)}&J_{\Phi}(z)\end{array}\right]. \tag{3.7}\]
As \(W\Phi(z)=0\) identically, implicit differentiation gives \(WJ_{\Phi}(z)=0.\) Furthermore, as by construction \(J_{\Phi}(z)\in\mathbb{C}^{n\times s}\) has full rank \(s\), its columns form a basis of \(\ker(W)\).
We proceed as in case (ii) and use that the kernel of the matrix \(\left[\begin{array}{cc}N\operatorname{diag}(Gu)B^{\top}\operatorname{diag}( \Phi(z))\\ W\end{array}\right]\) is nonzero for all \(u\) and all \(\Phi(z)\) by assumption, to construct a nonzero vector \((-\alpha,\beta)\) in the kernel of \(J_{\varphi}(u,\Phi(z))\), see (3.5), which additionally satisfies \(W\beta=0\). This last condition implies that \(\beta\in\operatorname{im}J_{\Phi}(z)\). Hence,
\[(-\alpha,\beta)^{\top}=\left[\begin{array}{cc}\operatorname{Id}_{(r-s) \times(r-s)}&0_{(r-s)\times s}\\ 0_{n\times(r-s)}&J_{\Phi}(z)\end{array}\right]\gamma\]
for some \(\gamma\in\mathbb{C}^{r}\). By (3.7), we found a vector \(\gamma\) in the kernel of \(J_{\widetilde{\varphi}}(u,z)\). As this holds for all \((u,z)\in\mathbb{C}^{r-s}\times U\), the closure of the image of \(\widetilde{\varphi}\) is a proper Zariski closed set of \(\mathbb{C}^{r}\).
### The main theorem on generic dimension
Putting all this together, we have obtained the following theorem.
**Theorem 3.7**.: _Consider \(f\) and \(\mathcal{K}\) as in (3.1) and \(F\) and \(\mathcal{C}\) as in (3.2). Let the objects \(g,\mathcal{A}\subseteq\mathbb{C}^{\ell},\ell,\delta\) be either_
1. \(g=f\)_,_ \(\mathcal{A}=\mathcal{K}\)_,_ \(\ell=r\)_,_ \(\delta=d\)_,_
2. \(g=F\)_,_ \(\mathcal{A}=\mathcal{K}\times\mathcal{C}\)_,_ \(\ell=r+d\)_,_ \(\delta=0\)_, or_
3. \(g=F_{\cdot,c}\)_,_ \(\mathcal{A}=\mathcal{K}\)_,_ \(\ell=r\)_,_ \(\delta=0\) _for some fixed_ \(c\in\mathcal{C}\)_._
_We consider any such pair \((g,\mathcal{A})\), and \(\mathcal{X}\subseteq(\mathbb{C}^{*})^{n}\) satisfying:_
* \(\mathcal{A}\) _is locally Zariski dense in_ \(\mathbb{C}^{\ell}\)_,_
* \((\mathcal{A}\times\mathcal{X})\cap\mathcal{E}_{g}\) _is Zariski dense in_ \(\mathcal{E}_{g}\)_,_
* \(\mathcal{A},\mathcal{X}\) _are such that Proposition_ 2.12 _applies._
_Then the following are equivalent:_
1. _There is a nondegenerate solution of_ \(g_{\alpha}(x)=0\) _in_ \((\mathbb{C}^{*})^{n}\) _for some_ \(\alpha\in\mathbb{C}^{\ell}\)_._
2. _There is a nondegenerate solution of_ \(g_{\alpha}(x)=0\) _in_ \(\mathcal{X}\) _for some_ \(\alpha\in\mathcal{A}\)_._
3. _There exists a nonempty Zariski open subset_ \(\mathcal{U}\subseteq\mathcal{E}_{g}\) _such that for any_ \((\alpha^{*},x^{*})\in\mathcal{U}\cap(\mathcal{A}\times\mathcal{X})\)_,_ \(x^{*}\) _is a nondegenerate solution of_ \(g_{\alpha^{*}}(x)=0\)_._
4. \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) _is Zariski dense in_ \(\mathbb{C}^{\ell}\)_._
5. \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) _has nonempty Euclidean interior in_ \(\mathcal{A}\)_._
6. \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) _is equidimensional of dimension_ \(\delta\) _for generic_ \(\alpha\in\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\)_, that is, for all_ \(\alpha\) _in a nonempty Zariski open subset_ \(U\cap\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) _of_ \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\)_._
7. \(\mathbb{V}_{\mathbb{C}^{*}}(g_{\alpha})\) _is equidimensional of dimension_ \(\delta\) _for at least one_ \(\alpha\in\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\)_._
_Furthermore, any of these imply:_
(1) \(\mathbb{V}_{\mathbb{C}^{*}}^{\mathcal{X}}(g_{\alpha})\) _is equidimensional of dimension_ \(\delta\) _for generic_ \(\alpha\in\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\)_._
(2) _If_ \(\mathcal{A}\subseteq\mathbb{R}^{\ell}\) _and_ \(\mathcal{X}\subseteq(\mathbb{R}^{*})^{n}\)_, then_ \(\mathbb{V}_{\mathbb{R}^{*}}^{\mathcal{X}}(g_{\alpha})\) _and_ \(\mathbb{V}_{\mathbb{R}^{*}}(g_{\alpha})\) _have dimension_ \(\delta\) _for generic_ \(\alpha\in\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\)_._
Proof.: First, we show that **(i)\(\Leftrightarrow\)(ii)\(\Leftrightarrow\)(iii)**. We have that (iii)\(\Rightarrow\)(ii), as \(\mathcal{U}\) is a nonempty Zariski open set and \((\mathcal{A}\times\mathcal{X})\cap\mathcal{E}_{g}\) is Zariski dense in \(\mathcal{E}_{g}\), guaranteeing that the intersection \(\mathcal{U}\cap(\mathcal{A}\times\mathcal{X})\) is nonempty. (ii)\(\Rightarrow\)(i) holds trivially. To prove (i)\(\Rightarrow\)(iii), note that the subset of points \((\alpha,x)\) of \(\mathcal{E}_{g}\) for which \(x\) is a nondegenerate solution of \(g_{\alpha}(x)=0\) is Zariski open if nonempty.
The equivalence **(iv)\(\Leftrightarrow\)(v)** is Proposition 2.12, which applies by the assumptions on \(\mathcal{A}\) and \(\mathcal{X}\).
We now show **(iv)\(\Rightarrow\)(vi)\(\Rightarrow\)(vii)\(\Rightarrow\)(i)**. The implication (iv)\(\Rightarrow\)(vi) is Corollary 2.7. The implication (vi)\(\Rightarrow\)(vii) is clear. For the implication (vii)\(\Rightarrow\)(i), we note that (vii) implies that \(\mathcal{D}_{g,\mathcal{A}}\) is Zariski dense in \(\mathbb{C}^{\ell}\) by Theorem 2.2 but, as \(\mathcal{D}_{g,\mathcal{A}}\subseteq\mathcal{D}_{g,\mathbb{C}^{\ell}}\), so is \(\mathcal{D}_{g,\mathbb{C}^{\ell}}\). Hence \(\mathcal{D}_{g,\mathbb{C}^{\ell}}\) has nonempty Euclidean interior in \(\mathbb{C}^{\ell}\) by Proposition 2.12, which leads to (i) by Theorem 3.6.
All that is left is to show that **(ii)\(\Rightarrow\)(iv)**. By Proposition 2.12 and Theorem 3.6, (ii) gives that \(\mathcal{D}_{g,\mathbb{C}^{\ell}}\) is Zariski dense in \(\mathbb{C}^{\ell}\). The assumption that \((\mathcal{A}\times\mathcal{X})\cap\mathcal{E}_{g}\) is Zariski dense
in \(\mathcal{E}_{g}\), gives that
\[\overline{\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})}=\overline{\pi((\mathcal{A}\times\mathcal{X})\cap\mathcal{E}_{g})}=\overline{\pi\big(\overline{(\mathcal{A}\times\mathcal{X})\cap\mathcal{E}_{g}}\big)}=\overline{\pi(\mathcal{E}_{g})}=\overline{\mathcal{D}_{g,\mathbb{C}^{\ell}}}=\mathbb{C}^{\ell}.\]
Hence, \(\mathcal{D}_{g,\mathcal{A}}(\mathcal{X})\) is Zariski dense in \(\mathbb{C}^{\ell}\), which gives (iv). This concludes the first part of the proof.
Finally, **(iv)\(\Rightarrow\)(1)** follows by Corollary 2.7, and **(ii)\(\Rightarrow\)(2)** follows by Corollary 2.16.
**Remark 3.8**.: For any \((g,\mathcal{A})\) and \(\mathcal{X}\) under the assumptions of Theorem 3.7, any statement (ii)-(vii) in Theorem 3.7 is equivalent to (i), and hence to any of the statements for \(\mathcal{A}=\mathbb{C}^{\ell}\) and \(\mathcal{X}=(\mathbb{C}^{*})^{n}\) instead.
**Remark 3.9**.: We note that the equivalences (i)\(\Leftrightarrow\)(ii)\(\Leftrightarrow\)(iii) in Theorem 3.7 work in the more general setting of Proposition 3.3 and 3.4, but for the equivalences (iv)\(\Leftrightarrow\)(v) and (vii)\(\Rightarrow\)(v), Proposition 2.12 is required, which relies on Lemma 2.11.
**Remark 3.10**.: The following scenarios that are common in applications satisfy the hypotheses in Theorem 3.7:
1. \(\mathcal{K}\in\{\mathbb{R}^{r}_{>0},(\mathbb{R}^{*})^{r},\mathbb{R}^{r}\}\), \(\mathcal{C}\in\{\mathbb{R}^{d}_{>0},(\mathbb{R}^{*})^{d},\mathbb{R}^{d}\}\) and \(\mathcal{X}\in\{\mathbb{R}^{n}_{>0},(\mathbb{R}^{*})^{n},(\mathbb{C}^{*})^{n}\}\).
2. \(\mathcal{K}\in\{(\mathbb{C}^{*})^{r},\mathbb{C}^{r}\}\), \(\mathcal{C}\in\{(\mathbb{C}^{*})^{d},\mathbb{C}^{d}\}\) and \(\mathcal{X}=(\mathbb{C}^{*})^{n}\).
In general, we note that if \(\mathcal{A}\) and \(\mathcal{X}\) are Euclidean open subsets of \(\mathbb{R}^{\ell}\) and \(\mathbb{R}^{n}\) respectively, or of \(\mathbb{C}^{\ell}\) and \(\mathbb{C}^{n}\) respectively, then \((\mathcal{A}\times\mathcal{X})\cap\mathcal{E}_{g}\) is Zariski dense in \(\mathcal{E}_{g}\) if this intersection is nonempty. In the real case, this follows from the fact that \(\mathcal{E}_{g}\) is nonsingular by Proposition 3.1(iii), together with the fact that for a complex irreducible variety defined by polynomials with real coefficients that has at least one nonsingular point, the real points form a Zariski dense subset [19, Thm. 5.1].
### Computational considerations
In the case when the matrix \(N\) has rational entries, the criterion (i) in Theorem 3.7 can be checked computationally through Proposition 3.4.
The idea is the following: Let \(G\in\mathbb{Q}^{r\times(r-s)}\) be a matrix whose columns form a basis for \(\ker(N)\). For \(f\), we want to check whether there exists some \(u\in\mathbb{Q}^{r-s}\) such that
\[\operatorname{rk}(N\operatorname{diag}(Gu)B^{\top})=s.\]
Note that the set of \(u\) for which the rank is \(s\) forms a Zariski open subset of \(\mathbb{Q}^{r-s}\), so if it is nonempty, a randomly chosen \(u\in\mathbb{Q}^{r-s}\) will give \(\operatorname{rk}(N\operatorname{diag}(Gu)B^{\top})=s\) with probability \(1\). Hence, we pick a random \(u\in\mathbb{Q}^{r-s}\), and compute \(N\operatorname{diag}(Gu)B^{\top}\) with exact arithmetic. If the rank is \(s\), we conclude that criterion (i) in Theorem 3.7 holds. If not, we can suspect that it does not hold, and to conclusively prove this, we view \(N\operatorname{diag}(Gu)B^{\top}\) as a symbolic matrix with indeterminate \(u\) and computationally verify that all \(s\times s\) minors are the zero polynomial. For \(F\), we instead want to determine whether there is some \(u\) such that
\[\operatorname{rk}\begin{bmatrix}N\operatorname{diag}(Gu)B^{\top}\\ W\end{bmatrix}=n,\]
which is done analogously.
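The random-evaluation step can be carried out in any computer algebra system supporting exact rational arithmetic. The sketch below is illustrative only (it is not the Maple code used for the computations reported later, and the function and variable names are ours); it returns the rank of \(N\operatorname{diag}(Gu)B^{\top}\), optionally stacked with \(W\), at a random integer point \(u\).

```python
import random
import sympy as sp

def generic_rank(N_, B, W=None):
    """Rank of N diag(G u) B^T (optionally stacked with W) at a random u."""
    N_, B = sp.Matrix(N_), sp.Matrix(B)
    kernel = N_.nullspace()               # columns of G: a basis of ker(N)
    w = sp.zeros(N_.cols, 1)
    for g in kernel:                      # w = G u for a random integer vector u
        w += random.randint(1, 50) * g
    M = N_ * sp.diag(*w) * B.T            # exact rational arithmetic
    if W is not None:                     # case F: append the conservation relations
        M = M.col_join(sp.Matrix(W))
    return M.rank()
```

For \(f\) one checks whether the returned rank equals \(s\); for \(F\), passing \(W\), whether it equals \(n\). A deficient rank at the random \(u\) is only an indication and should be followed up by the symbolic minor computation described above.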
In the setting of Proposition 3.4 for \(f\), where \(\mathcal{K}\in\{\mathbb{M}^{r},(\mathbb{M}\cup\{0\})^{r}\}\) and \(\mathbb{M}\) is a multiplicative subgroup of \(\mathbb{C}^{*}\), we know by Remark 3.10 that to check denseness of \((\mathcal{K}\times\mathcal{X})\cap\mathcal{E}_{g}\) in \(\mathcal{E}_{g}\) it is enough to check that \((\mathcal{K}\times\mathcal{X})\cap\mathcal{E}_{g}\neq\varnothing\). This, in turn, is equivalent to \(\ker(N)\cap\mathcal{K}\neq\varnothing\). Indeed, if \(\ker(N)\cap\mathcal{K}=\varnothing\), then \(\mathbb{V}_{\mathbb{C}^{*}}(f)=\varnothing\) for all \(\kappa\in\mathcal{K}\). On the other hand, if \(\ker(N)\cap\mathcal{K}\neq\varnothing\), then \((1,\dots,1)\in\mathbb{V}_{\mathbb{C}^{*}}(f)\) for any \(\kappa\in\ker(N)\cap\mathcal{K}\). In the special case \(\mathbb{M}=\mathbb{R}_{>0}\), checking \(\ker(N)\cap\mathbb{R}^{r}_{>0}\neq\varnothing\) corresponds to showing the existence of an
interior point of the polyhedral cone \(\ker(N)\cap\mathbb{R}^{r}_{\geq 0}\), which is a straightforward computation using either linear programming, or satisfiability modulo theories solvers [1].
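As a minimal sketch of this last feasibility check (assuming rational data, and using a linear programming routine rather than an SMT solver), one may exploit that \(\ker(N)\cap\mathbb{R}^{r}_{>0}\neq\varnothing\) if and only if the kernel contains a vector with all entries \(\geq 1\), since the kernel is invariant under positive rescaling.

```python
import numpy as np
from scipy.optimize import linprog

def has_positive_kernel_vector(N_):
    """Feasibility check for ker(N) intersected with R^r_{>0} (illustrative sketch)."""
    N_ = np.asarray(N_, dtype=float)
    r = N_.shape[1]
    res = linprog(c=np.zeros(r), A_eq=N_, b_eq=np.zeros(N_.shape[0]),
                  bounds=[(1, None)] * r, method="highs")
    return res.success
```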
## 4. Application to Reaction Networks
As explained in the introduction, the main motivation behind the development of the results in Section 3 comes from the study of reaction networks. This connection will be explained in this section. To make this section accessible to readers familiar with reaction networks but not necessarily acquainted with the background and language of algebraic geometry, we will use a less technical approach. In particular, when saying that a property holds for **almost all** parameters in a set \(\mathcal{A}\subseteq\mathbb{C}^{\ell}\), we mean that the property holds generically in the set, that is, in a nonempty Zariski open subset of \(\mathcal{A}\). In the reaction network scenario where \(\mathcal{A}\) is \(\mathbb{R}^{\ell}\) or \(\mathbb{R}^{\ell}_{>0}\), this implies that the property holds outside a subset of \(\mathcal{A}\) of Lebesgue measure zero (so for almost all parameters, measure theoretically speaking, it holds too).
### Reaction networks
In what follows we present some generalities, necessary to establish the connection of our results with some questions arising in this field, but we refer to [10] for further details.
A reaction network is normally pictured by means of a graph, whose vertices correspond to linear combinations of the species involved (complexes), connected by a directed edge whenever an interaction is assumed to lead from any of these complexes (source) to another one (product). A simple reaction network modeling the enzymatic transfer of calcium ions is
\[\begin{split} 0&\xrightleftharpoons[\kappa_{2}]{\kappa_{1}}X_{1}\\ X_{1}+X_{2}&\xrightarrow{\kappa_{3}}2X_{1}\\ X_{1}+X_{3}&\xrightleftharpoons[\kappa_{5}]{\kappa_{4}}X_{4}\xrightarrow{\kappa_{6}}X_{2}+X_{3}.\end{split} \tag{4.1}\]
Here, \(X_{1}\) stands for cytosolic calcium, \(X_{2}\) for calcium in the endoplasmic reticulum, and \(X_{3}\) is an enzyme catalyzing the transfer via the formation of an intermediate protein complex \(X_{4}\)[12]. The labels of the reactions are positive real numbers called reaction rate constants. Given any reaction network with species \(X_{1},\ldots,X_{n}\) and reactions
\[b_{1i}X_{1}+\cdots+b_{ni}X_{n}\xrightarrow{\kappa_{i}}a_{1i}X_{1}+\cdots+a_{ni }X_{n},\quad i=1,\ldots,r,\]
under the assumption of mass-action kinetics, the concentration \(x=(x_{1},\ldots,x_{n})\) of the species in time is modeled by means of a system of ordinary differential equations (ODEs) of the form
\[\frac{dx}{dt}=\Gamma(\kappa\circ x^{B}),\quad x\in\mathbb{R}^{n}_{\geq 0}, \tag{4.2}\]
for reaction rate constants \(\kappa=(\kappa_{1},\ldots,\kappa_{r})\in\mathbb{R}^{r}_{>0}\), stoichiometric matrix \(\Gamma\in\mathbb{Z}^{n\times r}\) with entries \((a_{ji}-b_{ji})\), and reactant matrix \(B\in\mathbb{Z}^{n\times r}\) with entries \(b_{ji}\). The trajectories of the ODE system are confined to parallel translates of the image of \(\Gamma\), which can be described by linear equations \(Wx-c=0\) with \(c\) depending on the initial condition and \(W\in\mathbb{R}^{d\times n}\) with \(d=n-\operatorname{rk}(\Gamma)\). The intersection of each such linear subspace with \(\mathbb{R}^{n}_{\geq 0}\) is called a stoichiometric compatibility class. We note that for common networks in biochemistry, the rank of \(\Gamma\) is not maximal.
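For concreteness, the mass-action right-hand side \(\Gamma(\kappa\circ x^{B})\) of (4.2) is readily evaluated from the two matrices; the following short sketch (purely illustrative, not taken from any package) does this for an arbitrary network.

```python
import numpy as np

def mass_action_rhs(x, kappa, Gamma, B):
    """Right-hand side Gamma (kappa o x^B) of Eq. (4.2); one column of B per reaction."""
    monomials = np.prod(x[:, None] ** B, axis=0)   # x^B: vector of the r source monomials
    return Gamma @ (kappa * monomials)
```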
The reader might already notice the similarities with the parametric systems (3.1) and (3.2). These arise naturally in this context as follows:
* The **positive steady state variety**\(V_{\kappa}\) for a given \(\kappa\in\mathbb{R}^{r}_{>0}\) is the set of steady states of (4.2) in \(\mathbb{R}^{n}_{>0}\). By letting \(N\in\mathbb{Z}^{s\times r}\) be a matrix of full rank \(s=\operatorname{rk}(\Gamma)\) and \(\operatorname{im}(N^{\top})=\operatorname{im}(\Gamma^{\top})\), the positive steady state variety is the solution set to \[N(\kappa\circ x^{B})=0,\quad x\in\mathbb{R}^{n}_{>0}.\] (4.3) This corresponds to the parametric system \(f(\kappa,x)\) from (3.1) with \(\mathcal{K}=\mathbb{R}^{r}_{>0}\) and \(\mathcal{X}=\mathbb{R}^{n}_{>0}\). The complex (resp. real) steady state variety is the set of solutions to (4.3) in \((\mathbb{C}^{*})^{n}\) (resp. \((\mathbb{R}^{*})^{n}\)).
* The set of **positive steady states within stoichiometric compatibility classes**\(P_{\kappa,c}\) for a given \(\kappa\in\mathbb{R}^{r}_{>0}\) and \(c\in\mathbb{R}^{d}\) is the set of steady states of (4.2) in \(\mathbb{R}^{n}_{>0}\) that additionally belong to the stoichiometric compatibility class defined by \(c\), that is, solutions to \[N(\kappa\circ x^{B})=0,\quad Wx-c=0.\] (4.4) This corresponds to the parametric system \(F(\kappa,c,x)\) from (3.2) with \(\mathcal{K}=\mathbb{R}^{r}_{>0}\) and \(\mathcal{C}=\mathbb{R}^{d}\) with \(d=n-s\). By \(P^{\mathbb{C}}_{\kappa,c}\) we denote the set of solutions to (4.4) in \((\mathbb{C}^{*})^{n}\).
Since the reaction rate constants are normally unknown, studying the dynamics of a reaction network often implies understanding the behavior of the system for all parameters, maybe generically, or being able to describe the parameter regions leading to different behaviors.
In the reaction network literature, a steady state \(x^{*}\) is said to be **degenerate** if it is degenerate as a solution to the system \(F_{\kappa,Wx^{*}}(x)=0\) (equivalently, if \(\ker(J_{f_{\kappa}}(x^{*}))\cap\operatorname{im}(\Gamma)\neq\{0\}\).)
The existence of nondegenerate steady states is assumed to be common, although, as we will see below, it is not guaranteed for all networks. It is often a requirement to "lift" steady states from one network to another with arguments relying on the implicit function theorem or homotopy continuation [1, 13, 14, 15]. The relation between nondegenerate steady states and nonsingularity pointed out in Proposition 2.14 also allows one to employ machinery from algebraic geometry in the study of steady states [14].
### Theorems on dimension and finiteness of steady states
In the language of reaction network theory, Theorem 3.7 gives rise to the following results.
**Theorem 4.1** (Expected dimension of the complex steady state variety).: _Consider a reaction network with \(n\) species, \(r\) reactions, and stoichiometric matrix \(\Gamma\). Consider the polynomial function \(f(\kappa,x)\) defining \(V_{\kappa}\) as in (4.3) with \(B\) the reactant matrix, and \(N\in\mathbb{Z}^{s\times r}\) of full rank \(s=\operatorname{rk}(\Gamma)\) and such that \(\operatorname{im}(N^{\top})=\operatorname{im}(\Gamma^{\top})\). The following are equivalent:_
(i) _The system_ \(f_{\kappa}(x)=0\) _has a nondegenerate solution in_ \(\mathbb{R}^{n}_{>0}\) _for some_ \(\kappa\in\mathbb{R}^{r}_{>0}\)_._
(ii) _The set of parameters_ \(\kappa\) _for which_ \(V_{\kappa}\neq\varnothing\) _has nonempty Euclidean interior in_ \(\mathbb{R}^{r}\)_._
(iii) _The dimension of the complex steady state variety is_ \(n-s\) _for at least one_ \(\kappa\in\mathbb{R}^{r}_{>0}\) _for which_ \(V_{\kappa}\neq\varnothing\)_._
(iv) _The matrix_ \(N\operatorname{diag}(w)B^{\top}\) _has rank_ \(s\) _for some_ \(w\in\ker(N)\cap\mathbb{R}^{r}_{>0}\)_._
_If any of these hold, then the dimension of the complex and real steady state varieties is \(n-s\) for almost all \(\kappa\) for which \(V_{\kappa}\neq\varnothing\)._
_If none of these holds, then the complex steady state variety is either empty or has dimension strictly larger than \(n-s\)._
Proof.: The statement follows from Proposition 3.4(i) and from Theorem 3.7 case (a) together with Remark 3.10 as \(\mathcal{K}=\mathbb{R}_{>0}^{r}\) and \(\mathcal{X}=\mathbb{R}_{>0}^{n}\).
**Corollary 4.2**.: _A reaction network such that for all \(\kappa\in\mathbb{R}_{>0}^{r}\) the positive steady state variety is either empty or has dimension higher than \(n-s\), has no positive steady states for almost all \(\kappa\in\mathbb{R}_{>0}^{r}\)._
**Example 4.3**.: For the calcium network in (4.1), the stoichiometric matrix and the reactant matrix are given by
\[\Gamma=\left[\begin{array}{cccccc}1&-1&1&-1&1&0\\ 0&0&-1&0&0&1\\ 0&0&0&-1&1&1\\ 0&0&0&1&-1&-1\end{array}\right]\quad\text{and}\quad B=\left[\begin{array}{cccccc}0&1&1&1&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&1\end{array}\right].\]
Here, \(n=4\), \(r=6\), \(s=3\), and we can choose
\[W=\left[\begin{matrix}0&0&1&1\end{matrix}\right].\]
We note that \(w=(1,1,1,2,1,1)\in\ker(\Gamma)\cap\mathbb{R}_{>0}^{r}\), and that
\[\operatorname{rk}\left[\begin{matrix}\Gamma\operatorname{diag}(w)B^{\top}\\ W\end{matrix}\right]=4.\]
Hence Theorem 4.1(iv) holds and from this we conclude that there exist positive steady states for \(\kappa\) in a set with nonempty Euclidean interior, and that the complex and real steady state varieties are \(1\)-dimensional for almost all such reaction rate constants.
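The rank conditions of Example 4.3 are easy to reproduce; the following illustrative check (in floating point, rather than the exact arithmetic recommended in Subsection 3.5) confirms both of them.

```python
import numpy as np

Gamma = np.array([[1, -1,  1, -1,  1,  0],
                  [0,  0, -1,  0,  0,  1],
                  [0,  0,  0, -1,  1,  1],
                  [0,  0,  0,  1, -1, -1]])
B = np.array([[0, 1, 1, 1, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1, 1]])
W = np.array([[0, 0, 1, 1]])
w = np.array([1, 1, 1, 2, 1, 1])                  # positive vector in ker(Gamma)

M = Gamma @ np.diag(w) @ B.T
print(np.linalg.matrix_rank(M))                   # 3 = s: condition (iv) of Theorem 4.1
print(np.linalg.matrix_rank(np.vstack([M, W])))   # 4 = n: condition (iv) of Theorem 4.5
```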
**Example 4.4**.: Consider the network
We have
\[f_{\kappa}(x)=\left[\begin{matrix}-\kappa_{1}x-\kappa_{2}x+\kappa_{3}yz\\ \kappa_{1}x-\kappa_{4}yz\\ \kappa_{2}x-\kappa_{4}yz\end{matrix}\right],\]
and \(V_{\kappa}\neq\varnothing\) precisely when \(\kappa_{1}=\kappa_{2}\) and \(\kappa_{3}=2\kappa_{4}\). So Theorem 4.1(ii) does not hold and hence, the complex steady state variety has dimension strictly larger than \(n-s=1\) in these cases. In fact, for these particular parameter values, the complex steady state variety is defined by the equation \(\kappa_{1}x-\kappa_{4}yz=0\), and hence has complex dimension \(2\).
**Theorem 4.5** (Finiteness of the number of steady states in stoichiometric compatibility classes).: _Consider a reaction network with \(n\) species, \(r\) reactions, and stoichiometric matrix \(\Gamma\). Consider the polynomial function \(F(\kappa,c,x)\) defining \(P_{\kappa,c}\) as in (4.4) with \(B\) the reactant matrix, \(N\in\mathbb{Z}^{s\times r}\) of full rank \(s=\operatorname{rk}(\Gamma)\) such that \(\operatorname{im}(N^{\top})=\operatorname{im}(\Gamma^{\top})\), and \(W\) a full rank matrix whose rows form a basis of \(\ker(\Gamma^{\top})\)._
_The following are equivalent:_
(i) _The network has a nondegenerate steady state in_ \(\mathbb{R}_{>0}^{n}\) _for some_ \(\kappa\in\mathbb{R}_{>0}^{r}\)_._
(ii) _The set of parameters_ \((\kappa,c)\) _for which_ \(P_{\kappa,c}\neq\varnothing\) _has nonempty Euclidean interior in_ \(\mathbb{R}^{r+d}\)_._
(iii) \(P_{\kappa,c}^{\mathbb{C}}\neq\varnothing\) _is finite for at least one_ \(\kappa\in\mathbb{R}_{>0}^{r}\) _and_ \(c\in\mathbb{R}^{d}\) _for which_ \(P_{\kappa,c}\neq\varnothing\)_._
(iv) _The matrix_ \[\left[\begin{array}{c}N\operatorname{diag}(w)B^{\top}\operatorname{diag}(h)\\ W\end{array}\right]\] _has full rank_ \(n\) _for some_ \(w\in\ker(N)\cap\mathbb{R}^{r}_{>0}\) _and_ \(h\in\mathbb{R}^{n}_{>0}\)_._
_If any of these hold, then \(P_{\kappa,c}\) is finite for almost all \((\kappa,c)\in\mathbb{R}^{r}_{>0}\times\mathbb{R}^{d}\). Additionally, for almost all values of \((\kappa,c)\), the elements of \(P_{\kappa,c}\) are nondegenerate steady states._
Proof.: This follows from Proposition 3.4(ii) and from Theorem 3.7 case (b) together with Remark 3.10 as \(\mathcal{K}=\mathbb{R}^{r}_{>0}\), \(\mathcal{C}=\mathbb{R}^{d}\) and \(\mathcal{X}=\mathbb{R}^{n}_{>0}\). The last part follows from Corollary 3.2.
We note that condition (iv) in Theorem 4.1 and 4.5 holds if \(\ker(N)\cap\mathbb{R}^{r}_{>0}\neq\varnothing\) and the respective conditions hold over \(\mathbb{C}^{*}\) (see Subsection 3.5). We also note that Theorem 4.5(i) implies Theorem 4.1(i), and we obtain the following corollary.
**Corollary 4.6**.: _If \(P^{\mathbb{C}}_{\kappa,c}\neq\varnothing\) is finite for at least one \(\kappa\in\mathbb{R}^{r}_{>0}\) and \(c\in\mathbb{R}^{d}\) for which \(P_{\kappa,c}\neq\varnothing\), then the complex (and the real) steady state variety has dimension \(n-s\) for almost all \(\kappa\) for which it is not empty, and moreover it is nonempty for a set of parameters with nonempty Euclidean interior._
The converse is not necessarily true, as the following example shows.
**Example 4.7**.: Consider the simple reaction network
\[X_{1}+X_{2}\xrightarrow{\kappa_{1}}X_{1}\quad X_{2}\xrightarrow{\kappa_{2}} 2X_{2}.\]
The first row of the stoichiometric matrix \(\Gamma\) is zero, and hence the steady states are described by the single parametric polynomial
\[f_{\kappa}(x_{1},x_{2})=-\kappa_{1}x_{1}x_{2}+\kappa_{2}=x_{2}(-\kappa_{1}x_ {1}+\kappa_{2}).\]
We obtain \(V_{\kappa}=\{(x_{1},x_{2})\in\mathbb{R}^{2}_{>0}:x_{1}=\frac{\kappa_{2}}{ \kappa_{1}}\}\), which is nonempty for all \(\kappa\).
On the other hand, the stoichiometric compatibility classes are defined by the equation \(x_{1}=c\) for \(c\in\mathbb{R}\). The set \(P_{\kappa,c}\) is described by \(x_{1}=\frac{\kappa_{2}}{\kappa_{1}}=c\), and hence is empty unless \(\frac{\kappa_{2}}{\kappa_{1}}=c\). This shows that Theorem 4.1(i) does not imply Theorem 4.5(i).
In the next corollaries, we rewrite some of the implications of Theorem 4.5 to emphasize the consequences that our results have on the existence and number of positive (nondegenerate) steady states.
**Corollary 4.8**.: _If a reaction network has infinitely many steady states within stoichiometric compatibility classes for all \((\kappa,c)\) for which \(P_{\kappa,c}\neq\varnothing\), then \(P_{\kappa,c}=\varnothing\) for almost all \((\kappa,c)\in\mathbb{R}^{r}_{>0}\times\mathbb{R}^{d}\)._
**Corollary 4.9**.: _A reaction network that has a nondegenerate steady state for some \(\kappa\in\mathbb{R}^{r}_{>0}\) cannot have infinitely many steady states in all (not even in most) stoichiometric compatibility classes._
We note that even when the conditions of Theorem 4.5 hold, there might be values of \(\kappa\) for which all steady states are degenerate. Moreover, all steady states can be degenerate even if the steady state variety has dimension \(n-s\). This is illustrated in the following example.
**Example 4.10**.: Consider the reaction network with \(s=1\)
\[3X_{1}+X_{2}\xrightarrow{\kappa_{1}}4X_{1}\quad 2X_{1}+X_{2}\xrightarrow{\kappa_{2}}3X _{2}\quad X_{1}+X_{2}\xrightarrow{\kappa_{3}}2X_{1},\]
giving rise to the polynomial function \(f(\kappa,x)=\kappa_{1}x_{1}^{2}-2\kappa_{2}x_{1}+\kappa_{3}\), after suitably choosing \(N\). For any choice of \(\kappa\) such that \(\kappa_{2}^{2}=\kappa_{1}\kappa_{3}\), all positive steady states are degenerate. For any other \(\kappa\), the steady states are all nondegenerate and therefore, the conditions in Theorem 4.1 and 4.5 apply: for almost all \(\kappa\), the dimension of the complex steady state variety is \(n-s=1\) when nonempty, and for almost all \((\kappa,c)\), there is a finite number of positive steady states in the corresponding stoichiometric compatibility class.
**Example 4.11** (Real networks and steady states).: As an illustration of the applicability of our methods to realistic networks, we applied the computational steps described in Subsection 3.5 to the networks from the database ODEbase [10], under the assumption of mass-action kinetics. Out of a total of 610 networks appearing in the database, we found that precisely 368 have positive steady states. Out of these, 6 networks only have steady states that are degenerate solutions to \(f_{\kappa}(x)=0\), whereas 362 networks have a nondegenerate steady state, and therefore satisfy the equivalent properties in Theorems 4.1 and 4.5.
The largest network in the database which admits a positive nondegenerate steady state is BIOMD000000014, which has 86 species. The computation verifying this took \(<\)2 seconds in Maple*. For comparison, an attempt of computing the dimension for a fixed random choice of the rate constants via PolynomialIdeals[11] in Maple failed due to memory problems.
Footnote *: All computations were run in Maple 2023.0 on a 2.70 GHz Intel Core i5 with 8GB RAM.
### Weakly reversible networks and other particularities
Weakly reversible networks are those for which all connected components of the underlying digraph are strongly connected. These networks are known to admit positive steady states for all choices of \(\kappa\)[1] and are conjectured to display some strong dynamical behavior such as persistence [13] or bounded trajectories [1].
In [1], the authors considered the following reaction network for which \(n=s=2\)
and fine-tuned the reaction rate constants in such a way that the two equations defining the steady states had a common factor and the other factors did not admit positive solutions. Hence the complex steady state variety has dimension 1, while \(d=n-s=0\). So the dimension is higher than expected. Specifically, all parameters were set to one except for \(\kappa_{3}=\kappa_{6}=\kappa_{12}=4\). With this trick, they illustrated that even for weakly reversible reaction networks, the dimension of the steady state variety could be larger than expected for some choice of parameter values. Motivated by this, the authors posed the following question [1, Section 5]:
_Is it possible for a weakly reversible network to have infinitely many positive steady states [within a stoichiometric class] for each choice of reaction rate constants in a [Euclidean] open set of the parameter space \(\mathbb{R}_{>0}^{r}\)?_
Theorem 4.5 gives an answer in the negative to this question: at least for almost all stoichiometric classes, this cannot happen if the network is weakly reversible. We note that weakly reversible networks admit so called complex balanced steady states for some choice of reaction rate constants ([10]), and these are always nondegenerate (see [14, Section 15.2.2]). It follows that weakly reversible networks admit nondegenerate steady states, and hence both Theorem 4.1 and Theorem 4.5, as well as Corollary 4.9 apply. This leads to the following result.
**Corollary 4.12**.: _For a weakly reversible network, and with notation as in Theorem 4.5, it holds that \(P_{\kappa,c}\) is finite for almost all \((\kappa,c)\in\mathbb{R}_{>0}^{r}\times\mathbb{R}^{d}\)._
Other classes of networks that are known to have nondegenerate steady states are those that satisfy the criteria in the Deficiency One Theorem of Feinberg [14, Section 17.1], and injective networks [16, Thm. 1.4].
To conclude this subsection, we make a connection to some classical objects in the study of reaction networks: the kinetic and the stoichiometric subspaces.
The stoichiometric subspace \(S\) is simply \(\operatorname{im}(\Gamma)\), while the kinetic subspace \(S_{\kappa}\) for a given \(\kappa\) is given by \(\operatorname{im}(\Sigma_{\kappa})\), with \(\Sigma_{\kappa}\) the coefficient matrix of \(f_{\kappa}\) from (4.3). In [14], criteria for when the two subspaces agree are given, but one can construct networks for which they do not agree for any \(\kappa\) (e.g. Example 4.4), or for some values of \(\kappa\). By Remark 2.4, if the network admits a nondegenerate solution to (4.3), then the generic rank of \(\Sigma_{\kappa}\) agrees with \(\operatorname{rk}(\Gamma)\) and hence the two subspaces agree generically. It follows that if the two subspaces do not agree generically, then there is no nondegenerate solution and consequently, by Corollary 4.2, for almost all values of \(\kappa\) there are no positive steady states.
|
2306.14319 | Bulk-preventing actions for SU(N) gauge theories | Lattice gauge field theories may suffer from unphysical "bulk" phase
transitions at strong lattice gauge coupling. We introduce a one-parameter
family of lattice SU(N) gauge actions which, when used in combination with an
HMC update algorithm, prevents the appearance of the bulk phase transition. We
briefly discuss the (presumed) mechanism behind the prevention of the bulk
transition and present test results for different SU(N) gauge groups. | Tobias Rindlisbacher, Kari Rummukainen, Ahmed Salami | 2023-06-25T19:17:58Z | http://arxiv.org/abs/2306.14319v2 | # Bulk-preventing actions for SU(N) gauge theories
###### Abstract
Lattice gauge field theories may suffer from unphysical "bulk" phase transitions at strong lattice gauge coupling. We introduce a one-parameter family of lattice SU(\(N\)) gauge actions which, when used in combination with an HMC update algorithm, prevents the appearance of the bulk phase transition. We briefly discuss the (presumed) mechanism behind the prevention of the bulk transition and present test results for different SU(\(N\)) gauge groups.
## I Introduction
In asymptotically free gauge theories on the lattice the continuum limit is obtained when the bare lattice gauge coupling vanishes. In practice lattice simulations are always done at finite lattice spacing, and as long as the coupling constant is sufficiently small we can analytically extrapolate the results to the continuum limit. The range of lattice spacings (or coupling constants) is often limited by the emergence of an unphysical "bulk phase" at strong lattice gauge coupling, which prevents the analytical connection to the continuum phase from this region. The value of the lattice coupling where the transition to the bulk phase happens depends on the lattice gauge group, the choice of the gauge action and the matter content.
The problem of the bulk phase transition becomes acute in SU(\(N\)) gauge theories with a large number of colors and in models with a large number of fermion degrees of freedom. In pure gauge SU(\(N\)) theories with the standard Wilson plaquette action the bulk transition is a rapid cross-over if \(N\leq 4\) but becomes an increasingly strong first order transition for \(N\geq 5\)[1]. Adding fermionic degrees of freedom slows down the evolution of the coupling constant (i.e. the magnitude of the \(\beta\)-function is smaller), and also increases the effective lattice gauge coupling [2]. Depending on the physical case of interest, these effects require one to use strong bare coupling. This happens especially in infrared (near-)conformal models, where the coupling runs very slowly, for example in SU(2) with large numbers of fundamental fermions [3; 4; 5; 6; 7] or SU(2) with adjoint fermions [8; 9; 10], SU(3) with large \(N_{f}\) [11; 12; 13] and SU(4) with fermions in the antisymmetric representation [14].
We present a local lattice action which efficiently removes the transition to the bulk phase. Our approach is related to the "dislocation prevention" method by DeGrand _et al._[19]. For U(1) and SU(2) gauge groups, the topological gauge actions that restrict the plaquette magnitude [15; 17; 18] and the gauge-topology preserving actions investigated in [16] are also related to some extent. This is, however, no longer the case for SU(\(N\)) gauge groups with \(N>2\). We note that our approach merely removes the bulk phase but does not restrict in any way gauge-topology fluctuations.
Following Wilson's prescription, the lattice discretization of a SU(\(N\)) gauge theory is obtained by promoting the Lie algebra valued continuum gauge field,
\[A_{\mu}(x^{\prime})=\sum_{a}A_{\mu}^{a}(x^{\prime})\,T^{a}\,\in\,\mathfrak{ su}(N)\, \tag{1}\]
with \(\left\{\,T^{a}\,\right\}_{a=1,\ldots,N^{2}-1}\) being a basis of \(\mathfrak{su}(N)\), to Lie group valued link variables,
\[U_{\mu}(x)\,=\,\mathcal{P}\,\mathrm{e}^{\,\mathrm{i}\int_{ax}^{a(x+\hat{\mu})}\mathrm{d}x^{\prime}\,A_{\mu}(x^{\prime})}\,\in\,\mathrm{SU}(N)\, \tag{2}\]
which can be interpreted as the gauge-parallel transporters along the link between a site \(x\) and a neighboring site \(x+\hat{\mu}\). The leading \(\mathcal{P}\) on the right-hand side of (2) indicates that path-ordering should be applied when evaluating the exponential of the line-integral. The relation between the coordinate \(x^{\prime}\in\mathbb{R}^{4}\) in (1) and the coordinate \(x\in\mathbb{Z}^{4}\) in (2) is given by \(x^{\prime}=a\,x\), where \(a\) is the lattice spacing, and \(\hat{\mu}\) refers to the unit-vector in \(\mu\)-direction. Parallel transporters over longer distances are then expressed as products of consecutive link variables, and a lattice gauge action can be defined in terms of link variables by requiring that in the limit \(\left(a\to 0\right)\) the lattice gauge action converges to the continuum gauge action,
\[S_{G}=\frac{1}{4\,g_{0}^{2}}\int\mathrm{d}^{4}x^{\prime}\,\mathrm{tr}(F_{\mu \nu}(x^{\prime})F_{\mu\nu}(x^{\prime})). \tag{3}\]
Wilson proposed the gauge action [20]
\[S_{G,W}=\frac{\beta}{N}\sum_{x}\sum_{\mu<\nu}\mathrm{Re}\,\mathrm{tr}\big{(} \mathbb{1}-U_{\mu\nu}(x)\big{)}\, \tag{4}\]
which, as is well known, satisfies the above condition and is here written in terms of the inverse bare gauge coupling \(\beta=2\,N/g_{0}^{2}\) and the plaquette variables
\[U_{\mu\nu}(x)=U_{\mu}(x)\,U_{\nu}(x+\hat{\mu})\,U_{\mu}^{\dagger}(x+\hat{\nu} )\,U_{\nu}^{\dagger}(x). \tag{5}\]
The gauge action (4) and improved versions of it [21] are the most commonly used in Monte Carlo studies of SU(\(N\)) lattice gauge theories. They are, however, not unique and might not be the best choice for the study of lattice gauge theories at strong coupling, as they allow the gauge system to enter the bulk phase. This is not necessarily a proper phase, but simply a region in parameter space of the lattice theory where lattice artefacts dominate in ensemble averages. As a consequence the relation between lattice and continuum results becomes very complicated or can even be lost completely, if bulk and continuum phase are separated by a first order transition.
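As a point of reference for the modified action introduced below, the following minimal sketch (not the authors' production code; the array layout \(U[x_{0},x_{1},x_{2},x_{3},\mu]\) with periodic boundaries is an assumption made here) evaluates the Wilson action (4) for a stored SU(\(N\)) link configuration.

```python
import numpy as np

def wilson_action(U, beta):
    """Wilson gauge action, Eq. (4), for links U[x0,x1,x2,x3,mu] with values in SU(N)."""
    N = U.shape[-1]
    dagger = lambda M: np.conj(np.swapaxes(M, -1, -2))
    S = 0.0
    for mu in range(4):
        for nu in range(mu + 1, 4):
            U_mu_x   = U[..., mu, :, :]
            U_nu_x   = U[..., nu, :, :]
            U_nu_xmu = np.roll(U_nu_x, -1, axis=mu)    # U_nu(x + mu-hat)
            U_mu_xnu = np.roll(U_mu_x, -1, axis=nu)    # U_mu(x + nu-hat)
            P = U_mu_x @ U_nu_xmu @ dagger(U_mu_xnu) @ dagger(U_nu_x)   # plaquette, Eq. (5)
            S += np.sum(N - np.real(np.trace(P, axis1=-2, axis2=-1)))
    return beta / N * S
```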
## II Avoiding the lattice bulk phase
In this section we propose a characterization of "bulk configurations" in SU(\(N\)) lattice gauge systems, which allows for the definition of a family of lattice gauge actions that separate such bulk configurations from regular ones by an infinite potential barrier, while still yielding the same naive continuum limit as Wilson's plaquette gauge action. In combination with a hybrid Monte Carlo (HMC) update algorithm, the new gauge actions prevent the gauge system from entering a bulk phase.
### Motivation in U(1)
In the U(1) case, the Wilson gauge action in (4) reduces to
\[S_{G,W}=\beta\,\sum_{x}\sum_{\mu<\nu}\mathrm{Re}\big{(}\mathbb{1}-U_{\mu\nu}( x)\big{)}\, \tag{6}\]
and the Abelian link variables can be written as
\[U_{\mu}(x)=\mathrm{e}^{\mathrm{i}\,\theta_{x,\mu}}\quad\mathrm{with}\quad \theta_{x,\mu}=a\,A_{\mu}(x)\,\in\,(-\pi,\pi]. \tag{7}\]
Let us now define,
\[\Theta_{x,\mu\nu}=\theta_{x,\mu}+\theta_{x+\tilde{\mu},\nu}-\theta_{x+ \tilde{\nu},\mu}-\theta_{x,\nu}\,\in\,(-4\pi,4\pi]\, \tag{8}\]
and note that while \(\Theta_{x,\mu\nu}\) in (8) can vary in the interval \((-4\pi,4\pi]\), the gauge action (6) depends only on
\[\arg(U_{\mu\nu}(x))\,\in\,(-\pi,\pi]. \tag{9}\]
As illustrated in Fig. 1, the gauge action (6) produces a bulk-transition at \(\beta=\beta_{b}\approx 1\). For \(\beta<\beta_{b}\), the system is in the bulk phase, where the lattice spacing, \(a\), can be considered large and \(\Theta_{x,\mu\nu}\) from (8) explores the full \((-4\pi,4\pi]\)-interval. For \(\beta>\beta_{b}\), the system is in the continuum phase, where the lattice spacing tends to zero if \(\beta\to\infty\). In this phase, \(\Theta_{x,\mu\nu}\) can still be outside the \((-\pi,\pi]\)-interval, but the fraction of such plaquettes quickly drops as \(\beta\) is increased and most of the time, one has that \(\Theta_{x,\mu\nu}=\arg(U_{\mu\nu}(x))\).
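The distinction between \(\Theta_{x,\mu\nu}\) and \(\arg(U_{\mu\nu}(x))\) is easily seen on a single plaquette; the toy numbers below are purely illustrative.

```python
import numpy as np

theta = np.array([2.8, 2.9, -2.7, -2.9])            # four link angles in (-pi, pi]
Theta = theta[0] + theta[1] - theta[2] - theta[3]    # Eq. (8): 11.3, outside (-pi, pi]
arg_plaq = np.angle(np.exp(1j * Theta))              # Eq. (9): wrapped back into (-pi, pi]
print(Theta, arg_plaq)                               # the two differ by a multiple of 2*pi
```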
The fact that plaquettes with \(\Theta_{x,\mu\nu}\notin(-\pi,\pi]\) also appear in the continuum phase indicates that the value of \(\Theta_{x,\mu\nu}\) by itself cannot be used to distinguish bulk from continuum configurations. Plaquettes with \(\Theta_{x,\mu\nu}\notin(-\pi,\pi]\) are in fact strictly necessary also in the continuum phase in order to allow for topology fluctuations. However, as illustrated in Fig. 2, there are two qualitatively different ways in which plaquettes with \(\Theta_{x,\mu\nu}\notin(-\pi,\pi]\) can be produced if one starts from a configuration in which initially \(\Theta_{x,\mu\nu}=\arg(U_{\mu\nu}(x))\) for all plaquettes, namely:
* one of the links of the plaquette can move across the boundary of the \((-\pi,\pi]\)-interval and wrap around, which adds roughly \(\pm 2\,\pi\) to \(\Theta_{x,\mu\nu}\). As indicated
Figure 1: The U(1) lattice gauge theory with Wilson gauge action (6) undergoes a bulk-transition at \(\beta=\beta_{b}\approx 1\). For \(\beta<\beta_{b}\), the system is in the bulk phase, where the lattice spacing, \(a\), can be considered large and \(\Theta_{x,\mu\nu}\) from (8) explores the full \((-4\pi,4\pi]\)-interval. For \(\beta>\beta_{b}\), the system is in the continuum phase, where the lattice spacing tends to zero if \(\beta\to\infty\). In this phase, \(\Theta_{x,\mu\nu}\) can still be outside the \((-\pi,\pi]\)-interval, but the fraction of such plaquettes quickly drops as \(\beta\) is increased and one mostly has \(\Theta_{x,\mu\nu}=\arg(U_{\mu\nu}(x))\).
in part (a) of Fig. 2, the latter can happen also if \(\arg(U_{\mu\nu}(x))\) is close to \(0\). We note that such a wrapping link will produce a shift of almost \(\pm 2\,\pi\) in the \(\Theta_{x,\mu\nu}\) of all the plaquettes that contain this link;
* no link wraps around the \((-\pi,\pi]\)-interval, but \(\Theta_{x,\mu\nu}\) is already close to the boundary of the \((-\pi,\pi]\)-interval and finally grows across it as one of its links undergoes a small change in the right direction. As indicated in part (b) of Fig. 2, this can only happen if the gauge action allows \(|\arg(U_{\mu\nu}(x))|\) to grow close to \(\pi\). As this type of "continuous" plaquette wrapping occurs typically only at individual plaquettes at a time, and not as in case (a) to all plaquettes that contain the updated link variable, this can lead to the formation of metastabilities. The reason for this is that continuously wrapped and not continuously wrapped plaquettes pull in opposite directions on shared links.
As case (b) can only occur if individual plaquette angles are allowed to grow close to \(\pm\pi\), which corresponds to the maximum value of the local action of a plaquette, this case is likely to occur only in the bulk phase, where \(\beta\) is small and the plaquette action cannot grow sufficiently large as to oppose the entropy-driven randomization of the plaquettes in the lattice system. We will therefore introduce in Sec. II.3 a family of actions, which will prevent plaquette wrappings of type (b). As it turns out, this is sufficient to get rid of the bulk transition. First, however, we discuss in Sec. II.2 how the plaquette wrapping types (a) and (b) generalize to the case of non-Abelian \(\mathrm{SU}(N)\) lattice gauge theories.
### Situation in \(\mathrm{SU}(N)\)
In order to generalize the discussion from the previous section to \(\mathrm{SU}(N)\), we diagonalize the link variables:
\[U_{\mu}(x)=V_{x,\mu}^{\dagger}\,\mathrm{diag}\big{(}\,\mathrm{e}^{\mathrm{i} \theta_{x,\mu}^{(1)}},\ldots,\mathrm{e}^{\mathrm{i}\theta_{x,\mu}^{(N)}}\, \big{)}V_{x,\mu}\, \tag{10}\]
where \(\sum_{n=1}^{N}\theta_{x,\mu}^{(n)}=0\), and do the same with the plaquette variables:
\[U_{\mu\nu}(x)=V_{x,\mu\nu}^{\dagger}\,\mathrm{diag}\big{(}\,\mathrm{e}^{\mathrm{i}\theta_{x,\mu\nu}^{(1)}},\ldots,\mathrm{e}^{\mathrm{i}\theta_{x,\mu\nu}^{(N)}}\,\big{)}V_{x,\mu\nu}\] \[=V_{x,\mu}^{\dagger}\,\mathrm{diag}\big{(}\,\mathrm{e}^{\mathrm{i}\theta_{x,\mu}^{(1)}},\ldots,\mathrm{e}^{\mathrm{i}\theta_{x,\mu}^{(N)}}\,\big{)}V_{x,\mu}\;V_{x+\hat{\mu},\nu}^{\dagger}\,\mathrm{diag}\big{(}\,\mathrm{e}^{\mathrm{i}\theta_{x+\hat{\mu},\nu}^{(1)}},\ldots,\mathrm{e}^{\mathrm{i}\theta_{x+\hat{\mu},\nu}^{(N)}}\,\big{)}V_{x+\hat{\mu},\nu}\] \[\quad\times V_{x+\hat{\nu},\mu}^{\dagger}\,\mathrm{diag}\big{(}\,\mathrm{e}^{-\mathrm{i}\theta_{x+\hat{\nu},\mu}^{(1)}},\ldots,\mathrm{e}^{-\mathrm{i}\theta_{x+\hat{\nu},\mu}^{(N)}}\,\big{)}V_{x+\hat{\nu},\mu}\;V_{x,\nu}^{\dagger}\,\mathrm{diag}\big{(}\,\mathrm{e}^{-\mathrm{i}\theta_{x,\nu}^{(1)}},\ldots,\mathrm{e}^{-\mathrm{i}\theta_{x,\nu}^{(N)}}\,\big{)}V_{x,\nu}\, \tag{11}\]
where again, \(\sum_{n=1}^{N}\,\theta_{x,\mu\nu}^{(n)}=0\). As the products of \(V_{x,\mu}\)-matrices appearing in (11) after the last equality sign mix the eigenvalues of the link variables, the phases of the plaquette eigenvalues cannot be represented as a simple sum of individual link-eigenvalue phases. Nevertheless, it is still true that each of the plaquette eigenvalues can leave the \((-\pi,\pi]\)-interval either by obtaining a large shift due to the wrapping of a link, analogous to case (a) of the \(\mathrm{U}(1)\) discussion in Sec. II.1, or by approaching and crossing the \(\pm\pi\)-boundary "smoothly" as in case (b). The latter case can be avoided by preventing the plaquette eigenvalues from approaching the value \(-1\).
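In practice the phases \(\theta^{(n)}_{x,\mu\nu}\) can be extracted directly from the plaquette matrix, e.g. with a helper like the following sketch (assuming the plaquette is available as a complex NumPy array):

```python
import numpy as np

def plaquette_phases(P):
    """Eigenvalue phases theta^(n) in (-pi, pi] of an SU(N) plaquette matrix P."""
    return np.angle(np.linalg.eigvals(P))   # the action of Sec. II.3 keeps these away from +-pi
```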
### Bulk-preventing action
To prevent plaquettes from having eigenvalues close to \(-1\), we introduce the following family of gauge actions:
\[S_{G,b} = \frac{2\,\gamma}{n\,N}\sum_{x}\sum_{\mu<\nu}\mathrm{tr}\big{(} \big{(}\Omega_{\mu\nu}^{\dagger}(x)\Omega_{\mu\nu}(x)\big{)}^{-n}-\mathbb{1} \big{)}\, \tag{12}\]
with \(n\geq 1\) and
\[\Omega_{\mu\nu}(x)=\big{(}\mathbb{1}\,+\underbrace{U_{\mu\nu}(x)}_{\text{ plaquette}}\big{)}/2. \tag{13}\]
The form of the actions (12) was inspired by the dislocation-prevention action introduced in [19]. The naive continuum limit of (12) is the same as for the Wilson gauge action \(S_{G,W}\). This can be seen by writing \(U_{\mu\nu}(x)=\exp\bigl{(}\mathrm{i}\,s\,F_{\mu\nu}^{\prime}(x)\bigr{)}\), with \(s\,F_{\mu\nu}^{\prime}=a^{2}\,F_{\mu\nu}+\mathcal{O}\big{(}a^{3}\big{)}\)
and expanding in a power series in \(s\). For the local plaquette actions, contributing to \(S_{G,W}\) from (4) one then finds:
\[\operatorname{Re}\operatorname{tr}(\mathbb{1}-U_{\mu\nu}(x))=s^{2} \operatorname{tr}\bigl{(}F^{\prime}_{\mu\nu}(x)F^{\prime}_{\mu\nu}(x)\bigr{)}/2\] \[\qquad-s^{4}\frac{1}{12}\bigl{(}\operatorname{tr}\bigl{(}F^{ \prime}_{\mu\nu}(x)F^{\prime}_{\mu\nu}(x)\bigr{)}/2\bigr{)}^{2}+\mathcal{O} \bigl{(}s^{6}\bigr{)}\, \tag{14}\]
and, correspondingly, for the local actions contributing to \(S_{G,b}\) from (12):
\[\frac{2}{n}\operatorname{tr}\bigl{(}\bigl{(}\Omega^{\dagger}_{ \mu\nu}(x)\Omega_{\mu\nu}(x)\bigr{)}^{-n}-\mathbb{1}\bigr{)}=s^{2}\operatorname {tr}\bigl{(}F^{\prime}_{\mu\nu}(x)F^{\prime}_{\mu\nu}(x)\bigr{)}/2\] \[\quad+s^{4}\frac{1+3\,n}{24}\bigl{(}\operatorname{tr}\bigl{(}F^{ \prime}_{\mu\nu}(x)F^{\prime}_{\mu\nu}(x)\bigr{)}/2\bigr{)}^{2}+\mathcal{O} \bigl{(}s^{6}\bigr{)}. \tag{15}\]
In the limit \(\left(a\to 0\right)\), one has \(s\sim a^{2}\), \(F^{\prime}_{\mu\nu}(x)\sim F_{\mu\nu}(x)\), and we see that the two local actions have the same leading term \(\sim\mathcal{O}\bigl{(}a^{4}\bigr{)}\), namely:
\[a^{4}\ \operatorname{tr}(F_{\mu\nu}(x)F_{\mu\nu}(x))/2. \tag{16}\]
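This agreement can also be checked numerically: for a plaquette of the form \(U_{\mu\nu}=\exp(\mathrm{i}\,s\,F^{\prime}_{\mu\nu})\) with small \(s\), both local actions approach the common leading term (16). The sketch below (illustrative only, using a random traceless Hermitian \(F^{\prime}\)) does this for \(N=3\) and \(n=2\).

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, s = 3, 2, 1e-3

# random traceless Hermitian F' (stand-in for the rescaled field strength)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
F = (A + A.conj().T) / 2
F -= (np.trace(F) / N) * np.eye(N)

# plaquette U = exp(i s F') from the eigendecomposition of F'
w, V = np.linalg.eigh(F)
U = (V * np.exp(1j * s * w)) @ V.conj().T

wilson = np.real(np.trace(np.eye(N) - U))                        # local term of Eq. (4)
Om = (np.eye(N) + U) / 2                                         # Eq. (13)
Minv = np.linalg.inv(Om.conj().T @ Om)
bp = (2.0 / n) * np.real(np.trace(np.linalg.matrix_power(Minv, n) - np.eye(N)))   # Eq. (12)
leading = s**2 * np.real(np.trace(F @ F)) / 2                    # common leading term, Eq. (16)

print(wilson / leading, bp / leading)    # both ratios approach 1 as s -> 0
```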
The actions (12) introduce an infinite potential barrier between bulk and continuum configurations. This is sufficient to ensure that, if we start a simulation from a cold configuration (all link variables equal to the identity) and use a hybrid Monte Carlo (HMC) algorithm to update the gauge system, no bulk-configurations will be produced.
One could infer that this procedure yields a non-ergodic update algorithm. However, one should keep in mind that the part of the configuration space that is not sampled is irrelevant for the continuum limit of the theory. The algorithm prevents ensemble averages of the lattice system from being contaminated (or even dominated) by bulk-configurations, which should allow one to extract continuum physics also at stronger coupling. The same effect could be achieved by defining a modified measure, which gives zero weight to bulk configurations. However, this would be difficult to implement as the latter are hard to identify once they are created. The use of an action (12) in combination with an HMC algorithm is a proxy to achieve the same effect but in a simpler and more economic way.
For U(1) and SU(2), the actions in (12) have a similar effect as the topological actions discussed in [15; 17; 18]: the larger the inverse bare coupling \(\gamma\), the stronger the plaquette values are repelled from \(-1\) resp. \(-\mathbb{1}\). For SU(\(N\)) with \(N>2\) the effect of (12) is different from the one of the topological action discussed in [18], as for \(N>2\) the trace of the plaquette no longer completely determines the plaquette eigenvalues. However, an action which can have a similar effect as our actions (12) has been given in [22]. While for us it was desirable that the actions (12) do not prevent topology from fluctuating, there have also been attempts to find actions which keep the topology fixed [16].
## III Results
To see whether the bulk-preventing actions (12) deserve their name and how well they are able to reproduce the same weak coupling results as the Wilson gauge action, we carried out simulations with pure gauge SU(2), pure gauge SU(5) and SU(3) with \(N_{f}=4\) Wilson fermion flavours. With the Wilson gauge action, all three of these theories enter a bulk phase for sufficiently small values of the inverse gauge coupling \(\beta\). For pure gauge SU(2), the transition is a smooth cross-over, while for pure gauge SU(5) and for the fermionic SU(3) theory with sufficiently large fermion hopping parameter \(\kappa\) the transition is of first order. In the following we will discuss the three cases separately. We use the version of the action (12) with \(n=2\). The choice of \(n\) should not affect physical results, but it turns out that too small a value of \(n\) will also require a smaller step size in the HMC trajectories to achieve similar acceptance rates, which can become computationally more expensive than using \(n=2\).
According to the expansions (14) and (15), the Wilson gauge (WG) action (4) and bulk-preventing (BP) action (12) agree only to order \(\mathcal{O}\bigl{(}s^{2}\bigr{)}\sim\mathcal{O}\bigl{(}a^{4}\bigr{)}\). Thus, the inverse bare couplings \(\beta\) and \(\gamma\) will not be equal in the weak coupling limit. However, it turns out that locally the two couplings can be related quite accurately by a constant shift, \(\gamma=\beta-\Delta\beta\), which we will use to directly compare bare lattice results obtained with the two different actions. Of course, there is in general no need for bare lattice results, obtained with different actions, to agree. However, it seems that in the present case, the systems controlled by the WG and the BP action behave in the weak coupling regime sufficiently equally, so that a direct comparison of bare lattice results is reasonable.
### SU(2) pure gauge
SU(2) pure gauge theory with the WG action (4) is known to have a smooth cross-over between the bulk phase and the continuum-like phase. Thus, for this theory the BP action (12) is not expected to provide any significant advantage over the WG action, and both actions should give rise to the same results not just at weak coupling, but all the way down to strong coupling.
In Fig. 3 we compare results obtained with the WG and the BP action. The WG data is plotted as function of \(\beta-\Delta\beta\) with \(\Delta\beta=1.65\) and the BP data is plotted as function of \(\gamma\). The shift \(\Delta\beta=1.65\) has been determined by requiring that the "spatial deconfinement" transition, at which the spatial Polyakov loop develops a non-zero expectation value (indicating that the physical spatial volume becomes too small to fit a meson), occurs for the two actions at the same value of \((\beta-\Delta\beta)\) resp. \(\gamma\).
The top-left panel in Fig. 3 shows the average of the traced plaquette and we note that when plotted against \(\beta-1.65\) resp. \(\gamma\), the plaquette values for the two different actions agree remarkably well at sufficiently weak
coupling. Only below \(\gamma=\beta-1.65\approx 0.65\), where the WG action enters the bulk phase, does the plaquette value for the WG action start to deviate from the BP one as a function of \(\gamma=\beta-1.65\).
The strong coupling limit is for both actions obtained by sending their respective inverse coupling to zero, i.e. \(\left(\gamma\to 0\right)\) and \(\left(\beta\to 0\right)\). With the WG action the strong coupling phase extends over the interval \(\beta\in[0,2.3]\), while with the BP action, the strong coupling phase extends over the significantly smaller interval \(\gamma\in[0,0.65]\). In the strong coupling phase, the system should therefore change more slowly with the WG action as a function of \(\beta\) than with the BP action as a function of \(\gamma\).
This is indeed what can be observed in the remaining panels of Fig. 3: the results obtained with the two actions for temporal Polyakov loop (top-center), Polyakov loop variance (top-right), and topological susceptibility (bottom-left) are consistent and match for \(\gamma=\beta-1.65>0.65\) very nicely as functions of \(\gamma\) resp. \(\beta-1.65\), while for \(\gamma=\beta-1.65<0.65\), the WG results change more slowly as function of \(\beta\) than the corresponding BP results do as function of \(\gamma\).
The last two panels on the second row of Fig. 3 show gradient flow quantities: the bottom-center panel shows the gradient flow coupling, \(N\,g_{\rm GF}^{2}(c)\) at \(c=\lambda/N_{s}=0.3\), with \(\lambda=\sqrt{8\,t}\) being the flow scale corresponding to flow time \(t\); and the bottom-right panel shows \(\lambda_{0}/a\), which is the flow scale \(\lambda\) (in lattice units) at which \(t^{2}\left\langle E(t)\right\rangle=0.3\)[23] with \(\left\langle E(t)\right\rangle\) being the clover action density of the flowed gauge field at flow time \(t\). Both gradient flow quantities have been corrected for leading finite volume effects [24], resp. both finite volume and finite lattice spacing effects [7], adapted to the case \(N_{t}\neq N_{s}\).
As mentioned at the beginning of this section, the shift \(\Delta\beta=1.65\) has been determined by requiring that the spatial deconfinement happens at the same value of \(\gamma\) and \(\beta-\Delta\beta\). Because of our small spatial volumes of
Figure 3: Comparison of pure SU(2) gauge theory results obtained with the Wilson gauge action (WG), Eq. (4) (orange circles and brown triangles) and the bulk-preventing (BP) action, Eq. (12), for \(n=2\) (black diamonds and blue squares). To guide the eye, the data points are connected by straight lines. The first row shows the real part of the average traced plaquette (top left), the temporal Polyakov loop (top center), and the temporal Polyakov loop variance (top right). The second row shows the topological susceptibility (bottom left) the gradient flow coupling at \(c=\lambda/N_{s}=0.3\) (bottom center), with flow scale \(\lambda=\sqrt{8\,t}\), corresponding to flow time \(t\), and \(\lambda_{0}/a\) (bottom right), which is the flow scale \(\lambda\) (in lattice units) at which \(t^{2}\left\langle E(t)\right\rangle=0.3\) with \(\left\langle E(t)\right\rangle\) being the clover action density of the flowed gauge field at flow time \(t\). On our finite lattices with spatial size \(V_{s}=12^{3}\), the temporal size is set to \(N_{t}=6\) for finite temperature, and to \(N_{t}=24\) for zero-temperature. The results are shown as functions of \(\beta-1.65\) (WG) resp. \(\gamma\) (BP).
linear size \(N_{s}=12\), this happens already at \(\gamma=\beta-1.65>0.8\). We note that by determining \(\lambda_{0}/a\) with the criterion \(t^{2}\left\langle E(t)\right\rangle=0.3\), one finds \(\lambda_{0}/a\approx N_{s}/2\) at spatial deconfinement, which shows that requiring \(t^{2}\left\langle E(t)\right\rangle=0.3\) is indeed a reasonable choice for defining \(\lambda_{0}/a\).
### SU(5) pure gauge
Fig. 4 provides the same information as Fig. 3 but for SU(5) instead of SU(2). The shift in \(\beta\) required to match \(\gamma\) at weak coupling has been set to \(\Delta\beta=14.35\), which, as in the SU(2)-case, is determined by matching the values of \(\gamma\) and \(\beta-\Delta\beta\) at which spatial deconfinement occurs.
For SU(5) the bulk transition of the WG action is of 1st order [1], which is clearly visible from the sharp discontinuity in the WG data for the average plaquette at \(\beta-14.35\approx 2.3\), shown in the top-left panel of Fig. 4. In contrast, with the BP action the plaquette is completely continuous as a function of the inverse gauge coupling \(\gamma\). In the continuum phase, i.e. for \(\gamma=\beta-14.35>2.3\), the average plaquette values obtained with the two different actions converge only slowly with increasing inverse coupling, while the average temporal Polyakov loop (top-center), temporal Polyakov loop variance (top-right) and topological susceptibility (bottom-left), as well as the gradient flow quantities, \(N\,g_{\rm GF}^{2}(c=0.3)\) and \(\lambda_{0}/a\), agree for the two actions almost immediately when \(\gamma=\beta-14.35>2.3\).
The finite temperature transition for \(N_{t}=6\) occurs at around \(\gamma=\beta-14.35\approx 2.9\). This is above the value at which the WG action undergoes the bulk transition, and the properties of the finite temperature transition should therefore be described equally well with the WG and with the BP action. The temporal Polyakov loop (top-center) looks indeed the same for the two actions at \(N_{t}=6\), and also the temporal Polyakov loop variance (top-right) agrees very well for \(\gamma=\beta-14.35>2.3\) at \(N_{t}=6\); at \(\gamma=\beta-14.35\approx 2.3\) (red vertical dashed line) one can, however, notice a small jump in the Polyakov loop variance for the WG action, while the Polyakov loop variance obtained with the BP action behaves completely regularly across this point.
A similar behavior can be observed in the \(N_{t}=6\) data for the topological susceptibility (bottom-left), where the results obtained with the WG and BP action agree for \(\gamma=\beta-14.35>2.3\), but as \(\gamma=\beta-14.35\) decreases below the bulk transition point, \(\gamma=\beta-14.35\approx 2.3\), the topological
Figure 4: Same as Fig. 3, but for pure gauge SU(5) and including the case \(N_{t}=4\) for which the WG action can no longer properly resolve the finite temperature transition as the latter is forced to occur on top of the bulk transition. The dashed vertical red line indicates the approximate location of the bulk transition of the WG action. Note that the shown data was obtained on small lattices with \(N_{s}=12\) and we did not attempt to perform simulations directly at the pseudo critical points. The peaks visible in the data for the Polyakov loop variance (top-right panel) therefore do not reflect the true pseudocritical behavior; the lines simply connect the available data points to guide the eye.
susceptibility obtained with the WG action jumps and approaches almost immediately its strong-coupling plateau value, while with the BP action, the topological susceptibility approaches its strong coupling value much more smoothly.
With \(N_{t}=4\), the WG action is no longer able to properly resolve the finite temperature transition. The data obtained with the BP action suggests that the finite temperature transition should for \(N_{t}=4\) occur at \(\gamma=\beta-14.35\approx 2.1\). With the WG action, the system is in the bulk-phase at this value of the bare gauge coupling [1]. It appears that the finite temperature transition cannot take place inside the bulk phase and occurs therefore on top of the bulk transition. Also the topological susceptibility obtained with the WG action for \(N_{t}=4\) appears to be unable to decrease as long as the system is in the bulk phase. As in the case of \(N_{t}=6\), the topological susceptibility obtained with the WG action appears also for \(N_{t}=4\) to be stuck at the strong coupling plateau value for \(\gamma=\beta-14.35<2.3\) and to decrease abruptly at \(\gamma=\beta-14.35\approx 2.3\) when the inverse coupling is increased beyond this point. In contrast, with the BP action the asymptotic strong coupling value of the topological susceptibilities is also for \(N_{t}=4\) approached smoothly.
The measurements of the gradient flow coupling \(N\,g_{\rm GF}^{2}(c)\) at \(c=\lambda/N_{s}=0.3\) are shown in the bottom-center panel of Fig.4. From this we conclude that the WG action is not capable of reaching gradient flow couplings larger than \(g_{\rm GF}^{2}\approx 11\) before hitting the bulk transition. In the bottom-right panel we show the flow scale \(\lambda_{0}/a\) at which \(t^{2}\left\langle E(t)\right\rangle=0.3\). For the WG action there is a discontinuity in \(\lambda_{0}/a\) at the bulk transition point, indicating that there is a largest reachable lattice spacing. For the BP action these problems vanish and the gradient flow quantities behave smoothly.
As the spatial volume is again small, with linear size \(N_{s}=12\) in each direction, the system undergoes spatial deconfinement at \(\gamma=\beta-14.35\approx 3.7\). By fixing \(\lambda_{0}/a\) using the criterion \(t^{2}\left\langle E(t)\right\rangle=0.3\), one finds again that \(\lambda_{0}/a=N_{s}/2\) when spatial deconfinement happens.
### \(\mathrm{SU}(3)\) with \(N_{f}=4\) Wilson fermions
For an \(\mathrm{SU}(3)\) lattice gauge theory with the Wilson gauge action, the transition between continuum- and bulk-phase is normally a cross-over. However, if the theory is coupled to fermions, the transition can turn first order.
As a concrete example for a system where this is the case, we consider \(\mathrm{SU}(3)\) lattice gauge theory with the WG action and with \(N_{f}=4\) mass-degenerate, dynamical Wilson-clover fermion flavors, that couple to the gauge field via 2-step stout smeared links. Fig. 5 shows a schematic \((\beta,\kappa)\)-phase diagram for this system. The dashed red line marks the location of the bulk transition, the blue line indicates where the PCAC quark mass, \(m_{q}\), crosses zero, and the dashed black line marks the location of the thermal resp. "confinement/deconfinement" transition if the temporal lattice size is set to \(N_{t}=6\). The label "confinement/deconfinement" is put in quotation marks, because we use the temporal Polyakov loop as approximate order parameter for deconfinement, despite the presence of dynamical fermions [25, 26]. In QCD deconfinement is observed to be accompanied by a chiral transition that occurs at the same temperature; for light fermions, this chiral transition dominates, whereas in the heavy fermion limit, the "confinement/deconfinement" transition of pure gauge theory is approached.
The curves in Fig. 5 are based on parameter scans performed with simulations on lattices of spatial size \(V_{s}=N_{s}^{3}\) with \(N_{s}=12\). For the \(m_{q}=0\) and the bulk transition lines, the temporal lattice extent was set to \(N_{t}=24\) (approximating zero-temperature), while for the "confinement/deconfinement" transition, the indicated \(N_{t}=6\) was used. The additional dashed lines in different shades of gray are not based on actual simulations; they merely illustrate how the "confinement/deconfinement" transition line is expected to change if \(N_{t}\) is increased
Figure 5: Sketch of the finite temperature phase diagram for \(\mathrm{SU}(3)\) lattice gauge theory with Wilson gauge action and \(N_{f}=4\) Wilson clover fermion flavors (coupling to the gauge field via two-step stout smeared gauge links). The parameters \(\beta\) and \(\kappa\) are, respectively, the inverse gauge coupling and fermion hopping parameter. The red, dashed line marks the location of bulk transition (resp. crossover if \(\kappa\) is sufficiently small) of the Wilson gauge action, the blue line shows where the PCAC quark mass, \(m_{q}\), vanishes, and the dashed lines in different shades of gray mark the location of the finite temperature ”deconfinement” transition lines for different values of \(N_{t}\). The locations of the bulk transition and the \(m_{q}=0\) line were estimated with simulations on a \(12^{3}\times 24\) lattice, and the \(N_{t}=6\) ”deconfinement” transition line from simulations on a \(12^{3}\times 6\) lattice. Note that the \(N_{f}=4\) fermions cause the pseudo-critical \(\beta\) to be lower than it would be in the pure-gauge case.
(assuming that also \(N_{s}\) is increased accordingly).
For values of \(\kappa\) above \(\sim 0.13\) the line where the PCAC quark mass crosses zero coincides with the bulk transition line. Across these coinciding lines the system undergoes a first order transition and PCAC quark mass never passes through the value \(m_{q}=0\) but jumps discontinuously from positive to negative values across the transition line. This is shown in the top left panel of Fig. 6 where the
Figure 6: Simulation results for SU(3) lattice gauge theory, coupled via 2-step stout smeared links to \(N_{f}=4\) degenerate Wilson clover fermion flavors with hopping parameter \(\kappa=1.358\). The simulations were carried out on lattices of size \(12^{3}\times 6\) (finite temp.) resp. \(12^{3}\times 24\) (zero temp.). As in the previous figures, the orange circles and brown triangles correspond, respectively, to zero and finite temperature results obtained with the Wilson gauge action (WG), Eq. (4), and the black diamonds and blue squares to corresponding results obtained with the bulk-preventing action (BP), Eq. (12). Data points are connected by straight lines to guide the eye. The first row shows the PCAC quark mass (top left) and average unsmeared (top center) and smeared (top right) plaquette as functions of \(\gamma\) (BP) resp. \(\beta-4.167\) (WG). The remaining rows show the quantities as functions of the PCAC quark mass (obtained from the \(N_{t}=24\) simulations). The second row shows the chiral condensate (middle left), the temporal Polyakov loop (middle center), and the topological susceptibility (middle right), and the third row shows the disconnected piece of the chiral susceptibility (bottom left), variance of the temporal Polyakov loop (bottom center), and the integrated auto-correlation time of the topological charge (bottom right). The shaded areas in the different panels mark the PCAC quark mass range that cannot be resolved with the WG action for the given simulation parameters, due to the bulk transition.
PCAC quark mass for the WG action (brown circles) is shown as a function of \(\beta-4.167\) at \(\kappa=1.358\). To the left of the bulk transition line, the system is in the unphysical bulk phase, while to the right of the line the system has unphysical negative PCAC quark mass. Thus, the lattice does not describe any continuum-related physics for \(\kappa>0.13\). Only for \(\kappa<0.13\) is there a range in \(\beta\) for which the system is in the continuum phase and the PCAC quark mass is non-negative.
With \(N_{t}=6\), also the "confinement/deconfinement" line in Fig. 5 is for \(\kappa<1.42\) on top of the bulk transition line. In the displayed range of \(\kappa\) the "confinement/deconfinement" transition separates from the bulk transition line only for \(\kappa>1.42\), but is then located in the negative PCAC mass region and hence unphysical. To extract information about the continuum theory from this lattice system, one would have to increase \(N_{t}\) (and, correspondingly, also \(N_{s}\) to avoid dominance of finite volume effects) so that the "confinement/deconfinement" line fully separates from the bulk transition line. Thus, we can conclude that it is not possible to reach the light quark confinement (chiral) phase transition with the WG action using \(N_{t}=6\) lattices. Of course, in the limit (\(\kappa\to 0\)), where the quark mass grows much larger than the deconfinement energy scale, the "confinement/deconfinement" line is expected to separate from the bulk transition line also for \(N_{t}=6\), as the fermions decouple and the system reduces to pure gauge SU(3).
Fig. 6 contains also results obtained with the bulk-preventing action (12). In this case the bulk transition is absent and the PCAC quark mass approaches \(m_{q}=0\) continuously. The small gap in the data around \(m_{q}=0\) is due to the slowing down caused by the appearance of zero eigenmodes of the Wilson-Dirac operator when \(m_{q}\to 0\). This could be avoided by e.g. using Schrodinger functional boundary conditions, which remove zero modes.
In the first two panels of the second and third row of Fig. 6 the chiral condensate (2nd row, left) and disconnected chiral susceptibility (last row, left), as well as the temporal Polyakov loop (2nd row, center) and corresponding variance (last row, center) are plotted as functions of the PCAC quark mass, \(m_{q}\). While deep in the strong-coupling and deep in the negative mass phase the results obtained with the WG action agree with those obtained using the BP action, the discontinuity in \(m_{q}\) with the WG action (marked by the shaded areas) implies that the WG action cannot be used to study the transition region. On the other hand, with the BP action there is no discontinuity in \(m_{q}\) and no bulk transition, and the behavior of the chiral condensate and the Polyakov loop at the finite-temperature phase transition are resolved.
The remaining two panels of Fig. 6, which show the topological susceptibility (2nd row, right) and integrated auto-correlation time for the topological charge itself (last row, right), indicate that also when coupled to fermions, fluctuations of the gauge-topology are not hindered by the use of the BP gauge action from (12) and HMC updates. For the values of \(m_{q}\) which are accessible with both actions, both actions yield the same results for the topological susceptibility. The slightly higher integrated auto-correlation time for the topological charge with the BP action at \(N_{t}=6\) is mostly due to a different tuning of the acceptance rates for the HMC trajectories.
## IV Conclusions
We have identified a mechanism which appears to be relevant for the formation of unphysical "bulk" configurations and the corresponding occurrence of a "bulk transition" in simulations of lattice SU(\(N\)) gauge theories using Wilson's plaquette gauge action. We proposed a one-parameter family of alternative gauge actions, which possess the same continuum limit as the Wilson plaquette gauge action but which, when used in combination with an HMC update algorithm, prevent bulk-configurations from being created.
We tested our bulk-preventing simulation framework for pure gauge SU(2), pure gauge SU(5), and for SU(3) with \(N_{f}=4\) mass-degenerate Wilson-clover fermion flavors with hopping parameter \(\kappa=1.358\), and which are coupled to the gauge field via 2-step stout smeared link variables. We found that in all three cases, the bulk-preventing action (12) with \(n=2\) removes the bulk transition and reproduces at sufficiently weak coupling the same results as the Wilson plaquette action.
In the case of the fermionic SU(3) theory, the Wilson gauge action could not be used to study the physical finite temperature phase transition on \(N_{t}=6\) lattices at small quark masses. This is due to the fact that the bulk transition prevents the system from simultaneously reaching the physical transition region and small quark masses. On the other hand, with the bulk-preventing action (12) the bulk transition is absent and \(m_{q}\) can be made arbitrarily small. It is also worth noting that the bulk-preventing actions do not seem to hinder any processes required for topology fluctuations.
###### Acknowledgements.
The authors acknowledge support from the Academy of Finland grants 308791, 319066, and 345070. T. R. is supported by the Swiss National Science Foundation (SNSF) through the grant no. TMPFP2_210064. The authors wish to acknowledge CSC - IT Center for Science, Finland, and the Finnish Computing Competence Infrastructure (FCCI) for computational resources.
## Appendix A Computing the gauge force
Note that the plaquette variables satisfy \(U^{\dagger}_{\mu\nu}(x)=U_{\nu\mu}(x)\) and therefore also the \(\Omega_{\mu\nu}(x)\) defined in Eq. (13) satisfy \(\Omega^{\dagger}_{\mu\nu}(x)=\Omega_{\nu\mu}(x)\). We can therefore write the bulk-preventing action from Eq. (12) as
\[S_{G,b} = \frac{\gamma}{n\,N}\sum_{x}\sum_{\mu\neq\nu}\mathrm{tr}\big{(} \big{(}\Omega_{\mu\nu}^{\dagger}(x)\Omega_{\mu\nu}(x)\big{)}^{-n}-1\big{)}. \tag{35}\]
Let us now denote by \(\delta^{a}_{y,\rho}\) the variation with respect to the link-variable that lives on the link that points from site \(y\) in \(\rho\)-direction. We then have:
\[\delta^{a}_{y,\rho}\,S_{G,b} = -\frac{\gamma}{n\,N}\sum_{x}\sum_{\mu\neq\nu}\mathrm{tr}\big{\{} \big{(}\Omega_{\mu\nu}^{\dagger}(x)\Omega_{\mu\nu}(x)\big{)}^{-n} \tag{36}\] \[\quad\cdot\big{(}\delta^{a}_{y,\rho}\big{(}\Omega_{\mu\nu}^{ \dagger}(x)\Omega_{\mu\nu}(x)\big{)}^{n}\big{)}\] \[\quad\cdot\big{(}\Omega_{\mu\nu}^{\dagger}(x)\Omega_{\mu\nu}(x) \big{)}^{-n}\big{\}}\] \[= -\frac{\gamma}{N}\sum_{x}\sum_{\mu\neq\nu}\mathrm{tr}\big{\{} \big{(}\Omega_{\mu\nu}^{\dagger}(x)\Omega_{\mu\nu}(x)\big{)}^{-(n+1)}\] \[\quad\quad\cdot\delta^{a}_{y,\rho}\big{(}\Omega_{\mu\nu}^{\dagger }(x)\Omega_{\mu\nu}(x)\big{)}\big{\}}\] \[=-\frac{\gamma}{2\,N}\sum_{x}\sum_{\mu\neq\nu}\mathrm{tr}\big{\{} \big{(}\delta^{a}_{y,\rho}U_{\mu\nu}(x)\big{)}A_{\mu\nu}(x)\] \[\quad\quad\quad+A_{\mu\nu}^{\dagger}(x)\big{(}\delta^{a}_{y,\rho }U_{\mu\nu}^{\dagger}(x)\big{)}\big{\}}\] \[=-\frac{\gamma}{N}\sum_{x}\sum_{\mu\neq\nu}\mathrm{tr}\big{\{} \big{(}\delta^{a}_{y,\rho}U_{\mu\nu}(x)\big{)}A_{\mu\nu}(x)\big{\}}\, \tag{37}\]
where
\[A_{\mu\nu}(x)=\big{(}\Omega_{\mu\nu}^{\dagger}(x)\Omega_{\mu\nu} (x)\big{)}^{-(n+1)}\Omega_{\mu\nu}^{\dagger}(x)\\ =\big{(}\Omega_{\mu\nu}^{\dagger}(x)\Omega_{\mu\nu}(x)\big{)}^{- n}\Omega_{\mu\nu}^{-1}(x)\, \tag{38}\]
and we have used that \(A_{\mu\nu}^{\dagger}(x)=A_{\nu\mu}(x)\) as \(\Omega_{\mu\nu}(x)\) and \(\Omega_{\mu\nu}^{\dagger}(x)\) commute with each other and with their inverses. If we now carry out the variation of the plaquette explicitly, we find:
\[\delta^{a}_{y,\rho}U_{\mu\nu}(x)=\\ \delta_{x,y}\delta_{\mu\rho}\big{(}\delta^{a}U_{\mu}(x)\big{)}U_{ \nu}(x+\widehat{\mu})U_{\mu}^{\dagger}(x+\widehat{\nu})U_{\nu}^{\dagger}(x)\\ +\delta_{x+\widehat{\mu},y}\delta_{\nu\rho}U_{\mu}(x)\big{(} \delta^{a}U_{\nu}(x+\widehat{\mu})\big{)}U_{\mu}^{\dagger}(x+\widehat{\nu})U_ {\nu}^{\dagger}(x)\\ +\delta_{x+\widehat{\nu},y}\delta_{\mu\rho}U_{\mu}(x)U_{\nu}(x+ \widehat{\mu})\big{(}\delta^{a}U_{\mu}^{\dagger}(x+\widehat{\nu})\big{)}U_{ \nu}^{\dagger}(x)\\ +\delta_{x,y}\delta_{\nu\rho}U_{\mu}(x)U_{\nu}(x+\widehat{\mu})U_ {\mu}^{\dagger}(x+\widehat{\nu})\big{(}\delta^{a}U_{\nu}^{\dagger}(x)\big{)} \\ =\mathrm{i}\,\delta_{x,y}\delta_{\mu\rho}T^{a}U_{\mu}(x)U_{\nu}(x+ \widehat{\mu})U_{\mu}^{\dagger}(x+\widehat{\nu})U_{\nu}^{\dagger}(x)\\ +\mathrm{i}\,\delta_{x+\widehat{\mu},y}\delta_{\nu\rho}U_{\mu}(x)T ^{a}U_{\nu}(x+\widehat{\mu})U_{\mu}^{\dagger}(x+\widehat{\nu})U_{\nu}^{\dagger}( x)\\ -\mathrm{i}\,\delta_{x+\widehat{\nu},y}\delta_{\mu\rho}U_{\mu}(x)U _{\nu}(x+\widehat{\mu})U_{\mu}^{\dagger}(x+\widehat{\nu})T^{a}U_{\nu}^{\dagger}( x)\\ -\mathrm{i}\,\delta_{x,y}\delta_{\rho\nu}U_{\mu}(x)U_{\nu}(x+ \widehat{\mu})U_{\mu}^{\dagger}(x+\widehat{\nu})U_{\nu}^{\dagger}(x)T^{a}\, \tag{39}\]
where \(\{\,T^{a}\,\}_{a=1,\ldots,N^{2}-1}\) are the generators of \(\mathrm{SU}(N)\), normalized so that \(\mathrm{tr}\big{(}T^{a}T^{b}\big{)}=\delta^{ab}/2\). Plugging this into Eq. (37), we obtain after some manipulations:
\[\delta^{a}_{y,\rho}\,S_{G,b} = -\frac{2\gamma}{N}\sum_{\nu\neq\rho}\mathrm{Re}\big{[}\,\mathrm{tr} \big{(}\mathrm{i}\,T^{a}U_{\rho\nu}(y)A_{\rho\nu}(y)\big{)} \tag{40}\] \[\quad+\mathrm{tr}\big{(}\mathrm{i}\,T^{a}U_{\rho(-\nu)}(y)A_{\rho(- \nu)}(y)\big{)}\big{]}\,\]
with
\[U_{\mu(-\nu)}(x)=U_{\nu}^{\dagger}(x-\widehat{\nu})U_{\nu\mu}(x-\widehat{\nu} )U_{\nu}(x-\widehat{\nu}) \tag{41}\]
being the plaquette that starts and ends at site \(x\) and is spanned by the \(\mu\) and the negative \(\nu\) direction, and the corresponding A-matrix,
\[A_{\mu(-\nu)}(x)=U_{\nu}^{\dagger}(x-\widehat{\nu})A_{\nu\mu}(x-\widehat{\nu})U _{\nu}(x-\widehat{\nu}). \tag{42}\]
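The force expression (40) can be checked numerically against a finite difference of the action (35). The short Python sketch below does this for \(\mathrm{SU}(2)\) on a tiny two-dimensional periodic lattice. It is only an added illustration: in particular, it assumes the simple choice \(\Omega_{\mu\nu}(x)=\big(1+U_{\mu\nu}(x)\big)/2\) for the \(\Omega\) variables of Eq. (13), which is consistent with the factor \(1/2\) appearing in Eq. (36) but is not taken from the text, and it uses arbitrary values of \(\gamma\), \(n\) and the lattice size. With these assumptions, the two printed numbers should agree up to the finite-difference error.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
Ls, D, N, n, gamma = 3, 2, 2, 2, 1.3            # 3x3 periodic lattice, SU(2), n = 2

s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
T = [s1/2, s2/2, s3/2]                          # tr(T^a T^b) = delta^{ab}/2

def rand_su2():
    q = rng.normal(size=4); q /= np.linalg.norm(q)
    return q[0]*np.eye(2) + 1j*(q[1]*s1 + q[2]*s2 + q[3]*s3)

U = {(x, t, mu): rand_su2() for x, t, mu in product(range(Ls), range(Ls), range(D))}

def shift(s, mu, sign=+1):
    s = list(s); s[mu] = (s[mu] + sign) % Ls; return tuple(s)

def plaq(s, mu, nu):                            # plaquette U_{mu nu}(x)
    return (U[s + (mu,)] @ U[shift(s, mu) + (nu,)]
            @ U[shift(s, nu) + (mu,)].conj().T @ U[s + (nu,)].conj().T)

def Omega(P):                                   # assumed form of Eq. (13)
    return 0.5*(np.eye(N) + P)

def action():                                   # Eq. (35)
    S = 0.0
    for s in product(range(Ls), range(Ls)):
        for mu, nu in product(range(D), range(D)):
            if mu == nu: continue
            Om = Omega(plaq(s, mu, nu)); M = Om.conj().T @ Om
            S += np.trace(np.linalg.matrix_power(np.linalg.inv(M), n) - np.eye(N)).real
    return gamma/(n*N)*S

def A_mat(s, mu, nu):                           # Eq. (38)
    Om = Omega(plaq(s, mu, nu)); M = Om.conj().T @ Om
    return np.linalg.matrix_power(np.linalg.inv(M), n + 1) @ Om.conj().T

def force(s, rho, a):                           # Eq. (40)
    tot = 0.0
    for nu in range(D):
        if nu == rho: continue
        t1 = np.trace(1j*T[a] @ plaq(s, rho, nu) @ A_mat(s, rho, nu))
        sm = shift(s, nu, -1); V = U[sm + (nu,)]
        t2 = np.trace(1j*T[a] @ (V.conj().T @ plaq(sm, nu, rho) @ V)     # Eq. (41)
                              @ (V.conj().T @ A_mat(sm, nu, rho) @ V))   # Eq. (42)
        tot += (t1 + t2).real
    return -2.0*gamma/N*tot

# symmetric finite difference: U_rho(y) -> exp(+/- i eps T^a) U_rho(y), cf. Eq. (39)
y, rho, a, eps = (1, 2), 0, 1, 1.0e-5
expT = lambda e: np.cos(e/2)*np.eye(2) + 1j*np.sin(e/2)*[s1, s2, s3][a]
U0 = U[y + (rho,)]
U[y + (rho,)] = expT(+eps) @ U0; Sp = action()
U[y + (rho,)] = expT(-eps) @ U0; Sm = action()
U[y + (rho,)] = U0
print("finite difference :", (Sp - Sm)/(2*eps))
print("formula (40)      :", force(y, rho, a))
```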
|
2305.03335 | Local causality in the works of Einstein, Bohm and Bell | In this chapter we discuss the Einstein Podolsky Rosen theorem and its strong
relation with Bell's theorem. The central role played by the concept of beable
introduced by Bell is emphasized. In particular we stress that beables involved
in EPR and Bell theorems are not limited to hidden supplementary variables
(e.g., like in the de Broglie-Bohm (dBB) pilot-wave theory) but also include
the wave function. In full agreement with Bell this allows us the reformulate
the EPR and Bell results as strong theorems concerning nonlocality for quantum
mechanics itself and not only for hidden-variables approaches as it is often
mistakenly assumed. Furthermore, we clarify some repeated ambiguities
concerning `local-realism' and emphasize that neither realism nor determinism
nor counterfactual definiteness are prerequisites of EPR and Bell theorems. | Aurélien Drezet | 2023-05-05T07:29:20Z | http://arxiv.org/abs/2305.03335v3 | # Whence Nonlocality?
###### Abstract
In this chapter we discuss the Einstein Podolsky Rosen theorem and its strong relation with Bell's theorem. We clarify some ambiguities concerning 'local-realism' and emphasize that neither realism nor determinism nor counterfactual definiteness are prerequisites of these theorems.
## I Introduction and Motivations
David Bohm is a central figure in the history of quantum mechanics. As is well known, in 1952 [4] he proposed a deterministic hidden-variables theory able to complete and reproduce the statistical predictions and results of standard quantum mechanics. Moreover, the theory he developed was actually a rediscovery of the old pilot-wave theory presented by Louis de Broglie in 1927 at the fifth Solvay conference. Viewed retrospectively seventy years later, the most original and perhaps the most controversial contribution of Bohm, compared to what de Broglie did, is his analysis of the Einstein Podolsky Rosen (EPR) paradox concerning nonlocality and completeness of quantum mechanics. Indeed, the goal of the EPR article [10] was to show that if we assume the principle of Einstein locality, quantum mechanics must be incomplete. More precisely, EPR shows that either quantum mechanics is incomplete or quantum mechanics is nonlocal, i.e., it violates Einstein's locality principle. Moreover, Bohm in his 1952 work showed that his own hidden-variables theory able to complete quantum mechanics is explicitly non-local. While this doesn't contradict the EPR results seen as a theorem, Bohm's approach was certainly the option 'which Einstein would have liked least'. Of course this was just the beginning of the story: in 1964 John Bell, building on the EPR work, discovered his famous theorem ([3], chap. 2) firmly establishing that quantum mechanics (irrespective of being complete or incomplete) must be nonlocal.
The previous summary is the view advocated by some physicists and philosophers, including Sheldon Goldstein [15], Tim Maudlin [21], Jean Bricmont [5], Travis Norsen [22; 23], David Albert [1] and of course John Bell [3], but this is not the majority view. The majority view considers that Bell's theorem concerns what is known as 'local realism', i.e., that it forces us to abandon either realism or locality. Since most physicists would disapprove of relaxing locality, they actually see the theorem as a strong indication that one must abandon realism. Most confusions concerning EPR, Bell's theorem and nonlocality arise from hand-wavy and vague arguments associated with the definitions of Einstein locality and realism. In order to celebrate the work of Bohm on nonlocality, the present author thinks it could be useful to summarize once more the EPR-Bohm-Bell connection.
## II The Story
Everything starts with Einstein, who very much disliked the way quantum mechanics is built and axiomatized as a complete indeterministic theory. For Einstein, the fact that quantum mechanics is a statistical theory should be explained by a deeper dynamical approach (probably deterministic) able to complete the statistical predictions by a classical-like mechanical and realistic explanation. One should not forget that Einstein was a pioneer of classical statistical mechanics with his famous 1905 interpretation of Brownian motion as resulting from an underlying mechanical process. He also developed general relativity, a fundamentally deterministic theory of gravitation and space-time. Einstein could certainly not believe that God plays with dice at the microscopic scale! Moreover, for Einstein realism was a central prerequisite (more central even than determinism). In 1953, in a festschrift book honoring de Broglie's achievements, Einstein wrote: _I am not blushing to put the concept of 'real state of a system' at the center of my meditation_ ([12], p. 7).
In 1927, during the Solvay conference, he debated with Niels Bohr about the possibility of beating or defeating the Heisenberg principle using correlations between a microscopic quantum particle (involved in a double-slit interference experiment) and some macroscopic, a priori classical, devices entangled with the particle. Bohr's recollection of these discussions shows that Einstein at that time underestimated the coherence of quantum mechanics and didn't realize that the entanglement between particles and macroscopic systems in Einstein's 'which-path' experiment only makes sense if the macroscopic apparatuses are also analyzed within quantum mechanics1. In 1930, at the sixth Solvay conference, Einstein and Bohr continued this debate with the famous 'photon-box' gedanken experiment where Einstein attempted to circumvent the Heisenberg principle (in the time-energy domain) by using correlations between a particle in a box and a device measuring its time and energy. Once more, Bohr showed Einstein that it is not allowed to beat quantum mechanics with this kind of approach since quantum mechanics must be used for describing every part of the whole indivisible system, in agreement with the complementarity principle. Moreover, these two encounters with Bohr
convinced Einstein that quantum mechanics is mathematically and physically self-consistent, and that it is illusory to try defeating Heisenberg's principle directly. However, he didn't stop the fight: in 1935, with his two collaborators Podolsky and Rosen, he proposed the famous paradox involving entanglement between two remote particles [10]. The core of the EPR article is the completeness of quantum mechanics. By a complete physical theory they meant that 'every element of the physical reality must have a counterpart in the physical theory'. Questioning the completeness of quantum mechanics therefore amounts to suggesting that some effects or correlations of an empirical nature are not predicted or explained by the theory. Of course, since quantum mechanics is supposed to describe everything, this is a priori impossible. Yet, EPR showed that by adding to quantum mechanics a very natural and intuitive feature of the classical world (one that nobody would like to abandon) we obtain a contradiction, and therefore conclude that quantum mechanics cannot be complete and at the same time satisfy this natural property. This natural and intuitive property that EPR added is of course Einstein locality: a fundamental feature of causality in relativistic space-time.
The whole EPR historical deduction is based on the position and momentum observables for two entangled particles but it is common (and we will follow this strategy) to use instead the example proposed by Bohm in 1951 of two spin-\(\frac{1}{2}\) particles 1 and 2 entangled in the singlet state:
\[|\psi\rangle=\frac{1}{\sqrt{2}}(|+1_{\bf\hat{z}}\rangle_{1}|-1_{\bf\hat{z}} \rangle_{2}-|-1_{\bf\hat{z}}\rangle_{1}|+1_{\bf\hat{z}}\rangle_{2}) \tag{1}\]
where \(|+1_{\bf\hat{z}}\rangle_{j},|-1_{\bf\hat{z}}\rangle_{j}\) are the two eigenstates of the \(\sigma_{z}^{(j)}\) Pauli matrices for the \(j^{th}\) particle 2 As a consequence we have \((\sigma_{z}^{(1)}+\sigma_{z}^{(2)})|\psi\rangle=0\) that expresses the perfect anticorrelation between the two \(z-\)spin components. Importantly, by symmetry we have also \((\sigma_{x}^{(1)}+\sigma_{x}^{(2)})|\psi\rangle=0=(\sigma_{y}^{(1)}+\sigma_{y} ^{(2)})|\psi\rangle\) expressing the perfect spin anticorrelation in every direction \(x\), \(y\) etc... Due to this perfect anticorrelation between the particles experimentalists Alice and Bob recording respectively the projected spin of particle 1 and 2 along a common axis \({\bf\hat{n}}\) will naturally obtain the joint probability
Footnote 2: Rigorously we should also include \(\Phi_{j}({\bf X}_{j})\) the non-overlapping spatial parts of the wave function associated with the two remote particles. However, to simplify the notations we will avoid this discussion.
\[P(\alpha,-\alpha/{\bf\hat{n}}_{1},{\bf\hat{n}}_{2}={\bf\hat{n}}_{1},|\psi\rangle)=\frac{1}{2},\ P(\alpha,\alpha/{\bf\hat{n}}_{1},{\bf\hat{n}}_{2}={\bf\hat{n}}_{1},|\psi\rangle)=0 \tag{2}\]
where \(\alpha=\pm 1\), and the probabilities are conditioned on the common analysis direction \(\mathbf{\hat{n}}_{1}=\mathbf{\hat{n}}_{2}=\mathbf{\hat{n}}\) of the two independent Stern-Gerlach apparatuses used by Alice and Bob, and on the singlet wave function \(|\psi\rangle\). These conditions make explicit the perfect anticorrelation at the core of the EPR work.
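As a concrete check of Eq. 2, the joint probabilities can be evaluated directly from the singlet state with standard spin-\(\frac{1}{2}\) projectors. The following minimal Python sketch (an added illustration, not part of the original text, with analyzer directions restricted to the \(x-z\) plane for simplicity) also displays the general singlet prediction \(P(\alpha,\beta)=\frac{1-\alpha\beta\cos(\varphi_{1}-\varphi_{2})}{4}\), which will be used again below.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

def projector(phi, alpha):
    """Projector onto eigenvalue alpha = +/-1 of sigma.n, with n = (sin phi, 0, cos phi)."""
    return 0.5*(np.eye(2) + alpha*(np.sin(phi)*sx + np.cos(phi)*sz))

# singlet state (|+z>|-z> - |-z>|+z>)/sqrt(2) in the basis {|++>, |+->, |-+>, |-->}
psi = np.array([0, 1, -1, 0], complex)/np.sqrt(2)

def P(alpha, beta, phi1, phi2):
    """Joint probability that Alice gets alpha along phi1 and Bob gets beta along phi2."""
    return np.real(psi.conj() @ np.kron(projector(phi1, alpha), projector(phi2, beta)) @ psi)

# perfect anticorrelation for equal settings, Eq. (2)
print(P(+1, -1, 0.7, 0.7), P(+1, +1, 0.7, 0.7))          # 0.5  0.0

# arbitrary settings: agrees with (1 - alpha*beta*cos(phi1 - phi2))/4
a, b, p1, p2 = +1, -1, 0.3, 1.1
print(P(a, b, p1, p2), (1 - a*b*np.cos(p1 - p2))/4)
```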
Now comes the EPR crux: EPR introduce the Einstein-locality assumption based on natural features of the classical world picture concerning correlations and relativistic causality. The main idea is that a local operation made by Alice on particle 1 at space-time point \(x_{1}\) should not influence what is happening to the second particle recorded by Bob at space-time point \(x_{2}\) if the two events are space-like separated (so that no-signal could propagate between the two points). As Einstein wrote in 1949 :
But on one supposition we should, in my opinion, absolutely hold fast: the real factual situation of the system \(S_{2}\) is independent of what is done with the system \(S_{1}\) which is spatially separated from the former.[11], p. 85
Assuming this natural relativistic hypothesis, the EPR deduction is easy to understand. From Eq. 2 we have perfect anticorrelation and therefore, if Alice is recording the spin-1/2 component of particle 1 along the direction \(\mathbf{\hat{n}}_{1}=\mathbf{\hat{z}}\) with the result \(+1\) (respectively \(-1\)), we know that Bob must necessarily record in his lab the result \(-1\) (respectively \(+1\)) for the same setting \(\mathbf{\hat{n}}_{2}=\mathbf{\hat{z}}\). Moreover, nothing obliges Bob to measure the spin projection of his particle along the same direction and he could for instance decide to record the spin along the \(x-\)direction: \(\mathbf{\hat{n}}_{2}=\mathbf{\hat{x}}\). Now Bob or Alice could take their decision at the last moment, and since the locality principle is assumed to hold, no influence that could modify the results obtained by Alice and Bob is allowed to propagate between them. As a direct consequence we can apply counterfactual reasoning: if Bob records along the \(x-\)direction and obtains for example \(+1\), and Alice obtained for example the result \(-1\) along the \(z-\)direction, then we know for sure that Alice would have obtained the result \(-1\) along the \(x-\)direction and Bob the result \(+1\) along the \(z-\)direction, even though these experiments have not actually been done. Crucially, the legitimacy of the counterfactual reasoning of EPR is mandated by Einstein-locality and is not an independent hypothesis. This justifies the famous introduction of 'elements of reality' by EPR:
If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists
an element of physical reality corresponding to this physical quantity.[10]
Of course EPR understood perfectly well that counterfactual reasoning is in general forbidden for a single particle using the usual approach to quantum mechanics. This is because for a single particle one could invoke Heisenberg's principle to impose a strong form of complementarity and contextuality: it is impossible to record in one single experiment the spin projections along the \(x\) and \(z\) directions because \(\sigma_{x}\) and \(\sigma_{z}\) don't commute. Therefore, one must choose one experiment or the other, and if one performs sequential experiments it is known that dispersion will occur, in agreement with Heisenberg's principle. Speaking about determinations and counterfactual hidden properties that could be observed but actually are not is presented as useless in standard quantum mechanics. To cite Asher Peres: 'Unperformed experiments have no results' [24]. However, with Einstein-locality EPR found a clean way to go around the limitations of Heisenberg's principle. Assuming locality, we can know counterfactually the spin components of the two particles along the orthogonal directions \(x\) and \(z\) even though we only actually measured the spin of particle 1 (respectively 2) along the \(z\) (respectively \(x\)) direction. With two measurements and assuming locality we actually know four spin projection values along the \(x\) and \(z\) directions for the two particles, and this is impossible if we assume that quantum mechanics is complete. In other words, assuming that quantum mechanics is local (QM-L) and complete (QM-C), we get that quantum mechanics is actually incomplete (QM-IC)!
This is a wonderful logical contradiction that can be formally written: "QM-L & QM-C=False" or equivalently "\(\neg\) QM-L OR \(\neg\) QM-C=True", i.e., "QM-NL OR QM-IC=True", where \(\neg\) QM-L is the negation of QM-L, i.e., quantum mechanics is nonlocal (QM-NL), and similarly \(\neg\) QM-C is QM-IC. The EPR theorem is unavoidable: it implies that if QM-L is true then QM-IC must be true; and contrapositively, if QM-C is true then QM-NL must be true:
\[\mbox{QM-L}\Rightarrow\mbox{QM-IC},\mbox{ i.e., QM-C}\Rightarrow\mbox{ QM-NL}. \tag{3}\]
Importantly, EPR leads to three alternatives:
\[(i)\mbox{QM-L \& QM-IC}\] \[(ii)\mbox{QM-NL \& QM-IC}\] \[(iii)\mbox{QM-NL \& QM-C} \tag{4}\]
where (i) was favored by Einstein, (ii) by Bohm, and (iii) by Bohr and followers3. The beauty and logic of the EPR deduction/theorem are often underappreciated, and the fact that counterfactuality and determinism are actually derived, and not assumed, by EPR is still nowadays misunderstood.
Footnote 3: A different way to explain this is that either quantum mechanics is incomplete (i.e., regrouping options (i) and (ii) of Eq. 4) _or_ (and this or is exclusive) quantum mechanics is complete and nonlocal, i.e., option (iii) of Eq. 4.
## III Bell's Beables
Moreover, an important comment must be made concerning the EPR theorem. Indeed, while this result is rigorous, EPR didn't present it in a very formal way. The notions of locality and determinism are used in a very intuitive sense, as shown, for instance, in the previous quote of Einstein concerning locality. Therefore one would have to be more precise about the definition of locality. This was done by Bell, as will be discussed below. Furthermore, the path of EPR to incompleteness was obtained through the logical deduction of determinism, meaning that we actually have QM-L \(\Rightarrow\) QM-D, i.e., quantum mechanics is deterministic, and from that QM-D \(\Rightarrow\) QM-IC. But the notion of determinism derived is not very clearly discussed by EPR. For example, considering the actual \(z-\)component of the spin-1/2 particle measured by Alice, we see that the 'element of reality' \(v(\sigma_{z}^{(1)})\) associated with particle 1 implies through locality the existence of a counterfactual element of reality \(v(\sigma_{z}^{(2)})=-v(\sigma_{z}^{(1)})\) for the second particle even if Bob actually measured \(v(\sigma_{x}^{(2)})\). However, EPR didn't really explain what the meaning of \(v(\sigma_{z}^{(2)})\) is. Should this value preexist before the measurement? But since the measurement is not actually done, does this imply that the spin components must, like in a classical dynamics, be attributed to the particle in a non-contextual way? This, we know, is too particular a possibility, and actually the de Broglie-Bohm theory is a counter-example to it. More generally, it is better to say that in a deterministic and local theory something before the measurement predetermined the elements of reality observed or not by Alice and Bob. Writing this initial condition/predetermination as \(\lambda\), we see that EPR determinism actually presupposes only that we have two functions \(v(\sigma_{n}^{(1)}):=A(\lambda,\mathbf{\hat{n}},|\psi\rangle)\), \(v(\sigma_{n}^{(2)}):=B(\lambda,\mathbf{\hat{n}},|\psi\rangle)=-A(\lambda,\mathbf{\hat{n}},|\psi\rangle)\) with \(\sigma_{n}^{(i)}=\mathbf{n}\cdot\boldsymbol{\sigma}^{(i)}\) the spin operator of particle \(i=1\) or 2 along the analysis direction \(\mathbf{n}\). Moreover, this discussion was again done by Bell, not by EPR.
Therefore we must make a technical aside: in order to write the EPR reasoning in a formal way and discuss locality for deriving determinism, we must use Bell's notation for probabilities of beables. Bell introduced the concept of 'beable' ([3], chaps. 5,7,16,24) to characterize every physical (i.e., actual or real) property belonging to the system that must influence the measured correlations. Crucially, here the beables must include the 'classical' Stern-Gerlach devices with (in general different) directions \(\mathbf{\hat{n}}_{j}\) (i.e., associated with external fields and potentials acting on the two parts of the quantum system under study), the possible hidden supplementary variables \(\lambda\) (e.g., the particle positions in the de Broglie-Bohm pilot-wave theory), and the wave function \(|\psi\rangle\). It is unfortunate that most commentators of Bell fail to realize that \(|\psi\rangle\) actually belongs to the fundamental beables listed by Bell. Now it is always possible to write the quantum joint probabilities \(P(\alpha,\beta|\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle)=|\langle\alpha_{\mathbf{\hat{n}}_{1}},\beta_{\mathbf{\hat{n}}_{2}}|\psi\rangle|^{2}\) as:
\[P(\alpha,\beta/\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle)=\int_ {\Lambda}P(\alpha,\beta/\lambda,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},| \psi\rangle)\rho(\lambda,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi \rangle)d\lambda \tag{5}\]
where \(\alpha,\beta=\pm 1\) and the integral spans over the hidden-variable space4. At each run of the quantum experiment an actual value of \(\lambda\) is selected. Bell used this notation in 1964 and 1971 ([3], chaps. 2,4) to derive his theorem but actually we could also generalize it as
Footnote 4: We have \(\int_{\Lambda}\rho(\lambda,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi \rangle)d\lambda=1\) and \(\sum_{\alpha,\beta}P(\alpha,\beta/\lambda,\mathbf{\hat{n}}_{1},\mathbf{\hat{n} }_{2},|\psi\rangle)=1\).
\[P(\alpha,\beta/\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle)=\int_{ \Omega}P(\alpha,\beta/\omega,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi \rangle)\rho(\omega,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle)d\omega \tag{6}\]
which is formally the same as the original formula Eq. 5 used by Bell in his 1964 paper. We introduced the beable \(\omega:=(\lambda,\theta)\) where \(\theta\) is a new beable describing the quantum state. Again a value of \(\theta\) is actualized at each run of the experiment. Of course, since the probability description contains already \(|\psi\rangle\) as a condition, we expect \(\theta\) to be somehow redundant. For instance, according to Beltrametti and Bugajski[6; 8] for a complete theory (i.e., without \(\lambda\)) we can always write:
\[P(\alpha,\beta/\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle)=\int_{ \Theta}P(\alpha,\beta/\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\theta \rangle)\delta(|\theta\rangle-|\psi\rangle)d|\theta\rangle \tag{7}\]
where now the beable \(|\theta\rangle\) belongs to the Hilbert space of the two-spins system. We have \(P(\alpha,\beta/\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\theta\rangle)=| \langle\alpha_{\mathbf{\hat{n}}_{1}},\beta_{\mathbf{\hat{n}}_{2}}|\theta \rangle|^{2}\) and 5\(\rho(\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle,|\theta\rangle )=\delta(|\theta\rangle-|\psi\rangle)\). This represen
tation corresponds to what Harrigan and Spekkens [16] called an ontic description of the quantum state where the Dirac distribution associates a strongly localized density of probability to \(|\psi\rangle\). It is interesting to note that this is not the only ontic representation of the EPR state. For example, inspired by Wigner phase distribution Scully [25] developed an angular representation of the \(|\psi\rangle\) state also involving a Dirac distribution. Scully specifically considered the joint probability \(P(\alpha,\beta/\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle):=P( \alpha,\beta/\varphi_{1},\varphi_{2},|\psi\rangle)\) where \(\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2}\) are two apriori different unit vectors contained in the \(x-z\) plane (the particles move along the two opposed \(\pm y\) directions and \(\varphi_{j}:=\widehat{\mathbf{z},\mathbf{\hat{n}}_{j}}\) are the angles between the vector \(\mathbf{\hat{n}}_{j}\) characterizing the spin analyzers and the vertical common axis \(\mathbf{\hat{z}}\)). He found:
\[P(\alpha,\beta/\varphi_{1},\varphi_{2},|\Psi\rangle)=\iint P(\alpha/\theta_{1 },\varphi_{1},|\Psi\rangle)P(\beta/\theta_{2},\varphi_{2},|\Psi\rangle)\rho( \theta_{1},\theta_{2}/\varphi_{1},\varphi_{2},|\Psi\rangle)d\theta_{1}d\theta _{2} \tag{8}\]
with \(P(\alpha/\theta_{i},\varphi_{i},|\psi\rangle)=\frac{1+\alpha\cos(\varphi_{i}-\theta_{i})}{2}\) and \(\rho(\theta_{1},\theta_{2}/\varphi_{1},\varphi_{2},|\Psi_{EPR}\rangle)=\frac{1}{2}\delta(\theta_{2}-\theta_{1}-\pi)[\delta(\theta_{1}-\varphi_{1})+\delta(\theta_{1}-\varphi_{1}-\pi)]\). This leads to \(P(\alpha,\beta/\varphi_{1},\varphi_{2},|\psi\rangle)=\frac{1-\alpha\beta\cos\left(\varphi_{1}-\varphi_{2}\right)}{4}\), which is the quantum prediction for the singlet state. In later works Argaman and Di Lorenzo [2, 20] rediscovered the model with a more symmetric probability density \(\rho(\theta_{1},\theta_{2}/\varphi_{1},\varphi_{2},|\Psi_{EPR}\rangle)=\frac{1}{4}\delta(\theta_{2}-\theta_{1}-\pi)[\delta(\theta_{1}-\varphi_{1})+\delta(\theta_{1}-\varphi_{1}-\pi)+\delta(\theta_{2}-\varphi_{2})+\delta(\theta_{2}-\varphi_{2}-\pi)]\). We will come back to the interesting properties of these models concerning statistical independence and local-causality. Moreover, for the moment the crucial issue is that these various \(\theta\) variables are not to be interpreted as hidden or supplementary variables \(\lambda\) (even if that was the interpretation made by the authors of these models). Instead, these models provide ontic representations of a complete quantum theory. In other words, the examples provided by Beltrametti and Bugajski, or Scully and others, explicitly show that Eq. 6 always makes sense, even for approaches where quantum mechanics is supposed to be complete.
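The delta functions in Eq. 8 make this representation easy to check numerically: the integral reduces to a sum over the two points \((\theta_{1},\theta_{2})=(\varphi_{1},\varphi_{1}+\pi)\) and \((\varphi_{1}+\pi,\varphi_{1}+2\pi)\), each with weight \(1/2\). The short sketch below (an added illustration, not part of the original text) reproduces the singlet prediction \(\frac{1-\alpha\beta\cos(\varphi_{1}-\varphi_{2})}{4}\):

```python
import numpy as np

def P_local(alpha, theta, phi):
    # P(alpha/theta, phi, |psi>) = (1 + alpha*cos(phi - theta))/2
    return 0.5*(1 + alpha*np.cos(phi - theta))

def P_joint(alpha, beta, phi1, phi2):
    # sum over the two delta contributions of rho in Eq. (8), weight 1/2 each
    total = 0.0
    for theta1 in (phi1, phi1 + np.pi):
        theta2 = theta1 + np.pi
        total += 0.5*P_local(alpha, theta1, phi1)*P_local(beta, theta2, phi2)
    return total

phi1, phi2 = 0.4, 1.9
for a in (+1, -1):
    for b in (+1, -1):
        print(a, b, round(P_joint(a, b, phi1, phi2), 6),
              round((1 - a*b*np.cos(phi1 - phi2))/4, 6))   # model vs singlet prediction
```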
In his later writings of 1976 and 1991 ([3], chaps. 7,24) Bell emphasized the importance of the generality of the beable concept and made explicit use of Eq. 6 for general beables \(\omega\). In his famous article 'Bertlmann's socks and the nature of reality' Bell wrote:
It is notable that in this argument [i.e., Bell and EPR theorems] nothing is said about the locality, or even localizability, of the variable \(\lambda\) [our \(\omega\)]. These variables could well include, for example, **quantum mechanical state vectors**, which have no particular localization in ordinary space-time. It is assumed only that
the outputs A and B, and the particular inputs a and b [i.e., \(\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2}\)], are well localized. [3], chap. 16
In my opinion, besides the issue of locality, this important point concerning the \(\omega\) notation answers some commentators who unfortunately still continue to believe that hidden variables are a prerequisite of the Bell and EPR deductions. For example, in their 2014 QBist manifesto Fuchs, Mermin and Schack wrote:
The parameter \(\lambda\) [again our \(\omega\)] is undefined. It does not appear in the quantum theory. Nor has anybody ever suggested what in the experience of an agent \(\lambda\) [\(\omega\)] might correspond to. In QBism this puts it outside the scope of physical science. [14]
Clearly, the counterexamples of Beltrametti and Bugajski, or Scully illustrate the error. Furthermore, this proves that the fundamental formalism of Bell used for discussing nonlocality of quantum mechanics is actually independent of statements about realism or hidden variables. Therefore, this implies that repeated claims trying to oppose locality and realism as different alternatives to relinquish are badly motivated and based on misunderstanding of EPR and Bell works.
Moreover, once we accept Eq. 6 we can go back to the EPR-Bell reasoning. This allows us to give with Bell a rigorous definition of Einstein-locality or as Bell says 'local-causality'. Considering the elementary probability \(dP(\alpha,\beta,\lambda,\theta/\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle)\) we have always
\[dP(\alpha,\beta,\lambda,\theta/\mathbf{\hat{n}}_{1},\mathbf{ \hat{n}}_{2},|\psi\rangle)=P(\alpha,\beta/\omega,\mathbf{\hat{n}}_{1},\mathbf{ \hat{n}}_{2},|\psi\rangle)\rho(\omega,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2 },|\psi\rangle)d\omega\] \[=P(\alpha/\beta,\omega,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2}, |\psi\rangle)P(\beta/\omega,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi \rangle)\rho(\omega,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle)d\omega. \tag{9}\]
Now, Bell showed through several important papers that the good definition of local causality actually implies three mathematically precise conditions or axioms:
\[P(\alpha/\beta,\omega,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2}, |\psi\rangle)=P_{1}(\alpha/\omega,\mathbf{\hat{n}}_{1},|\psi\rangle) \tag{10}\] \[P(\beta/\omega,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi \rangle)=P_{2}(\beta/\omega,\mathbf{\hat{n}}_{2},|\psi\rangle)\] (11) \[\rho(\omega,\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi \rangle)=\rho(\omega,|\psi\rangle) \tag{12}\]
These conditions have been the subject of intense debates in the past concerning locality and causality. The last Eq. 12 is often named setting or measurement independence in
the literature (and sometimes the freedom-of-choice condition), meaning that the distribution of the beable \(\omega\) prepared initially at the source is naturally expected, in any 'good' causal theory, to be independent of the directions \(\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2}\) that could be selected by macroscopic devices located in the remote past along the backward cones, e.g., by photons coming from far-away quasars existing billions of light years away from Alice and Bob. If we relaxed this condition we would get some fatalistic or superdeterministic theories (including for instance retrocausal models). Accepting Bell's axiom Eq. 12 implies rejecting these fatalistic possibilities or loopholes (this natural assumption is accepted even in the nonlocal model of de Broglie and Bohm). The two other conditions contain some hypotheses about outcome and parameter/setting independence. In particular, assuming again no superdeterminism and assuming that the space-time points \(x_{1}\), \(x_{2}\) of the two recording events by Alice and Bob are space-like separated, we obtain with Eqs. 10 and 11 the more precise formulation of Einstein-locality quoted before, forbidding spurious and parasitic communications 6.
Footnote 6: That was clearly the great achievement of Aspect group in the 1980’s to have experimentally developed such a configuration closing the communication loophole.
The following step for rederiving EPR is now straightforward: by assuming Einstein/Bell locality, i.e., Eqs. 10-12, we go back to the perfect anticorrelation of Eq. 2 and deduce for every \(\omega\) such that \(\rho(\omega,|\psi\rangle)\neq 0\)
\[P_{1}(\alpha/\omega,\mathbf{\hat{n}},|\psi\rangle)P_{2}(\alpha/ \omega,\mathbf{\hat{n}},|\psi\rangle)=0 \tag{13}\] \[P_{1}(\alpha/\omega,\mathbf{\hat{n}},|\psi\rangle)P_{2}(-\alpha /\omega,\mathbf{\hat{n}},|\psi\rangle)\geq 0 \tag{14}\]
These conditions look harmless but now suppose that for a given actualized \(\omega\) the state \(|+1_{\mathbf{\hat{n}}}\rangle_{1}|-1_{\mathbf{\hat{n}}}\rangle_{2}\) was recorded as an outcome. From Eq. 14 it means that \(P_{1}(+1/\omega,\mathbf{\hat{n}},|\psi\rangle)\neq 0\), \(P_{2}(-1/\omega,\mathbf{\hat{n}},|\psi\rangle)\neq 0\) and thus from Eq. 13 we must have \(P_{1}(-1/\omega,\mathbf{\hat{n}},|\psi\rangle)=P_{2}(+1/\omega,\mathbf{\hat{n}},|\psi\rangle)=0\) and from probability conservation \(P_{1}(+1/\omega,\mathbf{\hat{n}},|\psi\rangle)=P_{2}(-1/\omega,\mathbf{\hat{n}},|\psi\rangle)=1\). A similar situation would occur if for the given \(\omega\) the result \(|-1_{\mathbf{\hat{n}}}\rangle_{1}|+1_{\mathbf{\hat{n}}}\rangle_{2}\) was obtained. In other words, we derive a deterministic theory: the EPR state requires probabilities
\[P_{1}(\alpha/\omega,\mathbf{\hat{n}}_{1},|\psi\rangle)=\delta_{\alpha,A( \lambda,\mathbf{\hat{n}}_{1},|\psi\rangle)}=\frac{1+\alpha\cdot A(\lambda, \mathbf{\hat{n}}_{1},|\psi\rangle)}{2} \tag{15}\]
and similarly for \(P_{2}(\beta/\omega,\mathbf{\hat{n}}_{2},|\psi\rangle)\) with values zero or one and beables7 \(A(\lambda,\mathbf{\hat{n}},|\psi\rangle)=-B(\lambda,\mathbf{\hat{n}},|\psi\rangle)=\pm 1\). From this derivation of determinism all the previous EPR deductions follow, and in particular the logical results Eqs. 3,4. Most importantly, the definitions of QM-L and its negation QM-NL are transparent: this concludes our rederivation of the EPR theorem.
We stress that in the case QM-C using beables \(\omega\) is a priori not mandatory to derive the EPR contradiction [22; 23]. Indeed, accepting locality is then equivalent to assuming the constraint \(P(\alpha,\beta/\mathbf{\hat{n}}_{1},\mathbf{\hat{n}}_{2},|\psi\rangle)=P(\alpha/\mathbf{\hat{n}}_{1},|\psi\rangle)P(\beta/\mathbf{\hat{n}}_{2},|\psi\rangle)\), which is obviously wrong [22; 26]. Moreover, using \(\omega\) leads to a more general and complete proof.
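That this factorization fails is immediate from Eq. 2: for equal settings the joint probability of equal outcomes vanishes while the product of the marginals does not. A two-line check (an added illustration, using the closed-form singlet probabilities):

```python
import numpy as np

phi1 = phi2 = 0.0                                   # equal settings
P_joint = (1 - (+1)*(+1)*np.cos(phi1 - phi2))/4     # P(+1,+1) for the singlet
print(P_joint, 0.5*0.5)                             # 0.0 vs 0.25: no factorization
```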
## IV Conclusion: From EPR to Bohm and Bell
Going back to the three EPR alternatives (i), (ii) and (iii) of Eq. 4, we see that abandoning locality implies relaxing at least one of the three conditions 10-12. Moreover, it is remarkable that from the three possible alternatives of Eq. 4, only physical examples of cases (ii) and (iii) are available.
Consider first the case (iii) where quantum mechanics is complete and nonlocal: we already have two representations, given by Eqs. 7 and 8. In the representation of Beltrametti and Bugajski, Eq. 7, we see that Eq. 12 is preserved but we relinquish Eqs. 10, 11. Looking at the representation of Scully et al., i.e., Eq. 8, we now see that Eqs. 10, 11 are preserved but it is actually Eq. 12 which is abandoned. This explains why this representation can be said to imply superdeterminism or retrocausality. Moreover, here the conditional probabilities \(P(\alpha/\theta_{i},\varphi_{i},|\psi\rangle)=\frac{1+\alpha\cos\left(\varphi_{i}-\theta_{i}\right)}{2}\) can only take the values zero or one on the support of \(\rho(\theta_{1},\theta_{2}/\varphi_{1},\varphi_{2},|\Psi_{EPR}\rangle)\), because of the specific form of this density. The theory is thus 'effectively' deterministic even though \(P(\alpha/\theta_{i},\varphi_{i},|\psi\rangle)\) is in general different from 0 or 1. This was expected because the derivation leading to Eq. 15 doesn't require Eq. 12 to hold true but only Eqs. 10, 11. Furthermore, we see that once we add to Eq. 15 the statistical independence Eq. 12, we obtain the EPR contradiction leading to the rejection of QM-C & QM-L theories.
Now concerning case (ii): there is no known example of a quantum theory involving hidden variables \(\lambda\) that is at the same time local in the Einstein-Bell sense. For instance, as is well known, the deterministic pilot-wave theory of de Broglie and Bohm preserves the statistical
independence of Eq. 12 but relaxes Eqs. 10, 11 in order to allow for action-at-a-distance with instantaneous forces. As Bohm wrote:
Thus the "quantum-mechanical" force may be said to transmit uncontrollable disturbances instantaneously from one particle to another through the medium of the \(\psi-\)field. [4]
Bohm subsequently remarked that in his hidden-variable theory there is clearly a strong difference with the case of a single particle, where the uncertainty principle explains why disturbance can locally preclude measurements of non-commuting observables within one single experimental protocol. For entangled particles in the EPR case, assuming hidden variables, one should perhaps always expect a kind of nonlocal action-at-a-distance affecting the two systems. This would somehow save the logic of the Heisenberg principle but would be in tension with special relativity.
It was moreover Bell, and not Bohm, who answered the question: is it possible to find an example of a quantum theory that is incomplete and local (i.e., case (i) of Eq. 4)? As we all know now, the answer he provided was: no. It is not our aim here to review the derivation of Bell's theorem, which only requires the validity of the 3 local-causality conditions Eqs. 10-12. We point out that Bell's theorem is easily generalizable to stochastic hidden-variables theories. This makes sense if the singlet state is interacting with inefficient detectors involving losses. In this regime the EPR derivation leading to determinism is not generally valid, but Bell's theorem and the contradiction with locality are preserved. The open and important question is of course which condition must be relaxed in order to develop a more complete theory involving hidden variables. The problem will not be discussed here, except to mention that the choice followed by Bohm involves action-at-a-distance and a preferred space-time foliation. The present author advocates an alternative view where the action-at-a-distance implied by the pilot wave of de Broglie is preserved as an effective description. At a lower 'subquantum level' the theory is fully local in the sense that no signal can propagate faster than light, but a superdeterminism driven by a time-symmetric field allows us to relax Eqs. 10-12 without contradicting the spirit of Einstein relativity [9].
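Although the derivation is not reviewed here, the content of Bell's answer is easy to illustrate numerically using the CHSH form of the inequality (due to Clauser, Horne, Shimony and Holt, and not part of the present text): any local deterministic assignment of outcomes \(\pm 1\) bounds the CHSH combination by 2, while the singlet correlation \(E(\varphi_{1},\varphi_{2})=-\cos(\varphi_{1}-\varphi_{2})\) reaches \(2\sqrt{2}\).

```python
import numpy as np
from itertools import product

# quantum CHSH value for the singlet, E(a,b) = -cos(a-b), at the standard angles
a1, a2, b1, b2 = 0.0, np.pi/2, np.pi/4, 3*np.pi/4
E = lambda a, b: -np.cos(a - b)
S_qm = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("singlet:", abs(S_qm))                         # 2*sqrt(2) ~ 2.828

# any local deterministic assignment A(lam,a), B(lam,b) in {+1,-1} obeys |S| <= 2
best = max(abs(A1*B1 - A1*B2 + A2*B1 + A2*B2)
           for A1, A2, B1, B2 in product((-1, 1), repeat=4))
print("local deterministic bound:", best)            # 2
```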
There is still one issue that we must discuss: the local-realism controversy. The local-realism rhetoric originates from a very serious definition of 'objective local theory' or 'local realistic theory' by Clauser, Horne and Shimony in the 1970's as an honest substitute for
the pejorative term 'local hidden variables' [18]. Moreover, the idea that one can actually decouple realism from locality in the EPR and Bell theorems is very strange and results from a misunderstanding. Indeed, from the three alternatives listed in Eq. 4 only (iii), i.e., "QM-C & QM-NL", refers to quantum mechanics being complete. Bell's theorem rejecting alternative (i), we are left with (ii) "QM-IC & QM-NL" and (iii). But note that nonlocal is actually a negation and the meaning of nonlocality changes from theory to theory. An advocate of complementarity will, following Bohr, more probably speak about quantum wholeness, indivisibility of phenomena, or non-separability in order to stress the difference with the action-at-a-distance semantics of Newtonian classical mechanics or hidden variables a la de Broglie-Bohm. Therefore, with such rephrasing the two surviving alternatives of Bell's theorem read:
\[(ii)\mbox{QM-NL \ \& QM-IC}\] \[(iii)^{\prime}\mbox{QM-NS}\,\sim\,\mbox{QM-C} \tag{16}\]
where QM-NS is an abbreviation for quantum non-separable or anything similar, and the '\(\sim\)' symbol is introduced to emphasize the sloppy equivalence or link between completeness and Bohr's non-separability/indivisibility. With this sloppy definition we indeed have to choose between nonlocal hidden variables (ii), i.e., a particular form of realism, and a version of quantum mechanics (iii)' considered as being complete, where the notion of Einstein locality is analyzed as too dogmatic and naive. The advocates of this rhetoric will eventually rephrase (iii)' as just quantum non-realism or antirealism (linked with some form of positivism, instrumentalism and/or operationalism) in order to contrast their view with the 'naive' classical realism defended by Einstein or even Bohm. Therefore we end up either assuming a realist nonlocal quantum world or a quantum nonrealist approach where non-separability is the rule.
However, in order to understand how locality is (falsely) proposed as an alternative to realism, it is crucial to see that the advocates of sloppy language often confuse the precise definition of local-causality (i.e., Eqs. 10-12, proposed by Bell after Einstein) with the also precise notion of local commutativity used in quantum-field and quantum measurement theory (for other related issues see also the Appendix). However, Bell was very clear ([3], chaps. 7,24) that these two notions shouldn't be confused. Local commutativity \([{\cal O}_{A},{\cal O}_{B}]=0\) of two local Hermitian operators \({\cal O}_{A}\) and \({\cal O}_{B}\) defined in two space-like separated spatial regions
can be used to justify some averaged statistical independence of local measurements made by Alice and Bob. More precisely, consider for example the case where Alice is considering the evolution of the mean value \(\langle{\cal O}_{A}(t)\rangle\) between times \(t\) and \(t+\delta t\) when Bob disturbs locally his settings. Assuming in the interaction picture that \({\cal O}_{A}(t+\delta t)={\cal O}_{A}(t)\) and that the quantum state evolves as \(|\Psi(t+\delta t)\rangle=e^{-i\delta t{\cal O}_{B}(t)}|\Psi(t)\rangle\) we obtain if \([{\cal O}_{A},{\cal O}_{B}]=0\)
\[\langle{\cal O}_{A}(t+\delta t)\rangle=\langle\Psi(t)|e^{+i\delta t{\cal O}_{ B}(t)}{\cal O}_{A}(t)e^{-i\delta t{\cal O}_{B}(t)}|\Psi(t)\rangle=\langle{\cal O }_{A}(t)\rangle. \tag{17}\]
This condition shows that a local measurement made by Bob on his side cannot affect the statistical observables of Alice. This is used to demonstrate the nonsignaling theorem, which is a key feature of relativistic quantum mechanics: \(\sum_{\beta}P(\alpha,\beta/{\bf\hat{n}}_{1},{\bf\hat{n}}_{2},|\psi\rangle)=P(\alpha/{\bf\hat{n}}_{1},{\bf\hat{n}}_{2},|\psi\rangle)=P(\alpha/{\bf\hat{n}}_{1},|\psi\rangle)\). Local-causality (i.e., Eqs. 10-12) implies nonsignaling but the opposite is not always true. Still it is possible for a quantum nonrealist to use locality in the weak sense of nonsignaling and local commutativity and at the same time to consider quantum theory as requiring an indivisibility or non-separability of phenomena a la Bohr, i.e., what EPR and Bell define as (iii) "QM-C & QM-NL". Clearly, it is the lack of precision in the language that allows something to be local and nonlocal at once. It is therefore not surprising that the false alternative between relinquishing locality or realism occurs if we use locality with such a hand-wavy definition in the EPR and Bell theorems.
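For the singlet this nonsignaling statement can be verified directly from the closed-form joint probability used above (again a small added illustration):

```python
import numpy as np

def P(alpha, beta, phi1, phi2):                 # singlet joint probability
    return (1 - alpha*beta*np.cos(phi1 - phi2))/4

phi1 = 0.8
for phi2 in (0.0, 1.1, 2.9):                    # Bob changes his setting ...
    print(phi2, sum(P(+1, b, phi1, phi2) for b in (+1, -1)))   # ... Alice's marginal stays 1/2
```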
## V Appendix: Fine's Theorem
A recurrent but misleading objection against the EPR-Bell deduction of nonlocality is based on a theorem attributed to A. Fine [13] (already anticipated by G. Lochak in collaboration with de Broglie [19]). The claim is that assuming Bell's factorizability for probability generally implies the existence of joint probabilities
\[P(A_{1},A_{2},B_{1},B_{2})=\int P(A_{1}/\omega)P(A_{2}/\omega)P(B_{1}/\omega)P (B_{2}/\omega)\rho(\omega)d\omega \tag{18}\]
where \(A_{1}\) and \(A_{2}\) (respectively \(B_{1}\) and \(B_{2}\)) are two incompatible observables (i.e., spin components along the \(x\) and \(z\) directions) for particle 1 (respectively particle 2). Joint probabilities used in Bell's derivation, like \(P(A_{i},B_{j})=\int P(A_{i}/\omega)P(B_{j}/\omega)\rho(\omega)d\omega\), are just marginals obtained from \(P(A_{1},A_{2},B_{1},B_{2})\). Moreover, we can similarly obtain \(P(A_{1},A_{2})=\int P(A_{1}/\omega)P(A_{2}/\omega)\rho(\omega)d\omega\) and since quantum mechanics prohibits defining a probability for
2303.01874 | Scaling behaviour under the influence of a homogeneous size-dependent
perturbation | We study the finite-size scaling behaviour at the critical point, resulting
from the addition of a homogeneous size-dependent perturbation, decaying as an
inverse power of the system size. The scaling theory is first formulated in a
general framework and then illustrated using three concrete problems for which
exact results are obtained. | L. Turban | 2023-03-03T12:10:30Z | http://arxiv.org/abs/2303.01874v1 | # Scaling behaviour under the influence of a homogeneous size-dependent perturbation
###### Abstract
We study the finite-size scaling behaviour at the critical point, resulting from the addition of a homogeneous size-dependent perturbation, decaying as an inverse power of the system size. The scaling theory is first formulated in a general framework and then illustrated using three concrete problems for which exact results are obtained.
finite-size scaling, size-dependent perturbation, marginal perturbation, universal amplitude
L. Turban
Footnote †: This work is licensed under a _Creative Commons Attribution 4.0 International License_. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
Another example of the occurrence of a size-dependent contribution to the Hamiltonian, but of a quite different nature, is offered by (mean-field) fully-interacting systems with \(N\) spins. Since the number of interacting pairs grows as \(N^{2}\), the double sum over the sites has to be divided by \(N\) in order to obtain a finite energy per site in the thermodynamic limit [11, 12, 13, 14, 15]. Here, we shall consider the change in the finite-size scaling behaviour when a size-dependent perturbation is added to the interaction amplitude \(K\).
The outline of the paper is as follows: section 2, which contains the main results, presents the finite-size scaling behaviour resulting from the introduction of a homogeneous size-dependent perturbation in the three different cases of irrelevant, marginal, and relevant perturbations. This is illustrated with three exactly solvable examples: percolation in 1d (section 3), the Ising chain in a transverse field corresponding to a 2d classical system (section 4) and the Ising model on the fully-connected lattice (section 5). The results are discussed in section 6. The logarithmic transformation of a radial perturbation is presented in appendix A. In appendix B, we consider a perturbation of the 1d percolation problem in which the size of the system is replaced by the distance to the surface (Hilhorst-van Leeuwen perturbation). The last two appendices give some calculational details.
## 2 Finite-size scaling
Let us consider a perturbed \(d\)-dimensional classical system with Hamiltonian \(\mathcal{H}\) such that:
\[-\beta\mathcal{H}=-\beta\mathcal{H}_{0}+\Delta\sum_{i}\zeta_{i}\,,\quad\beta= \frac{1}{k_{\mathrm{B}}T}\,. \tag{2.1}\]
The system is finite, with size \(L\gg 1\), in at least one of its \(d\) dimensions. The scaling field associated with the perturbation, \(\Delta\), is size-dependent and given by
\[\Delta=\frac{A}{L^{\omega}}\,,\quad\omega>0\,,\quad A=\mathrm{O}(1)\,. \tag{2.2}\]
\(\Delta\) is conjugate to the local operator \(\zeta\) with scaling dimension \(x_{\zeta}\) and the perturbation acts on a subspace with dimension \(d_{\Delta}\). Thus, \(d_{\Delta}=1\) for a line defect and \(d_{\Delta}=d\) when the perturbation extends over the whole system.
Under a change of the length scale by a factor \(b>1\), such that \(L^{\prime}=L/b\), the perturbation transforms according to:
\[\Delta^{\prime}=\frac{A^{\prime}}{L^{\prime\,\omega}}=b^{y_{\Delta}}\Delta\,, \quad y_{\Delta}=d_{\Delta}-x_{\zeta}. \tag{2.3}\]
Taking into account the size-dependence of the perturbation (2.2), the following behaviour is obtained for the amplitude (hereafter we assume that \(y_{\Delta}>0\)):
\[A^{\prime}=b^{y_{\Delta}-\omega}A. \tag{2.4}\]
Suppose now that \(\mathcal{H}_{0}\) is at the bulk critical point and let us look for the finite-size scaling behaviour of a local operator \(\varphi\), with scaling dimension \(x_{\varphi}\), under the influence of \(\Delta\) acting on \(\zeta\). At bulk criticality \(\varphi_{c}\) depends only on two variables, the size-dependent perturbation amplitude \(\Delta\) and the system size \(L\). It transforms according to
\[\varphi^{\prime}_{\ c}=\varphi_{c}(\Delta^{\prime},L^{\prime})=b^{x_{\varphi} }\varphi_{c}(\Delta,L), \tag{2.5}\]
so that:
\[\varphi_{c}(\Delta,L)=b^{-x_{\varphi}}\varphi_{c}(b^{y_{\Delta}}\Delta,L/b). \tag{2.6}\]
With \(b=L\) one obtains the following finite-size behaviour
\[\varphi_{c}(\Delta,L)=\Phi_{c}(A,L)=L^{-x_{\varphi}}\varphi_{c}(L^{y_{\Delta} -\omega}A,1)=L^{-x_{\varphi}}\phi_{\varphi}(u),\quad u=L^{y_{\Delta}-\omega}A, \tag{2.7}\]
where the scaling function \(\phi_{\varphi}(u)\) is a universal function of its argument [7], with \(\phi_{\varphi}(0)\) giving the universal finite-size scaling amplitude of the unperturbed system:
\[\Phi_{c}\left(0,L\right)=\phi_{\varphi}(0)L^{-x_{\varphi}}. \tag{2.8}\]
Depending on the value of \(\omega\), three cases have to be considered:
* Irrelevant perturbation, \(\omega>y_{\Delta}\): According to (2.4), the perturbation amplitude decreases under rescaling, and the scaling function \(\phi_{\varphi}(u)\) in (2.7) can be expanded in powers of \(u\) giving: \[\Phi_{c}(A,L)=L^{-x_{\varphi}}\left[\phi_{\varphi}(0)+\frac{A}{L^{\omega-y_{ \Delta}}}\phi^{\prime}(0)+\ldots\right].\] (2.9) The leading contribution is the unperturbed one in (2.8) and the perturbation affects only the sub-leading corrections to scaling.
* Marginal perturbation, \(\omega=y_{\Delta}\): Then, the perturbation amplitude \(A\) is invariant under rescaling and the argument of the scaling function in (2.7) no longer depends on \(L\) so that: \[\Phi_{c}(A,L)=\phi_{\varphi}(A)L^{-x_{\varphi}}.\] (2.10) The scaling behaviour, \(L^{-x_{\varphi}}\), is the same as for the unperturbed system but the finite-size scaling amplitude, \(\phi_{\varphi}(A)\), is now continuously varying with \(A\). It is actually a universal function of \(A\).
* Relevant perturbation, \(\omega<y_{\Delta}\): In this case, the perturbation amplitude grows under rescaling. Let us consider the situation where \(\omega=0\) which, according to (2.2) corresponds to a constant deviation \(A\) from the critical point. Then, \[\Phi_{c}(A,L)=L^{-x_{\varphi}}\phi_{\varphi}(L^{y_{\Delta}}A),\quad\omega=0.\] (2.11) In the thermodynamic limit, either \(\lim_{L\to\infty}\Phi_{c}(A,L)=0\) or the exponent of \(L\) on the right-hand side of (2.11) vanishes. This occurs when \[\phi_{\varphi}(u)\sim|u|^{\kappa},\quad|u|\gg 1,\quad\kappa=\frac{x_{\varphi}}{y _{\Delta}}.\] (2.12) so that \(\Phi_{c}(A,\infty)\sim|A|^{x_{\varphi}/y_{\Delta}}\). When \(0<\omega<y_{\Delta}\), using (2.12) in (2.7), one obtains: \[\Phi_{c}(A,L)\sim L^{-\omega x_{\varphi}/y_{\Delta}}|A|^{x_{\varphi}/y_{\Delta }}.\] (2.13)
## 3 Bond percolation in 1d
To illustrate the results of the preceding section, let us start with a simple example, namely the bond percolation problem [16] on a 1d lattice with a size-dependent bond occupation probability.
### Percolation probability
We consider a finite chain with \(L\) bonds between neighbouring sites. The bonds are independently occupied with probability:
\[\pi=p+\frac{A}{L^{\omega}}\,,\quad-pL^{\omega}\leqslant A\leqslant(1-p)L^{ \omega}\,,\quad\omega\geqslant 0. \tag{3.1}\]
The order parameter for the percolation transition is the percolation probability. It is given by the probability \(P_{L}(\pi)\) to find an open path from \(0\) to \(L\). Thus, according to (3.1), we have:
\[P_{L}(\pi)=\left(p+\frac{A}{L^{\omega}}\right)^{L}\,. \tag{3.2}\]
### Unperturbed system
A change of the length scale by \(b\) can be exactly realized _via_ decimation [17]. In the process, \(b\) successive bonds with occupation probability \(p\) are replaced by a single bond with renormalized probability
\[p^{\prime}=\mathcal{R}_{b}(p)=p^{b}. \tag{3.3}\]
This exact renormalization group transformation has two fixed points at \(p^{\star}=\mathcal{R}_{b}(p^{\star})=0\) or \(1\). The unstable point corresponds to the percolation threshold, \(p_{c}=1\). A linearization of the transformation about the fixed point at \(p^{\star}=1\) gives the transformation of the deviation from the percolation threshold:
\[p^{\prime}-p_{c}=\frac{\mathrm{d}\mathcal{R}_{b}(p)}{\mathrm{d}p}\bigg{|}_{p= p_{c}=1}\ (p-p_{c})=b(p-p_{c}). \tag{3.4}\]
Thus, the dimension of the scaling field, \(p-p_{c}\), is
\[y_{p}=1. \tag{3.5}\]
The percolation probability, \(P_{L}(p)=\mathcal{R}_{L}(p)=p^{L}\), jumps from \(0\) to \(1\) at \(p_{c}\) when \(L\to\infty\):
\[P_{\infty}(p)=\left\{\begin{array}{ll}0,&\quad p<1,\\ 1,&\quad p=p_{c}=1.\end{array}\right. \tag{3.6}\]
The percolation transition is discontinuous in 1d.
On a finite chain, the critical percolation probability is independent of the size, \(P_{L}(p_{c})=1\). Then, according to (2.8), the scaling dimension of the order parameter is
\[x_{P}=0, \tag{3.7}\]
as expected at a discontinuity fixed point. The finite-size percolation probability reduces to its scaling function, \(\phi_{P}(0)=1\). Given (3.4), the scaling variable can be defined as
\[u=L(p-p_{c})=L(p-1)\leqslant 0,\quad p=1+\frac{u}{L}, \tag{3.8}\]
leading to
\[P_{L}(p)=\left(1+\frac{u}{L}\right)^{L}=\exp\left(u-\frac{u^{2}}{2L}+\ldots \right)=\mathrm{e}^{u}\left(1-\frac{u^{2}}{2L}+\ldots\right), \tag{3.9}\]
when \(u^{2}/L\ll 1\). The scaling function of the percolation probability
\[\phi_{P}(u)=\mathrm{e}^{u},\quad u\leqslant 0,\quad u^{2}/L\ll 1, \tag{3.10}\]
follows from this expression.
Note that the correlation function \(\Gamma_{n}(p)\), which is the probability that two sites at a distance \(n\) belong to the same cluster, is simply given by
\[\Gamma_{n}(p)=P_{n}(p)\approx\mathrm{e}^{-n(p_{c}-p)},\quad p_{c}-p\ll 1, \tag{3.11}\]
according to (3.8) and (3.9). Thus, the correlation length, \(\xi_{P}=(p_{c}-p)^{-1}\), diverges at \(p_{c}\) with an exponent \(\nu_{p}=1/y_{p}=1\), in agreement with (3.5).
### Perturbed system
We now consider the perturbed 1d percolation problem with a bond occupation probability \(\pi_{c}\) given by (3.1) at \(p=p_{c}=1\). The transformation of the perturbation amplitude follows from (2.4) and (3.5):
\[A^{\prime}=b^{y_{p}-\omega}A=b^{1-\omega}A. \tag{3.12}\]
Thus, the perturbation is irrelevant when \(\omega>1\), marginal at \(\omega=\omega^{\star}=1\) and relevant when \(\omega<1\). The finite-size scaling variable is given by
\[u=L(\pi_{c}-p_{c})=L^{1-\omega}A\leqslant 0, \tag{3.13}\]
in agreement with (3.12). The critical percolation probability
\[P_{c}(A,L)=P_{L}(\pi_{c})=\left(1+\frac{u}{L}\right)^{L}, \tag{3.14}\]
follows from (3.9). Let us examine its scaling behaviour depending on the value of \(\omega\).
* Irrelevant perturbation, \(\omega>1\): Then, \(|u|\ll 1\) and a first-order expansion in (3.9) yields: \[P_{c}(A,L)=1+\frac{A}{L^{\omega-1}}+\mathrm{O}\left(L^{-2(\omega-1)}\right), \quad A<0.\] (3.15) It should be compared to (2.9) with \(x_{\varphi}=x_{P}=0\) and \(y_{\Delta}=y_{P}=1\). The leading contribution \(\phi_{P}(0)=1\) is the unperturbed critical percolation probability and the perturbation enters this expression only through the sub-leading terms.
* Marginal perturbation, \(\omega=\omega^{\star}=1\) (see appendix B, where \(L\) is replaced by the distance \(n\) to the first site): Although the perturbation amplitude \(A\) scales in the same way, a quite different behaviour is obtained when the perturbation is marginal. The scaling variable is \(u=A\), so that: \[P_{c}(A,L)=\mathrm{e}^{A}\big{[}1+\mathrm{O}\big{(}L^{-1}\big{)}\big{]},\quad A<0.\] (3.16) The leading contribution to the critical percolation probability is the universal scaling function (3.10) of the perturbation amplitude, in agreement with (2.10) considering (3.7).
* Relevant perturbation, \(\omega<1\): Then, \(|u|\) is large but \(|u/L|\) remains small. The critical percolation probability following from (3.9) reads: \[P_{c}(A,L)=\exp\left(AL^{1-\omega}-\frac{A^{2}}{2}L^{1-2\omega}+\ldots\right).\] (3.17)
Figure 1: Evolution of the critical percolation probability, given by (3.14), towards the universal scaling function \(\phi_{P}(u)=\mathrm{e}^{u}\) (dashed line), for increasing values of the size \(L\). There is no rescaling of the percolation probability since \(x_{P}=0\) and the scaling variable is \(u=L^{1-\omega}A\).
When \(\omega>1/2\), the second term in the series is small and leads to a correction to scaling (otherwise, a non-scaling exponential factor enters the leading contribution to \(P_{c}\left(A,L\right)\)). Then, one has
\[P_{c}\left(A,L\right)=\exp\left(AL^{1-\omega}\right)\left[1+\text{O}\!\left(L^{- 2\omega+1}\right)\right],\quad 1/2<\omega<1,\quad A<0, \tag{3.18}\]
in agreement with (2.7) given (3.10) since \(x_{\psi}=0\) and \(y_{\Delta}=1\). The critical percolation probability vanishes with an essential singularity as \(L\to\infty\). This behaviour is expected for a system which is in the non-percolating phase for \(A<0\).
The convergence of finite-size data to \(\phi_{P}\) is shown in figure 1.
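As a quick numerical illustration of this convergence (a minimal Python sketch, not part of the original derivation; it assumes the marginal case \(\omega=1\), for which \(u=A\)):

```python
# Checks that the critical percolation probability of Eq. (3.14),
# P_c(A, L) = (1 + u/L)^L with u = L^(1-omega) * A, approaches the universal
# scaling function phi_P(u) = exp(u) as the chain length L grows.
import numpy as np

def critical_percolation_probability(A, L, omega=1.0):
    u = L ** (1.0 - omega) * A
    return (1.0 + u / L) ** L

A_values = np.linspace(-4.0, 0.0, 9)   # A <= 0; marginal case: u = A
for L in (10, 100, 1000, 10000):
    P = critical_percolation_probability(A_values, L)
    deviation = np.max(np.abs(P - np.exp(A_values)))
    print(f"L = {L:6d}   max |P_c - phi_P(u)| = {deviation:.2e}")
```

The deviation from \(\phi_{P}(u)=\mathrm{e}^{u}\) decreases roughly as \(1/L\), consistent with the \(u^{2}/2L\) correction in (3.9).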
## 4 The 1d Ising chain in a transverse field
In this section we consider a 1d Ising model in a transverse field [18, 19] with a size-dependent perturbation of the first-neighbour interaction, looking for its influence on the surface spin magnetization.
### Hamiltonian and surface magnetization
The quantum chain Hamiltonian reads
\[\mathcal{H}_{L}=-\Lambda\sum_{n=1}^{L-1}\sigma_{n}^{x}\sigma_{n+1}^{x}-\sum_{ n=1}^{L}\sigma_{n}^{z},\quad\Lambda=\lambda+\frac{A}{L^{\omega}}, \tag{4.1}\]
where \(\sigma^{x}\) and \(\sigma^{z}\) are Pauli spin operators and the two-spin interaction \(\lambda>0\) is perturbed by the size-dependent term, \(A/L^{\omega}\), favouring order (disorder) when \(A\) is positive (negative).
We study the scaling behaviour of the surface magnetization which is given by the off-diagonal matrix element of the surface spin operator
\[m_{s}=|\langle 0|\sigma_{1}^{x}|1\rangle|, \tag{4.2}\]
between the ground state and the first-excited state of the quantum chain, with fixed boundary condition at \(L\). Using fermionic techniques it can be shown [20, 21] that, for a finite system with size \(L\), the surface magnetization takes the following form:
\[m_{sL}(\Lambda)=\left(\sum_{k=0}^{L-1}\Lambda^{-2k}\right)^{-1/2}=\left[\frac {1-(\lambda+AL^{-\omega})^{-2}}{1-(\lambda+AL^{-\omega})^{-2L}}\right]^{1/2}. \tag{4.3}\]
### Unperturbed system
In the infinite unperturbed system, i.e., when \(L\to\infty\) and \(A=0\), the series in (4.3) diverges and \(m_{s}=0\) when \(\lambda\leqslant 1\). In the ordered phase, corresponding to \(\lambda>1\), the surface magnetization behaves as:
\[m_{s\infty}(\lambda)=\left(1-\lambda^{-2}\right)^{1/2}. \tag{4.4}\]
It vanishes at the critical coupling, \(\lambda_{c}=1\), with a surface critical exponent \(\beta_{s}=1/2\). In a finite unperturbed critical system, with \(A=0\) and \(\lambda=1\), (4.3) yields the following scaling behaviour:
\[m_{sL}(\lambda_{c})=\left(\sum_{k=0}^{L-1}1\right)^{-1/2}=L^{-1/2}. \tag{4.5}\]
Thus, the scaling dimension of the surface magnetization is \(x_{ms}=1/2\). Given \(\beta_{s}=\nu x_{ms}\), the correlation length exponent \(\nu=1\) and the bulk thermal exponent \(y_{t}=1/\nu=1\), too. A deviation from the critical coupling transforms as
\[\lambda^{\prime}-\lambda_{c}=b^{y_{t}}(\lambda-\lambda_{c})=b(\lambda-\lambda _{c}), \tag{4.6}\]
so that, in a finite-size off-critical system, the appropriate scaling variable is:
\[u=L(\lambda-\lambda_{c})=L(\lambda-1),\quad\lambda=1+\frac{u}{L}. \tag{4.7}\]
The surface magnetization (4.3) with \(A=0\) involves
\[\lambda^{-2}=\left(1+\frac{u}{L}\right)^{-2}=1-\frac{2u}{L}+\frac{3u^{2}}{L^{2 }}+\ldots \tag{4.8}\]
and
\[\lambda^{-2L}=\left(1+\frac{u}{L}\right)^{-2L}=\exp\left(-2u+\frac{u^{2}}{L}+ \ldots\right)=\mathrm{e}^{-2u}\left(1+\frac{u^{2}}{L}+\ldots\right), \tag{4.9}\]
when \(u^{2}/L\ll 1\). Finally, one obtains the following finite-size scaling behaviour for the off-critical unperturbed system:
\[m_{sL}\left(1+\frac{u}{L}\right)=L^{-1/2}\left(\frac{2u}{1-\mathrm{e}^{-2u}} \right)^{1/2}\left[1+\left(\frac{u}{\mathrm{e}^{2u}-1}-\frac{3}{2}\right) \frac{u}{2L}+\ldots\right],\quad\frac{u^{2}}{L}\ll 1. \tag{4.10}\]
The leading contribution gives the scaling function of the surface magnetization
\[\phi_{ms}(u)=\left(\frac{2u}{1-\mathrm{e}^{-2u}}\right)^{1/2},\quad\frac{u^{2 }}{L}\ll 1. \tag{4.11}\]
The critical finite-size result in (4.5), \(\phi_{ms}(0)=1\), is recovered when \(u\to 0\).
### Perturbed system
Since \(y_{t}=1\), according to (2.4), the amplitude of the thermal size-dependent perturbation transforms as:
\[A^{\prime}=b^{y_{t}-\omega}A=b^{1-\omega}A. \tag{4.12}\]
As above for the percolation problem, the perturbation is marginal at \(\omega^{*}=1\), irrelevant when \(\omega>1\) and relevant when \(\omega<1\).
Let us study its influence on the finite-size scaling behaviour of the surface magnetization for the critical value of the unperturbed coupling, \(\lambda=\lambda_{c}\). Then, the first-neighbour interaction in (4.1) is \(\Lambda_{c}=\lambda_{c}+A/L^{\omega}\) so that, according to (4.7) and in agreement with (4.12), the associated scaling variable is now
\[u=L(\Lambda_{c}-\lambda_{c})=L^{1-\omega}A. \tag{4.13}\]
The critical surface magnetization
\[m_{sc}(A,L)=m_{sL}\left(1+\frac{u}{L}\right)=\left[\frac{1-(1+u/L)^{-2}}{1-(1+ u/L)^{-2L}}\right]^{1/2}, \tag{4.14}\]
is given by (4.10) and behaves in the following way in the three different regimes:
* Irrelevant perturbation, \(\omega>1\): Then, both \(u\) and \(u^{2}/L\ll 1\) so that (4.10) yields: \[m_{sc}(A,L)=L^{-1/2}\left(1+\frac{A}{2L^{\omega-1}}+\ldots\right).\] (4.15) The leading term gives the unperturbed finite-size result in (4.5). The perturbation introduces non-vanishing corrections to this leading behaviour.
* Marginal perturbation, \(\omega=\omega^{*}=1\): The scaling variable is then \(u=A\) and \(u^{2}/L\) remains small, thus (4.10) gives: \[m_{sc}(A,L)=\left(\frac{2A}{1-\mathrm{e}^{-2A}}\right)^{1/2}L^{-1/2}\left[1+ \left(\frac{A}{\mathrm{e}^{2A}-1}-\frac{3}{2}\right)\frac{A}{2L}+\ldots\right].\] (4.16) The leading contribution to the surface magnetization has the same finite-size scaling with \(L\) as in the unperturbed system, although with the varying universal amplitude \(\phi_{ms}(A)\) in (4.11).
* Relevant perturbation, \(\omega<1\): The scaling variable \(u\gg 1\) and the correction term \(u^{2}/L=A^{2}/L^{2\omega-1}\) in (4.9) remains small only when \(\omega>1/2\). When \(u>0\), this non-scaling correction term is not dangerous because it enters a negative exponential which can be neglected. Then, for \(0<\omega<1\), the critical surface magnetization following from (4.14) behaves as: \[m_{sc}(A,L)=\sqrt{2A}L^{-\omega/2}\left(1-\frac{3A}{4L^{\omega}}+\ldots\right),\quad A>0.\] (4.17) It vanishes as \(L^{-\omega/2}\) in agreement with (2.13) with \(x_{\varphi}=x_{ms}=1/2\) and \(y_{\Delta}=y_{t}=1\). At \(\omega=0\), the infinite system is in its ordered phase, with \(A\) giving a constant deviation from the critical coupling. Thus, the surface magnetization behaves as \(m_{sc}(A,\infty)=\sqrt{2A}\) and vanishes with a critical exponent \(\beta_{s}=1/2\), as expected. When \(u<0\), (4.9) gives the dominant contribution to the denominator in (4.14). When \(\omega\leqslant 1/2\), the effect of \(u^{2}/L\) is to generate a non-scaling exponential factor which is no longer negligible. The surface magnetization keeps its scaling form (4.14) only when \(\omega\) belongs to the interval \(1/2<\omega<1\) for which \[m_{sc}(A,L)=\sqrt{2|A|}L^{-\omega/2}\exp\left(-|A|L^{1-\omega}\right)\left(1- \frac{A^{2}}{2L^{2\omega-1}}+\ldots\right),\quad A<0.\] (4.18) The surface magnetization vanishes with an essential singularity. Such a behaviour is not surprising since \(A<0\) drives the system into its disordered phase. Note that (4.18) has the expected scaling form (2.7) with \(x_{\varphi}=1/2\) and \(y_{\Delta}=1\).
The convergence of finite-size data to \(\phi_{ms}\) is shown in figure 2.
Figure 2: Evolution of the scaled critical surface magnetization in (4.14) towards the universal scaling function \(\phi_{ms}(u)\) in (4.11), for increasing values of the size \(L\). The dashed lines indicate the expected behaviours for small and large values of the scaled amplitude, \(u=L^{1-\omega}A\). For \(A\ll 0\), it has been shifted upwards to be visible.
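A similar numerical check can be made for the scaled surface magnetization (again a minimal Python sketch, not from the paper; it uses the exact finite-chain sum of (4.3) with \(\Lambda=1+u/L\)):

```python
# Compares L^(1/2) * m_s, computed from the exact sum of Eq. (4.3) at
# Lambda = 1 + u/L, with the universal scaling function phi_ms(u) of Eq. (4.11).
import numpy as np

def surface_magnetization(Lambda, L):
    k = np.arange(L)
    return np.sum(Lambda ** (-2.0 * k)) ** -0.5

def phi_ms(u):
    return np.sqrt(2.0 * u / (1.0 - np.exp(-2.0 * u)))

for u in (-2.0, -0.5, 0.5, 2.0):
    for L in (10, 100, 1000):
        m = surface_magnetization(1.0 + u / L, L)
        print(f"u = {u:+.1f}  L = {L:4d}  L^1/2 m_s = {np.sqrt(L) * m:.4f}"
              f"  phi_ms(u) = {phi_ms(u):.4f}")
```

As \(L\) increases at fixed \(u\), the scaled data approach \(\phi_{ms}(u)\), as in figure 2.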
## 5 Ising model on the fully-connected lattice
Let us finally consider another exactly solvable example provided by the Ising model on the fully-connected lattice with \(N\) sites.
### Hamiltonian, Ginzburg-Landau expansion and order parameter
The perturbed Hamiltonian takes the following form
\[-\beta\mathcal{H}=\mathcal{K}\left(\frac{1}{2N}\sum_{i\neq j}\sigma_{i}\sigma_{ j}\right),\quad\mathcal{K}=K+\frac{C}{N^{\omega}},\quad K=\beta J=\frac{J}{k_{ \mathrm{B}}T}, \tag{5.1}\]
where \(J>0\) is the ferromagnetic interaction between the Ising spins \(\sigma_{i}=\pm 1\). The factor \(1/N\) is needed to have a total energy proportional to \(N\) in the thermodynamic limit. The two-spin interaction is perturbed by the size-dependent term \(C/N^{\omega}\).
Introducing the total magnetization \(M=\sum_{i=1}^{N}\sigma_{i}\) so that \(\sum_{i\neq j}\sigma_{i}\sigma_{j}=M^{2}-N\), the partition function is given by:
\[Z_{N}=\mathrm{Tr}_{\{\sigma\}}\,\mathrm{e}^{-\beta\mathcal{H}}=\mathrm{e}^{- \mathcal{K}/2}\,\mathrm{Tr}_{\{\sigma\}}\,\mathrm{exp}\left(\frac{\mathcal{K} M^{2}}{2N}\right). \tag{5.2}\]
Making use of the Stratonovich-Hubbard transformation [22, 23]
\[\mathrm{exp}\left(\frac{\mathcal{K}M^{2}}{2N}\right)=\left(\frac{N}{2\pi} \right)^{1/2}\int\limits_{-\infty}^{+\infty}\mathrm{d}\eta\,\exp\left(-\frac{ N}{2}\eta^{2}+\mathcal{K}^{1/2}\eta M\right), \tag{5.3}\]
the partition function can be rewritten as:
\[Z_{N}=\left(\frac{N}{2\pi}\right)^{1/2}\mathrm{e}^{-\mathcal{K}/2}\!\!\!\int \limits_{-\infty}^{+\infty}\!\!\!\!\mathrm{d}\eta\,\exp\left(-\frac{N}{2}\eta ^{2}\right)\prod_{i=1}^{N}\mathrm{Tr}_{\sigma_{i}}\,\mathrm{e}^{\mathcal{K}^{1 /2}\eta\sigma_{i}}. \tag{5.4}\]
Summing on the spin states leads to
\[Z_{N}=2^{N}\left(\frac{N}{2\pi}\right)^{1/2}\mathrm{e}^{-\mathcal{K}/2}\!\!\! \int\limits_{-\infty}^{+\infty}\!\!\!\!\mathrm{d}\eta\,\mathrm{e}^{-Nf\,( \mathcal{K},\eta)}, \tag{5.5}\]
where
\[f(\mathcal{K},\eta)=\frac{\eta^{2}}{2}-\ln\cosh\left(\mathcal{K}^{1/2}\eta\right) \tag{5.6}\]
is the free energy per site.
Close to the critical point, the free energy density in (5.6) can be expanded in even powers of \(\eta\) as:
\[f(\mathcal{K},\eta)=-\left(K+\frac{C}{N^{\omega}}-1\right)\frac{\eta^{2}}{2}+ \left(K+\frac{C}{N^{\omega}}\right)^{2}\frac{\eta^{4}}{12}+\mathrm{O}(\eta^{6 }). \tag{5.7}\]
Taking \(|\eta|\) for the order parameter, its mean value is given by:
\[m_{N}(\mathcal{K})=\frac{\int_{0}^{\infty}\eta\,\mathrm{e}^{-Nf\,(\mathcal{K},\eta)}\mathrm{d}\eta}{\int_{0}^{\infty}\mathrm{e}^{-Nf\,(\mathcal{K},\eta)} \mathrm{d}\eta}. \tag{5.8}\]
### Unperturbed system
In the unperturbed system \(C=0\), and (5.7) reduces to:
\[f(K,\eta)=-(K-1)\frac{\eta^{2}}{2}+K^{2}\,\frac{\eta^{4}}{12}+\mathrm{O}(\eta^{6}). \tag{5.9}\]
In the thermodynamic limit, as \(N\to\infty\), the order parameter in (5.8) is given by the value of \(\eta\) minimizing the free energy. It is non-vanishing when \(K>K_{c}=1\) where
\[m_{\infty}(K)\approx\sqrt{3(K-1)},\quad K>1. \tag{5.10}\]
As expected for a fully-connected lattice where a spin interacts with all the others, the Ising mean-field critical behaviour, \(\beta=1/2\), is obtained.
At the critical point, the free energy density is given by:
\[f(K_{c},\eta)=\frac{\eta^{4}}{12}+\mathrm{O}\big{(}\eta^{6}\big{)}. \tag{5.11}\]
The change of variable \(t=N\eta^{4}/12\) in (5.8) immediately leads to:
\[m_{N}(K_{c})=\left(\frac{12}{N}\right)^{1/4}\frac{\int_{0}^{\infty}t^{-1/2} \mathrm{e}^{-t}\mathrm{d}t}{\int_{0}^{\infty}t^{-3/4}\mathrm{e}^{-t}\mathrm{d }t}=\frac{12^{1/4}\sqrt{\pi}}{\Gamma(1/4)}N^{-1/4}. \tag{5.12}\]
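As a consistency check (a minimal Python sketch, not part of the paper; it keeps only the quartic term of (5.11), for which the change of variable above is exact), the amplitude in (5.12) can be recovered by numerical quadrature:

```python
# Numerically evaluates the order parameter of Eq. (5.8) at K = K_c with the
# quartic free energy f = eta^4 / 12 and compares it to Eq. (5.12),
# m_N(K_c) = 12^(1/4) * sqrt(pi) / Gamma(1/4) * N^(-1/4).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def m_critical(N):
    eta_max = 8.0 * (12.0 / N) ** 0.25          # integrand is negligible beyond this
    weight = lambda eta: np.exp(-N * eta ** 4 / 12.0)
    num, _ = quad(lambda eta: eta * weight(eta), 0.0, eta_max)
    den, _ = quad(weight, 0.0, eta_max)
    return num / den

amplitude = 12 ** 0.25 * np.sqrt(np.pi) / gamma(0.25)
for N in (10, 1000, 100000):
    print(f"N = {N:7d}  numeric = {m_critical(N):.6f}"
          f"  Eq. (5.12) = {amplitude * N ** -0.25:.6f}")
```

Since the quartic truncation makes the change of variable exact, the two columns agree to quadrature accuracy for every \(N\).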
This finite-size scaling with \(N\) is an old result, first obtained by Kittel [12]. It was later remarked [24, 25] that (5.12) is actually the standard scaling behaviour for the mean-field Ising model if one relates the number \(N\) of sites in the fully-connected lattice to an effective size \(L\) through \(N=L^{d_{c}}\). In this relation, \(d_{c}\) is the upper critical dimension above which the mean-field behaviour is obtained with short-range interactions. For the Ising model, \(d_{c}=4\), so that:
\[L=N^{1/d_{c}}=N^{1/4}. \tag{5.13}\]
Using this relation in (5.12) and \(\beta=x_{m}/y_{t}=1/2\) yields the scaling dimensions of the mean-field Ising model:
\[x_{m}=1,\quad y_{t}=1/\nu=2. \tag{5.14}\]
Let us now consider a finite-size off-critical system. According to (5.13) and (5.14), the scaling variable takes the following form:
\[v=L^{y_{t}}(K-K_{c})=N^{y_{t}/d_{c}}(K-K_{c})=N^{1/2}(K-1),\quad K=1+\frac{v}{ N^{1/2}}. \tag{5.15}\]
The scaling of the order parameter in (5.12) suggests the change of variable \(\eta=xN^{-1/4}\) in (5.9). As a function of \(x\) and \(v\), the free energy density takes the following form:
\[Nf(1+v/N^{1/2},xN^{-1/4})=Ng_{N}(v,x)=-v\frac{x^{2}}{2}+\frac{x^{4}}{12}+ \frac{vx^{4}}{6N^{1/2}}+\mathrm{O}\big{(}N^{-1}\big{)}. \tag{5.16}\]
In this expansion, the non-scaling third term and the following terms can be neglected, so that (5.8) yields:
\[m_{N}\left(1+v/N^{1/2}\right)=N^{-1/4}\frac{\int_{0}^{\infty}x\,\mathrm{e}^{- Ng_{N}(v,x)}\mathrm{d}x}{\int_{0}^{\infty}\mathrm{e}^{-Ng_{N}(v,x)}\mathrm{d}x}=N ^{-1/4}\frac{J_{1}(v)}{J_{0}(v)}\left[1+\mathrm{O}\Big{(}vN^{-1/2}\Big{)} \right]. \tag{5.17}\]
\(J_{n}(v)\) is given by
\[J_{n}(v)=\int_{0}^{\infty}x^{n}\exp\left(-\frac{x^{4}}{12}+\frac{vx^{2}}{2} \right)\mathrm{d}x=\frac{1}{2}\,6^{(n+1)/4}\Gamma\left(\frac{n+1}{2}\right) \mathrm{e}^{3v^{2}/8}D_{-(n+1)/2}\left(-\sqrt{\frac{3}{2}}v\right), \tag{5.18}\]
where
\[D_{\mu}(z)=\frac{\exp\left(-z^{2}/4\right)}{\Gamma(-\mu)}\int\limits_{0}^{\infty} \mathrm{e}^{-zx-x^{2}/2}x^{-\mu-1}\mathrm{d}x,\quad[\mathrm{Re}(\mu)<0], \tag{5.19}\]
is a parabolic cylinder function [26]. Thus, (5.17) can be rewritten as:
\[m_{N}\left(1+v/N^{1/2}\right)=\frac{1}{\sqrt{\pi}}\frac{D_{-1}\left(-\sqrt{ \frac{3}{2}}v\right)}{D_{-1/2}\left(-\sqrt{\frac{3}{2}}v\right)}\left(\frac{6 }{N}\right)^{1/4}\left[1+\mathrm{O}\left(N^{-1/2}\right)\right]\,. \tag{5.20}\]
The off-critical finite-size behaviour of the order parameter has the standard scaling form \(N^{-1/4}\phi_{m}(v)\) with a scaling function given by:
\[\phi_{m}(v)=\frac{6^{1/4}}{\sqrt{\pi}}\frac{D_{-1}\left(-\sqrt{\frac{3}{2}}v \right)}{D_{-1/2}\left(-\sqrt{\frac{3}{2}}v\right)},\quad v=N^{1/2}(K-1). \tag{5.21}\]
The different limiting behaviours of the ratio \(R(v)\) of parabolic cylinder functions entering (5.21) are studied in appendix C.
### Perturbed system
Let us now consider a perturbed critical system with \(K=K_{c}=1\) and \(C=\mathrm{O}(1)\). The size-dependent perturbation in (5.1) can be rewritten as \(C/L^{d_{c}\,\omega}\) and, according to (2.4) and (5.14), its amplitude transforms as:
\[C^{\prime}=b^{y_{t}-d_{c}\,\omega}C=b^{2-4\omega}C. \tag{5.22}\]
The finite-size scaling variable is then
\[v=L^{2-4\omega}C=N^{\frac{1}{2}-\omega}C, \tag{5.23}\]
leading to the following scaling behaviour for the critical magnetization:
\[m_{c}(C,N)=m_{N}\,\left(1+C/N^{\omega}\right)=m_{N}\left(1+v/N^{1/2}\right)=N ^{-1/4}\phi_{m}(v),\quad v=N^{\frac{1}{2}-\omega}C. \tag{5.24}\]
According to (5.22), the perturbation is irrelevant above \(\omega=\omega^{*}=1/2\), marginal at \(\omega^{*}\), and relevant below. Let us now look at the finite-size behaviour of the order parameter in the perturbed critical system in these three regimes:
* Irrelevant perturbation, \(\omega>1/2\): With \(C=\mathrm{O}(1)\) and \(N\gg 1\), the scaling variable \(v\) is small and the non-scaling correction term in (5.16), behaving as \(Cx^{4}/N^{\omega}\), can be neglected. A first-order expansion (C.4) of the ratio of parabolic cylinder functions \(R(v)\) in the scaling function (5.21) yields: \[m_{c}(C,N)\approx\frac{\sqrt{\pi}}{\Gamma(1/4)}\left(\frac{12}{N}\right)^{1/4} \left\{1+\left[\frac{1}{\sqrt{\pi}}-\frac{\Gamma(3/4)}{\Gamma(1/4)}\right] \frac{\sqrt{3}C}{N^{\omega-1/2}}\right\}.\] (5.25) The perturbation only contributes a correction to the \(N^{-1/4}\) scaling behaviour of the unperturbed system.
* Marginal perturbation, \(\omega=\omega^{*}=1/2\): Then, \(v=C\), so that the first correction term in (5.16) remains negligible. Thus, the leading contribution to \(m_{c}\) reads: \[m_{c}(C,N)\approx\phi_{m}(C)N^{-1/4}.\] (5.26) One recovers the unperturbed scaling with \(N\) with a \(C\)-dependent universal amplitude.
* Relevant perturbation, \(\omega<1/2\): With a relevant perturbation, \(|v|\) is large and the leading contribution to \(R(v)\) depends on the sign of the scaling variable. Using (C.6) and (C.8), one obtains: \[m_{c}(C,N)\approx\left\{\begin{array}{ll}\sqrt{3C}N^{-\omega/2},&C>0,\\ \sqrt{\frac{2}{\pi|C|}}N^{-(1-\omega)/2},&C<0.\end{array}\right.\] (5.27) As before in the relevant case, the scaling exponent depends on \(\omega\). This new scaling behaviour may lead to a dangerous non-scaling correction to the free energy density in (5.16) since \(x\) is no longer dimensionless. The problem is studied in appendix D where it is shown that (5.27) remains valid on the interval \(1/3<\omega<1/2\) when \(C>0\) and without restriction when \(C<0\) and \(\omega<1/2\).
The convergence of finite-size data to \(\phi_{m}\) is shown in figure 3.
## 6 Discussion
Let us first consider the finite-size scaling behaviour which is obtained in (2.10) for a truly marginal perturbation. Instead of a scaling dimension continuously varying with \(A\), the behaviour is the same as in the unperturbed system in (2.8).
A continuously varying critical exponent would be associated with a line of fixed points in the critical surface. By definition, the critical surface is the set of points in the parameter space where the correlation length diverges. Its mere existence requires an infinite-size system for which the size-dependent perturbation (2.2) vanishes. Thus, the critical Hamiltonians and its associated fixed points remain the unperturbed ones, which explains the observed scaling behaviour.
The marginalism affects the amplitude which is continuously varying with \(A\). Since \(\phi_{\varphi}(A)=\Phi_{c}(A,1)\), where \(\Phi_{c}\) is a finite-size scaling function, the variation of the amplitude with \(A\) is universal.
Figure 3: Evolution of the scaled critical bulk magnetization, given by (5.7) and (5.8) with \(K=1\), towards the universal scaling function \(\phi_{m}(v)\) in (5.21), for increasing values of the number of spins \(N\). The dashed lines indicate the expected behaviours for small and large values of the scaled amplitude, \(v=N^{(1/2)-\omega}C\). The convergence is quite slow for large positive values of \(v\).
For a marginally perturbed finite-size 2d system in the cylinder geometry, the lowest gap, \(G_{\varphi}(\Delta,L)\), which is an inverse correlation length associated with the local operator \(\varphi\), either the magnetization or the energy density, scales as:
\[G_{\varphi}(\Delta^{\prime},L^{\prime})=G_{\varphi}(b^{y_{\Delta}}\Delta,L/b)=bG _{\varphi}(\Delta,L),\quad\Delta=\frac{A}{L^{y_{\Delta}}}\,. \tag{6.1}\]
With \(b=L\), one obtains:
\[G_{\varphi}(\Delta,L)=G_{\varphi}(A,1)L^{-1}. \tag{6.2}\]
Thus, comparing to (A.6) in appendix A, the universal amplitude of the gap is given by
\[G_{\varphi}(A,1)=2\pi x_{\varphi}(A), \tag{6.3}\]
where \(x_{\varphi}(A)\) is the varying local exponent associated with the marginal radial defect, of which the homogeneous size-dependent perturbation \(\Delta\) is the conformal transform on the cylinder [8, 9, 10].
## Appendix A Marginal size-dependent perturbation resulting from a conformal transformation
In this section we show how a homogeneous marginal size-dependent perturbation may result from the conformal transformation of a radial marginal perturbation in a 2d infinite system [8, 9, 10, 21].
The original system is critical and the radial marginal perturbation
\[\Delta=\frac{A}{(2\pi r)^{y_{\Delta}}}\] (A.1)
acts on the local operator \(\zeta\), in the plane with polar coordinates \((r,\theta)\), so that \(y_{\Delta}=2-x_{\zeta}\).
Under the conformal transformation [1]
\[w(z)=\frac{L}{2\pi}\ln z,\quad z=r\mathrm{e}^{\mathrm{i}\theta},\quad w=x+ \mathrm{i}y,\] (A.2)
the \(z\)-plane is mapped onto a \(w\)-cylinder with
\[x=\frac{L}{2\pi}\ln r,\quad-\infty<x<+\infty,\quad y=\frac{L\theta}{2\pi}, \quad 0\leqslant y<L.\] (A.3)
The associated local dilation factor is
\[b(z)=|w^{\prime}(z)|^{-1}=2\pi\frac{r}{L},\] (A.4)
so that the radial perturbation transforms into
\[\Delta^{\prime}=[b(z)]^{y_{\Delta}}\Delta=\frac{A}{L^{y_{\Delta}}},\] (A.5)
i.e., a constant size-dependent deviation from the critical point.
Let \(\varphi\) be an operator which, in the original system, displays a varying critical exponent \(x_{\varphi}(A)\) under the influence of the radial marginal perturbation. Comparing the expression of the critical two-point correlation function \(\langle\varphi(\rho_{1},\theta_{1})\varphi(\rho_{2},\theta_{2})\rangle\) in the original system to the form obtained by transforming back \(\langle\varphi(x_{1},y_{1})\varphi(x_{2},y_{2})\rangle\) on the cylinder, one can show that \(x_{\varphi}(A)\) is given, up to a constant factor, by the \(A\)-dependent universal amplitude of the associated lowest gap, \(G_{\varphi}=E_{\varphi}-E_{0}\), in the transformed system with a size-dependent marginal perturbation [8, 9, 10, 21]:
\[x_{\varphi}(A)=\frac{L}{2\pi}G_{\varphi}.\] (A.6)
## Appendix B Hilhorst-van Leeuwen perturbation of percolation in 1d
We consider the 1d bond percolation model on a finite system with size \(L\) with a perturbation of the Hilhorst-van Leeuwen type [27]. The bond occupation probability is a decreasing function of the distance \(n\) to the left-hand surface:
\[\pi_{n}=p+\Delta_{n}=p+\frac{A}{n^{\omega}},\quad\omega>0,\quad-p\leqslant A \leqslant 1-p.\] (B.1)
Evidently, the scaling behaviour of the perturbation amplitude \(A\) is the same as in (3.12) but the perturbation now involves the set of \(\Delta_{n}\) (\(n=1,L\)) instead of a single one, \(\Delta_{L}\). We shall only consider the marginal case with \(\omega=1\). The critical percolation probability is then given by:
\[P_{c}(A,L)=\prod_{n=1}^{L}\left(1+\frac{A}{n}\right)=\frac{1}{L!}\frac{\Gamma (L-|A|+1)}{\Gamma(1-|A|)},\quad-1\leqslant A\leqslant 0.\] (B.2)
When \(L\) is large, Stirling's formula [28] yields:
\[\ln\left[\frac{\Gamma(L-|A|+1)}{\Gamma(L+1)}\right]=-|A|\ln L+\text{O}\big{(}L ^{-1}\big{)}.\] (B.3)
Finally, the critical percolation probability displays a traditional marginal behaviour
\[P_{c}(A,L)\approx\frac{L^{-|A|}}{\Gamma(1-|A|)},\] (B.4)
with a perturbation-dependent finite-size scaling exponent, \(x_{P}(A)=|A|\).
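A direct numerical check of this marginal behaviour (a minimal Python sketch, not from the paper):

```python
# Compares the exact product of Eq. (B.2), prod_{n=1}^{L} (1 + A/n), with the
# asymptotic form of Eq. (B.4), L^(-|A|) / Gamma(1 - |A|), for -1 <= A <= 0.
import numpy as np
from scipy.special import gamma

def P_c(A, L):
    n = np.arange(1, L + 1)
    return np.prod(1.0 + A / n)

A = -0.4
for L in (10, 1000, 100000):
    asymptotic = L ** (-abs(A)) / gamma(1.0 - abs(A))
    print(f"L = {L:6d}  exact = {P_c(A, L):.6f}  Eq. (B.4) = {asymptotic:.6f}")
```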
## Appendix C Limiting behaviour of the ratio of parabolic cylinder functions
Let us study the asymptotic behaviour for small and large values of \(|v|\) of
\[R(v)=\frac{D_{-1}\left(-\sqrt{\frac{3}{2}}v\right)}{D_{-1/2}\left(-\sqrt{\frac {3}{2}}v\right)}\] (C.1)
in (5.21).
* \(|v|\ll 1\): The relation \(D_{-r-1/2}(x)=U(r,x)\) and the known values of \(U(r,0)\) and \(U^{\prime}(r,0)\)[28] lead to the expansion \[D_{\mu}(x)=\frac{2^{\mu/2}\sqrt{\pi}}{\Gamma\left(\frac{1-\mu}{2}\right)} \left[1-\frac{\sqrt{2}\,\Gamma\left(\frac{1-\mu}{2}\right)}{\Gamma(-\mu/2)}x+ \text{O}\big{(}x^{2}\big{)}\right],\] (C.2) so that: \[R(v)=\frac{\Gamma(3/4)}{2^{1/4}}\left[1+\left(\frac{1}{\sqrt{\pi}}-\frac{ \Gamma(3/4)}{\Gamma(1/4)}\right)\sqrt{3}v+\text{O}\big{(}v^{2}\big{)}\right].\] (C.3) Making use of the reflection formula, \(\Gamma(3/4)\Gamma(1/4)=\sqrt{2}\,\pi\), one obtains \[R(v)=\frac{2^{1/4}\pi}{\Gamma(1/4)}\left[1+\left(\frac{1}{\sqrt{\pi}}-\frac{ \Gamma(3/4)}{\Gamma(1/4)}\right)\sqrt{3}v+\text{O}\big{(}v^{2}\big{)}\right].\] (C.4)
* \(v\gg 1\): For \(x\gg 1\), the parabolic cylinder function behaves as [26] \[D_{\mu}(-x)\sim\frac{\sqrt{2\pi}}{\Gamma(-\mu)}\mathrm{e}^{x^{2}/4}x^{-\mu-1}\left[1+\mathrm{O}\big{(}x^{-2}\big{)}\right],\] (C.5) leading to: \[R(v)=\left(\frac{3}{2}\right)^{1/4}\sqrt{\pi v}\left[1+\mathrm{O}\big{(}v^{-2}\big{)}\right].\] (C.6)
* \(-v\gg 1\): For \(x\gg 1\), one has [26] \[D_{\mu}(x)\sim\,\mathrm{e}^{-x^{2}/4}x^{\mu}\left[1+\mathrm{O}\big{(}x^{-2} \big{)}\right],\] (C.7) so that: \[R(v)=\left(\frac{2}{3}\right)^{1/4}|v|^{-1/2}\left[1+\mathrm{O}\big{(}|v|^{-2 }\big{)}\right].\] (C.8)
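These limits can be verified numerically (a minimal Python sketch, not from the paper, using SciPy's parabolic cylinder function \(D_{\mu}(x)\)):

```python
# Evaluates R(v) = D_{-1}(-sqrt(3/2) v) / D_{-1/2}(-sqrt(3/2) v) of Eq. (C.1)
# and compares it with the limits (C.4), (C.6) and (C.8).
import numpy as np
from scipy.special import pbdv, gamma

def R(v):
    x = -np.sqrt(1.5) * v
    return pbdv(-1.0, x)[0] / pbdv(-0.5, x)[0]   # pbdv returns (D_v(x), D_v'(x))

print("R(0)  =", R(0.0),  "  (C.4) limit:    ", 2 ** 0.25 * np.pi / gamma(0.25))
print("R(8)  =", R(8.0),  "  (C.6) asymptote:", 1.5 ** 0.25 * np.sqrt(np.pi * 8.0))
print("R(-8) =", R(-8.0), "  (C.8) asymptote:", (2.0 / 3.0) ** 0.25 / np.sqrt(8.0))
```

The finite-\(v\) values differ from the asymptotes only by the \(\mathrm{O}(v^{-2})\) corrections quoted above.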
## Appendix D Limits on the validity of (5.27)
When \(C>0\) the dimensionless variable \(x_{+}\) can be defined through \(\eta=x_{+}N^{-\omega/2}\). Then, according to (5.7), \(Nf(1+C/N^{\omega},x_{+}N^{-\omega/2})\) yields the critical free energy of the perturbed system:
\[Ng_{N}\left(C,x_{+}\right)=N^{1-2\omega}\left(-\frac{Cx_{+}^{2}}{2}+\frac{x_{ +}^{4}}{12}\right)+\frac{Cx_{+}^{4}}{6N^{3\omega-1}}+\mathrm{O}\big{(}N^{1-4 \omega}\big{)}.\] (D.1)
It follows that the correction term is indeed small when \(\omega>1/3\). In the interval \(1/3<\omega<1/2\) the leading contribution to the order parameter in (5.8) is given by
\[m_{c}\left(C,N\right)=N^{-\omega/2}\frac{K_{1}(C,N)}{K_{0}(C,N)},\] (D.2)
where:
\[K_{n}\left(C,N\right)=\int\limits_{0}^{\infty}x^{n}\exp\left[N^{1-2\omega} \left(-\frac{x^{4}}{12}+\frac{Cx^{2}}{2}\right)\right]\mathrm{d}x.\] (D.3)
This integral is easily evaluated using Laplace's method, which gives:
\[K_{n}\left(C,N\right)\approx\left(3C\right)^{n/2}\exp\left(\frac{3}{4}N^{1-2 \omega}C^{2}\right)\sqrt{\frac{\pi}{N^{1-2\omega}C}},\quad C>0.\] (D.4)
As expected, (D.2) gives back the first expression of \(m_{c}\left(C,N\right)\) in (5.27).
Similarly, when \(C<0\) the dimensionless variable is \(x_{-}\) such that \(\eta=x_{-}N^{-(1-\omega)/2}\) and the critical free energy now reads:
\[Ng_{N}\left(C,x_{-}\right)=\frac{|C|x_{-}^{2}}{2}+\frac{x_{-}^{4}}{12N^{1-2 \omega}}-\frac{|C|x_{-}^{4}}{6N^{1-\omega}}+\mathrm{O}\big{(}N^{-1}\big{)}.\] (D.5)
Thus, with \(\omega<1/2\) the first term alone survives. With the change of variable
\[t=\frac{|C|x_{-}^{2}}{2}=\frac{1}{2}|C|N^{1-\omega}\eta^{2}\] (D.6)
in (5.8), the second expression of \(m_{c}\left(C,N\right)\) in (5.27) is easily recovered. |
2306.12768 | Concept-aware clustering for decentralized deep learning under temporal
shift | Decentralized deep learning requires dealing with non-iid data across
clients, which may also change over time due to temporal shifts. While non-iid
data has been extensively studied in distributed settings, temporal shifts have
received no attention. To the best of our knowledge, we are first with tackling
the novel and challenging problem of decentralized learning with non-iid and
dynamic data. We propose a novel algorithm that can automatically discover and
adapt to the evolving concepts in the network, without any prior knowledge or
estimation of the number of concepts. We evaluate our algorithm on standard
benchmark datasets and demonstrate that it outperforms previous methods for
decentralized learning. | Marcus Toftås, Emilie Klefbom, Edvin Listo Zec, Martin Willbo, Olof Mogren | 2023-06-22T09:45:40Z | http://arxiv.org/abs/2306.12768v1 | # Concept-aware clustering for decentralized deep learning under temporal shift
###### Abstract
Decentralized deep learning requires dealing with non-iid data across clients, which may also change over time due to temporal shifts. While non-iid data has been extensively studied in distributed settings, temporal shifts have received no attention. To the best of our knowledge, we are first with tackling the novel and challenging problem of decentralized learning with non-iid and dynamic data. We propose a novel algorithm that can automatically discover and adapt to the evolving concepts in the network, without any prior knowledge or estimation of the number of concepts. We evaluate our algorithm on standard benchmark datasets and demonstrate that it outperforms previous methods for decentralized learning.
Machine Learning, ICML
## 1 Introduction
The proliferation of smartphones and other devices that can continuously collect and transmit data has substantially increased the amount of data available for machine learning-based applications. However, sharing data explicitly may not be possible or desirable in some scenarios. For instance, users and businesses may have privacy concerns preventing disclosure of private or sensitive data. Alternatively, legal regulations such as the GDPR (Voigt and Von dem Bussche, 2017) and other data protection acts may prohibit data sharing. Moreover, practical limitations such as large distributed datasets or low network bandwidth may make transmitting data to a centralized location infeasible. In such scenarios, distributed frameworks like federated learning (FL) can be a viable solution, as they communicate model parameters instead of data, and where the models are aggregated using federated averaging (FedAvg)(McMahan et al., 2017). FL has already demonstrated its scalability and applicability in various domains, such as hospitals (Dayan et al., 2021), retail stores (Yang et al., 2019), and at companies such as Google (McMahan and Thakurta, 2022).
In the FL framework multiple clients collaborate to train a shared model without exchanging their local data. However, FL often relies on a central node to coordinate the communication among the clients, which can cause communication bottlenecks and single points of failure. To overcome these limitations, decentralized learning proposes a peer-to-peer communication protocol that eliminates the need for a central node and reduces the vulnerability and computational load of any single node. However, decentralized learning also poses new challenges, such as how to optimize the client models in a decentralized manner.
Traditionally, decentralized machine learning has used consensus optimization, where clients aim to agree on a common model using a gossip learning approach (Kempe et al., 2003; Boyd et al., 2006; Blot et al., 2016). This works well when the data distributions across clients are similar or identical. However, when the data distributions or tasks differ significantly across clients, consensus optimization can be detrimental. Therefore, recent work has suggested viewing decentralized learning as a clustering problem, where clients try to find suitable collaborators in a network of peers and avoid merging their models with dissimilar ones (Onoszko et al., 2021; Li et al., 2022; Listo Zec et al., 2023). All of these works consider non-iid data in decentralized deep learning, however they still assume that the data distributions are stationary in time. For real world scenarios, this is often not the case for edge devices that continuously collect new data.
This work presents the first study of decentralized deep learning with temporal shifts, which account for the dynamic nature of data distributions over time. Our main contribution is an algorithm that allows clients to learn personalized models in non-iid settings where their concepts may evolve over time. Our problem setting is related to the recent work of (Jothimurugesan et al., 2023), which investigates non-iid data across clients and time in federated learning with a central server.
## 2 Background
### Problem formulation
Decentralized learning involves clients solving their own optimization problems, such as supervised learning. Previous work has shown that clients can benefit from communicating models with other clients who have similar tasks (Listo Zec et al., 2023). However, the authors assumed that the data distributions of each client are stationary. We address the challenges that emerge when the data distributions of clients vary over time. Our main contribution is to demonstrate that existing decentralized learning methods are not resilient to temporal shifts, and to propose a simple solution based on novel hierarchical model aggregation. We empirically show that our solution leads to improved performance for this problem.
We consider an empirical risk minimization (ERM) setup with \(K\) clients that communicate (synchronously) in a peer-to-peer network, where any pair of clients can communicate at each communication round. Each client \(k\) has a (private) training set \(\mathcal{D}^{t}_{k}(x,y)\) generated by the underlying distribution \(p^{t}_{k}(x,y)\) at time step \(t\). We assume that the data is non-iid both across clients and over time, which is more realistic than the common assumption of only having non-iid data across clients. We follow Jothimurugesan et al. (2023) and Gama et al. (2014) and say that there is a concept shift at client \(k\) if \(p^{t}_{k}\neq p^{t-1}_{k}\).
This work aims to design an algorithm that can handle shifts in a distributed setting. Concept shift can be caused by various types of shifts over time. In this paper, we focus on two main types of shifts: _covariate shift_ (where the input distribution \(p(x)\) changes but the label distribution \(p(y|x)\) is stationary) and _label shift_ (where the label distribution \(p(y)\) changes but the distribution \(p(x|y)\) is stationary) (Kairouz et al., 2021).
### Motivation
Decentralized learning is appealing in situations where a central server is undesirable; this could be due to privacy concerns (lack of trust in the central server) or scalability issues (central server being a bottleneck). For example, consider \(K\) users with their own data distributions. A possible scenario is \(K\) smartphones that collect images that reflect the users' preferences and activities. Rather than relying on a central server, each user can communicate with peers in a network to jointly solve some optimization problem, such as learning a supervised image recognition task. However, the data distributions may differ across clients due to their personal preferences. In realistic scenarios, the images collected may also vary over time, which has not been studied in decentralized peer-to-peer learning before. Therefore, the goal of this work is to study distributed temporal shifts.
## 3 Method
We model each client as having a _concept_ that reflects its true optimization objective at any point in time. A client benefits from communicating with other clients that have the same concept as itself (i.e. similar data distribution), since FedAvg becomes detrimental otherwise (McMahan et al., 2017). This can thus be framed as a clustering problem, where each client tries to find clients in the network which have a similar concept. However, the clustering is challenging because the clients cannot share their local data due to privacy concerns, and can only exchange gradients (or model parameters).
### Hierarchical aggregation with similarity based tuning (HAST)
We propose a novel clustering algorithm that can adapt to different concepts that emerge at different times. Our algorithm, called **Hierarchical Aggregation with Similarity based Tuning** (HAST), extends the Decentralized Adaptive Clustering (DAC) method (Listo Zec et al., 2023) by incorporating a hierarchical structure and a tuning mechanism based on empirical training loss. We demonstrate the effectiveness of our algorithm on two scenarios: one with two concepts and one with four concepts. We show that our algorithm outperforms DAC and other baselines in terms of
Figure 1: An illustration of the proposed solution. Each client \(k\) has a model consisting of a feature extractor \(\mathcal{F}^{k}_{\theta}\) and a classifier \(\mathcal{F}^{k}_{\phi}\). The training consists of three steps: (1) Uniform sampling: A subset of clients \(\mathcal{R}\) is randomly selected and client \(k\) updates both model components using FedAvg with \(\mathcal{R}\). (2) Similarity-based sampling: A subset of clients \(\mathcal{S}\) is selected based on their similarity \(s\) to client \(k\), which updates only the classifier component using FedAvg with \(\mathcal{S}\). (3) Local training: Client \(k\) trains both model components locally and fine-tunes the classifier component.
accuracy and robustness under concept shift.
HAST works as follows. Each client \(k\) has a neural network consisting of a feature extractor \(\mathcal{F}^{k}_{\theta}\) and a classifier \(\mathcal{F}^{k}_{\phi}\) (the whole model being \(\mathcal{F}^{k}_{\phi}\circ\mathcal{F}^{k}_{\theta}\)). The training procedure in HAST consists of three steps, illustrated in figure 1. For each client \(k\) and for each communication round:
1. A subset of clients \(\mathcal{R}\) is sampled uniformly and at random. Client \(k\) updates all layers \(\mathcal{F}^{k}_{\phi}\circ\mathcal{F}^{k}_{\theta}\) as in FedAvg.
2. A subset of clients \(\mathcal{S}\) is randomly sampled based on their similarity to client \(k\). Client \(k\) updates _only_ the classifier layers \(\mathcal{F}^{k}_{\phi}\) using average aggregation.
3. Client \(k\) performs local training on its own data, updating the whole model \(\mathcal{F}^{k}_{\phi}\circ\mathcal{F}^{k}_{\theta}\). Afterwards, it fine-tunes only the classifier \(\mathcal{F}^{k}_{\phi}\).
The similarity function is the same as that used by Listo Zec et al. (2023), based on the empirical training loss of client model \(i\) on the data of client \(j\): \(s_{ij}=1/\ell(w_{i};x_{j})\). This similarity score is transformed using a softmax with temperature scaling \(\tau\) in order to get a probability vector \(\tilde{s}_{ij}\) for each client and each communication round.
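A minimal sketch of this similarity-based peer sampling follows (not the authors' implementation; the temperature convention, the stabilisation trick, and the numeric values below are illustrative assumptions):

```python
# Turns per-peer training losses into sampling probabilities via s = 1/loss and
# a temperature-scaled softmax, then draws n distinct peers for step (2) of HAST.
import numpy as np

def peer_sampling_probabilities(losses, tau=10.0):
    # losses[j]: empirical training loss of peer j's model on this client's data
    scores = 1.0 / np.asarray(losses, dtype=float)
    logits = tau * (scores - scores.max())       # subtract max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def sample_similar_peers(losses, n=3, tau=10.0, seed=0):
    probs = peer_sampling_probabilities(losses, tau)
    rng = np.random.default_rng(seed)
    return rng.choice(len(losses), size=n, replace=False, p=probs)

# Hypothetical losses of five peer models evaluated on the sampling client's data:
print(sample_similar_peers([0.4, 2.1, 0.5, 1.8, 0.45]))
```

In the actual method the losses would come from evaluating the received peer models on the client's local training set; the values above are placeholders.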
### Experimental setup
Our goal is to develop an algorithm that is robust to distributional shifts (not decreasing significantly in accuracy between tasks). To demonstrate the challenge of decentralized shift, we measure test accuracy over time and study how robust different methods are to temporal shifts on two computer vision datasets. Our code is found on Github. 1
Footnote 1: [https://github.com/EmileKar/HAST](https://github.com/EmileKar/HAST)
To investigate the effects of _covariate shift_, we consider the PACS dataset (Li et al., 2017) which is typically used for domain adaptation. It consists of four domains with the same seven labels in each domain. We simulate covariate shift by changing domains for a client \(k\) over time \(t\), i.e. \(\mathcal{D}^{t}_{k}(x)\) varies over time but keeping \(\mathcal{D}^{t}_{k}(y|x)\) fixed. To investigate the effects of _label shift_, we consider the CIFAR-10 dataset (Krizhevsky et al., 2009). We simulate label shift by varying \(\mathcal{D}^{t}_{k}(y)\) over time but keeping \(\mathcal{D}^{t}_{k}(x|y)\) fixed, creating two clusters based on the labels: one animal cluster (four labels) and one vehicle cluster (four labels).
**Baselines.** We compare our proposed method to two main baselines. The first we refer to as _Random_, where all clients communicate randomly using a gossip protocol. The second is _DAC_(Listo Zec et al., 2023), where clients communicate using the similarity metric based on empirical training loss. For a fair comparison with the baselines, we allow all methods to sample the same number of clients per communication round. Since HAST performs two stages of aggregation (random and similarity-based), it effectively samples \(2n\) clients, where \(n=3\) in this paper. Therefore, we allow the baselines to also sample \(2n\) clients for each round.
**Hyperparameters**. We performed a grid search over the hyperparameters of each baseline and selected the ones that achieved the highest validation accuracy. We also verified that the optimal learning rate was not at the boundary of the searched grid.
**Models.** We use a small CNN model for each client in all experiments. The model has two convolutional layers (\(\mathcal{F}_{\theta}\)), two fully-connected layers and an output layer (\(\mathcal{F}_{\phi}\)). The model is not designed to achieve state-of-the-art performance on the supervised tasks, but rather to have enough capacity to solve the tasks while exploring decentralized temporal shifts. The number of clients is set to 50 in the CIFAR-10 experiments, and 20 in the PACS experiments.
## 4 Results and discussion
Firstly, we evaluate our method on the PACS dataset, which consists of four domains: photo, art painting, cartoon, and sketch. We simulate a dynamic environment where clients may experience domain shifts over time. Specifically, at a certain communication round (marked by the vertical dashed line in figure 2(a)), each client randomly switches to a different domain with probability \(\frac{3}{4}\). Figure 2(a) shows the mean test accuracy over clients as a function of communication rounds. We can see that our method is robust to domain shifts and outperforms the baselines. Interestingly, the DAC baseline performs even worse than the Random baseline. We attribute this to overfitting, as DAC merges all layers of the model with clients selected based on the similarity score. In contrast, HAST first aggregates the model with random clients and then only aggregates the classifier layer with similar clients, resulting in a more generalizable model.
We conduct experiments to investigate the effect of the aggregation scheme in HAST on the performance of personalized models. We vary the number of layers that are aggregated in the second step of HAST using the similarity sampling and compare it with DAC and Random. Figure 2(b) shows the results on the CIFAR-10 dataset with \(C=2\) clusters divided into animals and vehicles. The depth parameter indicates how many layers are included in the aggregation. For example, a depth of 4 means that all layers except the first one are aggregated, while a depth of 1 means that only the output layer is aggregated. We conclude that the optimal depth for HAST is 3, which is used in all other experiments.
Figure 2(c) presents the results on CIFAR-10 with two clusters that have random labels. This setting creates a smaller
distribution gap between the two clusters (compared to the setting with animals vs vehicles), which makes the Random baseline more competitive. Nevertheless, HAST still surpasses both baselines in this setting. We also note that unlike DAC, which suffers from severe overfitting to the current domain and shows a large performance drop under domain shifts, HAST maintains its robustness across domains.
We perform an ablation study to investigate the effect of the third step of HAST, where we finetune the classifier layer. We compare HAST with DAC and Random, both with and without finetuning, on the CIFAR-10 dataset with one cluster. This means that the data is iid among clients. Figure 2(d) shows the results of this experiment. We observe that finetuning improves the performance of all methods and that HAST outperforms both baselines, regardless of finetuning.
## 5 Conclusions
We have presented a novel aggregation method for decentralized deep learning that can cope with temporal shifts in a peer-to-peer network. This is the first work to address this problem and to demonstrate the benefits of aggregating different layers of a neural network using FedAvg for robustness and generalization under temporal shifts. Our proposed algorithm uses soft clustering to group clients with similar concepts and allows them to update their beliefs of potential collaborators over time. This enables clients to smoothly transition between different collaboration groups as their concepts change. Moreover, our algorithm employs a two-stage aggregation scheme that makes the personalized models robust to concept changes by leveraging the knowledge from other clients.
## 6 Related work
Gossip learning is a peer-to-peer communication protocol that has been studied in various machine learning settings (Kempe et al., 2003; Boyd et al., 2006; Ormandi et al., 2012) and applied in a decentralized deep learning setup to learn personalized models (Blot et al., 2016). However, gossip learning is not suitable for situations where client data is distributed non-iid, as it assumes all clients share the same objective.
Previous work in federated learning has addressed this problem by introducing multiple central models to which clients are assigned, and clustering clients with similar objectives (Ghosh et al., 2020). However, this approach relies on determining the appropriate number of central models on beforehand and assigns clients to hard clusters, which may not capture the overlapping client interests.
Listo Zec et al. (2023) proposed an algorithm that assigns clients to soft clusters. These soft cluster assignments are continuously learned over communication rounds based on client similarity, which is approximated by training losses as by Onoszko et al. (2021). This work tackles many challenges in decentralized learning where data is non-iid distributed among clients, both in terms of cluster sizes and different types of distributional shifts. However, they do not consider time-dependent distributional shifts.
Jothimurugesan et al. (2023) addresses federated learning with distributed concept shift, where clients have different and dynamic data distributions. They present FedDrift, which learns multiple global models for different concepts. They are the first to tackle FL with concept shift, challenging the single-model paradigm. The work treats shift adaptation as a time-varying clustering problem, and uses hierarchical clustering to handle an unknown number of concepts.
Similarly to Jothimurugesan et al. (2023), we also view concept shift adaptation as a time-varying clustering problem. However, our solution differs in several aspects. First, we consider a decentralized learning framework, where there is no central server. Second, we build on previous work on round-based client similarity approximation and soft-cluster assignment, enabling clients to aggregate their models with similar peers. Third, we introduce a novel hierarchical model aggregation step, which ensures that every client in the network can perform well on their own current distribution and also adapt quickly to new concepts.
Figure 2: Test accuracy as a function of communication rounds for varying datasets and number of clusters \(C\). (a): PACS, (b-d): CIFAR-10. |
2305.13126 | Free Space Continuous Variable Quantum Key Distribution with Discrete
Phases | Quantum Key Distribution (QKD) offers unconditional security in principle.
Many QKD protocols have been proposed and demonstrated to ensure secure
communication between two authenticated users. Continuous variable (CV) QKD
offers many advantages over discrete variable (DV) QKD since it is
cost-effective, compatible with current classical communication technologies,
efficient even in daylight, and gives a higher secure key rate. Keeping this in
view, we demonstrate a discrete modulated CVQKD protocol in the free space
which is robust against polarization drift. We also present the simulation
results with a noise model to account for the channel noise and the effects of
various parameter changes on the secure key rate. These simulation results help
us to verify the experimental values obtained for the implemented CVQKD. | Anju Rani, Pooja Chandravanshi, Jayanth Ramakrishnan, Pravin Vaity, P. Madhusudhan, Tanya Sharma, Pranav Bhardwaj, Ayan Biswas, R. P. Singh | 2023-05-22T15:25:54Z | http://arxiv.org/abs/2305.13126v1 | # Free Space Continuous Variable Quantum Key Distribution with Discrete Phases
###### Abstract
Quantum Key Distribution (QKD) offers unconditional security in principle. Many QKD protocols have been proposed and demonstrated to ensure secure communication between two authenticated users. Continuous variable (CV) QKD offers many advantages over discrete variable (DV) QKD since it is cost-effective, compatible with current classical communication technologies, efficient even in daylight, and gives a higher secure key rate. Keeping this in view, we demonstrate a discrete modulated CVQKD protocol in the free space which is robust against polarization drift. We also present the simulation results with a noise model to account for the channel noise and the effects of various parameter changes on the secure key rate. These simulation results help us to verify the experimental values obtained for the implemented CVQKD.
## I Introduction
With the advancement in technology, the demand for secure communication has increased. In classical communication, the security relies on the complexity of the underlying mathematical algorithm and can be easily compromised once there is enough computational advancement [1]. Quantum Key Distribution (QKD) [2; 3] provides a secure way to distribute a key between two communicating parties, Alice and Bob. QKD uses quantum states to encode the key information, and its security completely relies on the laws of quantum mechanics, making no assumptions about the adversary's technological power [4]. The key exchange takes place through the quantum channel and is post-processed using an authenticated classical channel.
Implementing QKD over large distances enables secure quantum communication on a global scale and involves DVQKD protocols, which require encoding key information in a single quantum state [5; 6; 7; 8; 9; 10]. The practical implementation of these QKD protocols involves various challenges, one of which is the generation of deterministic single photons. However, achieving this in experimental setups can be difficult. As a result, in prepare-and-measure DVQKD protocols, weak coherent pulses are often utilized as an alternative. Nonetheless, the use of weak coherent pulses increases the risk of photon-number-splitting attacks. On the measurement side, single-photon detectors are expensive and not photon-number resolving, and hence record multi-photon events that could lead to security loopholes [11]. On the other hand, entanglement-based DVQKD protocols [12] are unconditionally secure [13], but the key rate obtained is very low.
At this stage, we need to explore another class of QKD protocols, i.e., CVQKD protocols [14; 15; 16] that might be proven to be one of the best possible candidates. CVQKD protocols use the quadratures of the electromagnetic field to encode key information [17; 18]. These protocols are compatible with well-established classical communication technologies, thus enabling us to use existing communication infrastructure with enhanced security [19; 20] provided through quantum mechanics. Further, CVQKD protocols could be implemented using standard telecommunication components with a higher key rate [21; 22] as compared to DVQKD protocols. The state preparation step requires the use of amplitude and phase modulators, and the measurement step uses balanced homodyne detectors that are already available commercially and operate at a very high rate [23; 24; 25]. In addition to this, homodyne detectors are cost-effective and have high quantum efficiency at telecommunication wavelength. These protocols are efficient even at room temperature and daylight since the local oscillator acts as a spectral, temporal, and spatial filter and is robust against stray light.
According to the modulation scheme, we can divide CVQKD protocols into continuous (Gaussian) modulation CVQKD and discrete modulation CVQKD. In the former case, one performs Gaussian modulation for both amplitude and phase quadratures, as in the GMCS or GG02 protocols [26; 27; 28]. The latter is based on the discrete modulation of the quadratures, such as quadrature amplitude modulation (QAM) [29] or quadrature phase shift keying (QPSK) [30; 31; 32; 33]. Gaussian-modulated protocols are mature, with well-defined security [19], and have been successfully demonstrated up to a distance of hundreds of km [34] in fiber, making them efficient for metropolitan area networks. However, implementing such protocols over long distances is challenging, as it is difficult to maintain good reconciliation efficiency at low signal-to-noise ratio (SNR) [35; 36]. Here comes the role of discrete modulated (DM) CVQKD. The advantage of DM-CVQKD is that it simplifies the modulation scheme and the key extraction task, which is complicated in Gaussian-modulated CVQKD protocols, where one extracts the key from continuous random values. DM-CVQKD protocols are remarkable for long-distance applicability even at low SNR [37; 38].
In this paper, we report the implementation of a free space discrete-modulated CVQKD protocol in the lab. The paper is structured as follows. In Sec. II, the theoretical background for the protocol is discussed, and a noise model is presented to account for the channel noise. The imperfections present in the experiment are also simulated, and the simulated results are discussed. In Sec. III, the experimental setup for the four-state discrete modulation CVQKD is presented. Sec. IV shows the experimental results, and we end up with concluding remarks in Sec. V.
## II Theory and Simulation
In this Section, we discuss the theoretical aspects of the protocol implemented and present the details of the simulation performed. Further, we describe imperfections in the experimental implementation and provide models to simulate them. We end the Section with some remarks on the security of the protocol and present the simulated results.
### Protocol Execution
The protocol implemented in this manuscript consists of the following steps.
1. Alice randomly selects from the four coherent states \(\left|\alpha e^{i\phi_{\mathrm{A}}}\right\rangle\), where \(\phi_{\mathrm{A}}\) is chosen from \(0,\pi/2,\pi\), and \(3\pi/2\) by modulating the phase of her signal. This signal is transmitted to the receiver Bob. The phases \(0\) and \(\pi\) correspond to encoding the bit in the \(\hat{q}\) basis, and \(\pi/2\) and \(3\pi/2\) correspond to the \(\hat{p}\) basis respectively. Here, \(|\alpha|^{2}\) is the mean photon number of the signal.
2. Bob performs homodyne detection [39] on the received signal and randomly decides to measure the \(\hat{q}\) quadrature or the \(\hat{p}\) quadrature by modulating the phase of the local oscillator (LO), choosing \(\phi_{\mathrm{B}}\) as \(0\) or \(\pi/2\) respectively.
3. After the exchange of signals, Alice discloses the basis in which the bit was encoded, and Bob discloses the basis in which the signal was measured. They retain the pulses for which the encoding and the measuring basis match. This process is called sifting.
4. The quadrature probability distributions for the measurements made by Bob for various \(\phi=\phi_{\mathrm{A}}-\phi_{\mathrm{B}}\) are Gaussians centered at \(\pm\alpha\) for \(\phi=0\) and \(\pi\) respectively and at \(0\) for \(\phi=\pi/2\) and \(3\pi/2\). The probability distributions for \(\phi=\pi/2\) and \(3\pi/2\) are indistinguishable and hence do not contribute to the key.
5. The measured values for \(\phi=0\) and \(\pi\) contribute to the key. Since in homodyne detection, the measured output values are continuous, Bob assigns a threshold \(x_{0}\) to the sifted signals for postselection and assigns his bit value as \[\text{bit value}=\begin{cases}1&x_{\phi}>x_{0}\\ 0&x_{\phi}<-x_{0}\\ \text{inconclusive}&-x_{0}<x_{\phi}<x_{0}.\end{cases}\] (1)
6. Alice assigns her bit value as \(1\) for \(\phi_{\mathrm{A}}=0\) and \(\pi/2\) and \(0\) for \(\phi_{\mathrm{A}}=\pi\) and \(3\pi/2\).
7. Alice and Bob disclose a fraction of their raw key in order to perform parameter estimation and evaluate the mutual information, from which the final secret key is obtained.
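A minimal Python sketch of the sifting and threshold post-selection in steps 3–6 is given below; it is illustrative only, and the function name, array layout, and the assumption that the phases are stored exactly as the constants \(0,\pi/2,\pi,3\pi/2\) are ours, not part of the implemented system.

```
import numpy as np

def sift_and_postselect(phi_A, phi_B, x_meas, x0):
    """Illustrative sifting (step 3) and threshold post-selection (steps 5-6).

    phi_A : Alice's encoding phases, each one of {0, pi/2, pi, 3*pi/2}
    phi_B : Bob's LO phases, each one of {0, pi/2} (0 -> q-basis, pi/2 -> p-basis)
    x_meas: Bob's homodyne outcomes, one per pulse
    x0    : post-selection threshold
    """
    phi_A, phi_B, x_meas = map(np.asarray, (phi_A, phi_B, x_meas))

    # Basis sifting: keep pulses where encoding and measurement bases coincide.
    alice_p_basis = np.isin(phi_A, [np.pi / 2, 3 * np.pi / 2])
    bob_p_basis = np.isclose(phi_B, np.pi / 2)
    sifted = alice_p_basis == bob_p_basis

    # Alice's bit: 1 for phases 0 and pi/2, 0 for phases pi and 3*pi/2 (step 6).
    alice_bits = np.isin(phi_A, [0.0, np.pi / 2]).astype(int)

    # Bob's bit by thresholding (step 5); |x| <= x0 is inconclusive and discarded.
    conclusive = sifted & (np.abs(x_meas) > x0)
    bob_bits = (x_meas > x0).astype(int)

    return alice_bits[conclusive], bob_bits[conclusive]
```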
In order to understand the limitations of the laboratory demonstration carried out, a simulation of the DM-CVQKD protocol was performed.
### Noise model
One of the major roadblocks in the implementation of quantum information protocols is the presence of noise and attenuation, which is unavoidable due to interactions of the quantum system with the environment. The state that Alice prepares is sent to Bob via a quantum channel, which in practice is either an optical fiber or free space. The propagation of this state through the quantum channel alters the state at the output, which in turn affects Bob's measurement and introduces errors in the generated key. The effect of the transmission losses and the channel noise on the transmitted state can be evaluated by considering the model shown in Fig. 1.
A fictitious beam splitter of transmittance \(T<1\) is inserted into the quantum channel separating Alice and Bob. The beam splitter couples the quantum state to the environment, which introduces noise in the state. The transmittance \(T\) models the attenuation of the signal in the quantum channel. The density matrix for the ensemble of states shared by Alice can be written as
\[\hat{\rho}_{\mathrm{sig}}=\frac{1}{4}\left(\left|\alpha\right\rangle\left\langle \alpha\right|+\left|-\alpha\right\rangle\left\langle-\alpha\right|+\left|i \alpha\right\rangle\left\langle i\alpha\right|+\left|-i\alpha\right\rangle \left\langle-i\alpha\right|\right). \tag{2}\]
The effect of the channel can be evaluated by using the covariance matrix formalism [15]. The covariance matrix for the state in Eq. (2) is evaluated as
\[V=\begin{pmatrix}\frac{\left|\alpha\right|^{2}}{2}+\frac{1}{4}&0\\ 0&\frac{\left|\alpha\right|^{2}}{2}+\frac{1}{4}\end{pmatrix}. \tag{3}\]
Here, \(V_{\rm mod}=\frac{|\alpha|^{2}}{2}\) is Alice's modulation variance. The covariance matrix after propagation through the channel can be evaluated as
\[V_{\rm Bob}=\begin{pmatrix}T\frac{|\alpha|^{2}}{2}+\frac{1}{4}+\xi_{\rm ch}&0\\ 0&T\frac{|\alpha|^{2}}{2}+\frac{1}{4}+\xi_{\rm ch}\end{pmatrix}, \tag{4}\]
where \(\xi_{\rm ch}\) is the noise added to the signal due to transmission in the channel.
Similarly, an imperfect homodyne detection at the receiver end can also be modeled using a beam splitter with transmittance \(\eta\), which denotes the detection efficiency and noise \(\xi_{\rm ele}\), which models the electronic noise in shot noise units. The final covariance matrix for Alice and Bob's data will read as
\[V_{\rm AB}=\begin{pmatrix}\frac{|\alpha|^{2}}{2}\mathrm{I}_{2}&\frac{|\alpha|^{ 2}}{2}\mathrm{I}_{2}\\ \frac{|\alpha|^{2}}{2}\mathrm{I}_{2}&(T\eta\frac{|\alpha|^{2}}{2}+\frac{1}{4} +\xi_{\rm ch}+\xi_{\rm ele})\mathrm{I}_{2}\end{pmatrix}, \tag{5}\]
where \(\mathrm{I}_{2}\) represents the 2x2 identity matrix.
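As a quick numerical illustration, the following Python sketch transcribes Eqs. (4)–(5) directly; the parameter values are placeholders chosen for illustration, not the measured ones.

```
import numpy as np

def bob_variance(alpha, T, eta, xi_ch, xi_ele):
    """Diagonal entry of Bob's block in Eq. (5):
    T*eta*|alpha|^2/2 + 1/4 + xi_ch + xi_ele (shot-noise variance N0 = 1/4)."""
    return T * eta * abs(alpha) ** 2 / 2 + 0.25 + xi_ch + xi_ele

# Illustrative parameters (not the experimental values).
alpha, T, eta, xi_ch, xi_ele = 1.0, 0.9, 0.8, 0.02, 0.01
I2 = np.eye(2)
V_mod = abs(alpha) ** 2 / 2
V_AB = np.block([[V_mod * I2, V_mod * I2],
                 [V_mod * I2, bob_variance(alpha, T, eta, xi_ch, xi_ele) * I2]])
print(V_AB)
```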
### Mutual Information and Security
The secret key rate [40] for a QKD protocol is defined by the relation,
\[k_{DR} =\beta I(A:B)-I(A:E)\text{ or} \tag{6}\] \[k_{RR} =\beta I(A:B)-I(B:E), \tag{7}\]
in the case of direct and reverse reconciliation, respectively. Here \(I(A:B)\) is the mutual information shared between Alice and Bob, and \(I(A:E)\) or \(I(B:E)\) is the information leakage to Eve in case of direct reconciliation or reverse reconciliation. \(\beta\) is the reconciliation efficiency.
For discrete modulated CVQKD under consideration, we have evaluated the mutual information between Alice and Bob by the relation,
\[I_{\rm AB} =\frac{(q_{1}+q_{2})}{2}+\frac{q_{1}}{2}\log_{2}(\frac{q_{1}}{(q_ {1}+q_{2})})+\] \[\frac{q_{2}}{2}\log_{2}(\frac{q_{2}}{(q_{1}+q_{2})}), \tag{8}\]
where,
\[q_{1} =\text{erfc}\left(\frac{(x_{0}-\sqrt{T}\alpha)}{\sqrt{2(\frac{1}{ 4}+\xi_{\rm ch}+\xi_{\rm ele})}}\right)\text{ and} \tag{9}\] \[q_{2} =\text{erfc}\left(\frac{(x_{0}+\sqrt{T}\alpha)}{\sqrt{2(\frac{1}{ 4}+\xi_{\rm ch}+\xi_{\rm ele})}}\right). \tag{10}\]
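A short numerical sketch of Eqs. (8)–(10), together with the post-selected fraction \((q_1+q_2)/2\) and the error probability \(q_2/(q_1+q_2)\) used in the simulations, might read as follows (parameter values are placeholders):

```
import numpy as np
from scipy.special import erfc

def iab_pse_qber(alpha, T, xi, x0):
    """I_AB from Eq. (8); xi = xi_ch + xi_ele, quadrature variance 1/4 + xi."""
    s = np.sqrt(2 * (0.25 + xi))
    q1 = erfc((x0 - np.sqrt(T) * alpha) / s)   # Eq. (9)
    q2 = erfc((x0 + np.sqrt(T) * alpha) / s)   # Eq. (10)
    pse = (q1 + q2) / 2                        # post-selected fraction of sifted pulses
    qber = q2 / (q1 + q2)                      # error probability after post-selection
    iab = pse + (q1 / 2) * np.log2(q1 / (q1 + q2)) + (q2 / 2) * np.log2(q2 / (q1 + q2))
    return iab, pse, qber

# Example: T = 0.9, total excess noise 0.02, one photon per pulse, threshold x0 = 0.
print(iab_pse_qber(alpha=1.0, T=0.9, xi=0.02, x0=0.0))
```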
In Fig. 2, we plot the secret key rate achieved by the protocol for the case of a simple beam splitter attack by Eve. In this attack, Eve replaces the channel with a beam splitter of similar transmittance and a perfectly transmitting channel. Eve splits the signal on the beam splitter and keeps a part of the signal for measurement. The transformation on the state can be seen as
\[\left|\alpha\right\rangle_{\rm B}\left|0\right\rangle_{\rm E}\rightarrow \left|\sqrt{T}\alpha\right\rangle_{\rm B}\left|\sqrt{1-T}\alpha\right\rangle_ {\rm E}, \tag{11}\]
where \(T\) is the transmittance of the channel, and the subscripts denote the person receiving the state. Eve then waits for the basis announcement and measures her state in the correct basis. Depending on the measurement result, Eve makes a guess of the state sent by Alice: if her measured quadrature value is positive, she guesses Alice's bit as 1; otherwise, as 0. The mutual information between Eve and Bob, I(B:E), can be evaluated and
Figure 1: Theoretical model of the channel transmittance and noise included in the simulation. The beam splitter has a transmittance \(T\leq 1\) and couples the quantum state \(\left|\alpha\right\rangle_{\rm sig}\) with the environment and hence introduces excess noise in input state. Here \(\hat{a}_{\rm sig}\) & \(\hat{b}_{\rm env}\) represent the input field operators of signal and the environment respectively and \(\hat{a}^{\prime}_{\rm sig}\) & \(\hat{b}_{\rm out}\) denote the output field operators after interaction at the BS.
Figure 2: Plot of mutual information as a function of transmittance with different excess noises. Here, \(\xi=\xi_{\rm ch}+\xi_{\rm ele}\) represents the total excess noise at Bob’s end. \(\xi_{\rm ch}\) denotes the noise added to the signal due to transmission in the channel & \(\xi_{\rm ele}\) denotes the electronic noise present in the detection.
the secret key rate can be given as in Eq. (7). We have evaluated the mutual information between Bob and Eve for this particular beam splitter attack numerically, and the final secret key rate is as shown in Fig. 2. The secret key rate has been evaluated assuming the protocol is implemented with transmittance known as a function of distance. It is seen from Fig. 2 that for experimentally relevant values of excess noise, the protocol achieves a positive key rate even up to a distance of 35 km.
### Simulation Results
In this Section, we have presented the simulation results obtained from our study. The results would help in a better understanding of the experimental setup and optimization of the experimental parameters.
For the simulation, the channel transmittance \(T\) and the excess noise were taken as 0.9 (under lab conditions) and 0.02, respectively. Also, the signal was taken to be a weak coherent state with an average of 1 photon per pulse. Fig. 3 depicts the probability distribution of the values measured by Bob after both have disclosed their phases. It can be seen that the probability distributions for \(\phi=90^{\circ}\) and \(\phi=270^{\circ}\) are indistinguishable from each other, and hence Alice and Bob discard those measurements. The variance of the probability distribution differs from 1/4 due to the presence of excess noise in the protocol. The mean of the probability distribution corresponding to \(\phi=0^{\circ}\) and \(\phi=180^{\circ}\) differs from \(\pm 1\) due to attenuation in the channel and is given by \(\pm\sqrt{T}\). Fig. 4 depicts the post-selection efficiency and the quantum bit error rate (QBER) versus the threshold value for various mean photon numbers of the signal. Increasing the threshold value decreases the bit error rate, but it also reduces the post-selection efficiency, which ultimately lowers the key rate. The simulation thus helps in optimizing this trade-off through the choice of the threshold value for the experiment being performed.
## III Experimental Setup
The experimental setup for the demonstration of discrete modulated CVQKD protocol in free space is shown in Fig. 5. We have used a 780 nm pulsed laser (NPL79B) operating at a 1 MHz repetition rate and 30 ns pulse width. We set up a Mach Zehnder interferometer (MZI) for the implementation of the discrete modulated CVQKD protocol. The beam from the laser splits at a PBS into two arms of the interferometer. One arm is the signal, and the other is the local oscillator (LO). Alice controls the signal arm, whereas the LO arm is a part of Bob's detection system. We have used electro-optic phase
Figure 3: The simulated probability distribution of the measured homodyne output \(\hat{x}_{\phi}\) corresponding to \(\phi=0,\pi/2,\pi,3\pi/2\). Here, the mean photon number of the signal is 1. The channel transmittance \(T\) was taken to be 0.9 (under lab conditions) and the excess noise was taken to be 0.02. The probability distributions corresponding to \(\phi=\pi/2\) and \(\phi=3\pi/2\) are indistinguishable; hence, the corresponding measurements are discarded.
Figure 4: Plot of post-selection efficiency (top) and bit error rate (bottom) as a function of the threshold for various average photon number in the signal. The channel transmittance \(T\) was taken to be 0.9 (under lab conditions) and the excess noise was taken to be 0.02. It can be readily seen from the above graphs that on increasing the threshold \(x_{0}\), the bit error rate decreases; however, it also results in a decreasing post-selection efficiency which results in a lower key rate.
modulators (EO-PM-NR-C1) to modulate the phase of Alice and Bob's signals.
We used a high-speed AWG (Tektronix AWG5200) to drive a high-voltage amplifier (Thorlabs HVA200) which in turn drives the PM. Both signal and LO arms include four mirror alignments (\(\mathrm{M_{2}},\mathrm{M_{3}}\) and \(\mathrm{M_{8}},\mathrm{M_{9}}\) are placed on translation stages) to adjust the delay between them. Before using the PM, the interferometer is calibrated so as to have zero phase difference between the arms. To do this, the mirror \(M_{5}\) is placed on a PZT-stage (Attocube, ECSx3080) controlled by an AMC100 controller for a fine scan of the interferometer phase. Homodyne detection is performed at the final BS. The detection system includes a balanced homodyne detector, BHD (Thorlab's PDB435A, DC-350 MHz), which measures the subtracted photo-current falling on the two detectors. A mixed signal oscilloscope, MSO (Tektronix 6-series), is used to record the output signal of BHD.
### Alice
One arm of the interferometer, i.e., the signal arm, is controlled by Alice. The phase modulator PM1 is used to encode Alice's four phase values, i.e., \(0,\pi/2,\pi\), and \(3\pi/2\). The half-wave voltage, \(V_{\pi}\), of the PM is 170 V. An optical density filter (ODF) with \(OD=4\) is placed in the signal arm to reduce the signal intensity. Using the combination of HWP1 and the ODF, we can control the mean photon number of the signal.
### Bob
The other arm of the interferometer, which is the LO arm, is controlled by Bob. The power of the LO is varied using the HWP1 placed before the PBS. PM2 selects the \(\hat{q}\)-quadrature and \(\hat{p}\)-quadrature values corresponding to 0, and \(\pi/2\). The mirror \(\mathrm{M_{5}}\) is placed on a piezo nano-positioner stage to fine tune the path delay between the signal and LO arms. Bob performs homodyne detection at the final BS of the interferometer.
### Data Acquisition
The phase modulation at both Alice's and Bob's ends is performed at a rate of 1 MHz. The subtracted output signal from the BHD is saved using an MSO. We have saved 8.1x\(10^{4}\) pulses in a single acquisition. Once sufficient data has been recorded, postprocessing is performed. We integrate the individual pulses over their respective pulse duration. Each integrated value corresponds to one quadrature value at that particular phase. We then perform sifting, and the raw key is generated. The raw key is further processed, and the secure key is obtained. Error correction and privacy amplification are performed using LDPC codes and Toeplitz hashing, respectively.
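The structure of this post-processing chain (pulse integration, then sifting) can be sketched in Python as below; the window length and array names are hypothetical and depend on the actual oscilloscope settings.

```
import numpy as np

def integrate_pulses(trace, samples_per_pulse, n_pulses):
    """Integrate the recorded BHD difference signal over each pulse window,
    yielding one quadrature value per pulse."""
    trace = np.asarray(trace)[: n_pulses * samples_per_pulse]
    return trace.reshape(n_pulses, samples_per_pulse).sum(axis=1)

def basis_sift(quadratures, basis_A, basis_B):
    """Keep only pulses where Alice's encoding basis matches Bob's measurement
    basis (0 for the q-basis, 1 for the p-basis)."""
    keep = np.asarray(basis_A) == np.asarray(basis_B)
    return np.asarray(quadratures)[keep]
```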
## IV Results and Discussion
In this Section, we present the results of our experimental implementation of the protocol. The initial step in implementing the DM-CVQKD protocol is balancing
Figure 5: Experimental scheme for free space discrete modulated CVQKD: HWP: Half Wave Plate; PBS: Polarizing Beam Splitter; PM: Electro-optic Phase Modulator; LO: Local Oscillator; M: Mirrors; PZT: Piezo Controlled Nano-positioner Stage; AMC100: Nano-positioner Controller; ODF: Optical Density Filter; BS: Beam Splitter; BHD: Balanced Homodyne Detector; MSO: Mixed Signal Oscilloscope; AWG: Arbitrary Waveform Generator.
the measurement setup (not shown in Fig. 5) and measuring the shot noise variance of the laser source [41; 42]. To perform the initial calibration, the signal arm is blocked, and the difference signal is measured as a function of the LO power. This measurement is used to determine the shot noise and define the shot noise unit (SNU) for the experiment. Once the initial calibration is done, the power of the LO is fixed at 0.25 mW. The electronic-to-shot-noise clearance is found to be 3.7%. We then proceed with the implementation of the discrete modulation CV-QKD protocol.
The interferometer is calibrated to achieve zero path difference between the arms. The condition for constructive and destructive interference is achieved with a visibility of 98%. The signal is attenuated by using an optical density filter (ODF) of OD = 4, with an input power of 60 \(\mu\)W before the ODF. The delay introduced by the ODF is compensated by scanning the translation stage and the PZT stage. The phase of the signal is then varied from 0 to 2\(\pi\) by applying an appropriate voltage to the PM, and the \(\hat{q}\) quadrature is measured using homodyne detection. For each applied voltage, 2000 pulses are saved, and the mean of the integrated values for the pulses is plotted as a function of the applied voltage, as shown in Fig. 6. The fluctuation in the data is due to the inherent phase instability of the Mach-Zehnder interferometer.
A proof-of-principle experimental demonstration of free-space DM-CVQKD has been performed. The voltages fed to both Alice's and Bob's PMs are generated randomly using an AWG, shown in Fig. 5. A single acquisition in the MSO contains 8.1x10\({}^{4}\) pulses. In order to retrieve the quadrature values from the signal, pulses are integrated over their respective time windows. We perform basis sifting on Alice's and Bob's data. The sifted key has a length of 4x10\({}^{4}\) bits. The probability distributions of the quadrature values corresponding to the relative phases are plotted in Fig. 7. The threshold value \(x_{0}\) chosen for the experiment is 0.
We then perform further post-processing of the data. For our laboratory experiments, the channel transmittance \(T=0.95\) and detector efficiency \(\eta=0.76\) are observed. We calculate the mutual information between Alice and Bob, and finally, the secure key rate is extracted. The experimental parameters are shown in Table 1.

A very important parameter in CVQKD experiments is the phase fluctuation of the Mach-Zehnder interferometer, which affects the key rate. To account for these fluctuations, we are working on the phase stabilization of the MZI. To maximize the key rate, we will, in the near future, account for the noise introduced by the various sources present in the experiment.
## V Conclusion
We have performed a prototype tabletop experiment of the discrete modulated CVQKD protocol and have used the results to extract a secure key. We have also performed a simulation with a realistic noise model encountered in field demonstrations. The trade-off between the secure key rate and the bit error rate is illustrated using the results of the simulation. These studies assist in surveying the feasibility of continuous variable-based QKD protocols for ground as well as satellite-based communication systems. Conclusively, we can say that continuous variable-based QKD protocols can be perceived as the next frontier in the field of secure communication, be it fiber, free space, or satellite-to-ground communication.
\begin{table}
\begin{tabular}{|c|c|} \hline Parameters & Values \\ \hline Signal processed & 8.1x10\({}^{4}\) pulses \\ \hline Sifted bits & 4x10\({}^{4}\) bits \\ \hline PSE & 3.2x10\({}^{4}\) bits \\ \hline QBER & 5\% \\ \hline Secure key rate & 0.35 (bit/pulse) \\ \hline \end{tabular}
\end{table}
Table 1: The experimental results for the executed protocol for a single acquisition window. Here, PSE is the Post Selection Efficiency and QBER is the Quantum Bit Error Rate.
Figure 6: The variation of the mean \(\hat{q}\) quadrature value of the signal as a function of the applied voltage to the PM. The voltage being applied to the PM is amplified using a voltage amplifier with a gain of -20X.
Figure 7: Probability distributions of the homodyne detected signal for the four relative phases between signal and LO. The points represent the experimental data, and the curves represent the best fit.
###### Acknowledgements.
We thank Dr. Rajesh Kumar Kushawaha for providing the resources required for the experiment. We thank Dr. Rupesh Kumar, Dayanand Mishra, Jaya Krishna Meka, and QST lab members for their valuable input. The authors acknowledge the financial support from DST through the QuEST program.
## Disclosures
The authors declare no conflicts of interest related to this article.
## Appendix A Noise Model
Consider the field operator \(\hat{a}_{\text{sig}}\) of the signal and \(\hat{b}_{\text{env}}\) of the environment. The signal is in a coherent state, given by \(\ket{\alpha}_{\text{sig}}\).
The action of the beam splitter on the field operators is given by
\[\begin{pmatrix}\hat{a}_{\text{sig}}^{\prime}\\ \hat{b}_{\text{out}}\end{pmatrix}=\begin{pmatrix}\sqrt{T}&\sqrt{1-T}\\ -\sqrt{1-T}&\sqrt{T}\end{pmatrix}\begin{pmatrix}\hat{a}_{\text{sig}}\\ \hat{b}_{\text{env}}\end{pmatrix}. \tag{10}\]
The mode represented by the field operator \(\hat{a}^{\prime}_{\text{sig}}\) is received by Bob, who performs a measurement on the corresponding quantum state. Since we are dealing with Gaussian states and the noise model represents a Gaussian transformation on the modes, we can utilize the covariance matrix formalism to understand the effect of the quantum channel on the state. The covariance matrix of a single-mode Gaussian state is given by
\[V_{ij}=\frac{1}{2}\left\langle\left\{\hat{x}_{i},\hat{x}_{j}\right\}\right\rangle-\left\langle\hat{x}_{i}\right\rangle\left\langle\hat{x}_{j}\right\rangle, \tag{11}\]
where \(\hat{\mathbf{x}}=[\hat{q},\hat{p}]^{\text{T}}\) are the quadrature operators of the signal mode given by \(\hat{q}=\frac{1}{2}(\hat{a}_{\text{sig}}+\hat{a}_{\text{sig}}^{\dagger})\) and \(\hat{p}=\frac{i}{2}(\hat{a}_{\text{sig}}^{\dagger}-\hat{a}_{\text{sig}})\), and \(\left\{\hat{A},\hat{B}\right\}=\hat{A}\hat{B}+\hat{B}\hat{A}\) denotes the anti-commutator of operators \(\hat{A}\) and \(\hat{B}\). For the example of a coherent state the covariance matrix reduces to
\[V=\frac{1}{4}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}. \tag{12}\]
Using Eq. (10), the quadrature operators of the output signal can be written as
\[\hat{q}_{\text{sig}}^{\prime} =\sqrt{T}\hat{q}_{\text{sig}}+\sqrt{1-T}\hat{q}_{\text{env}}\quad \text{and} \tag{13}\] \[\hat{p}_{\text{sig}}^{\prime} =\sqrt{T}\hat{p}_{\text{sig}}+\sqrt{1-T}\hat{p}_{\text{env}}. \tag{14}\]
The combined covariance matrix of the signal and the environment after the action of the beam splitter is given by
\[\Sigma=\text{BS}\begin{pmatrix}\frac{1}{4}\text{I}_{2}&0_{2}\\ 0_{2}&N_{0}\text{I}_{2}\end{pmatrix}\text{BS}^{T}, \tag{15}\]
where \(N_{0}\) denotes the channel noise and the matrix BS is defined as
\[\text{BS}=\begin{pmatrix}\sqrt{T}\text{I}_{2}&\sqrt{1-T}\text{I}_{2}\\ -\sqrt{1-T}\text{I}_{2}&\sqrt{T}\text{I}_{2}\end{pmatrix}. \tag{16}\]
Evaluating the expression given in Eq. (15), the covariance matrix of the signal reaching Bob is given by
\[V_{\text{Bob}}=\begin{pmatrix}T\frac{\ket{\alpha}^{2}}{2}+\frac{1}{4}+\xi_{ \text{ch}}&0\\ 0&T\frac{\ket{\alpha}^{2}}{2}+\frac{1}{4}+\xi_{\text{ch}}\end{pmatrix}, \tag{17}\]
where \(N_{0}=\frac{1}{4}+(\xi_{\text{ch}}/(1-T))\).
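The covariance propagation of Eqs. (15)–(17) is easy to check numerically; the Python sketch below verifies the noise part of Eq. (17) for the coherent-state covariance of Eq. (12), with placeholder channel parameters.

```
import numpy as np

def channel_output_covariance(T, N0, V_sig=0.25):
    """Apply the beam splitter of Eq. (16) to the input covariance of Eq. (15)
    and return the block of the mode reaching Bob."""
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    BS = np.block([[np.sqrt(T) * I2, np.sqrt(1 - T) * I2],
                   [-np.sqrt(1 - T) * I2, np.sqrt(T) * I2]])
    Sigma = BS @ np.block([[V_sig * I2, Z2], [Z2, N0 * I2]]) @ BS.T
    return Sigma[:2, :2]

# With N0 = 1/4 + xi_ch/(1-T), the output block is (1/4 + xi_ch) * I2,
# i.e. the noise part of Eq. (17).
T, xi_ch = 0.9, 0.02
print(channel_output_covariance(T, N0=0.25 + xi_ch / (1 - T)))
```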
|
2307.12441 | Swarm-based optimization with random descent | We extend our study of the swarm-based gradient descent method for non-convex
optimization, [Lu, Tadmor & Zenginoglu, arXiv:2211.17157], to allow random
descent directions. We recall that the swarm-based approach consists of a swarm
of agents, each identified with a position, ${\mathbf x}$, and mass, $m$. The
key is the transfer of mass from high ground to low(-est) ground. The mass of
an agent dictates its step size: lighter agents take larger steps. In this
paper, the essential new feature is the choice of direction: rather than
restricting the swarm to march in the steepest gradient descent, we let agents
proceed in randomly chosen directions centered around -- but otherwise
different from -- the gradient direction. The random search secures the descent
property while at the same time, enabling greater exploration of ambient space.
Convergence analysis and benchmark optimizations demonstrate the effectiveness
of the swarm-based random descent method as a multi-dimensional global
optimizer. | Eitan Tadmor, Anil Zenginoglu | 2023-07-23T22:06:02Z | http://arxiv.org/abs/2307.12441v2 | # Swarm-based optimization with random descent
###### Abstract.
We extend our study of the swarm-based gradient descent method for non-convex optimization, [5], to allow random descent directions. We recall that the swarm-based approach consists of a swarm of agents, each identified with a position, \(\mathbf{x}\), and mass, \(m\). The key is the transfer of mass from high ground to low(-est) ground. The mass of an agent dictates its step size: lighter agents take larger steps. In this paper, the essential new feature is the choice of direction: rather than restricting the swarm to march in the steepest gradient descent, we let agents proceed in randomly chosen directions centered around -- but otherwise different from -- the gradient direction. The random search secures the descent property while at the same time, enabling greater exploration of ambient space. Convergence analysis and benchmark optimizations demonstrate the effectiveness of the swarm-based random descent method as a multi-dimensional global optimizer.
Key words and phrases:Optimization, gradient descent, swarming, backtracking, convergence analysis 2020 Mathematics Subject Classification: 90C26,65K10,92D25 **Acknowledgment.** Research was supported in part by ONR grant N00014-2112773.
###### Contents
* 1 Introduction
* 1.1 Why randomization is important
* 2 Implementation of the SBRD algorithm
* 2.1 A protocol for random choice of the descent direction
* 2.2 Backtracking -- a protocol for time stepping
* 2.3 SBRD pseudocode
* 3 Convergence and error analysis
* 3.1 Convergence to a band of local minima
* 4 Numerical results
* 4.1 Examples of SBRD with mass transfer parameter \(q=2\)
* 4.2 SBRD with higher order mass transition \(q>2\).
## 1. Introduction
We discuss a swarm-based descent method for non-convex optimization. The swarm consists of \(N\) agents, each identified with a time-dependent position and mass.
\[\mathbf{x}_{i}^{n}=\mathbf{x}_{i}(t^{n})\in\mathbb{R}^{d},\ \ m_{i}^{n}=m_{i}(t^{n})\in(0,1], \qquad i=1,2,\ldots,N.\]
The aim is to minimize a loss function, \(F\in C^{2}(\Omega)\), and, ideally, approach its global minimizer in the region explored by these agents,
\[\mathbf{x}_{i}^{n}\stackrel{{ n\to\infty}}{{\longrightarrow}} \operatorname*{argmin}_{\mathbf{x}\in\Omega}F(\mathbf{x}).\]
The swarm-based iterations proceed using an interplay between positions and weights, repeatedly transferring mass from high to lower ground and, on the way, driving agents to smaller loss values. Each iteration consists of two stages.
**Mass transfer**. In the first stage, positions change the distribution of mass: each agent with mass \(m_{i}^{n}\) transfers a fraction of its mass, \(\eta_{i}^{n}m_{i}^{n}\), to the current global minimizer positioned at \(\mathbf{x}_{i_{n}}\) where \(i_{n}=\operatorname*{argmin}_{i}F(\mathbf{x}_{i}^{n})\).
\[\left\{\begin{array}{rl}m_{i}^{n+1}&=m_{i}^{n}-\eta_{i}^{n}m_{i}^{n},\qquad \quad i\neq i_{n}\\ m_{i_{n}}^{n+1}&=m_{i_{n}}^{n}+\sum_{i\neq i_{n}}\eta_{i}^{n}m_{i}^{n},\qquad \qquad\quad\eta_{i}^{n}:=\Big{(}\frac{F(\mathbf{x}_{i}^{n})-F_{\min}^{n}}{F_{ \max}^{n}-F_{\min}^{n}}\Big{)}^{q}\in(0,1].\end{array}\right. \tag{1.1a}\]
The fraction of mass transfer, \(\Big{(}\frac{F(\mathbf{x}_{i}^{n})-F_{\min}^{n}}{F_{\max}^{n}-F_{\min}^{n}} \Big{)}^{q}\), is determined by the _relative height_ of each agent relative to the global extremes1, \(F_{\min}^{n}=\min_{j}F(\mathbf{x}_{j}^{n})\) and \(F_{\max}^{n}=\max_{j}F(\mathbf{x}_{j}^{n})\), and a mass transfer parameter, \(q\geqslant 1\). The higher \(q\) is, the more tamed is the transfer of mass. A systematic study reported in section 4.2 below reveals a dramatic improvement when increasing the mass transfer parameter \(q=2,4,8\).
Footnote 1: To prevent a vanishing denominator in the extreme case \(F_{\max}=F_{\min}\), we adjust (1.1a) with a small \(\epsilon\)-correction, \(\eta_{i}^{n}:=\Big{(}\frac{F(\mathbf{x}_{i}^{n})-F_{\min}^{n}}{F_{\max}^{n}-F_{\min}^{n}+\epsilon}\Big{)}^{q}\).

Observe that while the total mass is conserved, say \(\sum_{i}m_{i}^{n}=1\), individual masses are redistributed from high to lower ground: the higher the agent, the larger the fraction of its mass that will be lost in favor of the agent at the lowest ground. In fact, the highest agent in each iteration is eliminated; to be precise, the worst performing agents are eliminated whenever \(1-\eta_{i}^{n}=\mathcal{O}(\epsilon)\ll 1\). This follows an aggressive "survival of the fittest" protocol, [5, §3], so that after \(N\) iterations the swarm consists of a single agent which should be in the best position to approach the minimum of the space explored so far by the swarm. We note in passing that one can adopt a more flexible protocol which allows the worst (highest) agents to survive a few iterations before elimination; this flexibility would improve the overall success rates of the swarm at the expense of efficiency.
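A compact Python sketch of the mass-transfer step (1.1a), including the \(\epsilon\)-correction of the footnote, might read as follows (the array names and the value of \(\epsilon\) are illustrative):

```
import numpy as np

def mass_transfer(F_vals, masses, q=2, eps=1e-12):
    """One step of (1.1a): each agent sheds the fraction eta_i of its mass,
    and the shed mass is collected by the current global minimizer."""
    F_vals = np.asarray(F_vals, dtype=float)
    masses = np.asarray(masses, dtype=float).copy()
    i_min = int(np.argmin(F_vals))
    eta = ((F_vals - F_vals[i_min]) / (F_vals.max() - F_vals[i_min] + eps)) ** q
    shed = eta * masses
    shed[i_min] = 0.0                    # the minimizer keeps its own mass
    masses -= shed
    masses[i_min] += shed.sum()          # total mass is conserved
    return masses
```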
**Stepping in descent direction -- a random choice approach**. In the second stage, the distribution of mass affects the change of positions,
\[\mathbf{x}_{i}^{n+1}=\mathbf{x}_{i}^{n}-h_{i}^{n}\mathbf{p}_{i}^{n}. \tag{1.1b}\]
The driving force behind the protocol for choosing the direction, \(\mathbf{p}=\mathbf{p}_{i}^{n}\), and the step size, \(h=h_{i}^{n}\), is to secure the following descent property, depending on the relative mass \(\widetilde{m}_{i}^{n+1}\) and a descent parameter \(\lambda<1\),
\[F(\mathbf{x}_{i}^{n}-h\mathbf{p})\leqslant F(\mathbf{x}_{i}^{n})-\frac{1}{2} \lambda\widetilde{m}_{i}^{n+1}h|\nabla F(\mathbf{x}_{i}^{n})|^{2},\qquad \widetilde{m}_{i}^{n+1}=\frac{m_{i}^{n+1}}{\max_{i}m_{i}^{n+1}},\quad\lambda<1. \tag{1.2}\]
Recall that the steepest descent corresponding to the gradient direction, \(\mathbf{p}_{i}^{n}=\nabla F(\mathbf{x}_{i}^{n})\), secures the sharper descent property,
\[F(\mathbf{x}_{i}^{n}-h\mathbf{p})\leqslant F(\mathbf{x}_{i}^{n})-\lambda \widetilde{m}_{i}^{n+1}h|\nabla F(\mathbf{x}_{i}^{n})|^{2},\]
which was the basis for the swarm-based gradient descent (SBGD) method we introduced in [5]. The purpose of this work is to extend the SBGD method by allowing a larger set of descent directions: the emphasis is no longer on the steepest descent along the gradient direction but instead, allowing a more effective exploration of the ambient space using a _random choice of directions_, \(\{\mathbf{p}_{i}^{n}\}\), that still maintains (half the steepest) descent property. This implies that the swarm, stepping in other than the gradient direction, will explore a larger portion of the ambient space which in turn leads to a more effective search, and proved to be particularly relevant in high-dimensional optimizations, see the numerical simulations reported in section 4. We refer to this new version as the Swarm-Based Random Descent (SBRD) method.
To this end, the SBRD choice of the direction \(\mathbf{p}_{i}^{n}\) is determined by its orientation \(\boldsymbol{\omega}_{i}^{n}\),
\[\mathbf{p}_{i}^{n}=|\nabla F(\mathbf{x}_{i}^{n})|\boldsymbol{\omega}_{i}^{n},\qquad\boldsymbol{\omega}_{i}^{n}\in\mathbb{S}^{d-1}, \tag{1.3a}\]
relative to the orientation of the gradient, \(\mathbf{q}_{i}^{n}=\frac{\nabla F(\mathbf{x}_{i}^{n})}{|\nabla F(\mathbf{x}_ {i}^{n})|}\in\mathbb{S}^{d-1}\), so that
\[\langle\boldsymbol{\omega}_{i}^{n},\mathbf{q}_{i}^{n}\rangle=r,\qquad\mathbf{ q}_{i}^{n}:=\frac{\nabla F(\mathbf{x}_{i}^{n})}{|\nabla F(\mathbf{x}_{i}^{n})|}. \tag{1.3b}\]
Here, \(r\) is a randomly chosen number from a normal distribution in an interval dictated by the relative mass,
\[r\in\mathcal{N}\Big{(}\frac{1}{2}(1+\widetilde{m}_{i}^{n+1}),1\Big{)}. \tag{1.3c}\]
This means that the orientation of \(\mathbf{p}_{i}^{n}\) lies in a spherical cap centered around \(\mathbf{q}_{i}^{n}\). The 'opening' of the corresponding spherical cone (see Figure 1.1) is \(\theta:=\arccos\big{(}\frac{1}{2}(1+\widetilde{m}_{i}^{n+1})\big{)}\). It is larger for lighter agents, and the direction coincides with the gradient direction, \(\nabla F(\mathbf{x}_{i}^{n})\), for the heaviest agent, where \(\widetilde{m}_{i}^{n+1}=1\). The protocol for randomly selecting \(\mathbf{\omega}_{i}^{n}\) subject to (1.3b) is outlined in section 2.1 below.
**Choosing the step size -- a backtracking protocol**. It follows that the new position, \(\mathbf{x}^{n+1}(h)=\mathbf{x}_{i}^{n}-h\mathbf{p}_{i}^{n}\) -- viewed as a function of the step size \(h\), satisfies the desired descent property, at least for small enough \(h\). Indeed, (1.3b) implies
\[\langle\mathbf{p}_{i}^{n},\nabla F(\mathbf{x}_{i}^{n})\rangle=r|\nabla F( \mathbf{x}_{i}^{n})|^{2}\geqslant\frac{1}{2}(1+\widetilde{m}_{i}^{n+1})| \nabla F(\mathbf{x}_{i}^{n})|^{2}. \tag{1.4}\]
Hence, if \(\nabla F\) has a Lipschitz bound \(L<\infty\), then for every \(\lambda<1\) and \(h<\nicefrac{{1}}{{L}}\), there holds
\[F(\mathbf{x}_{i}^{n+1}(h)) \leqslant F(\mathbf{x}_{i}^{n})-h\langle\mathbf{p}_{i}^{n}, \nabla F(\mathbf{x}_{i}^{n})\rangle+\frac{h^{2}}{2}L|\nabla F(\mathbf{x}_{i}^ {n})|^{2}\] \[\leqslant F(\mathbf{x}_{i}^{n})-\frac{1}{2}\big{(}1+\widetilde{m }_{i}^{n+1}-Lh\big{)}h|\nabla F(\mathbf{x}_{i}^{n})|^{2}\] \[<F(\mathbf{x}_{i}^{n})-\frac{1}{2}\lambda\widetilde{m}_{i}^{n+1} h|\nabla F(\mathbf{x}_{i}^{n})|^{2}. \tag{1.5}\]
Thus, we recover (1.2) for any \(h<\nicefrac{{1}}{{L}}\).2
Footnote 2: In fact, a slightly larger threshold, \(h<\frac{1+\widetilde{m}_{i}^{n+1}(1-\lambda)}{L}\), still allows (1.5) to hold.
We note, however, that we have no access to the Lipschitz bound \(L\), and therefore, no access to an effective step size securing (1.5), beyond \(h\) being 'sufficiently small'. Instead, we seek a step size that identifies the _largest_ \(h\) for which (1.5) holds. To this end we use the _backtracking protocol_ outlined in Algorithm 2.2 below. The backtracking algorithm produces a time step, \(h_{i}^{n}\), depending on the position of the agent, \(\mathbf{x}_{i}^{n}\), and its relative mass \(\widetilde{m}_{i}^{n+1}\),
\[h_{i}^{n}=h(\mathbf{x}_{i}^{n},\lambda\widetilde{m}_{i}^{n+1}).\]
It secures the _lower_ bound \(h_{i}^{n}\geqslant\frac{\gamma}{L}\) for some \(\gamma<1\), so that using (1.5) we finally end up with the descent property of the form,
\[F(\mathbf{x}_{i}^{n+1})\leqslant F(\mathbf{x}_{i}^{n})-\frac{\gamma}{2L} \lambda\widetilde{m}_{i}^{n+1}|\nabla F(\mathbf{x}_{i}^{n})|^{2},\qquad \mathbf{x}_{i}^{n+1}=\mathbf{x}_{i}^{n}-h_{i}^{n}\mathbf{p}_{i}^{n}.\]
_Remark 1.1_.: This should be compared with the descent property of SBGD method restricted to the steepest descent direction \(\mathbf{p}_{i}^{n}=\nabla F(\mathbf{x}_{i}^{n})\), for which we have, [5, Proposition 5.2],
\[F(\mathbf{x}_{i}^{n+1})\leqslant F(\mathbf{x}_{i}^{n})-\frac{2\gamma}{L} \big{(}1-\lambda\widetilde{m}_{i}^{n+1}\big{)}\lambda\widetilde{m}_{i}^{n+1} |\nabla F(\mathbf{x}_{i}^{n})|^{2}.\]
Thus, our stepping protocol retains at least half of the steepest descent, while gaining greater heterogeneity in space exploration. In particular, while heavier agents are still restrained by smaller time steps, lighter agents are now allowed to take larger time steps from a richer set of directions which are aligned with -- but otherwise different from -- the gradient direction. This 'greedy' exploration of the ambient space by lighter agents increases their likelihood
of encountering a new neighborhood with a better minimum, which may place one of them as the new heaviest minimizer and so on.
### Why randomization is important
We compare the swarm-based method
\[\mathbf{x}_{i}^{n+1}=\mathbf{x}_{i}^{n}-h(\mathbf{x}_{i}^{n},\lambda\widetilde{m }_{i}^{n+1})\mathbf{p}_{i}^{n},\qquad i=1,2,\ldots,N, \tag{1.6}\]
in two scenarios: with the gradient direction for SBGD, \(\mathbf{p}_{i}^{n}=\nabla F(\mathbf{x}_{i}^{n})\) and with the randomized direction for SBRD, \(\mathbf{p}_{i}^{n}=|\nabla F(\mathbf{x}_{i}^{n})|\mathbf{\omega}_{i}^{n}\) in (1.3). The same backtracking protocol was implemented in both cases. The advantage of randomization in exploring larger regions becomes apparent in SBRD when the number of agents is larger than the dimension of the search space, \(N>d\). The results recorded in Table 1.1 for the Ackley function show that SBRD optimization outperforms SBGD optimization in higher dimensions.
More can be found in numerical simulations of several benchmark problems presented in section 4.
## 2. Implementation of the SBRD algorithm
The SBRD method relies on three procedures: (i) a communication protocol that dictates mass transition factors, \(\{\eta_{i}^{n}\}\); (ii) a random choice of the descent direction, \(\mathbf{p}_{i}^{n}\); and (iii) an effective strategy for taking step size, \(h_{i}^{n}=h\big{(}\mathbf{x}_{i}^{n},\lambda\widetilde{m}_{i}^{n+1}\big{)}\), in that direction. Both the
\begin{table}
\begin{tabular}{|c||c c|c c|c c|c c|} \hline \(d\)\(N\) & \multicolumn{2}{c|}{10} & \multicolumn{2}{c|}{25} & \multicolumn{2}{c|}{50} & \multicolumn{2}{c|}{100} \\ \hline & SBRD & SBGD & SBRD & SBGD & SBRD & SBGD & SBRD & SBGD \\ \hline \hline
12 & 13.7\% & 26.7\% & 55.5\% & 96.2\% & 88.3\% & 100.0\% & 99.2\% & 100.0\% \\
13 & 8.8\% & 9.2\% & 49.9\% & 65.5\% & 82.1\% & 95.6\% & 98.1\% & 99.9\% \\
14 & **3.0\%** & 1.7\% & **42.4\%** & 22.3\% & **77.9\%** & 51.0\% & **96.1\%** & 85.4\% \\
15 & 1.3\% & 0.4\% & **35.9\%** & 2.7\% & **70.2\%** & 10.6\% & **90.5\%** & 23.7\% \\
16 & 0.3\% & 0.0\% & **23.6\%** & 0.1\% & **60.6\%** & 0.8\% & **85.2\%** & 2.2\% \\
17 & 0.1\% & 0.0\% & **14.1\%** & 0.0\% & **50.8\%** & 0.1\% & **79.1\%** & 0.4\% \\
18 & 0.0\% & 0.0\% & **8.8\%** & 0.0\% & **37.3\%** & 0.0\% & **65.5\%** & 0.0\% \\
19 & 0.0\% & 0.0\% & **2.0\%** & 0.0\% & **16.8\%** & 0.0\% & **48.2\%** & 0.0\% \\
20 & 0.0\% & 0.0\% & 0.7\% & 0.0\% & **5.1\%** & 0.0\% & **21.3\%** & 0.0\% \\ \hline \end{tabular}
\end{table}
Table 1.1. Success rates of SBRD vs. SBGD for global optimization of the \(d\)-dimensional Ackley function using \(N\) agents based on \(m=1000\) runs of uniformly generated initial data, \(\mathbf{x}_{i}^{0}\in[-3,3]^{d}\). Backtracking parameters are \(\lambda=0.2\) and \(\gamma=0.9\) (see algorithm 2.2). Boldfaced numbers emphasize the cases where SBRD outperforms SBGD by more than 1%. The randomization provided by SBRD becomes essential beyond the critical dimension \(d=13\).
direction and step size are adjusted to the position and the relative mass of a given agent. These procedures are summarized in the following pseudo-codes.
### A protocol for random choice of the descent direction
Algorithm 2.1 picks a random orientation lying in the spherical cap of the unit sphere, \(\mathbf{\omega}_{i}^{n}\in\mathbb{S}^{d-1}\), centered around the gradient orientation, \(\mathbf{q}_{i}^{n}=\dfrac{\nabla F(\mathbf{x}_{i}^{n})}{|\nabla F(\mathbf{x}_{ i}^{n})|}\), and then sets the descent direction \(\mathbf{p}_{i}^{n}=|\nabla F(\mathbf{x}_{i}^{n})|\mathbf{\omega}_{i}^{n}\). To this end, we proceed in two steps. First, sampling a randomly chosen point, \(\mathbf{X}=\big{(}X(1),\ldots,X(d-1),X(d)\big{)}\in\mathbb{S}^{d-1}\), in the spherical cap centered around the north pole, \(\mathbf{z}=(0,0,\ldots,1)\),
\[X(i)=\left\{\begin{array}{ll}\sqrt{1-r^{2}}\dfrac{Y(i)}{|\mathbf{Y}|}&Y(i) \sim\mathcal{N}(0,1),\ i=1,2,\ldots d-1,\\ r,&i=d.\end{array}\right.\]
Here \(r\) is a randomly chosen parameter from a normal distribution in \(\frac{1}{2}(1+\widetilde{m}_{i}^{n+1})<r<1\); thus, the spherical cap has an opening angle of \(\theta=\arccos(r)\), ranging from \(\theta=60^{\circ}\) for lightest agents to the gradient orientation, \(\theta=0^{\circ}\), for the heaviest agent. In the second step, Algorithm 2.1 uses the unitary (Householder) reflection which reflects the north pole \(\mathbf{z}\) to \(\mathbf{q}_{i}^{n}\)
\[\mathbb{P}_{i}^{n}=\mathbb{I}-2\dfrac{\mathbf{v}_{i}^{n}(\mathbf{v}_{i}^{n})^ {\top}}{|\mathbf{v}_{i}^{n}|^{2}},\qquad\mathbf{v}_{i}^{n}:=\mathbf{q}_{i}^{n} -\mathbf{z},\]
and then reflects \(\mathbf{X}\) into the desired \(\mathbf{\omega}_{i}^{n}:=\mathbb{P}_{i}^{n}\mathbf{X}\), see Fig. 1.
```
Set \(\mathbf{q}_{i}^{n}=\dfrac{\nabla F(\mathbf{x}_{i}^{n})}{|\nabla F(\mathbf{x}_{i}^{n})|}\)
Choose random \(r\) such that \(\frac{1}{2}(1+\widetilde{m}_{i}^{n})<r<1\)
Set random vector \(\mathbf{Y}\in\mathbb{R}^{d-1}\) with \(Y(i)\sim\mathcal{N}(0,1)\) so that \(\mathbf{Y}/|\mathbf{Y}|\in\mathbb{S}^{d-2}\)
for \(i=1\) to \(d-1\) do
    Set \(X(i)=\sqrt{1-r^{2}}\dfrac{Y(i)}{|\mathbf{Y}|}\)
endfor
Set \(X(d)=r\) so that \(\mathbf{X}=(X(1),\cdots,X(d-1),X(d))\in\mathbb{S}^{d-1}\)
if \(1-\mathbf{q}_{i}^{n}(d)\neq 0\) then
    Set \(\mathbf{v}_{i}^{n}=\mathbf{q}_{i}^{n}-\mathbf{z}\) where \(\mathbf{z}:=(0,\ldots,0,1)\) is the north pole of \(\mathbb{S}^{d-1}\)
    Set \(\mathbf{\omega}_{i}^{n}=\mathbf{X}-2\dfrac{\langle\mathbf{v}_{i}^{n},\mathbf{X}\rangle}{|\mathbf{v}_{i}^{n}|^{2}}\mathbf{v}_{i}^{n}\)    % Simplification: \(|\mathbf{v}_{i}^{n}|^{2}=2\big{(}1-q_{i}^{n}(d)\big{)}\)
else
    Set \(\mathbf{\omega}_{i}^{n}=\mathbf{X}\)
endif
Set \(\mathbf{p}_{i}^{n}=|\nabla F(\mathbf{x}_{i}^{n})|\mathbf{\omega}_{i}^{n}\)
```
**Algorithm 2.1** Random descent direction \(\mathbf{p}_{i}^{n}\) for agent \(\mathbf{x}_{i}^{n}\) with relative mass \(\widetilde{m}_{i}^{n+1}\)
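For concreteness, a numpy rendering of Algorithm 2.1 is sketched below. It draws \(r\) uniformly in the interval of the pseudocode (the normal-distribution variant of (1.3c) would only change that one line), and the function name is ours.

```
import numpy as np

def random_descent_direction(grad, m_rel, rng=None):
    """Sample p = |grad F| * omega with omega on the spherical cap around
    grad/|grad|, following Algorithm 2.1 (assumes dimension d >= 2)."""
    rng = np.random.default_rng() if rng is None else rng
    d = grad.size
    g_norm = np.linalg.norm(grad)
    q = grad / g_norm                                  # gradient orientation on S^{d-1}

    r = rng.uniform(0.5 * (1.0 + m_rel), 1.0)          # cap opening dictated by relative mass
    Y = rng.standard_normal(d - 1)
    X = np.empty(d)
    X[:-1] = np.sqrt(1.0 - r**2) * Y / np.linalg.norm(Y)   # random point on the cap at the north pole
    X[-1] = r

    if not np.isclose(q[-1], 1.0):                     # Householder reflection sending the north pole to q
        v = q.copy()
        v[-1] -= 1.0
        omega = X - 2.0 * (v @ X) / (v @ v) * v
    else:
        omega = X
    return g_norm * omega
```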
### Backtracking -- a protocol for time stepping
The direction \(\mathbf{p}_{i}^{n}\) computed in Algorithm 2.1 is partially aligned with \(\nabla F(\mathbf{x}_{i}^{n})\) so that (1.4) holds. Once the direction \(\mathbf{p}_{i}^{n}\) is set, the new position \(\mathbf{x}_{i}^{n+1}(h)=\mathbf{x}_{i}^{n}-h\mathbf{p}_{i}^{n}\) is viewed as a function of the step size \(h\), and the objective is to select an appropriate step size, \(h_{i}^{n}=h\big{(}\mathbf{x}_{i}^{n},\lambda\widetilde{m}_{i}^{n+1}\big{)}\), which ensures the corresponding descent bound (1.5),
\[F(\mathbf{x}_{i}^{n+1})\leqslant F(\mathbf{x}_{i}^{n})-\frac{1}{2}\lambda \widetilde{m}_{i}^{n+1}h_{i}^{n}|\nabla F(\mathbf{x}^{n})|^{2},\qquad\quad \mathbf{x}_{i}^{n+1}=\mathbf{x}_{i}^{n}-h_{i}^{n}\mathbf{p}_{i}^{n},\quad 0< \lambda<1. \tag{2.1}\]
A proper strategy for choosing such a step size is based on the classical backtracking line search, [6, §3], which is a computational realization of the well-known Wolfe conditions [7, 1]. Recall that by Taylor's expansion (1.5), the desired bound holds as long as the step size is sufficiently small, \(h_{i}^{n}\ll 1\). Our aim is to choose a step size that is _sufficiently large_, in order to maximize the descent term \(\frac{1}{2}\lambda\widetilde{m}_{i}^{n+1}h_{i}^{n}|\nabla F(\mathbf{x}_{i}^{n})|^{2}\). To this end, one employs a dynamic adjustment, starting with a relatively large \(h\) for which one expects
\[F\big{(}\mathbf{x}_{i}^{n}-h\mathbf{p}_{i}^{n}\big{)}>F(\mathbf{x}^{n})-\frac {1}{2}\lambda\widetilde{m}_{i}^{n+1}h|\nabla F(\mathbf{x}_{i}^{n})|^{2},\]
and then successively shrink the step size, \(h\to\gamma h\), using a shrinkage factor \(0<\gamma<1\), until the descent condition (2.1) is fulfilled. Adjusting the shrinkage parameter \(\gamma\) requires careful consideration of the trade-off between the cost of a refined \(\gamma\sim 1\) vs. improved performance with a crude \(\gamma\ll 1\).
The pseudo-code for computing the SBRD steps based on backtracking line search is given in Algorithm 2.2 below.
```
Set the shrinkage parameter, \(\gamma\in(0,1)\)
Set the relative mass \(\widetilde{m}_{i}^{n+1}=\dfrac{m_{i}^{n+1}}{m_{+}^{n+1}}\)
Initialize the step size \(h=h_{0}\)
while \(F\big{(}\mathbf{x}_{i}^{n}-h\mathbf{p}_{i}^{n}\big{)}>F(\mathbf{x}_{i}^{n})-\frac{1}{2}\lambda\widetilde{m}_{i}^{n+1}h|\nabla F(\mathbf{x}_{i}^{n})|^{2}\) do
    \(h\leftarrow\gamma h\)
endwhile
Set \(h_{i}^{n}=h\big{(}\mathbf{x}_{i}^{n},\lambda\widetilde{m}_{i}^{n+1}\big{)}\gets h\)
```
**Algorithm 2.2** Backtracking line search
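In plain Python, the loop of Algorithm 2.2 could be sketched as follows (the initial step \(h_0\) and the values of \(\gamma\), \(\lambda\) are placeholders; `x`, `p`, and `grad_x` are assumed to be numpy arrays):

```
def backtracking_step(F, x, p, grad_x, m_rel, lam=0.2, gamma=0.9, h0=1.0):
    """Shrink h by the factor gamma until the descent condition (2.1) holds:
    F(x - h*p) <= F(x) - 0.5*lam*m_rel*h*|grad F(x)|^2."""
    g2 = float(grad_x @ grad_x)
    Fx = F(x)
    h = h0
    while F(x - h * p) > Fx - 0.5 * lam * m_rel * h * g2:
        h *= gamma
    return h
```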
A stepping protocol for non-convex optimization is required to strike a balance between small steps in the vicinity of a potential minimizer and larger steps which avoid being trapped in local basins of attraction. The backtracking protocol achieves such a balance by adjusting the step size of each agent according to its relative mass, \(\widetilde{m}_{i}^{n+1}\)
\[h_{i}^{n}=h\big{(}\mathbf{x}_{i}^{n},\lambda\widetilde{m}_{i}^{n+1}\big{)} \qquad\widetilde{m}_{i}^{n+1}:=\dfrac{m_{i}^{n+1}}{m_{+}^{n+1}},\quad m_{+}^{n +1}:=\max_{i}m_{i}^{n+1},\ \ 0<\lambda<1, \tag{2.2}\]
where \(h(\mathbf{x}_{i}^{n},\cdot)\) is a decreasing function of the relative mass \(\lambda\widetilde{m}_{i}^{n+1}\). Thus, our mass-dependent backtracking is an adaptive protocol: it adapts itself from small time steps in the steepest gradient direction for heavier agents which lead the swarm, to larger steps in randomly
chosen directions (that may differ from the steepest descent) for lighter agents which are the explorers of the swarm, exploring the ambient space.
**The descent property**. The backtracking Algorithm 2.2 yields a step size \(h_{i}^{n}=h(\mathbf{x}_{i}^{n},\lambda\widetilde{m}_{i}^{n+1})\) with a lower bound \(h_{i}^{n}\geqslant\frac{\gamma}{L}\) with \(L\) denoting the Lipschitz bound of \(\nabla F\) which is assumed to exist. Indeed, this can be argued by contradiction: if \(\frac{h_{i}^{n}}{\gamma}\leqslant\frac{1}{L}\) then by (1.4) we would have,
\[F\Big{(}\mathbf{x}_{i}^{n}-\frac{h_{i}^{n}}{\gamma}\mathbf{p}_{i}^{n}\Big{)} \leqslant F(\mathbf{x}_{i}^{n})-\frac{h_{i}^{n}}{\gamma}\langle\mathbf{p}_{i}^{n},\nabla F(\mathbf{x}_{i}^{n})\rangle+\frac{L}{2}\big{(}\frac{h_{i}^{n}}{\gamma}\big{)}^{2}|\mathbf{p}_{i}^{n}|^{2}\] \[\leqslant F(\mathbf{x}_{i}^{n})-\Big{(}\frac{1+\widetilde{m}_{i}^{n+1}}{2}-\frac{L}{2}\frac{h_{i}^{n}}{\gamma}\Big{)}\frac{h_{i}^{n}}{\gamma}|\nabla F(\mathbf{x}_{i}^{n})|^{2}\] \[\leqslant F(\mathbf{x}_{i}^{n})-\frac{1}{2}\lambda\widetilde{m}_{i}^{n+1}\frac{h_{i}^{n}}{\gamma}|\nabla F(\mathbf{x}_{i}^{n})|^{2},\quad\lambda<1.\]
But this contradicts the fact that the backtracking iterations fail to satisfy this inequality with step size \(h_{i}^{n}/\gamma\), since according to Algorithm 2.2, \(F(\mathbf{x}_{i}^{n}-(h_{i}^{n}/\gamma)\mathbf{p}_{i}^{n})>F(\mathbf{x}_{i}^{n})-\frac{1}{2}\lambda\widetilde{m}_{i}^{n+1}(h_{i}^{n}/\gamma)|\nabla F(\mathbf{x}_{i}^{n})|^{2}\) (in fact, the largest step size that succeeds in securing the reverse inequality is \(h_{i}^{n}\), with \(h_{i}^{n}<h_{i}^{n}/\gamma\)). This contradiction confirms that \(h_{i}^{n}\geqslant\frac{\gamma}{L}\), which in turn enables us to convert the descent bound (2.1) into a precise descent property,
\[F(\mathbf{x}_{i}^{n+1})\leqslant F(\mathbf{x}_{i}^{n})-\frac{1}{2}\lambda \widetilde{m}_{i}^{n+1}h_{i}^{n}|\nabla F(\mathbf{x}_{i}^{n})|^{2}\leqslant F (\mathbf{x}_{i}^{n})-\frac{\gamma}{2L}\lambda\widetilde{m}_{i}^{n+1}|\nabla F (\mathbf{x}_{i}^{n})|^{2} \tag{2.3}\]
The descent property we obtain is constrained by an additional factor of \(\frac{1}{4}\) compared to the standard version of SBGD that relies on the gradient direction [5, Proposition 5.2]. However, the randomization of the descent direction brings the advantage of allowing lighter agents to explore a wider range of directions. As we will see later, this exploration leads to substantial improvements in the optimization process in high dimensions.
It is important to note that heavy agents still adhere to the steepest descent along the gradient direction. The spherical cone of random directions is narrower for heavier agents. In fact, the heaviest agent strictly follows the steepest descent with \(\mathbf{p}_{i}^{n}=\nabla F(\mathbf{x}_{i}^{n})\), eliminating the need for a random choice at this particular point.
### SBRD pseudocode
The pseudocode of the SBRD method is presented in Algorithm 2.3. The initial setup involves \(N\) randomly distributed agents \(\mathbf{x}_{1}^{0},\cdots,\mathbf{x}_{N}^{0}\), associated with initial masses \(m_{1}^{0},\cdots,m_{N}^{0}\). Initially, all agents are assigned equal masses, \(m_{j}^{0}=\nicefrac{{1}}{{N}}\), \(j=1,\ldots,N\). At each iteration, the agent positioned at \(\mathbf{x}_{i_{n}}=\operatorname*{argmin}_{\mathbf{x}_{i}^{n}}F(\mathbf{x}_{i}^{n})\) attains the minimal value, while the other agents transfer part of their masses to that minimizer \(\mathbf{x}_{i_{n}}\). Then all the agents are updated with a descent step using the direction obtained in (1.3b) and the step size in (2.2).
We use three tolerance factors:
\(\cdot\)\(tolm\): If an agent's mass falls below this threshold, the agent is eliminated, and its remaining mass is transferred to the optimal agent at \(\mathbf{x}_{i_{n}}\).
\(\cdot\)\(tolmerge\): Agents that are sufficiently close to each other, i.e., their distance is below this threshold, are merged into a new agent. The masses of the merged agents are combined into the newly generated agent.
\(\cdot\)\(tolres\): The iterations terminate when the descent of the minimizer between two consecutive iterations falls below this threshold.
```
Set the parameters: \(tolm\), \(tolmerge\), \(tolres\), and \(nmax\)
Set the number of agents, \(N\), and the mass transfer parameter, \(q\geqslant 1\)
Randomly generate initial positions: \(\mathbf{x}_{1}^{0},\ldots,\mathbf{x}_{N}^{0}\)
Set initial mass for all agents: \(m_{1}^{0}=\cdots=m_{N}^{0}=\nicefrac{{1}}{{N}}\)
for \(n=0,1,2,\ldots,nmax\) do
    Merge agents if their distance \(<tolmerge\)
    Set the index of the optimal agent: \(i_{n}=\operatorname*{argmin}_{i}F(\mathbf{x}_{i}^{n})\)
    Set \(F_{\min}=F(\mathbf{x}_{i_{n}}^{n})\) and \(F_{\max}=\max_{i}F(\mathbf{x}_{i}^{n})\)
    for \(i=1,\ldots,N\) and \(i\neq i_{n}\) do
        if \(m_{i}^{n}<\nicefrac{{1}}{{N}}\cdot tolm\) then
            Set \(m_{i}^{n+1}=0\)
            Reduce the number of active agents: \(N\gets N-1\)
        else
            Set \(m_{i}^{n+1}=m_{i}^{n}-\eta_{i}^{n}m_{i}^{n}\) where \(\eta_{i}^{n}=\left(\frac{F(\mathbf{x}_{i}^{n})-F_{\min}^{n}}{F_{\max}^{n}-F_{\min}^{n}}\right)^{q}\)
        endif
    endfor
    Set \(m_{i_{n}}^{n+1}=m_{i_{n}}^{n}+\sum_{i\neq i_{n}}\eta_{i}^{n}m_{i}^{n}\)
    Set \(m_{+}=\max_{i}m_{i}^{n+1}\)
    for \(i=1,\ldots,N\) do
        Compute relative masses \(\widetilde{m}_{i}^{n+1}=\frac{m_{i}^{n+1}}{m_{+}}\)
        Compute a random descent direction: \(\mathbf{p}_{i}^{n}\) (using Algorithm 2.1)
        Compute the step size: \(h=h(\mathbf{x}_{i}^{n},\lambda\widetilde{m}_{i}^{n+1})\) (using Algorithm 2.2)
        Update position: \(\mathbf{x}_{i}^{n+1}=\mathbf{x}_{i}^{n}-h\mathbf{p}_{i}^{n}\)
    endfor
    if \(|\mathbf{x}_{i}^{n+1}-\mathbf{x}_{i}^{n}|\leqslant tolres\) then
        break
    endif
endfor
```
**Algorithm 2.3** Swarm-Based Random Descent Method
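For concreteness, the following is a minimal NumPy sketch of the loop in Algorithm 2.3. Since Algorithms 2.1 and 2.2 are specified elsewhere in the paper, the helpers `random_descent_direction` and `backtracking` below are simple stand-ins (assumptions): a gradient direction perturbed inside a cone that narrows with the relative mass, and an Armijo-type backtracking whose acceptance threshold is weighted by \(\lambda\widetilde{m}\). The sketch illustrates the structure of the method and does not reproduce the authors' implementation.

```python
import numpy as np

def random_descent_direction(grad, m_rel, rng):
    """Stand-in for Algorithm 2.1 (assumption): perturb the gradient inside a
    cone that narrows as the relative mass m_rel -> 1, so that the heaviest
    agent follows the exact gradient direction."""
    gnorm = np.linalg.norm(grad)
    if gnorm == 0.0 or m_rel >= 1.0:
        return grad.copy()
    noise = rng.standard_normal(grad.shape)
    noise -= (noise @ grad) / gnorm**2 * grad        # keep the part orthogonal to grad
    nnorm = np.linalg.norm(noise)
    if nnorm > 0.0:
        noise *= gnorm / nnorm                       # comparable magnitude to the gradient
    p = grad + (1.0 - m_rel) * noise
    return p * (gnorm / np.linalg.norm(p))           # rescale so that |p| = |grad|

def backtracking(F, x, p, grad, m_rel, lam=0.2, gamma=0.9, h0=1.0, max_shrinks=80):
    """Stand-in for Algorithm 2.2 (assumption): Armijo-type backtracking whose
    acceptance threshold is weighted by lam * m_rel."""
    h, fx, slope = h0, F(x), grad @ p
    for _ in range(max_shrinks):
        if F(x - h * p) <= fx - lam * m_rel * h * slope:
            break
        h *= gamma
    return h

def sbrd(F, gradF, x0, q=2, tolm=1e-4, tolmerge=1e-3, tolres=1e-4, nmax=200, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(x0, dtype=float).copy()           # (N, d) array of agent positions
    N0 = len(X)
    m = np.full(N0, 1.0 / N0)                        # equal initial masses
    best = X[int(np.argmin([F(x) for x in X]))].copy()
    for _ in range(nmax):
        # merge agents whose distance is below tolmerge (their masses are combined)
        keep = np.ones(len(X), dtype=bool)
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                if keep[i] and keep[j] and np.linalg.norm(X[i] - X[j]) < tolmerge:
                    m[i] += m[j]
                    keep[j] = False
        X, m = X[keep], m[keep]

        Fvals = np.array([F(x) for x in X])
        i_min = int(np.argmin(Fvals))
        Fmin, Fmax = Fvals[i_min], Fvals.max()
        # mass transfer towards the current minimizer; very light agents are removed
        transferred, keep = 0.0, np.ones(len(X), dtype=bool)
        for i in range(len(X)):
            if i == i_min:
                continue
            if m[i] < tolm / N0:
                transferred += m[i]
                keep[i] = False
            elif Fmax > Fmin:
                eta = ((Fvals[i] - Fmin) / (Fmax - Fmin)) ** q
                transferred += eta * m[i]
                m[i] -= eta * m[i]
        m[i_min] += transferred
        X, m = X[keep], m[keep]

        # randomized, mass-weighted descent step for every surviving agent
        m_rel = m / m.max()
        for i in range(len(X)):
            g = gradF(X[i])
            p = random_descent_direction(g, m_rel[i], rng)
            h = backtracking(F, X[i], p, g, m_rel[i])
            X[i] = X[i] - h * p

        new_best = X[int(np.argmin([F(x) for x in X]))].copy()
        done = np.linalg.norm(new_best - best) <= tolres
        best = new_best
        if done:
            break
    return best
```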
## 3. Convergence and error analysis
The study of convergence and error estimates for the SBRD method requires quantifying the behavior of \(F\). Here we emphasize that the required smoothness properties of \(F\) are only sought in the region explored by the SBRD iterations. We assume that there exists a _bounded_ region, \(\Omega\ni\mathbf{x}_{i}^{n}\) for all agents. Since the SBRD allows light agents to explore the ambient space with a large step size (starting with \(h_{0}\)), we do not have an a priori bound on \(\Omega\); in particular, the footprint of the SBRD crowd \(\operatorname*{conv}_{i}\{\mathbf{x}_{i}^{n}\}\) may expand well beyond its initial convex hull \(\operatorname*{conv}_{i}\{\mathbf{x}_{i}^{0}\}\). The expansion of the initial convex hull is an essential feature of the algorithm that allows the agents to find minima outside their initial range, as demonstrated in the numerical experiments with shifted initial data domains such as in Table 4.4.
We consider the class of loss functions, \(F\in C^{2}(\Omega)\), with Lipschitz bound
\[|\nabla F(\mathbf{x})-\nabla F(\mathbf{y})|\leqslant L|\mathbf{x}-\mathbf{y}|, \quad\forall\mathbf{x},\mathbf{y}\in\Omega. \tag{3.1}\]
### Convergence to a band of local minima
Our next proposition provides a precise quantitative description for the convergence of the SBRD method. The convergence is determined by the time series of SBRD minimizers, \(\{\mathbf{X}_{-}^{n}\}\),
\[\mathbf{X}_{-}^{n}=\mathbf{x}_{i_{n}}^{n},\qquad i_{n}:=\operatorname*{ argmin}_{i}F(\mathbf{x}_{i}^{n}). \tag{3.2a}\]
We shall also need the time series of its heaviest agents, \(j_{n}:=\operatorname*{argmax}_{i}m_{i}^{n}\); to this end, we let \(\mathbf{X}_{+}^{n}\) denote the _parent_ of the heaviest agent at \(t=t^{n+1}\)
\[\mathbf{X}_{+}^{n}=\mathbf{x}_{j_{n+1}}^{n},\qquad j_{n+1}:=\operatorname*{ argmax}_{i}m_{i}^{n+1}. \tag{3.2b}\]
The interplay between minimizers and the communication of masses leads to a gradual mass shift from higher ground to the minimizers. Eventually, the two sequences coincide when the SBRD minimizers gain enough mass to assume the role of heaviest agents. Finally, we introduce the scaling \(M=\max_{j}F(\mathbf{x}_{j}^{0})-F(\mathbf{x}^{*})\) where \(\mathbf{x}^{*}\) is the global minimum. Since \(F(\mathbf{x}_{i}^{n})\) are decreasing, we conclude that the SBRD iterations remain within that range, namely
\[\forall n,j:\quad F(\mathbf{x}_{i}^{n})-F(\mathbf{x}_{j}^{n})\leqslant M, \qquad M:=\max F(\mathbf{x}_{i}^{0})-F(\mathbf{x}^{*}) \tag{3.3}\]
**Proposition 3.1**.: _Consider the SBRD iterations (1.1) with random-based search direction, \(\mathbf{p}_{i}^{n}\), determined by Algorithm 2.1, and with a step-size (2.2), \(h_{i}^{n}=h\big{(}\mathbf{x}_{i}^{n},\lambda\tilde{m}_{i}^{n+1}\big{)}\), determined by backtracking line search of Algorithm 2.2. Let \(\{\mathbf{X}_{-}^{n}\}_{n\geqslant 0}\) and \(\{\mathbf{X}_{+}^{n}\}_{n\geqslant 0}\) denote the time sequence of SBRD minimizers and, respectively, (parent of) heaviest agents outlined in (3.2). Then, there exists a constant, \(C=C(\gamma,L,M,\lambda)\) given in (3.10) below, such that we have summability of gradients_
\[\sum_{n=0}^{\infty}\delta_{n}^{2}\cdot\min\big{\{}1,\delta_{n}^{2q}\big{\}}<C \min_{i}F(\mathbf{x}_{i}^{0}),\qquad\delta_{n}:=\min\{|\nabla F(\mathbf{X}_{ +}^{n})|,|\nabla F(\mathbf{X}_{-}^{n})|\}. \tag{3.4}\]
_Here, \(q\geqslant 1\) is the mass transfer parameter in (1.1a)._
Proof.: Our purpose is to find a lower bound on the relative masses, \(\widetilde{m}_{i}^{n+1}=\dfrac{m_{i}^{n+1}}{m_{j_{n+1}}^{n+1}}\), which will dictate the descent property of the different agents according to (2.3). Observe that for the heaviest agent, \(i=j_{n+1}\), (2.3) with \(\widetilde{m}_{j_{n+1}}^{n+1}=1\) implies
\[F(\mathbf{x}_{j_{n+1}}^{n+1})\leqslant F(\mathbf{X}_{+}^{n})-\dfrac{\gamma}{2 L}\lambda|\nabla F(\mathbf{X}_{+}^{n})|^{2},\qquad\mathbf{X}_{+}^{n}=\mathbf{x}_{j_{n+1 }}^{n}. \tag{3.5}\]
We distinguish between two scenarios. The first is a canonical scenario in which the minimizing agent at \(t=t^{n}\) coincides with the heaviest agent at time \(t^{n+1}\), namely, when \(i_{n}=j_{n+1}\), or \(\mathbf{X}_{-}^{n}=\mathbf{X}_{+}^{n}\). Then (3.5) implies
\[F(\mathbf{X}_{-}^{n+1})\leqslant F(\mathbf{x}_{j_{n+1}}^{n+1})\leqslant F( \mathbf{X}_{-}^{n})-\dfrac{\gamma}{2L}\lambda|\nabla F(\mathbf{X}_{-}^{n})|^{2 },\qquad\mathbf{X}_{-}^{n}=\mathbf{X}_{+}^{n}. \tag{3.6}\]
The inequality on the left follows since \(\mathbf{X}_{-}^{n+1}=\mathbf{x}_{i_{n+1}}^{n+1}\) is the global minimizer at \(t^{n+1}\).
Next, we consider the second scenario \(i_{n}\neq j_{n+1}\), that is -- when the mass of the minimizer \(m_{i_{n}}^{n+1}\) did not yet 'catch-up' the position as the heaviest agent so that \(\widetilde{m}_{i_{n}}^{n+1}=\dfrac{m_{i_{n}}^{n+1}}{m_{j_{n+1}}^{n+1}}<1\).
Yet, we claim that the descent property associated with the relative mass \(\widetilde{m}_{i_{n}}^{n+1}\) cannot be arbitrarily small. We consider two sub-cases, depending on the size of \(F(\mathbf{X}_{+}^{n})-F(\mathbf{X}_{-}^{n})\).
Case (i). Assume \(F(\mathbf{X}_{+}^{n})-F(\mathbf{X}_{-}^{n})\leqslant\frac{\gamma}{4L}\lambda| \nabla F(\mathbf{X}_{+}^{n})|^{2}\). Appealing to (3.5) we find
\[F(\mathbf{X}_{-}^{n+1})\leqslant F(\mathbf{x}_{j_{n+1}}^{n+1})\leqslant F( \mathbf{X}_{+}^{n})-\frac{\gamma}{2L}\lambda|\nabla F(\mathbf{X}_{+}^{n})|^{2} \leqslant F(\mathbf{X}_{-}^{n})-\frac{\gamma}{4L}\lambda|\nabla F(\mathbf{X}_ {+}^{n})|^{2}. \tag{3.7}\]
The inequality on the left follows since \(\mathbf{X}_{-}^{n+1}\) is the global minimizer at \(t^{n+1}\); the middle inequality quotes (3.5) and the last inequality follows from our assumption.
Case (ii). Finally, we remain with the case
\[F(\mathbf{X}_{+}^{n})-F(\mathbf{X}_{-}^{n})\geqslant\frac{\gamma}{4L}\lambda| \nabla F(\mathbf{X}_{+}^{n})|^{2}.\]
We claim that in this case,
\[\widetilde{m}_{i_{n}}^{n+1}>\frac{1}{M^{2}}\big{(}F(\mathbf{X}_{+}^{n})-F( \mathbf{X}_{-}^{n})\big{)}^{2}\geqslant\Big{(}\frac{\gamma\lambda}{4ML}\Big{)} ^{q}|\nabla F(\mathbf{X}_{+}^{n})|^{2q}. \tag{3.8}\]
Indeed, since agent \(j_{n+1}\) is not the minimizer at time \(t=t^{n}\), namely \(j_{n+1}\neq i_{n}\), then it had to shed a portion of its mass, \(m_{j_{n+1}}^{n}-\eta_{+}^{n}m_{j_{n+1}}^{n}\to m_{j_{n+1}}^{n+1}\), which was transferred to the minimizer \(m_{i_{n}}^{n+1}\gets m_{i_{n}}^{n}+\ldots+\eta_{+}^{n}m_{j_{n+1}}^{n}\). Thus, the loss of mass by heavy agent
\[m_{j_{n+1}}^{n+1}=m_{j_{n+1}}^{n}-\eta_{+}^{n}m_{j_{n+1}}^{n},\quad\eta_{+}^{n }=\Big{(}\frac{F(\mathbf{x}_{j_{n+1}}^{n})-F(\mathbf{x}_{i_{n}}^{n})}{\max_{j} F(\mathbf{x}_{j}^{n})-F(\mathbf{x}_{i_{n}}^{n})}\Big{)}^{q}\geqslant\frac{1}{M^{q}} \big{(}F(\mathbf{X}_{+}^{n})-F(\mathbf{X}_{-}^{n})\big{)}^{q}.\]
was _gained_ by the minimizer agent, \(i=i_{n}\). Therefore, the relative mass of that minimizer is at least as large as claimed in (3.8)
\[\widetilde{m}_{i_{n}}^{n+1}=\frac{m_{i_{n}}^{n+1}}{m_{j_{n+1}}^{n+1}}>\frac{ \eta_{+}^{n}}{1-\eta_{+}^{n}}\geqslant\frac{1}{M^{q}}\big{(}F(\mathbf{X}_{+}^ {n})-F(\mathbf{X}_{-}^{n})\big{)}^{q}\geqslant\Big{(}\frac{\gamma\lambda}{4ML} \Big{)}^{q}|\nabla F(\mathbf{X}_{+}^{n})|^{2q}.\]
The descent property (2.3) together with (3.8) imply
\[\begin{split} F(\mathbf{X}_{-}^{n+1})&\leqslant F( \mathbf{x}_{i_{n}}^{n+1})\leqslant F(\mathbf{x}_{i_{n}}^{n})-\frac{\gamma}{2L }\lambda\widetilde{m}_{i_{n}}^{n+1}|\nabla F(\mathbf{x}_{i_{n}}^{n})|^{2}\\ &\leqslant F(\mathbf{X}_{-}^{n})-\frac{\gamma\lambda}{2L}\Big{(} \frac{\gamma\lambda}{4ML}\Big{)}^{q}|\nabla F(\mathbf{X}_{+}^{n})|^{2q}\cdot| \nabla F(\mathbf{X}_{-}^{n})|^{2}.\end{split} \tag{3.9}\]
Combining (3.6), (3.7) and (3.9) we find
\[\begin{split} F(\mathbf{X}_{-}^{n+1})&\leqslant F( \mathbf{X}_{-}^{n})-\frac{1}{C}\min\big{\{}|\nabla F(\mathbf{X}_{-}^{n})|^{2},| \nabla F(\mathbf{X}_{+}^{n})|^{2},|\nabla F(\mathbf{X}_{+}^{n})|^{2q}\cdot| \nabla F(\mathbf{X}_{-}^{n})|^{2}\big{\}}\\ &\leqslant F(\mathbf{X}_{-}^{n})-\frac{1}{C}\delta_{n}^{2}\min \big{\{}1,\delta_{n}^{2q}\big{\}},\qquad C=\frac{4L}{\gamma\lambda}\cdot\max \Big{\{}2,\Big{(}\frac{4ML}{\gamma\lambda}\Big{)}^{q}\Big{\}}.\end{split} \tag{3.10}\]
The desired bound (3.4) follows by a telescoping sum. \(\square\)
The summability bound (3.4) implies that eventually, for large enough \(n>N_{0}\), the gradients at the minimizers and (parent of) heaviest SBRD agents satisfy \(\delta_{n}<1\), and hence
\[\sum_{n>N_{0}}^{\infty}\min\{|\nabla F(\mathbf{X}_{+}^{n})|,|\nabla F(\mathbf{X }_{-}^{n})|\}^{2(q+1)}\leqslant C\min_{i}F(\mathbf{x}_{i}^{0}).\]
It follows that there exist sub-sequences, \(\mathbf{X}^{n_{\alpha}}\in\{\mathbf{X}_{+}^{n}\}_{n\geqslant N_{0}}\cup\{ \mathbf{X}_{-}^{n}\}_{n\geqslant N_{0}}\), satisfying the Palais-Smale condition, \(F(\mathbf{X}^{n_{\alpha}})\leqslant\max_{i}F(\mathbf{x}_{i}^{0})\) while \(\nabla F(\mathbf{X}^{n_{\alpha}})\stackrel{{\alpha\to\infty}}{{ \longrightarrow}}0\). Arguing along [5, Theorem 3.3] we summarize by stating the following.
**Theorem 3.2**.: _Let \(\{\mathbf{X}^{n}\}_{n\geqslant N_{0}}:=\{\mathbf{X}^{n}_{+}\}_{n\geqslant N_{0}} \cup\{\mathbf{X}^{n}_{-}\}_{n\geqslant N_{0}}\) denote the combined time sequence of SBRD minimizers/heaviest agents, (3.2). Then there exist one or more sub-sequences, \(\{\mathbf{X}^{n_{\alpha}},\ \alpha=1,2,\dots,\}\), that converge to a band of local minima with equal heights,_
\[\mathbf{X}^{n_{\alpha}}\stackrel{{\alpha\to\infty}}{{ \longrightarrow}}\mathbf{X}^{*}_{\alpha}\ \ \text{such that}\ \nabla F(\mathbf{X}^{*}_{\alpha})=0,\ \text{and}\ F(\mathbf{X}^{*}_{\alpha})=F(\mathbf{X}^{*}_{\beta}) \tag{3.11}\]
_In particular, if \(F\) admits only distinct local minima in \(\Omega\) (i.e., different local minima have different heights), then the whole sequence \(\mathbf{X}^{n}\) converges to a minimum._
Moreover, for analytic \(F\)'s, we can quantify the convergence _rate_ (3.11). To this end we use Lojasiewicz inequality, [3, 4], which guarantees that each critical point of analytic \(F\) has "flatness" of some fixed order \(\beta\in(1,2]\) in the sense that there exists a neighborhood \(\mathcal{N}_{*}\ni\mathbf{x}^{*}\) surrounding \(\mathbf{x}^{*}\), an exponent \(\beta\) and a constant \(\mu>0\) such that
\[\mu|F(\mathbf{x})-F(\mathbf{x}^{*})|\leqslant|\nabla F(\mathbf{x})|^{\beta}, \qquad\forall\mathbf{x}\in\mathcal{N}_{*}. \tag{3.12}\]
**Theorem 3.3**.: _Consider an analytic loss function \(F\) with minimal flatness \(\beta\in(1,2]\), such that the Lip bound (3.1) holds. Let \(\{\mathbf{X}^{n}_{-}\}_{n\geqslant 0}\) denote the time sequence of SBRD minimizers, (1.1),(2.2). Then, there exists a constant, \(C=C(\gamma,\lambda,\mu)\), such that_
\[F(\mathbf{X}^{n_{\alpha}}_{-})-F(\mathbf{X}^{*}_{\alpha})\lesssim\Big{(}\frac{ C}{n_{\alpha}}\Big{)}^{\beta^{\prime}},\qquad\beta^{\prime}=\frac{\beta}{2(q+1)- \beta},\ \ \beta\in(1,2). \tag{3.13}\]
Observe that as the 'flatness' increases, \(\beta\) decreases, which slows the polynomial decay in (3.13). A more careful analysis which we omit3, allows one to replace the factor \((q+1)\) by \(q\); in that case, (3.13) with \(q=1,\beta=2\) implies exponential convergence.
Footnote 3: Requires to eliminate case (ii) in the proof of proposition 3.1; consult [5]
Proof.: We summarize the different statements of descent properties in (3.6), (3.7) and (3.9), writing
\[F(\mathbf{X}^{n+1}_{-})\leqslant F(\mathbf{X}^{n}_{-})-\frac{1}{C}|\nabla F( \mathbf{X}^{n}_{\pm})|^{2(q+1)},\qquad n>N_{0},\]
where \(\mathbf{X}^{n}_{\pm}=\operatorname{argmin}_{\mathbf{X}^{n}_{\pm}}\{|\nabla F( \mathbf{X}^{n}_{+})|,|\nabla F(\mathbf{X}^{n}_{-})|\}\). We focus on the converging sub-sequence \(\{\mathbf{X}^{n_{\alpha}}_{-}\}\),
\[F(\mathbf{X}^{n_{\alpha}+1}_{-})-F(\mathbf{X}^{*}_{\alpha})\leqslant F( \mathbf{X}^{n_{\alpha}}_{-})-F(\mathbf{X}^{*}_{\alpha})-\frac{1}{C}|\nabla F( \mathbf{X}^{n_{\alpha}}_{\pm})|^{2(q+1)}. \tag{3.14}\]
Using Lojasiewicz bound (3.12), and the fact that \(F(\mathbf{X}^{n}_{+})\geqslant F(\mathbf{X}^{n}_{-})\), we find
\[|\nabla F(\mathbf{X}^{n_{\alpha}}_{\pm})|^{\beta}\geqslant\mu|F(\mathbf{X}^{n _{\alpha}}_{\pm})-F(\mathbf{X}^{*}_{\alpha})|\geqslant\mu|F(\mathbf{X}^{n_{ \alpha}}_{-})-F(\mathbf{X}^{*}_{\alpha})|. \tag{3.15}\]
Combining (3.14), (3.15), we conclude that the error, \(E_{n_{\alpha}}:=F(\mathbf{X}^{n_{\alpha}}_{-})-F(\mathbf{X}^{*}_{\alpha})\), satisfies
\[E_{n_{\alpha}+1}\leqslant E_{n_{\alpha}}-\frac{1}{C}(\mu E_{n_{\alpha}})^{\frac {2(q+1)}{\beta}},\qquad\mathbf{X}^{n_{\alpha}}_{-}\in\mathcal{N}_{\alpha}.\]
The solution of this Riccati inequality yields
\[F(\mathbf{X}^{n_{\alpha}}_{-})-F(\mathbf{X}^{*}_{\alpha})\leqslant\bigg{\{}| \min_{i}F(\mathbf{x}^{0}_{i})-F(\mathbf{X}^{*}_{\alpha})|^{-\nicefrac{{1}}{{ \beta^{\prime}}}}+\frac{1}{C}\mu^{\frac{2(q+1)}{\beta}}n_{\alpha}\bigg{\}}^{- \beta^{\prime}},\quad\beta^{\prime}=\frac{\beta}{2(q+1)-\beta}.\]
and (3.13) follows. \(\square\)
## 4. Numerical results
Initially, the agents are placed at random positions, \(\{\mathbf{x}_{i}^{0}\}\), with equi-distributed masses \(\{m_{i}^{0}=\nicefrac{{1}}{{N}}\}\). At each iteration, masses are transferred from higher ground to the lowest ground. Since we implement a "survival of the fittest" protocol in which the agent with the worst (=highest) configuration is eliminated, the swarm size decreases, one agent at a time, until only the heaviest agent remains. Our choice for the time-stepping protocol, \(h\big{(}\mathbf{x},\lambda\widetilde{m}\big{)}\), is the _backtracking line search_ outlined in §2.2, weighted by the relative masses, \(\widetilde{m}_{i}^{n+1}\). The backtracking enforces a descent property for the SBRD iterations \(\mathbf{x}_{i}^{n}\), and the parameter \(\lambda\in(0,1)\) dictates how strongly the descent condition (1.2) must be fulfilled.
We illustrate the performance of the multi-dimensional SBRD algorithm, (1.1),(1.3), in several benchmark test cases [2]. The results are based on \(k=1000\) runs with uniformly generated initial data in a hypercube. The backtracking parameters in Algorithm 2.2 are \(\lambda=0.2\), \(\gamma=0.9\), and \(h_{0}=1\). The parameters in Algorithm 2.3 are \(tolm=10^{-4}\), \(tolmerge=10^{-3}\), \(tolres=10^{-4}\), and \(nmax=200\).
We use the _success rate_ among the \(k\) independent simulations to evaluate the solution's quality. We consider a simulation to be successful if \(\mathbf{x}_{SOL}\) is within the \(d\)-dimensional ball of the global minimum: \(|\mathbf{x}^{*}-\mathbf{x}_{SOL}|\leqslant 0.1\). This condition ensures that the approximate solution lies in the basin of attraction of the global minimizer. In section 4.1 we fix the mass transfer parameter \(q=2\); the effect of increasing \(q=4,8\) is discussed in section 4.2.
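The success criterion translates directly into code; a small helper along these lines (an illustrative sketch, not the authors' script) could be used to aggregate the \(k\) independent runs:

```python
import numpy as np

def success_rate(solutions, x_star, radius=0.1):
    """Fraction of runs whose returned minimizer lies within the ball of the
    given radius around the known global minimizer x_star."""
    sols = np.atleast_2d(np.asarray(solutions, dtype=float))
    return float(np.mean(np.linalg.norm(sols - np.asarray(x_star), axis=1) <= radius))

# Example with made-up solutions in d = 2 and the global minimizer at the origin:
print(success_rate([[0.02, -0.05], [1.3, 0.4], [0.0, 0.09]], np.zeros(2)))  # 2/3
```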
### Examples of SBRD with mass transfer parameter \(q=2\)
Extensive comparisons of the gradient-based deterministic SBGD were performed in [5]. In this paper, we focus on the impact of randomization on success rates in comparison to SBGD. We consider four benchmarks using the Ackley, Rastrigin, Rosenbrock, Styblinski-Tang objective functions in \(d\)-dimensions.
The **Ackley** function
\[F_{\text{Ackley}}(\mathbf{x})=-20\exp\Big{\{}-\frac{0.2}{\sqrt{d}}\Big{\{} \sum_{i=1}^{d}x_{i}^{2}\Big{\}}^{\nicefrac{{1}}{{2}}}\Big{\}}-\exp\Big{\{} \frac{1}{d}\sum_{i=1}^{d}\cos(2\pi x_{i})\Big{\}}+20+e, \tag{4.1}\]
and the **Rastrigin** function
\[F_{\text{Rastrigin}}(\mathbf{x})=10d+\sum_{i=1}^{d}\Big{\{}x_{i}^{2}-10\cos(2\pi x_{i})\Big{\}}. \tag{4.2}\]
have their global minimum at the origin, \(\mathbf{x}^{*}=0\). The **Rosenbrock** function
\[F_{\text{Rsnbrk}}(\mathbf{x})=\sum_{i=1}^{d-1}\big{(}100(x_{i+1}-x_{i}^{2})^{ 2}+(1-x_{i})^{2}\big{)}, \tag{4.3}\]
has its global minimum at \(\mathbf{x}^{*}=(1,\ldots,1)\). And finally, the **Styblinski-Tang** function
\[F_{\text{ST}}(\mathbf{x})=\frac{1}{2}\sum_{i=1}^{d}(x_{i}^{4}-16x_{i}^{2}+5x_{ i}), \tag{4.4}\]
has its global minimum at \(\mathbf{x}^{*}=(-2.903534,\ldots,-2.903534)\). The two-dimensional landscapes of these benchmark examples are shown in Figure 4.1.
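The four benchmark objectives can be written down directly; the Python definitions below (a sketch following (4.1)–(4.4)) evaluate each function at its known global minimizer and can be fed to an SBRD implementation such as the sketch above:

```python
import numpy as np

def ackley(x):                                       # eq. (4.1), global minimum at the origin
    x = np.asarray(x, dtype=float)
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2)) / np.sqrt(d))
            - np.exp(np.mean(np.cos(2.0 * np.pi * x))) + 20.0 + np.e)

def rastrigin(x):                                    # eq. (4.2), global minimum at the origin
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def rosenbrock(x):                                   # eq. (4.3), global minimum at (1, ..., 1)
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def styblinski_tang(x):                              # eq. (4.4), global minimum at (-2.903534, ...)
    x = np.asarray(x, dtype=float)
    return 0.5 * np.sum(x**4 - 16.0 * x**2 + 5.0 * x)

d = 4
print(ackley(np.zeros(d)), rastrigin(np.zeros(d)), rosenbrock(np.ones(d)))  # all ~0
print(styblinski_tang(np.full(d, -2.903534)))        # roughly -39.166 per dimension
```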
Tables 4.1, 4.2 and 4.3 show the advantage of SBRD over SBGD for Rastrigin, Rosenbrock and Styblinski-Tang functions. We bold-face the success rate of SBRD in the tables when the advantage of SBRD over SBGD is at least 1%. Observe that the advantage of randomization is only relevant in higher dimensions where SBGD has a very low success rate. We recall that the same improved success rate of SBRD over SBGD was already recorded for the Ackley test function in Table 1.1. This observation remains valid when the range of initial agents for Ackley test function lies outside the neighborhood of its global minimum; this is documented in Table 4.4.
Figure 4.1. Two-dimensional landscapes for the test functions Ackley (4.1), Rastrigin (4.2), Rosenbrock (4.3), and Styblinski-Tang (4.4) with a contour plot on the bottom and a red star indicating the global minimum.
Figures 4.2 and 4.3 provide additional information about the 'inner working' of the SBRD dynamics. Figure 4.2 shows how the SBRD toggles between minimizers and heaviest agents: the loss function decays rapidly for the minimizers. Heavy agents, however, may arise due to merging multiple agents near local minima. The mass of such agents is then slowly transferred to the minimizers with a better minimum. Figure 4.3 demonstrates the difference between the randomized direction and the gradient direction.
### SBRD with higher order mass transition \(q>2\)
In this section we revisit the benchmark examples with different mass transfer parameter \(q\),
\[\left\{\begin{array}{rl}m_{i}^{n+1}&=m_{i}^{n}-\eta_{i}^{n}m_{i}^{n},\qquad \quad i\neq i_{n}\\ m_{i_{n}}^{n+1}&=m_{i_{n}}^{n}+\sum_{i\neq i_{n}}\eta_{i}^{n}m_{i}^{n},\qquad \qquad\eta_{i}^{n}:=\Big{(}\frac{F(\mathbf{x}_{i}^{n})-F_{\min}^{n}}{F_{\max}^ {n}-F_{\min}^{n}}\Big{)}^{q}\in(0,1],\end{array}\right. \tag{4.5}\]
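The transferred fraction \(\eta_{i}^{n}\) depends sharply on \(q\); a quick numerical check (with assumed, illustrative relative heights, not values from the experiments) shows how a larger \(q\) suppresses the amount of mass exchanged per iteration:

```python
import numpy as np

# Illustrative relative heights (F_i - F_min)/(F_max - F_min); assumed values.
rel_height = np.array([0.1, 0.3, 0.5, 0.9])
for q in (2, 4, 8):
    print(q, np.round(rel_height ** q, 5))
# Raising q suppresses the transferred fraction eta for all but the worst-placed
# agents, so masses change more slowly and light agents keep exploring longer.
```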
| \(d\) | SBRD, \(N=10\) | SBGD, \(N=10\) | SBRD, \(N=25\) | SBGD, \(N=25\) | SBRD, \(N=50\) | SBGD, \(N=50\) | SBRD, \(N=100\) | SBGD, \(N=100\) |
|---|---|---|---|---|---|---|---|---|
| 2 | **12.0%** | 10.3% | **52.1%** | 18.7% | **92.7%** | 39.4% | **99.2%** | 56.7% |
| 3 | 2.4% | 2.2% | 8.1% | 9.6% | 27.2% | 33.9% | **82.6%** | 71.0% |
| 4 | 2.3% | 2.1% | 3.5% | 3.0% | **9.4%** | 3.9% | **27.0%** | 6.5% |
| 5 | 1.1% | 0.8% | 1.3% | 1.6% | **5.9%** | 3.2% | **10.2%** | 6.1% |
| 6 | 0.5% | 0.6% | 1.1% | 1.2% | 1.6% | 1.7% | **5.1%** | 2.6% |

Table 4.2. Success rates for the optimization of the Rosenbrock function (4.3) with initial agents \(\mathbf{x}_{i}^{0}\in[-2.048,2.048]^{d}\).
| \(d\) | SBRD, \(N=10\) | SBGD, \(N=10\) | SBRD, \(N=25\) | SBGD, \(N=25\) | SBRD, \(N=50\) | SBGD, \(N=50\) | SBRD, \(N=100\) | SBGD, \(N=100\) |
|---|---|---|---|---|---|---|---|---|
| 2 | **31.9%** | 28.0% | **96.8%** | 67.8% | **100.0%** | 95.3% | 100.0% | 100.0% |
| 3 | 5.2% | 5.6% | **17.6%** | 13.6% | **57.9%** | 28.6% | **92.4%** | 52.0% |
| 4 | 0.3% | 1.0% | 2.2% | 3.9% | **7.2%** | 5.7% | **17.9%** | 11.4% |
| 5 | 0.1% | 0.0% | 0.2% | 0.4% | 0.8% | 0.4% | **3.2%** | 1.2% |
| 6 | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.1% | 0.2% | 0.4% |

Table 4.1. Success rates of SBRD vs. SBGD for global optimization of the \(d\)-dimensional Rastrigin function (4.2).
We find that increasing \(q\) in the mass transfer protocol (4.5) improves the success rate of SBRD. Previously, we found \(q=2\) to be an optimal choice for SBGD. However, as shown in Tables 4.5 and 4.6 for the Ackley test function, higher values \(q=4\) and, respectively, \(q=8\) have a dramatic effect in improving the success rate of SBRD over SBGD. Randomization favors a higher transfer parameter \(q\). Indeed, increasing \(q\) enforces smaller amounts of mass transfer in (4.5), so that SBRD becomes more 'egalitarian': both the heavier leading agents and the lighter exploring agents are allowed more time (iterations) to settle or to explore, and hence the rate of change of the swarm's mass configuration becomes smaller. In particular, this allows a more effective exploration by the random-based descent, improving the overall performance of SBRD. This is demonstrated in Table 4.7.
| \(d\) | SBRD, \(N=10\) | SBGD, \(N=10\) | SBRD, \(N=25\) | SBGD, \(N=25\) | SBRD, \(N=50\) | SBGD, \(N=50\) | SBRD, \(N=100\) | SBGD, \(N=100\) |
|---|---|---|---|---|---|---|---|---|
| 12 | 2.8% | 3.7% | 39.3% | 60.1% | 74.8% | 96.2% | 94.5% | 99.9% |
| 14 | 0.3% | 0.0% | **19.6%** | 0.9% | **51.3%** | 2.0% | **81.3%** | 9.9% |
| 16 | 0.0% | 0.0% | **2.7%** | 0.0% | **21.9%** | 0.0% | **47.4%** | 0.0% |
| 18 | 0.0% | 0.0% | 0.0% | 0.0% | 0.7% | 0.0% | **7.3%** | 0.0% |

Table 4.4. Same as Table 1.1 except the range of initial agents with \(\mathbf{x}_{i}^{0}\in[-3,-1]^{d}\) does not contain the global minimum of the Ackley function.
| \(d\) | SBRD, \(N=10\) | SBGD, \(N=10\) | SBRD, \(N=25\) | SBGD, \(N=25\) | SBRD, \(N=50\) | SBGD, \(N=50\) | SBRD, \(N=100\) | SBGD, \(N=100\) |
|---|---|---|---|---|---|---|---|---|
| 2 | **97.0%** | 92.8% | 100.0% | 99.9% | 100.0% | 100.0% | 100.0% | 100.0% |
| 4 | 29.5% | 35.3% | **83.7%** | 79.0% | **99.2%** | 97.4% | 100.0% | 99.9% |
| 6 | 7.8% | 10.4% | 28.5% | 32.5% | 54.5% | 55.4% | **86.3%** | 83.2% |
| 8 | 2.2% | 2.5% | 7.6% | 9.7% | 13.7% | 18.7% | **36.7%** | 35.4% |
| 10 | 0.4% | 0.6% | 2.6% | 3.2% | 5.9% | 6.0% | 10.2% | 12.5% |
| 12 | 0.1% | 0.2% | 0.5% | 0.8% | 1.3% | 2.2% | 2.9% | 3.8% |

Table 4.3. Success rates for the optimization of the Styblinski-Tang function (4.4) with initial agents \(\mathbf{x}_{i}^{0}\in[-3,3]^{d}\). |
2307.11148 | Flavour hierarchies from emergent fundamental partial compositeness | Composite Higgs extensions of the Standard Model provide an explanation for
the large hierarchies between the Yukawa couplings. We study their realisation
in the context of fundamental partial compositeness where the Standard Model
fermions mix linearly with bound states of the new sector, consisting of a
fermion and a scalar. The properties of this composite are unravelled with the
functional renormalisation group approach using dynamically emergent
composites. Specifically, we extract the scaling of correlation functions and
provide indicative estimates for the minimal incarnation of the theory. | Florian Goertz, Álvaro Pastor-Gutiérrez, Jan M. Pawlowski | 2023-07-20T18:00:01Z | http://arxiv.org/abs/2307.11148v1 | # Flavour hierarchies from emergent fundamental partial compositeness
###### Abstract
Composite Higgs extensions of the Standard Model provide an explanation for the large hierarchies between the Yukawa couplings. We study their realisation in the context of fundamental partial compositeness where the Standard Model fermions mix linearly with bound states of the new sector, consisting of a fermion and a scalar. The properties of this composite are unravelled with the functional renormalisation group approach using dynamically emergent composites. Specifically, we extract the scaling of correlation functions and provide indicative estimates for the minimal incarnation of the theory.
## I Introduction
Understanding the hierarchical structure of fermion masses and mixings is one of the major open questions in fundamental physics. The concept of partial compositeness (PC) for fermions, first proposed in [1] and put forward in [2; 3; 4; 5] in extra-dimensional duals of composite Higgs (CH) [6; 7; 8] models, offers a promising means to address this question. In this approach, the fermion mass terms of the Standard Model (SM) are generated from linear mixings of SM-like fermions of each flavour with composite fermionic operators \(\mathcal{O}_{B}\), containing fundamental fields of a new sector that are bound together by a novel confining interaction. Below the condensation scale \(\Lambda_{c}\), these terms lead to the light fermion mass eigenstates being a superposition of elementary SM-like fermions and composite resonances excited by the operators \(\mathcal{O}_{B}\) in the infrared (which explains the denotation 'partial compositeness'). The latter resonances provide the connection to the composite Higgs and thus to electroweak symmetry breaking (EWSB).
Small differences in the scaling dimensions of the composite operators translate to exponentially large differences in the strengths of the linear mixings at low energies due to the renormalisation-group evolution from a large UV flavour scale \(\Lambda_{\rm UV}\), where the couplings of the SM fermions with the strongly coupled sector are generated, down to the condensation scale \(\Lambda_{c}\). Assuming an almost conformal behaviour between those scales, the linear-mixing coefficient will scale as \(\sim(\Lambda_{c}/\Lambda_{\rm UV})^{d-5/2}\), with the dimension \(d=[\mathcal{O}_{B}]\) of the composite operator, see e.g.[9]. After integrating out the heavy states, these hierarchically different couplings to the bound states of the new strong sector lead to hierarchical mass eigenvalues of quarks and leptons.
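As a rough numerical illustration of this scaling (all scales and operator dimensions below are hypothetical, chosen only to exhibit the mechanism), one can evaluate the suppression factor for a few nearby scaling dimensions:

```python
Lambda_UV, Lambda_c = 1e15, 1e4            # hypothetical flavour and condensation scales (GeV)
for d_op in (2.6, 3.0, 3.5):               # hypothetical scaling dimensions of the composite operator
    suppression = (Lambda_c / Lambda_UV) ** (d_op - 2.5)
    print(d_op, f"{suppression:.2e}")
# Order-one differences in the scaling dimension translate into many orders of
# magnitude in the linear-mixing strength at the condensation scale.
```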
While the initial focus in the literature was on effective low-energy descriptions of PC (see [9; 10; 11] for reviews), more recently UV-complete realisations have been explored [12; 13; 14; 15; 16; 17; 18; 19; 20]. These consider the fundamental degrees of freedom and the dynamics that lead to the bound states that mix linearly with the elementary SM-like fermions. Here, an obvious approach is to assume that the composite fermions are composites of three fundamental fermionic degrees of freedom [12; 13; 14; 16]. Such constructions, however, face severe challenges since the scaling dimension of the composite operators needs to deviate very significantly from the canonical value of \([\mathcal{O}_{B}]_{0,\,c}=3[\mathcal{F}]_{c}=9/2\). The latter would lead to fermion masses that are too small. The linear mixings are suppressed by \([\mathcal{O}_{B}]_{0,\,c}+3/2-4=2\) powers of the flavour scale \(\Lambda_{\rm UV}\) where they emerge as higher dimensional operators involving fundamental fermions (flavour bounds indicate \(\Lambda_{\rm UV}\gg\Lambda_{c}\)). This issue is particularly severe in the case of the large top-quark mass.
In fact, lattice results suggest that large deviations from the canonical scaling dimension (i.e., large anomalous dimensions) are not possible for the straightforward realisation with three-fermion bound states [21; 22; 23; 24]. Thus, it is not obvious whether this completion improves the situation compared to the technicolour (TC) way of generating fermion masses via bilinear couplings to the EWSB condensate - where in general the masses are also suppressed too much (unless the scaling dimension of the Higgs-like bound state becomes as small as to re-introduce UV instabilities), see e.g. [9]. In consequence, finding alternative mechanisms of realising PC becomes a priority.
One of these alternatives is based on the assumption that the fermionic composites are not formed by three elementary fermions but by an elementary fermion \(\mathcal{F}\) and a scalar \(\mathcal{S}\), see [17; 18; 19]. These scenarios have been dubbed "fundamental partial compositeness" (FPC), and rely on renormalisable Yukawa-like couplings of the SM-like fermions to \(\mathcal{S}\) and \(\mathcal{F}\). The latter two form a bound state \(\mathcal{B}\sim\mathcal{S}\mathcal{F}\) that mixes with the SM fermions in the infrared (IR). Here, \(\mathcal{B}\) refers to the lightest resonance in a tower of states excited by \(\mathcal{O}_{B}\)[20].
The inclusion of scalars and hence the resulting canonical dimension of \([\mathcal{O}_{B}]_{0,\,c}=[\mathcal{F}]_{c}+[\mathcal{S}]_{c}=5/2\) carry the crucial advantage that the linear mixings are not necessarily suppressed. Thus, the hope is to obtain the heavy top quark mass, while the light fermion masses are still set via natural input parameters. The presence of elementary scalars may reintroduce a hierarchy problem, but the approach could be seen as an intermediate step |
2305.07153 | Towards best practices in AGI safety and governance: A survey of expert
opinion | A number of leading AI companies, including OpenAI, Google DeepMind, and
Anthropic, have the stated goal of building artificial general intelligence
(AGI) - AI systems that achieve or exceed human performance across a wide range
of cognitive tasks. In pursuing this goal, they may develop and deploy AI
systems that pose particularly significant risks. While they have already taken
some measures to mitigate these risks, best practices have not yet emerged. To
support the identification of best practices, we sent a survey to 92 leading
experts from AGI labs, academia, and civil society and received 51 responses.
Participants were asked how much they agreed with 50 statements about what AGI
labs should do. Our main finding is that participants, on average, agreed with
all of them. Many statements received extremely high levels of agreement. For
example, 98% of respondents somewhat or strongly agreed that AGI labs should
conduct pre-deployment risk assessments, dangerous capabilities evaluations,
third-party model audits, safety restrictions on model usage, and red teaming.
Ultimately, our list of statements may serve as a helpful foundation for
efforts to develop best practices, standards, and regulations for AGI labs. | Jonas Schuett, Noemi Dreksler, Markus Anderljung, David McCaffary, Lennart Heim, Emma Bluemke, Ben Garfinkel | 2023-05-11T21:54:30Z | http://arxiv.org/abs/2305.07153v1 | # Towards best practices in AGI safety
###### Abstract
A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI)--AI systems that achieve or exceed human performance across a wide range of cognitive tasks. In pursuing this goal, they may develop and deploy AI systems that pose particularly significant risks. While they have already taken some measures to mitigate these risks, best practices have not yet emerged. To support the identification of best practices, we sent a survey to 92 leading experts from AGI labs, academia, and civil society and received 51 responses. Participants were asked how much they agreed with 50 statements about what AGI labs should do. Our main finding is that participants, on average, agreed with all of them. Many statements received extremely high levels of agreement. For example, 98% of respondents somewhat or strongly agreed that AGI labs should conduct pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming. Ultimately, our list of statements may serve as a helpful foundation for efforts to develop best practices, standards, and regulations for AGI labs.
## 1 Introduction
Background.Over the past few months, a number of powerful artificial intelligence (AI) systems were released [56, 64, 81] and integrated into products that are now being used by millions of people around the world [46, 78, 87]. At the same time, some leading AI companies have become more explicit that their ultimate goal is to build artificial general intelligence (AGI)--AI systems that achieve or exceed human performance across a wide range of cognitive tasks [3, 39, 63]. The prospect of AGI used to be a fringe area [26, 25, 15], but the debate has now entered the public discourse [89, 33, 37, 47] and the political stage [85, 82, 13, 77].1 There are now increasing efforts to develop standards and regulations that would apply to organizations that try to build AGI. However, there are still a number of open questions about the substance of such standards and regulations.
Footnote 1: In some cases, policymakers use the term “AGI” explicitly [32, 82]. In other cases, they talk about developers of “general-purpose AI systems” and “foundation models” [13, 12] or “generative AI systems” [85, 77], which also includes organizations that try to build AGI.
Purpose.This paper is intended to contribute to the creation of best practices in AGI safety and governance. We want to make sure that the views of relevant experts are taken into account. More specifically, we want to find out which practices already have broad support and where more work is needed. To this end, we surveyed 51 leading experts from AGI labs, academia, and civil society. Our findings can be used as evidence in discussions about the creation of best practices. We hope that AGI labs will follow emerging best practices on a voluntary basis. But best practices could also inform standard-setting processes (e.g. by ISO and NIST) and regulatory efforts. Consider the following simple model of how governance mechanisms get codified into law: (1) different companies experiment with different governance mechanisms; (2) best practices emerge; (3) best practices inform standard-setting processes; (4) standards get codified into law. The main purpose of this paper is to support step (2). However, in practice, these steps are often performed in parallel, not in a sequential way. The paper could therefore also inform steps (3) and (4).
Related work.AGI labs share some information about their governance practices [19, 23, 36, 56] and occasionally even propose best practices [20]. There do not seem to be any independent efforts to create best practices for the governance of organizations that try to build "AGI". However, there are efforts that target developers of "general-purpose AI systems", "foundation models", or "large-scale AI models", which also includes AGI labs. Most notably, the Partnership on AI has initiated a multistakeholder dialogue to develop shared protocols for the safety of large-scale AI models [61], while The Future Society seeks to create an industry code of conduct for developers of general-purpose AI systems and foundation models [80]. There are also efforts to adapt AI risk management standards like the NIST AI Risk Management Framework [53] or ISO/IEC 23894 [35] to the needs of developers of general-purpose AI systems [11]. The Alignment Research Center (ARC) is also developing a new standard on dangerous capabilities evaluations that is targeted at "leading AI companies" [6]. Finally, the proposed EU AI Act will likely contain rules for developers of general-purpose AI systems and foundation models [12], though the issue remains disputed [1].
Terminology.By "AGI", we mean AI systems that reach or exceed human performance across a wide range of cognitive tasks.2 (Note that we do not make any claims about when, if at all, AGI will be built.)3 By "AGI labs", we mean organizations that have the stated goal of building AGI. This includes OpenAI, Google DeepMind, and Anthropic. Since other AI companies like Microsoft and Meta conduct similar research (e.g. training very large models), we also refer to them as "AGI labs" in this paper. By "AGI safety and governance practices", we mean internal policies, processes, and organizational structures at AGI labs intended to reduce risk.
Footnote 2: There is no generally accepted definition of the term “AGI”. According to Goertzel [24], the term was first used by Gubrud [30] in the article “Nanotechnology and international security”. It was popularized through the book “Artificial general intelligence” edited by Goertzel and Pennachin [26]. We acknowledge that our definition is vague. For more information on how to make this definition more concrete, we refer to the relevant literature [25, 52, 9]. Different definitions emphasize different elements. For example, in their charter, OpenAI uses a definition that focuses on economic value: “highly autonomous systems that outperform humans at most economically valuable work” [54]. But note that they have recently used a simplified definition: “AI systems that are generally smarter than humans” [3]. The term “AGI” is related to the terms “strong AI” [71], “superintelligence” [14, 15], and “transformative AI” [29].
Overview.The paper proceeds as follows. Section 2 contains information about the sample, the survey, and our analysis. Section 3 reports our results, namely to what extent respondents agreed with different statements about what AGI labs should do, whether there were noticeable differences between sectors and genders, and which additional practices respondents suggested. Figure 2 shows the percentages of responses for all statements listed in the survey. Section 4 discusses our key results, their policy implications, and the main limitations of our study. It also suggests directions for future work. Section 5 concludes. Appendix A contains a list of all participants who gave us permission to mention their names and affiliations. Appendix B contains a list of all statements used in the survey. Appendices D, E, and F contain additional figures, tables, and analyses.
## 2 Methods
### Sample
Sample size.We invited 92 experts to take the survey and received 51 responses. The response rate was 55.4%, which is high compared to previous expert surveys of AI researchers [28; 90; 79].
Sample selection.Participants were selected in a four-step approach. In the first step, we selected relevant sectors: AGI labs, academia, civil society (including nonprofit organizations and think tanks), and other (including government, consulting firms, and other tech companies). In the second step, we selected specific organizations within each sector. In the third step, we selected experts within each organization. In the fourth step, we added individual experts who were not affiliated with any of the organizations identified in the second step. The final sample represented all of the selected sectors identified in the first step. Figure 1 shows the division of respondents by sector and gender. 33 respondents (64.7%) gave us permission to list them publicly as respondents to the survey. The full list can be found in Appendix A.
Sample type.Our sample could best be described as a purposive sample [59]. We selected individual experts based on their knowledge and experience in areas relevant for AGI safety and governance, but we also considered their availability and willingness to participate. We used a number of proxies for expertise, such as the number, quality, and relevance of their publications as well as their role at relevant organizations.
Overall, we believe the selection reflects an authoritative sample of current AGI safety and governance-specific expertise. For a discussion of limitations related to our sample, see Section 4.4.
Figure 1: **Sample by sector and gender** | The figure shows the sector of work and gender of the respondents. Respondents could choose more than one sector in which they work.
Figure 2: **Percentages of responses for all statements \(|\)** The figure shows the percentage of respondents choosing each answer option. At the end of each bar we show the number of people who answered each item. The items are ordered by the total number of respondents that “strongly” agreed. The full statements can be found in Appendix B.
Figure 3: **Mean agreement for all statements \(|\)** The figure shows the mean and 95% confidence interval for each of the 50 statements. “I don’t know responses” were excluded from the analysis.
### Survey design.
Informed consent had to be given before proceeding to the main survey. The survey began by defining the terms "AGI", "AGI labs", and "AGI safety and governance practices" as noted above. Respondents were then asked to what extent they agree or disagree with statements about what AGI labs should do. We asked respondents for their gender and where they worked. Finally, respondents were able to list important AGI safety and governance practices they thought were missing from the survey. Respondents took a median of 11 minutes to complete the survey.
Statements about AGI safety and governance practices.The statements covered many different areas, including development, deployment, monitoring, risk management, external scrutiny, information security, communication, and culture. They were extracted from (1) current practices at individual AGI labs (e.g. pre-deployment risk assessments [19, 36] and dangerous capabilities evaluations [56]), (2) planned practices at individual labs (e.g. third-party model audits [3]), (3) proposals in the literature (e.g. third-party governance audits [51] and incident reporting [45]), and (4) discussion with experts and colleagues. In total, the survey contained 50 statements, 30 of which respondents were required to respond to and 20 where answers were optional. Appendix B contains a full list of all statements.
Response scale.Respondents were asked to indicate their level of agreement based on a 5-point Likert scale: "strongly disagree" (-2), "somewhat disagree" (-1), "neither agree nor disagree" (0), "somewhat agree" (1), "strongly agree" (2). They also had the option to say "I don't know".
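In code, this coding scheme amounts to a simple lookup (a sketch; the exact analysis pipeline is not part of the paper), with "I don't know" treated as missing when averaging, as done in the reported analyses:

```python
import numpy as np

SCALE = {"strongly disagree": -2, "somewhat disagree": -1,
         "neither agree nor disagree": 0, "somewhat agree": 1,
         "strongly agree": 2}

def mean_agreement(responses):
    vals = [SCALE[r] for r in responses if r in SCALE]   # drops "I don't know"
    return float(np.mean(vals)) if vals else float("nan")

print(mean_agreement(["strongly agree", "somewhat agree", "I don't know"]))  # 1.5
```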
Demographic questions.Respondents were asked what their gender was ("man", "woman", "another gender", "prefer not to say") and what sector they worked in ("AGI lab [e.g. OpenAI, Google DeepMind, Anthropic, Microsoft, and Meta]", "other tech company", "consulting firm", "think tank", "nonprofit organization", "government", "academia", "other", "prefer not to say"). For the sector question, respondents were able to choose more than one option.
Survey distribution.The survey took place between 26 April and 8 May 2023. Respondents were sent an initial email invitation and a reminder email using Qualtrics. A one hour virtual workshop was held which invited the same individuals as the sampling frame. The workshop explored questions on how AGI safety and governance practices could be created and implemented. 21 people attended the workshop along with the seven authors of this paper who took notes and moderated the discussion. During the workshop, attendees were reminded to participate in the survey. Additional follow-up emails were sent to respondents in the final three days of the survey in order to ensure the sample was more representative of the sampling frame and that emails had not gone unseen due to email filters that may have flagged the Qualtrics survey invitations and reminder emails as spam.
Anonymity.Responses to the survey were anonymous. The part of the survey that asked respondents for their views was a separate Qualtrics survey to both the informed consent survey and where respondents noted their name and affiliation. We will not make any of the demographic data or text responses public to further ensure that responses cannot be reverse-identified. Respondents were informed of these measures in the informed consent section.
### Analysis
Demographic groups.We categorized sector responses as follows: AGI lab, academia, civil society ("think tank", "nonprofit organization"), other ("other tech company", "consulting firm", "government", "other").
Group differences.To test for differences in the overall population of responses across all items, we used the Mann-Whitney U test. To test for differences between groups in responses for each practice, we used Chi-squared tests. Certain subgroups had to be removed from the gender ("another gender", "prefer not to say") and sector ("other", "prefer not to say") analyses due to sample sizes falling below 5 [44]. Where applicable throughout, the Holm-Bonferroni correction was used to correct for multiple comparisons: the original alpha-value (0.05) is divided by the number of remaining tests, counting down from the highest to the lowest p-value. The p-values were then compared to the Holm-Bonferroni-adjusted significance levels to determine the significance of each test.
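The following is a minimal sketch of this testing procedure using SciPy; the ratings below are randomly generated placeholders, not the survey data, and the grouping is purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder Likert ratings on the -2..2 scale for two groups (NOT the survey data).
group_a = rng.integers(-2, 3, size=200)
group_b = rng.integers(-2, 3, size=150)

# Mann-Whitney U test for a difference between the two rating distributions.
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Chi-squared test of independence on a contingency table
# (rows: groups, columns: rating categories) for a single item.
table = np.array([[np.sum(group_a == r) for r in range(-2, 3)],
                  [np.sum(group_b == r) for r in range(-2, 3)]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

def holm_bonferroni(p_values, alpha=0.05):
    """Step-down Holm-Bonferroni: compare sorted p-values against alpha / (m - rank)."""
    p_values = np.asarray(p_values, dtype=float)
    order = np.argsort(p_values)
    significant = np.zeros(len(p_values), dtype=bool)
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (len(p_values) - rank):
            significant[idx] = True
        else:
            break
    return significant

print(p_mw, p_chi, holm_bonferroni([p_mw, p_chi]))
```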
Open science.The survey draft, pre-registration, pre-analysis plan, code, and data can be found on OSF ([https://osf.io/s7vhr](https://osf.io/s7vhr)). To protect the identity of respondents, we will not make any demographic data or text responses public. We largely followed the pre-analysis plan. Any deviations from the pre-registered analyses can be found in Appendix F, along with the pre-registered cluster analysis.
## 3 Results
In this section, we report the main results of the survey, namely respondents' level of agreement (Section 3.1), differences between sectors and genders (Section 3.2), and additional practices that were suggested by respondents (Section 3.3). Additional figures, tables, and analyses can be found in Appendices D, E, and F.
### Level of agreement
Overall agreement.There was a broad consensus that AGI labs should implement most of the safety and governance practices in a 50-point list. For 98% of the practices, a majority (more than 50%) of respondents strongly or somewhat agreed. For 56% of the practices, a majority (more than 50%) of respondents strongly agreed. The mean agreement across all 50 items was 1.39 on a scale from -2 (strongly disagree) to 2 (strongly agree)--roughly halfway between somewhat agree and strongly agree. On average, across all 50 items, 85.2% of respondents either somewhat or strongly agreed that AGI labs should follow each of the practices. On average, only 4.6% either somewhat or strongly disagreed that AGI labs should follow each of the practices. The broad level of agreement can be seen in Figure 2, which shows the percentage of respondents that answered "strongly agree", "somewhat agree", "neither agree nor disagree", "somewhat disagree", "strongly disagree", and "I don't know" for each of the potential AGI best practices. For none of the practices did a majority (more than 50%) of respondents somewhat or strongly disagree. Indeed, the highest total disagreement on any item was 16.2% for the item "avoid capabilities jumps". Across all 2,285 ratings respondents made, only 4.5% were disagreement ratings.
Highest agreement.The items with the highest total agreement proportions, each with agreement ratings from 98% of respondents, were: dangerous capabilities evaluations, internal review before
Figure 4: **Statements with highest and lowest mean agreement** | The figure shows the mean agreement and 95% confidence interval for the five highest and lowest mean agreement items.
publication, monitor systems and their uses, pre-deployment risk assessment, red teaming, safety restrictions, and third-party model audits. Seven items had no disagreement ratings at all: dangerous capabilities evaluations, industry sharing of security information, KYC screening, pre-deployment risk assessment, publish alignment strategy, safety restrictions, and safety vs. capabilities. Figure 4 shows the statements with the highest and lowest mean agreement. The mean agreement for all statements can be seen in Figure 3. The statements with the highest mean agreement were: pre-deployment risk assessment (\(M=1.9\)), dangerous capabilities assessments (\(M=1.9\)), third-party model audits (\(M=1.8\)), safety restrictions (\(M=1.8\)), and red teaming (\(M=1.8\)).
Lowest agreement.The five items with the highest total disagreement proportions among respondents were: avoid capabilities jumps (16.2%), inter-lab scrutiny (15.4%), no unsafe open-sourcing (13.7%), treat updates similarly to new models (13.7%), and notify other labs (13.2%). The five statements with the lowest mean agreement were: notify other labs (\(M=0.4\)), avoid capabilities jumps (\(M=0.6\)), inter-lab scrutiny (\(M=0.7\)), notify affected parties (\(M=0.9\)), and notify a state actor before deployment (\(M=0.9\)). Note that all practices, even those with the lowest mean agreement, show a positive mean agreement that lies above the midpoint of "neither agree nor disagree", i.e., in the overall agreement part of the scale.
"I don't know" and "neither agree nor disagree".The five practices with the highest proportion of "I don't know" and "neither agree nor disagree" responses can be seen in Figure 5. Enterprise risk management (25.5%), notify affected parties (22.2%), inter-lab scrutiny (17.9%), notify other labs (15.8%), and security standards (13.7%) show the highest "I don't know" responses. The four practices with the highest "neither agree nor disagree" responses were: notify other labs (28.9%), notify affected parties (16.7%), avoid capabilities jumps (16.2%), and tracking model weights (12.8%). Avoiding hype, enterprise risk management, gradual scaling, and notify a state actor before deployment are all tied for fifth highest "neither agree nor disagree" responses (11.8%).
### Differences between sectors and genders
Statistical tests.We used two statistical tests to test for differences between sectors and genders. Firstly, we conducted Mann-Whitney U tests to test for differences in the overall mean agreement across all items. This is a test of whether two independent samples are drawn from the same underlying distribution, and does not assume that this underlying distribution is normal, making it an appropriate test statistic for our data. Secondly, we conducted Chi-squared tests of independence
Figure 5: **Statements with the highest proportion of “I don’t know” and “neither agree nor disagree” responses**
to test for significant differences in the distribution of agreement and disagreement responses for each item by gender and sector. This test compares the observed frequencies across the categories of interest with the frequencies which would be expected if there was no difference between the responses in each category.
Differences between sectors.We found a significant difference in overall mean agreement across items between respondents from AGI labs and academia (U = 325295.0, p < 0.001, \(\alpha\) = 0.017), as well as between respondents from AGI labs and civil society (U = 1106715.0, p < 0.001, \(\alpha\) = 0.017). Respondents from AGI labs (\(M\) = 1.54) showed significantly higher mean agreement than respondents from academia (\(M\) = 1.16) and civil society (\(M\) = 1.36). There was no significant difference in overall mean agreement between academia and civil society. When comparing sector groups at the item level we found no significant differences between sector groups for any of the items. The mean agreement by sector can be seen in Figures 6 and 7 in Appendix D.
Differences between genders.We found no significant differences between responses from men and women--neither in overall mean agreement, nor at the item level. The mean agreement by gender can be seen in Figure 8 in Appendix D.
### Suggested practices
While our selection of 50 practices covers a lot of ground, the list is clearly not comprehensive. We therefore asked respondents which AGI safety and governance practices were missing. Respondents suggested an additional 50 unique practices. Two practices were mentioned by two respondents, namely that AGI labs should have a merge-and-assist-clause as well as some kind of internal review board. Another theme that was mentioned by several respondents was the need to adequately balance profits and societal benefits. Besides that, all practices were only mentioned by one respondent. Some of the suggestions were slight variations or elaborations of our statements. The full list of practices noted as missing from the survey can be found in Appendix C.
## 4 Discussion
In this section, we give an overview of our results (Section 4.1), discuss some of the specific results (Section 4.2), their policy implications (Section 4.3), and the main limitations of our study (Section 4.4). We also suggest directions for future work (Section 4.5).
### Overview of results
Level of agreement.Overall, the study found a remarkably high level of agreement among leading AGI safety and governance experts for the practices presented (Section 3.1, see Appendix B for all practices). For all but one statement, a majority of respondents either somewhat or strongly agreed with the practice. We suspect that the abstract framing of the items was a contributing factor to this high level of agreement. This likely resulted in higher agreement than if the items had specified exactly how to instantiate each of the practices. However, we see this high level of agreement as "a feature, not a bug". Our findings can be used as a foundation for efforts to develop best practices, standards, and regulations for AGI labs. Practices with broad support can then be made concrete, developed, and enshrined (Section 4.3). Doing this work is beyond the scope of a single survey and will require more in-depth discussion (Section 4.5).
Despite the broad overall agreement, our survey also revealed relative differences between practices. Many items showed extremely high agreement along with minimal (e.g. third-party model audits, red teaming) or no disagreement (pre-deployment risk assessment, dangerous capabilities evaluations, publish alignment strategy, KYC screening, safety restrictions). Other items elicited higher proportions of disagreement (e.g. avoid capabilities jumps, inter-lab scrutiny), but all items had positive mean agreement. Some items revealed areas of uncertainty (e.g. enterprise risk management, notify other labs, notify affected parties), with higher "I don't know" and "neither agree nor disagree" responses. These practices may benefit from particular attention from future research to determine what the causes of these uncertainties are. For example, uncertainties may have been caused by specific formulations or by more fundamental questions about whether the practice should be implemented.
Differences between sectors and genders.Interestingly, respondents from AGI labs had significantly higher overall mean agreement ratings than respondents from academia or civil society (Section 3.2). This suggests that, on average, individuals closer to the technology developed by AGI labs endorse the practices to a higher degree. This difference was not found at the item-level, where we found no significant differences between sectors. No significant overall mean agreement or item-level differences between men and women were found. It is important to note the comparably small sample sizes used in the testing of group differences (\(N\) = 25 for AGI lab, \(N\) = 13 for academia, and \(N\) = 13 for civil society), and therefore any statistical significance in the results should be interpreted accordingly. In addition, it should be noted that it may be the case that the lack of significant differences at the item-level are at least in part driven by the smaller number of respondents per item. Generally, at such a small sample size, significant difference tests can be capricious and may lack sensitivity.
Suggested practices.Finally, participants suggested 50 additional unique governance practices for AGI labs (Section 3.3, Appendix C). This indicates that the 50 practices used in the survey are not sufficient for "good" governance of AGI labs. More research is needed to paint a more complete picture of an "ideal" governance regime. In general, we see the list of additional statements and the high level of agreement across our 50 items as a powerful indicator of the opportunity that exists to improve the safety and governance practices at AGI labs. To mitigate the risks from increasingly capable AI systems, AGI labs need a portfolio of governance mechanisms. We will discuss the specific results for items within the context of the current AGI safety and governance landscape in the next section.
### Discussion of specific results
Below, we discuss responses to specific statements. We categorize statements into eight areas: (1) development, (2) deployment, (3) post-deployment, (4) risk management, (5) external scrutiny, (6) information security, (7) communication, and (8) other. These categories are intended to improve readability. We did not use them in the survey. Values in brackets refer to the mean agreement (M) on a scale from -2 ("strongly disagree") to 2 ("strongly agree").
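To make this reporting convention concrete, the following minimal sketch shows how such mean agreement values can be computed from raw Likert responses. The response labels and the exclusion of "I don't know" answers from the mean are illustrative assumptions, not a description of our exact analysis code.

```python
# Minimal sketch: computing the mean agreement (M) reported throughout this section.
# Response labels and the handling of "I don't know" answers are illustrative assumptions.
import pandas as pd

LIKERT_SCORES = {
    "Strongly disagree": -2,
    "Somewhat disagree": -1,
    "Neither agree nor disagree": 0,
    "Somewhat agree": 1,
    "Strongly agree": 2,
}

def mean_agreement(responses: pd.Series) -> float:
    """Map Likert labels to the -2..2 scale and average; unmapped labels
    (e.g. "I don't know") become NaN and are dropped before averaging."""
    return responses.map(LIKERT_SCORES).dropna().mean()

# Hypothetical responses to a single practice
example = pd.Series(["Strongly agree", "Somewhat agree", "I don't know", "Strongly agree"])
print(round(mean_agreement(example), 2))  # 1.67
```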
Development.The need to conduct evaluations for dangerous capabilities was among the highest rated items (\(M\) = 1.9). OpenAI, Google DeepMind, and Anthropic are already working on such evaluations [56; 38; 6].4 For example, before releasing GPT-4, OpenAI commissioned ARC to evaluate risky emergent behaviors, such as situational awareness, persuasion, and long-horizon planning [56]. A related statement about pausing the development process if dangerous capabilities are detected also received broad support (\(M\) = 1.6). It is worth noting that, while not statistically significant, respondents from AGI labs (\(M\) = 1.4) were more skeptical than other respondents (\(M\) = 1.9). Despite the broad support, many questions about dangerous capabilities evaluations remain open (e.g. what exactly labs should do if they detect certain dangerous capabilities and whether coordinated pausing is feasible). We strongly encourage more work on this. Perhaps unsurprisingly, the statement that AGI labs should implement state-of-the-art safety and alignment techniques (\(M\) = 1.7) and that a significant fraction of employees should work on enhancing model safety and alignment rather than capabilities (\(M\) = 1.7) also received broad support, while statements about tracking model weights (\(M\) = 1.3), model containment (\(M\) = 1.3), and gradual scaling (\(M\) = 1.2) received less support. The statement with the least support of all development-related statements was about the need to pre-register large training runs with an appropriate state actor (\(M\) = 1.1), just above "somewhat agree". We would speculate that respondents were uncertain about which state actor would be appropriate, which we left intentionally open.
Footnote 4: Note that [38] only represents the views of the alignment team. It is not officially endorsed by Google DeepMind.
Deployment.While participants, on average, strongly agreed with the statement that labs should put in place certain safety restrictions (\(M\) = 1.8), they only somewhat agreed with statements about specific deployment strategies, such as staged deployment (\(M\) = 1.3), API access (\(M\) = 1.2), and no unsafe open-sourcing (\(M\) = 1.3). We suspect that the main reason for this slightly reduced support is that the statements were too general. The "right" deployment strategy might depend on a number of contextual factors [75]. It is also worth noting that the statement on API access used a softer
formulation than all other statements ("AGI labs should consider doing X" instead of "AGI labs should do X"). Otherwise, the level of agreement might have been even lower. For more information about different deployment strategies, we refer to the relevant literature [76, 20, 72, 75]. The need to conduct know-your-customer (KYC) screenings was moderately supported (\(M\) = 1.4). OpenAI already lists this as one of their safety best practices [58]. The statements that AGI labs should treat model updates similarly to new models (\(M\) = 1.1) and internal deployments similarly to external deployments (\(M\) = 1.0) also received moderate support, while the statement that AGI labs should avoid capabilities jumps (\(M\) = 0.6), not deploying models that are much more capable than any existing models, was among the least supported items. Respondents from AGI labs (\(M\) = 0.9) were slightly more supportive of that statement than other participants (\(M\) = 0.4), but this difference was not statistically significant.
Post-deployment.There was broad support for the claim that AGI labs should closely monitor deployed systems and their uses (\(M\) = 1.7). OpenAI [19, 20] and Google DeepMind [36] are already doing this, and although we could not find any public statements about this from Anthropic, we strongly suspect that they are doing the same. Participants also strongly agreed with the statement that AGI labs should continually evaluate models for dangerous capabilities after deployment (\(M\) = 1.7) and report safety incidents (e.g. via the AI Incident Database [45]) (\(M\) = 1.7). We could not find any public statements about the extent to which different AGI labs are already doing this. Participants also thought that AGI labs should have an emergency response plan (e.g. when to restrict access or switch off systems) (\(M\) = 1.6). Again, we could not find any public information on this.
Risk management.Participants strongly agreed with statements about pre-deployment (\(M\) = 1.9) and pre-training risk assessments (\(M\) = 1.6). While AGI labs already conduct extensive pre-deployment risk assessments [36, 19, 56], we could not find any public information about pre-training risk assessments. Participants somewhat agreed with various statements about risk governance [84, 43], namely that AGI labs should have a board risk committee (\(M\) = 1.4), a chief risk officer (\(M\) = 1.4), and an internal audit team (\(M\) = 1.3). Based on public information, AGI labs do not seem to have any of these structures. This is a noticeable gap that warrants further discussion [68, 70]. The statement about enterprise risk management received even less support (\(M\) = 1.0). It was also the item with the highest "I don't know" rate (25.5%), which indicates that many respondents simply did not know what enterprise risk management is and how it works. We mentioned two examples of enterprise risk management frameworks--the NIST AI Risk Management Framework [53] and ISO 31000 [34]--but we suspect that many respondents did not know these frameworks either. We should have described the concept in a more accessible way.
External scrutiny.There was broad support for third-party model audits (\(M\) = 1.8), red teaming (\(M\) = 1.8), and bug bounty programs (\(M\) = 1.5). There is extensive academic discussion about third-party model audits [65, 18, 50, 22, 66, 51] and OpenAI has already announced that they plan to commission third-party model audits in the future [3]. We could not find similar statements from Google DeepMind and Anthropic. OpenAI has also recently announced a bug bounty program [55]. Again, Google DeepMind and Anthropic do not seem to have similar programs. In contrast, red teaming is already a common practice at OpenAI [48, 56], Google DeepMind [62], and Anthropic [23]. Participants also strongly agreed with the statement that AGI labs should increase the level of external scrutiny in proportion to the capabilities of their models (\(M\) = 1.6). Yet, it is unclear what exactly that entails (e.g. larger red teams, combining different methods, or more time for investigations). Third-party governance audits were slightly less supported (\(M\) = 1.3), perhaps because the mechanism is less well-known, even though there is some literature on the topic [49, 51].
One of the lowest rated items was inter-lab scrutiny (\(M\) = 0.7). It is worth noting that, while not statistically significant, we saw higher support for this statement from respondents from AGI labs (\(M\) = 1.2) in comparison to respondents from academia (\(M\) = 0.3) and civil society (\(M\) = 0.2). This was also the case for the statement that AGI labs should grant independent researchers access to deployed models (\(M\) = 1.2). While not statistically significant either, this statement was also supported more by respondents from AGI labs (\(M\) = 1.4) than by respondents from academia (\(M\) = 1.0) and civil society (\(M\) = 0.8).
Information security.Practices related to information security generally received broad support, especially statements about security incident response plans (\(M\) = 1.7), protection against espionage
(\(M\) = 1.6), implementing security standards (\(M\) = 1.5), industry sharing of security information (\(M\) = 1.5), dual control (\(M\) = 1.4), and military-grade information security (\(M\) = 1.4), whereby information security of AGI labs should be proportional to the capabilities of their models, eventually matching or exceeding that of intelligence agencies. It is worth noting that the statement about security standards was much higher rated than the statement about enterprise risk management frameworks discussed above (\(M\) = 1.0), although they were phrased similarly.
Communication.Participants strongly agreed with the statement that, before publishing research, AGI labs should conduct an internal review to assess potential harms from that research (\(M\) = 1.7). The statement should be read in the context of the broader debate around publication norms [21, 60, 8]. The core consideration in the debate around publication norms is that there are risks that stem from the publication of the research itself--not just by the development and deployment of individual models--since some research findings can be misused [83, 17, 27, 4, 7, 73, 16]. For example, this could include research about the development of models for the discovery of new drugs which could be misused for the design of biochemical weapons [83].
Participants also thought that AGI labs should publish statements about their alignment strategy (\(M\) = 1.5), their views about AGI risk (\(M\) = 1.4), and their governance structure (\(M\) = 1.4). Over the past few months, AGI labs have become more transparent about their alignment strategy [41, 40, 57, 5, 38] and their views about the risks from AGI [3, 5], though some of these statements have also been criticized [2, 74]. AGI labs are less transparent about their governance structures. Existing statements only describe how specific decisions were made [36] or describe structures that deal with risks of specific model types [19]. Perhaps surprisingly, participants only moderately agreed with the claim that AGI labs should avoid hype when releasing new models (\(M\) = 1.2).
We asked participants whether AGI labs should notify different actors before deploying powerful AI systems. These statements were among the least supported items. Respondents somewhat agreed with the statement that AGI labs should notify affected parties (\(M\) = 0.9), but respondents from civil society (\(M\) = 1.3) agreed more than individuals from academia (\(M\) = 0.8) and AGI labs (\(M\) = 0.8), though this difference was not statistically significant. Respondents also somewhat agreed with the statement that AGI labs should notify appropriate state actors (\(M\) = 0.9), but in this case, respondents from AGI labs (\(M\) = 0.5) were more skeptical than respondents from academia (\(M\) = 1.5) and civil society (\(M\) = 1.0), but again, this difference was not significant. Finally, respondents showed the lowest agreement of any item for AGI labs notifying other AGI labs before deploying powerful models (\(M\) = 0.4), but respondents from civil society (\(M\) = 0.0) had lower agreement ratings than respondents from academia (\(M\) = 0.8) and AGI labs (\(M\) = 0.7), but not significantly so. While it is possible that respondents had substantive reasons why they thought this would be less desirable, it is also possible that they thought this might not be feasible. In the latter case, our findings suggest that it might be more feasible than one might expect. There is already some evidence that AGI labs notify each other before releasing powerful models. For example, OpenAI's GPT-4 and Anthropic's Claude were released on the same day. It seems unlikely that this was a coincidence, though of course it may very well be.
Other.Finally, participants somewhat agreed with the statement that AGI labs should perform rigorous background checks before hiring/appointing members of the board of directors, senior executives, and key employees (\(M\) = 1.3). Although not statistically significant, respondents from AGI labs (\(M\) = 1.6) were more supportive than other participants (\(M\) = 1.2).
### Policy implications
The findings of our survey have implications for AGI labs, regulators, and standard-setting bodies. Since most practices are not inherently about AGI labs, our findings might also be relevant for other AI companies.
Implications for AGI labs.It is not always clear to what extent individual labs already follow the stated practices, but it seems unlikely that they follow each of them to a sufficient degree. We therefore encourage AGI labs to use our findings to conduct an internal gap analysis and to take action if they discover major blind spots. Three areas seem particularly noteworthy. First, some AGI labs have announced plans to commission third-party model audits in the future (Altman, 2023).
Our findings can be seen as an encouragement to follow through. Second, there are already some efforts to evaluate whether a model has certain dangerous capabilities [56; 6; 38]. The results of our study strongly support such efforts. Our findings also imply that there needs to be more work on what AGI labs should do if they detect certain dangerous capabilities (e.g. coordinate a temporary pause on large training runs). Third, our findings suggest that AGI labs need to improve their risk management practices. In particular, there seems to be room for improvement when it comes to their risk governance. AGI labs should seriously consider setting up an internal audit function [70], appointing a chief risk officer, establishing a board risk committee, and implementing a customized enterprise risk management framework.
Implications for regulators.The White House recently invited the chief executive officers of several AGI labs to "share concerns about the risks associated with AI" [86] and announced new actions to "promote responsible AI innovation" [85]. The findings of our study can inform efforts to regulate AGI labs, most of which are based in the US. In the EU, our findings can inform the debate on how the proposed AI Act should account for general-purpose AI systems [13; 12; 1]. In the UK, our findings can be used to draft upcoming AI regulations as announced in the National AI Strategy [32] and the recent White Paper [82]. The UK government has explicitly said that it "takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously" [32]. It therefore seems plausible that upcoming regulations will contain provisions that would apply to AGI labs. This would mainly include Google DeepMind, which is based in the UK, though the implications of the recent merger with Google Brain are unclear [31]. Relevant actors who are responsible for drafting regulations could use our findings to decide what specific provisions to include (e.g. requirements to audit powerful systems before deployment, to evaluate models for dangerous capabilities, and to establish a proper risk management system).
Implications for standard-setting bodies.There do not seem to be any (public) efforts to create standards specifically for AGI labs. But our findings can inform the above-mentioned initiatives to develop shared protocols for the safety of large-scale AI models (Partnership on AI, 2023) and an industry code of conduct for developers of general-purpose AI systems and foundation models [80]. Moreover, our findings can inform efforts to apply existing standards to an AGI lab context. For example, Barrett et al. [10] have suggested ways in which the NIST AI Risk Management Framework [53] can account for catastrophic risks. They will soon publish a follow-up work that adapts the framework to the needs of developers of general-purpose AI systems [11]. In the EU, CENELEC--a cooperation between two of the three European Standardisation Organisations--is currently working on harmonized standards that specify the risk management provision in the proposed AI Act [69]. Our findings suggest that the risk management system should also include pre-training risk assessments. They also highlight the need for dangerous capabilities evaluations as part of risk assessment and the need for pausing if sufficiently dangerous capabilities are detected. Finally, our findings stress the importance of various risk governance practices, such as setting up an internal audit function, appointing a chief risk officer, establishing a board risk committee, and implementing a customized enterprise risk management framework, which are not mentioned explicitly in Article 9 of the proposed AI Act.
### Limitations
Sample limitations.While we had a strong response rate of 55.4%, our present sample has at least three limitations. First, overall, the sample size (\(N\) = 51) is comparably small. This is limiting with regards to testing for statistically significant differences between groups. In terms of the representativeness of the sample within the context of AGI safety and governance experts, this small sample size is less worrying because the 92 experts of our sampling frame represent a large number of the leading experts in this relatively small field. Second, we likely missed leading experts in our sampling frame that should have been surveyed. The sampling frame required subjective decisions on what constituted a leading expert in the field and was likely biased towards experts that were known to the author team. In turn, there might have been a self-selection effect that occurs in terms of who decides to complete the survey which may have made the results less representative of the total sampling frame. Third, the sampling frame leaned strongly towards the selection of leading experts who specifically have track records in areas relevant for AGI safety and governance. While we see this as offering certain strengths and benefits for the purpose of our study, future expert
elicitations may benefit from a more comprehensive sampling frame that also includes scholars and practitioners from fields such as safety engineering, science and technology studies, organization studies, human-computer interaction, and experts from other safety-critical industries (e.g. aviation or nuclear). It might also make sense to include individuals who are more junior, less well-known, and relatively early in their careers.
Response limitations.Since respondents were only able to respond to each item on a scale from "strongly agree" to "strongly disagree", we do not know the reason for their responses. In particular, we did not ask respondents why they agreed or disagreed with individual practices or expressed uncertainty about them. Future research that explores the reasoning and contributing factors to the endorsement of practices will be needed to make further headway on the establishment of best practices.
Statement limitations.Finally, there are at least three limitations regarding the statements listed in Appendix B. First, we were constrained by the length of the survey in terms of the number of practices we could ask about. As such, the list of statements was by no means comprehensive. This can be seen by the many additional suggestions for practices from the respondents (Section 3.3). Second, we tried to capture the general thrust of potential AGI safety and governance practices that have been suggested in the literature and community concisely and clearly. Inevitably, this condensing of complex ideas has led to diminished concreteness and specificity. Although this abstract framing was intentional, it is possible that participants would have responded differently if we had specified more precise mechanisms for how to instantiate each practice or provided further details. For example, we did not specify when AGI labs should do each of the stated practices. It is possible that some respondents interpreted this as "now" or "in the next 1-2 years", while others might have interpreted this as "in the next 3-5 years" or "as we approach AGI". Third, in two instances, the statements included examples that might have been too specific (enterprise risk management and security standards), leading to comparably high "I don't know" responses for these items (Figure 5). In at least one instance, we should have made the language clearer: one statement used the formulation "AGI labs should strongly consider only deploying powerful models via an API" instead of simply saying they should do this. Overall though, the statements should be read as the respondents' views on the overall idea of each AGI safety and governance practice, with the particulars of the "why", "how", and "when" still very much up for debate.
### Future directions
Our survey shows that there is a consensus among leading experts in favor of an entire portfolio of AGI safety and governance mechanisms. We believe there is a wealth of future work that remains to be done in this space. In order to facilitate the foundation for subsequent research, we invited participants of the survey to a virtual workshop on 5 May 2023. The aim was to discuss the required intellectual work that supports the creation of best practices in AGI safety and governance. A total of 21 people attended the workshop, which was held under the Chatham House Rule, along with the seven authors who moderated the discussion and took notes. Below, we report some of the key suggestions from the discussion.
Main blockers.First, we asked participants what, in their view, the primary blockers for the creation of best practices in AGI safety and governance are. One participant suggested a distinction between two types of blockers: blockers for determining best practices and blockers for their dissemination. Examples of the first type of blockers include: (1) lack of appropriate evaluation criteria (e.g. for model audits or dangerous capabilities evaluations), (2) lack of agreed-upon definitions (e.g. of the terms "AGI" and "general-purpose AI"), (3) the field evolves rapidly, (4) iterating on best practices takes time, (5) different views on AGI timelines, (6) many existing initiatives do not address the specific challenges for AGI labs, and (7) various uncertainties (e.g. about the impact of AI on the economy and national security). For the second category, suggested blockers included: (1) collective action problems (e.g. AGI labs might only trade increased safety for reduced profits if other AGI labs also do it), (2) incentives to race (e.g. "if we do not get there first, a less responsible actor will"), (3) antitrust concerns (e.g. for practices that involve cooperation between AGI labs), and (4) liability concerns (e.g. information about identified and disclosed risks could be used as evidence in lawsuits against AGI labs).
Open questions.Next, we asked participants what intellectual work needs to happen to overcome these blockers. Participants suggested the following concrete (research) questions: (1) How can we adapt existing efforts to an AGI context (e.g. the NIST AI Risk Management Framework, [53])? (2) How can we test in a falsifiable way whether an AI system is aligned? (3) How should relevant thresholds be defined and adjusted over time (e.g. amount of compute used for large training runs)? (4) How can we allow external scrutiny of models without revealing sensitive information? (5) How can we monitor how systems are used while respecting user privacy? (6) What constitutes a robust auditing ecosystem and what can we learn from other industries in this respect?
How to answer these questions.Finally, we asked participants what, in their view, the most promising ways to make progress on these questions are. (1) A central theme was the necessity of appropriate enforcement mechanisms. Participants suggested an auditing system where a third party could ensure labs' adherence to the established best practices. This third party could also express concerns more freely, thereby adding a layer of transparency to the process. (2) Participants also emphasized the importance of creating an ecosystem that recognizes and integrates the unique perspectives of different stakeholders. (3) Other participants highlighted the need to put external pressure on AGI labs to improve their practices. Binding regulations are one way to do that. Another way might be to raise public awareness. (4) Participants also suggested conducting a detailed analysis of existing practices at AGI labs. This would enable gap analyses and evaluations of different organizations. (5) Lastly, participants suggested research into an idealized version of a system card.
In addition to these suggestions, we wish to highlight three further directions. First, future surveys and expert elicitation work will be needed to address the acknowledged limitations of this study (Section 4.4). This includes surveying a larger and more comprehensive sample that is put together more systematically. Such studies could also include the additional practices that participants of our survey have suggested (Section 3.3, Appendix C). In addition, it would be useful to conduct studies that explore the rationale behind experts' stance on each practice and what they think are the key considerations and concerns towards implementation. Second, we believe that creating best practices in AGI safety and governance should be an inclusive process. It will be important to conduct surveys of the public and include many different stakeholders via participatory methods. Third, we hope to see future research on each of the proposals. In light of the broad agreement on the practices presented, future work needs to figure out the details of these practices. There is ample work to be done in determining the practical execution of these practices and how to make them a reality. This will require a collaborative effort from both technical and governance experts.
## 5 Conclusion
Our study has elicited current expert opinions on safety and governance practices at AGI labs, providing a better understanding of what AGI labs should do to reduce risk, according to leading experts from AGI labs, academia, and civil society. We have shown that there is broad consensus that AGI labs should implement most of the 50 safety and governance practices we asked about in the survey. For example, 98% of respondents somewhat or strongly agreed that AGI labs should conduct pre-deployment risk assessments, evaluate models for dangerous capabilities, commission third-party model audits, establish safety restrictions on model usage, and commission external red teams. Ultimately, our list of practices may serve as a helpful foundation for efforts to develop best practices, standards, and regulations for AGI labs.
The day before our workshop, US Vice President Kamala Harris invited the chief executive officers of OpenAI, Google DeepMind, Anthropic, and other leading AI companies to the White House "to share concerns about the risks associated with AI" [86]. We believe that now is a pivotal time for AGI safety and governance. Experts from many different domains and intellectual communities must come together to discuss what responsible AGI labs should do.
## Acknowledgements
We would like to thank all participants who filled out the survey and attended the workshop. We are grateful for the research assistance and in-depth feedback provided by Leonie Koessler and valuable suggestions from Akash Wasil, Jeffrey Laddish, Joshua Clymer, Aryan Bhatt, Michael Aird, Guive Assadi, Georg Arndt, Shaun Ee, and Patrick Levermore. All remaining errors are our own.
## Appendix A List of participants
The following participants gave us permission to mention their names and affiliations, as specified by them (in alphabetical order). 18 respondents, not listed here, did not provide their permission. Note that respondents do not represent any organizations they are affiliated with. They chose to add their name after completing the survey and were not sent the manuscript before publication. The views expressed in this paper are our own.
1. Allan Dafoe, Google DeepMind
2. Andrew Trask, University of Oxford, OpenMined
3. Anthony M. Barrett
4. Brian Christian, Author and Researcher at UC Berkeley and University of Oxford
5. Carl Shulman
6. Chris Meserole, Brookings Institution
7. Gillian Hadfield, University of Toronto, Schwartz Reisman Institute for Technology and Society
8. Hannah Rose Kirk, University of Oxford
9. Holden Karnofsky, Open Philanthropy
10. Iason Gabriel, Google DeepMind
11. Irene Solaiman, Hugging Face
12. James Bradbury, Google DeepMind
13. James Ginns, Centre for Long-Term Resilience
14. Jason Clinton, Anthropic
15. Jason Matheny, RAND
16. Jess Whittlestone, Centre for Long-Term Resilience
17. Jessica Newman, UC Berkeley AI Security Initiative
18. Joslyn Barnhart, Google DeepMind
19. Lewis Ho, Google DeepMind
20. Luke Muehlhauser, Open Philanthropy
21. Mary Phuong, Google DeepMind
22. Noah Feldman, Harvard University
23. Robert Trager, Centre for the Governance of AI
24. Rohin Shah, Google DeepMind
25. Sean O hEigeartaigh, Centre for the Future of Intelligence, University of Cambridge
26. Seb Krier, Google DeepMind
27. Shahar Avin, Centre for the Study of Existential Risk, University of Cambridge
28. Stuart Russell, UC Berkeley
29. Tantum Collins
30. Toby Ord, University of Oxford
31. Toby Shevlane, Google DeepMind
32. Victoria Krakovna, Google DeepMind
33. Zachary Kenton, Google DeepMind
## Appendix B List of all statements
Below, we list all statements we used in the survey, sorted by overall mean agreement (Section 3.1). Optional statements are marked with an asterisk (*).
1. **Pre-deployment risk assessment.** AGI labs should take extensive measures to identify, analyze, and evaluate risks from powerful models before deploying them.
2. **Dangerous capability evaluations.** AGI labs should run evaluations to assess their models' dangerous capabilities (e.g. misuse potential, ability to manipulate, and power-seeking behavior).
3. **Third-party model audits.** AGI labs should commission third-party model audits before deploying powerful models.
4. **Safety restrictions.** AGI labs should establish appropriate safety restrictions for powerful models after deployment (e.g. restrictions on who can use the model, how they can use the model, and whether the model can access the internet).
5. **Red teaming.** AGI labs should commission external red teams before deploying powerful models.
6. **Monitor systems and their uses.** AGI labs should closely monitor deployed systems, including how they are used and what impact they have on society.
7. **Alignment techniques.** AGI labs should implement state-of-the-art safety and alignment techniques.
8. **Security incident response plan.** AGI labs should have a plan for how they respond to security incidents (e.g. cyberattacks).*
9. **Post-deployment evaluations.** AGI labs should continually evaluate models for dangerous capabilities after deployment, taking into account new information about the model's capabilities and how it is being used.*
10. **Report safety incidents.** AGI labs should report accidents and near misses to appropriate state actors and other AGI labs (e.g. via an AI incident database).
11. **Safety vs capabilities.** A significant fraction of employees of AGI labs should work on enhancing model safety and alignment rather than capabilities.
12. **Internal review before publication.** Before publishing research, AGI labs should conduct an internal review to assess potential harms.
13. **Pre-training risk assessment.** AGI labs should conduct a risk assessment before training powerful models.
14. **Emergency response plan.** AGI labs should have and practice implementing an emergency response plan. This might include switching off systems, overriding their outputs, or restricting access.
15. **Protection against espionage.** AGI labs should take adequate measures to tackle the risk of state-sponsored or industrial espionage.*
16. **Pausing training of dangerous models.** AGI labs should pause the development process if sufficiently dangerous capabilities are detected.
17. **Increasing level of external scrutiny.** AGI labs should increase the level of external scrutiny in proportion to the capabilities of their models.
18. **Publish alignment strategy.** AGI labs should publish their strategies for ensuring that their systems are safe and aligned.*
19. **Bug bounty programs.** AGI labs should have bug bounty programs, i.e. recognize and compensate people for reporting unknown vulnerabilities and dangerous capabilities.
20. **Industry sharing of security information.** AGI labs should share threat intelligence and information about security incidents with each other.*
21. **Security standards.** AGI labs should comply with information security standards (e.g. ISO/IEC 27001 or NIST Cybersecurity Framework). These standards need to be tailored to an AGI context.
22. **Publish results of internal risk assessments.** AGI labs should publish the results or summaries of internal risk assessments, unless this would unduly reveal proprietary information or itself produce significant risk. This should include a justification of why the lab is willing to accept remaining risks.*5
23. **Dual control.** Critical decisions in model development and deployment should be made by at least two people (e.g. promotion to production, changes to training datasets, or modifications to production).*
24. **Publish results of external scrutiny.** AGI labs should publish the results or summaries of external scrutiny efforts, unless this would unduly reveal proprietary information or itself produce significant risk.*
25. **Military-grade information security.** The information security of AGI labs should be proportional to the capabilities of their models, eventually matching or exceeding that of intelligence agencies (e.g. sufficient to defend against nation states).
26. **Board risk committee.** AGI labs should have a board risk committee, i.e. a permanent committee within the board of directors which oversees the lab's risk management practices.*
27. **Chief risk officer.** AGI labs should have a chief risk officer (CRO), i.e. a senior executive who is responsible for risk management.
28. **Statement about governance structure.** AGI labs should make public statements about how they make high-stakes decisions regarding model development and deployment.*
29. **Publish views about AGI risk.** AGI labs should make public statements about their views on the risks and benefits from AGI, including the level of risk they are willing to take in its development.
30. **KYC screening.** AGI labs should conduct know-your-customer (KYC) screenings before giving people the ability to use powerful models.*
31. **Third-party governance audits.** AGI labs should commission third-party audits of their governance structures.*
32. **Background checks.** AGI labs should perform rigorous background checks before hiring/appointing members of the board of directors, senior executives, and key employees.*
33. **Model containment.** AGI labs should contain models with sufficiently dangerous capabilities (e.g. via boxing or air-gapping).
34. **Staged deployment.** AGI labs should deploy powerful models in stages. They should start with a small number of applications and fewer users, gradually scaling up as confidence in the model's safety increases.
35. **Tracking model weights.** AGI labs should have a system that is intended to track all copies of the weights of powerful models.*
36. **Internal audit.** AGI labs should have an internal audit team, i.e. a team which assesses the effectiveness of the lab's risk management practices. This team must be organizationally independent from senior management and report directly to the board of directors.
37. **No open-sourcing.** AGI labs should not open-source powerful models, unless they can demonstrate that it is sufficiently safe to do so.6
Footnote 6: Throughout the paper, we changed the title of this item to "no unsafe open-sourcing" to avoid misconceptions.
38. **Researcher model access.** AGI labs should give independent researchers API access to deployed models.
39. **API access to powerful models.** AGI labs should strongly consider only deploying powerful models via an application programming interface (API).
40. **Avoiding hype.** AGI labs should avoid releasing powerful models in a way that is likely to create hype around AGI (e.g. by overstating results or announcing them in attention-grabbing ways).
41. **Gradual scaling.** AGI labs should only gradually increase the amount of compute used for their largest training runs.
42. **Treat updates similarly to new models.** AGI labs should treat significant updates to a deployed model (e.g. additional fine-tuning) similarly to its initial development and deployment. In particular, they should repeat the pre-deployment risk assessment.
43. **Pre-registration of large training runs.** AGI labs should register upcoming training runs above a certain size with an appropriate state actor.
44. **Enterprise risk management.** AGI labs should implement an enterprise risk management (ERM) framework (e.g. the NIST AI Risk Management Framework or ISO 31000). This framework should be tailored to an AGI context and primarily focus on the lab's impact on society.
45. **Treat internal deployments similarly to external deployments.** AGI labs should treat internal deployments (e.g. using models for writing code) similarly to external deployments. In particular, they should perform a pre-deployment risk assessment.* 7
Footnote 7: Labeled as “Internal deployments = external deployments” in some figures due to space constraints.
46. **Notify a state actor before deployment.** AGI labs should notify appropriate state actors before deploying powerful models.
47. **Notify affected parties.** AGI labs should notify parties who will be negatively affected by a powerful model before deploying it.*
48. **Inter-lab scrutiny.** AGI labs should allow researchers from other labs to scrutinize powerful models before deployment.*
49. **Avoid capabilities jumps.** AGI labs should not deploy models that are much more capable than any existing models.*
50. **Notify other labs.** AGI labs should notify other labs before deploying powerful models.*
## Appendix C List of suggested practices
Below, we list additional AGI safety and governance practices that respondents suggested. To ensure anonymity, we have rephrased each of the suggested practices in our own words and edited them into the same structure as the statements used in our survey ("AGI labs should...").
1. AGI labs should participate in democratic and participatory governance processes (e.g. citizen assemblies). Issues could include the level of risk that is acceptable and preferences for different governance models.
2. AGI labs should engage the public and civil society groups in determining what risks should be considered and what level of risk is acceptable.
3. AGI labs should contribute to improving AI and AGI literacy among the public and policymakers.
4. AGI labs should be transparent about where training data comes from.
5. AGI labs should use system cards.
6. AGI labs should report what safety and alignment techniques they used to develop a model.
7. AGI labs should publish their ethics and safety research.
8. AGI labs should make capability demonstrations available to policymakers and the public before deployment.
9. AGI labs should have written deployment plans of what they would do with an AGI or other advanced and powerful AI system.
10. AGI labs should publicly predict the frequency of harmful AI incidents.
11. AGI labs should generate realistic catastrophic risk models for advanced AI.
12. AGI labs should track and report on their models' capability to automate AI research and development.
13. AGI labs should engage in efforts to systematically forecast future risks and benefits of the technology they build.
14. AGI labs should generate realistic catastrophic risk models for advanced AI, potentially making these public or using them to raise awareness.
15. AGI labs should publish an annual report where they present the predicted and actual impacts of their work, along with the evidence and assumptions these are based on.
16. AGI labs should pre-register big training runs including the amount of compute used, the data used for training, and how many parameters the model will have.
17. AGI labs should engage in employee and investor education and awareness on the risks of advanced AI systems and on potential mitigating procedures that trade off profit for societal benefit.
18. AGI labs should adequately protect whistleblowers.
19. AGI labs should have an onboarding process for managers and new employees that involves content explaining how the organization believes a responsible AGI developer would behave and how they are attempting to meet that standard.
20. AGI labs should promote a culture that encourages internal deliberation and critique, and evaluate whether they are succeeding in building such a culture.
21. AGI labs should have dedicated programs to improve the diversity, equity, and inclusion of their talent.
22. AGI labs should have independent safety and ethics advisory boards to help with certain decisions.
23. AGI labs should have internal review boards.
24. AGI labs should be set up such that their governance structures permit them to tradeoff profits with societal benefit.
25. AGI labs should have merge-and-assist clauses.
26. AGI labs should report to an international non-governmental organization (INGO) that is publicly committed to human rights and democratic values.
27. AGI labs should have an independent board of directors with technical AI safety expertise who have the mandate to put the benefits for society above profit and shareholder value.
28. AGI labs should maintain a viable way to divert from building AGI (e.g. to build narrower models and applications), in case building AGI will not be possible to do safely.
29. AGI labs should use the Three Lines of Defense risk management framework.
30. AGI labs should take measures to avoid being sued for trading off profits with societal benefit.
31. AGI labs should be subject to mandatory interpretability standards.
32. AGI labs should conduct evaluation during training, being prepared to stop and analyze any training run that looks potentially risky or harmful.
33. AGI labs should save logs of interactions with the AI system.
34. AGI labs should consider caps on model size.
35. AGI labs should be forced to have systems that consist of ensembles of capped size models instead of one increasingly large model.
36. AGI labs should ensure that AI systems in an ensemble communicate in English and that these communications are logged for future analysis if an incident occurs.
37. AGI labs should limit API access to approved and vetted applications to foreclose potential misuse and dual use risks.
38. AGI labs should conduct simulated cyber attacks on their systems to check for vulnerabilities.
39. AGI labs should have internal controls and processes that prevent a single person or group being able to deploy an advanced AI system when governance mechanisms have found this to be potentially harmful or illegal.
40. AGI labs should disclose the data and labor practices involved in the pre-training and training of powerful AI systems.
41. AGI labs should disclose the environmental costs of developing and deploying powerful AI systems.
42. AGI labs should take measures to limit potential harms that could arise from AI systems being sentient or deserving moral patienthood.
43. AGI labs should coordinate on self-regulatory best practices they use for safety.
44. AGI labs should coordinate on best practices for external auditing and red-teaming.
45. AGI labs should coordinate on best practices for incident reporting.
46. AGI labs should report cluster sizes and training plans to other AGI labs to avoid incorrect perceptions of current capabilities and compute resources.
47. AGI labs should have feedback mechanisms with communities that are affected by their models.
48. AGI labs should have ethical principles and set out "red lines" for their work in advance.
49. AGI labs should incorporate a privacy-preserving machine learning (PPML) approach to auditing and governing AI models.
50. AGI labs should use responsible AI licenses (RAIL) and engage in other practices that allow for degrees of openness on the spectrum from closed to open.
## Appendix D Additional figures
Figure 7: **Mean agreement of AGI lab respondents and all other respondents \(|\)** The figure shows the mean agreement and 95% confidence interval for each of the 50 practices.
Figure 8: **Mean agreement for men and women** | The figure shows the mean agreement and 95% confidence interval for each of the 50 practices.
## Appendix E Additional tables
Table 4: **Statement statistics: All respondents** \(|\) Percentage of respondents selecting each response option (from "strongly disagree" to "strongly agree", plus "I don't know"), total disagreement, total agreement, and sample size (\(n\)) for each of the fifty items. The items are ordered by mean agreement score across all respondents.
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline & \multicolumn{2}{c}{**Mean**} & \multicolumn{2}{c}{**Standard error**} & \multicolumn{1}{c}{**\#**} \\ \cline{2-7}
**AGI safety and governance practice** & **AGI Lab** & **Everyone else** & **AGI Lab** & **Everyone else** & **AGI Lab** & **Everyone else** \\ \hline
Pre-deployment risk assessment & 1.96 & 1.82 & 0.04 & 0.11 & 25 & 22 \\
Dangerous capabilities evaluations & 1.92 & 1.91 & 0.06 & 0.06 & 24 & 22 \\
Third-party model audits & 1.80 & 1.82 & 0.13 & 0.08 & 25 & 22 \\
Safety restrictions & 1.92 & 1.73 & 0.06 & 0.12 & 25 & 22 \\
Red teaming & 1.76 & 1.73 & 0.17 & 0.10 & 25 & 22 \\
Monitor systems and their uses & 1.72 & 1.82 & 0.14 & 0.08 & 25 & 22 \\
Alignment techniques & 1.72 & 1.73 & 0.18 & 0.10 & 25 & 22 \\
Security incident response plan & 1.77 & 1.86 & 0.09 & 0.10 & 22 & 14 \\
Post-deployment evaluations & 1.81 & 1.54 & 0.09 & 0.24 & 21 & 13 \\
Report safety incidents & 1.71 & 1.68 & 0.18 & 0.10 & 24 & 22 \\
Safety vs. capabilities & 1.68 & 1.73 & 0.13 & 0.10 & 25 & 22 \\
Internal review before publication & 1.68 & 1.64 & 0.17 & 0.10 & 25 & 22 \\
Pre-training risk assessment & 1.56 & 1.73 & 0.17 & 0.19 & 25 & 22 \\
Emergency response plan & 1.68 & 1.59 & 0.15 & 0.11 & 25 & 22 \\
Protection against espionage & 1.68 & 1.62 & 0.19 & 0.14 & 22 & 13 \\
Pausing training of dangerous models & 1.36 & 1.91 & 0.21 & 0.06 & 25 & 22 \\
Increasing level of external scrutiny & 1.62 & 1.59 & 0.16 & 0.11 & 24 & 22 \\
Publish alignment strategy & 1.63 & 1.50 & 0.11 & 0.14 & 19 & 14 \\
Bug bounty programs & 1.62 & 1.36 & 0.15 & 0.12 & 24 & 22 \\
Industry sharing of security information & 1.55 & 1.46 & 0.11 & 0.22 & 22 & 13 \\
Security standards & 1.27 & 1.79 & 0.30 & 0.10 & 22 & 19 \\
Publish results of internal risk assessments & 1.65 & 1.17 & 0.11 & 0.34 & 20 & 12 \\
Dual control & 1.70 & 1.33 & 0.11 & 0.19 & 20 & 12 \\
Publish results of external scrutiny & 1.59 & 1.45 & 0.11 & 0.21 & 22 & 11 \\
Military-grade information security & 1.28 & 1.59 & 0.25 & 0.13 & 25 & 22 \\
Board risk committee & 1.61 & 1.08 & 0.12 & 0.45 & 18 & 12 \\
Chief risk officer & 1.53 & 1.25 & 0.16 & 0.33 & 19 & 12 \\
Statement about governance structure & 1.57 & 1.31 & 0.13 & 0.26 & 21 & 13 \\
Publish views about AGI risk & 1.39 & 1.32 & 0.22 & 0.12 & 23 & 22 \\
KYC screening & 1.43 & 1.38 & 0.15 & 0.18 & 21 & 13 \\
Third-party governance audits & 1.30 & 1.42 & 0.22 & 0.19 & 20 & 12 \\
Background checks & 1.62 & 1.17 & 0.13 & 0.27 & 21 & 12 \\
Model containment & 1.25 & 1.41 & 0.24 & 0.14 & 24 & 22 \\
Staged deployment & 1.32 & 1.36 & 0.22 & 0.12 & 25 & 22 \\
Tracking model weights & 1.35 & 1.38 & 0.17 & 0.27 & 20 & 13 \\
Internal audit & 1.12 & 1.55 & 0.23 & 0.13 & 25 & 22 \\
No unsafe open-sourcing & 1.24 & 1.32 & 0.25 & 0.19 & 25 & 22 \\
Researcher model access & 1.38 & 0.90 & 0.19 & 0.18 & 24 & 21 \\
API access to powerful models & 1.08 & 1.47 & 0.25 & 0.14 & 24 & 19 \\
Avoiding hype & 1.24 & 1.09 & 0.14 & 0.16 & 25 & 22 \\
Gradual scaling & 1.13 & 1.23 & 0.25 & 0.13 & 23 & 22 \\
Treat updates similarly to new models & 1.08 & 1.23 & 0.24 & 0.17 & 24 & 22 \\
Pre-registration of large training runs & 0.87 & 1.32 & 0.26 & 0.18 & 23 & 22 \\
Enterprise risk management & 0.71 & 1.24 & 0.33 & 0.16 & 17 & 17 \\
Treat internal deployments similarly to external deployments & 1.10 & 0.83 & 0.23 & 0.21 & 20 & 12 \\
Notify a state actor before deployment & 0.55 & 1.23 & 0.25 & 0.17 & 22 & 22 \\
Notify affected parties & 0.80 & 1.09 & 0.26 & 0.31 & 15 & 11 \\
Inter-lab scrutiny & 1.22 & 0.25 & 0.13 & 0.35 & 18 & 12 \\
Avoid capabilities jumps & 0.89 & 0.42 & 0.27 & 0.36 & 18 & 12 \\
Notify other labs & 0.72 & 0.36 & 0.19 & 0.28 & 18 & 11 \\ \hline
\hline \end{tabular}
\end{table}
Table 5: **Statement Statistics: By sector (AGI labs, all other respondents)** \(|\) Mean, standard error and sample size (\(n\)) for each of the fifty items divided by respondents’ sector of work. Here we separate out AGI lab respondents from all other respondents, to correspond with Figure 7. The items are ordered by mean agreement score across all respondents.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**Mean**} & \multicolumn{2}{c}{**Standard error**} & \multicolumn{2}{c}{**\#**} \\ \cline{2-7}
**AGI safety and governance practice** & **Men** & **Women** & **Men** & **Women** & **Men** & **Women** \\ \hline
Pre-deployment risk assessment & 1.94 & 1.86 & 0.04 & 0.14 & 32 & 14 \\
Dangerous capabilities evaluations & 1.90 & 2.00 & 0.05 & 0.00 & 31 & 14 \\
Third-party model audits & 1.81 & 1.93 & 0.07 & 0.07 & 32 & 14 \\
Safety restrictions & 1.75 & 2.00 & 0.09 & 0.00 & 32 & 14 \\
Red teaming & 1.84 & 1.79 & 0.07 & 0.11 & 32 & 14 \\
Monitor systems and their uses & 1.62 & 2.00 & 0.12 & 0.00 & 32 & 14 \\
Alignment techniques & 1.62 & 2.00 & 0.15 & 0.00 & 32 & 14 \\
Security incident response plan & 1.75 & 1.92 & 0.09 & 0.08 & 24 & 12 \\
Post-deployment evaluations & 1.70 & 2.00 & 0.10 & 0.00 & 23 & 11 \\
Report safety incidents & 1.81 & 1.79 & 0.07 & 0.11 & 31 & 14 \\
Safety vs. capabilities & 1.59 & 1.86 & 0.11 & 0.10 & 32 & 14 \\
Internal review before publication & 1.59 & 1.86 & 0.14 & 0.10 & 32 & 14 \\
Pre-training risk assessment & 1.78 & 1.64 & 0.11 & 0.23 & 32 & 14 \\
Emergency response plan & 1.66 & 1.71 & 0.10 & 0.13 & 32 & 14 \\
Protection against espionage & 1.57 & 1.75 & 0.19 & 0.13 & 23 & 12 \\
Pausing training of dangerous models & 1.75 & 1.64 & 0.09 & 0.23 & 32 & 14 \\
Increasing level of external scrutiny & 1.56 & 1.79 & 0.10 & 0.11 & 32 & 14 \\
Publish alignment strategy & 1.43 & 1.60 & 0.12 & 0.16 & 23 & 10 \\
Bug bounty programs & 1.53 & 1.57 & 0.10 & 0.14 & 32 & 14 \\
Industry sharing of security information & 1.35 & 1.73 & 0.13 & 0.14 & 23 & 11 \\
Security standards & 1.44 & 1.75 & 0.22 & 0.13 & 27 & 12 \\
Publish results of internal risk assessments & 1.33 & 1.82 & 0.14 & 0.12 & 21 & 11 \\
Dual control & 1.50 & 1.50 & 0.13 & 0.22 & 22 & 10 \\
Publish results of external scrutiny & 1.36 & 1.73 & 0.12 & 0.14 & 22 & 11 \\
Military-grade information security & 1.47 & 1.36 & 0.15 & 0.25 & 32 & 14 \\
Board risk committee & 1.40 & 1.60 & 0.22 & 0.16 & 20 & 10 \\
Chief risk officer & 1.58 & 1.25 & 0.16 & 0.18 & 19 & 12 \\
Statement about governance structure & 1.43 & 1.73 & 0.14 & 0.14 & 23 & 11 \\
Publish views about AGI risk & 1.33 & 1.50 & 0.12 & 0.17 & 30 & 14 \\
KYC screening & 1.23 & 1.50 & 0.16 & 0.15 & 22 & 12 \\
Third-party governance audits & 1.48 & 1.33 & 0.13 & 0.19 & 21 & 12 \\
Background checks & 1.42 & 1.22 & 0.16 & 0.28 & 24 & 9 \\
Model containment & 1.39 & 1.36 & 0.16 & 0.20 & 31 & 14 \\
Staged deployment & 1.34 & 1.57 & 0.15 & 0.17 & 32 & 14 \\
Tracking model weights & 1.27 & 1.33 & 0.18 & 0.26 & 22 & 12 \\
Internal audit & 1.28 & 1.43 & 0.16 & 0.17 & 32 & 14 \\
No unsafe open-sourcing & 1.38 & 1.07 & 0.18 & 0.32 & 32 & 14 \\
Researcher model access & 1.13 & 1.50 & 0.16 & 0.14 & 30 & 14 \\
API access to powerful models & 1.24 & 1.31 & 0.19 & 0.24 & 29 & 13 \\
Avoiding hype & 1.06 & 1.21 & 0.13 & 0.19 & 32 & 14 \\
Gradual scaling & 1.13 & 1.57 & 0.12 & 0.17 & 30 & 14 \\
Treat updates similarly to new models & 1.00 & 1.36 & 0.17 & 0.29 & 31 & 14 \\
Pre-registration of large training runs & 1.13 & 1.36 & 0.15 & 0.25 & 31 & 14 \\
Enterprise risk management & 0.90 & 1.23 & 0.24 & 0.20 & 21 & 13 \\
Treat internal deployments similarly to external deployments & 0.95 & 1.30 & 0.17 & 0.26 & 22 & 10 \\
Notify a state actor before deployment & 0.93 & 1.14 & 0.18 & 0.21 & 30 & 14 \\
Notify affected parties & 1.07 & 1.10 & 0.18 & 0.31 & 15 & 10 \\
Inter-lab scrutiny & 0.72 & 0.82 & 0.27 & 0.23 & 18 & 11 \\
Avoid capabilities jumps & 0.62 & 1.00 & 0.20 & 0.44 & 21 & 9 \\
Notify other labs & 0.40 & 0.89 & 0.21 & 0.26 & 20 & 9 \\ \hline \hline
\end{tabular}
\end{table}
Table 6: **Statement Statistics: By gender \(|\)** Mean, standard error and sample size (\(n\)) for each of the fifty items divided by respondents’ gender. These represent the two groups with sufficiently high sample sizes for analyses of group differences. The items are ordered by mean agreement score across all respondents.
\begin{table}
\begin{tabular}{l l r r} \hline \hline
**Sector** & **Sector subgroup** & **Percentage of total sample** & **Raw frequency** \\ \hline AGI lab & & 43.9\% & 25 \\ \hline Academia & & 22.8\% & 13 \\ \hline Civil society & & \\ & Think tank & 10.5\% & 6 \\ & Nonprofit & 12.3\% & 7 \\ & organization & \\ \hline Other & & \\ & Other tech company & 1.8\% & 1 \\ & Government & 0\% & 0 \\ & Consulting firm & 1.8\% & 1 \\ & Other & 1.8\% & 1 \\ \hline Prefer not to say & & 5.3\% & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Demographics of sample: Sector \(|\)** Percentage and frequency of respondents by sector. Note that respondents could report more than one sector
\begin{table}
\begin{tabular}{l l l} \hline \hline & **Raw frequency** & **Percentage of total sample** \\
**Gender** & & \\ \hline Man & 32 & 62.7\% \\ Woman & 14 & 27.5\% \\ Prefer not to say & 5 & 9.8\% \\ Another gender & 0 & 0.0\% \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Demographics of sample: Gender \(|\)** Percentage and frequency of respondents by gender
Additional analyses
#### Deviations from the pre-registered pre-analysis plan
We pre-registered the survey on OSF ([https://osf.io/s7vhr](https://osf.io/s7vhr)). We generally followed the pre-analysis plan. We present several additional top-line statistics that were not noted in the pre-analysis plan, such as how many statements received a majority of agreement responses. We did not conduct the pre-registered regression analyses to test for the effect of sector or gender due to the small sample size. We ran the pre-registered Mann-Whitney U and Chi-squared tests instead, with appropriate correction for multiple comparisons where applicable (using the Holm-Bonferroni correction). We did not run the Kolmogorov-Smirnov tests, since the Mann-Whitney U-test was more appropriate for the observed distributions.
#### Cluster analysis
In an attempt to discover groups of response patterns within the population, we attempted to cluster respondents using their pattern of responses across questions and their reported demographic data. In line with our pre-analysis plan, we conducted k-means clustering on the dataset of responses and demographic labels (for the variables "gender" and "sector"). The aim of this analysis is to discover high-dimensional clusters or groups of response patterns within the population of respondents, and to visualize these in a more interpretable, low-dimensional manner. To achieve this, we performed a number of standard data pre-processing steps for dimensionality reduction techniques [42].
We first pre-processed the data to remove respondents with missing demographic data. The gender and sector demographic variables were then transformed into binary features with one-hot encoding. In the final data pre-processing step, we standardized the data to ensure that the variables were approximately equally scaled (this was done using the StandardScaler functionality from the library sklearn). To partition the processed data for visualization, we employed the standard k-means clustering algorithm. In this algorithm, the number of clusters is a hyperparameter, which must be estimated or inferred. To select the optimal number of clusters in a principled manner, we employed two accepted methods - the Elbow method and silhouette analysis [67] - which evaluated the inertia and silhouette score of the model for a range of clusters \(n\in\{2,3,4,5,6,7,8,9,10\}\), where \(n\) represents the number of clusters.
Based on this analysis, we found the optimal number of clusters to be four, and performed k-means clustering with four clusters accordingly. To visualize this clustered data, we first reduced the dimensionality of the embedded data to two dimensions (that is, two axes for visualization) using principal component analysis (PCA), and then visualized the results using a scatter plot. We found the clusters to be poorly separated, implying that it is difficult to represent groups in this dataset in a low-dimensional manner (in support of this, the Elbow error metric was relatively high for all given numbers of clusters \(n\in\{2,3,4,5,6,7,8,9,10\}\)). This could be due to a number of reasons: the relatively small sample of this population, poor scaling of the variables of the data (as discussed above), or the presence of non-convex clusters.
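For illustration, the pre-processing, clustering, and projection steps described above can be sketched as follows (a minimal scikit-learn reconstruction, not the OSF code itself; the data-frame layout, column names, and the choice of selecting the number of clusters by silhouette score are assumptions):

```python
# Illustrative sketch of the described pipeline (not the authors' OSF code).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

def cluster_and_project(df, demographic_cols=("gender", "sector"), n_range=range(2, 11)):
    # One-hot encode the categorical demographics, keep response columns as-is,
    # then standardize so all variables are approximately equally scaled.
    X = pd.get_dummies(df, columns=list(demographic_cols)).astype(float)
    X = StandardScaler().fit_transform(X)

    # Elbow (inertia) and silhouette analysis over candidate numbers of clusters.
    scores = {}
    for n in n_range:
        km = KMeans(n_clusters=n, n_init=10, random_state=0).fit(X)
        scores[n] = (km.inertia_, silhouette_score(X, km.labels_))

    best_n = max(scores, key=lambda n: scores[n][1])  # e.g. highest silhouette
    labels = KMeans(n_clusters=best_n, n_init=10, random_state=0).fit_predict(X)

    # Reduce to two dimensions with PCA for a scatter-plot visualization.
    coords = PCA(n_components=2).fit_transform(X)
    return labels, coords, scores
```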
All of the code for this analysis, along with some instructive visualizations, can be found on OSF ([https://osf.io/s7vhr](https://osf.io/s7vhr)). |
2307.09707 | Improved Label Design for Timing Synchronization in OFDM Systems against
Multi-path Uncertainty | Timing synchronization (TS) is vital for orthogonal frequency division
multiplexing (OFDM) systems, which makes the discrete Fourier transform (DFT)
window start at the inter-symbol-interference (ISI)-free region. However, the
multi-path uncertainty in wireless communication scenarios degrades the TS
correctness. To alleviate this degradation, we propose a learning-based TS
method enhanced by improving the design of training label. In the proposed
method, the classic cross-correlator extracts the initial TS feature for
benefiting the following machine learning. Wherein, the network architecture
unfolds one classic cross-correlation process. Against the multi-path
uncertainty, a novel training label is designed by representing the ISI-free
region and especially highlighting its approximate midpoint. Therein, the
closer to the region boundary of ISI-free the smaller label values are set,
expecting to locate the maximum network output in ISI-free region with a high
probability. Then, to guarantee the correctness of labeling, we exploit the
priori information of line-of-sight (LOS) to form a LOS-aided labeling.
Numerical results confirm that, the proposed training label effectively
enhances the correctness of the proposed TS learner against the multi-path
uncertainty. | Chaojin Qing, Shuhai Tang, Na Yang, Chuangui Rao, Jiafan Wang | 2023-07-19T01:38:20Z | http://arxiv.org/abs/2307.09707v1 | # Improved Label Design for Timing Synchronization in OFDM Systems against Multi-path Uncertainty
###### Abstract
Timing synchronization (TS) is vital for orthogonal frequency division multiplexing (OFDM) systems, which makes the discrete Fourier transform (DFT) window start at the inter-symbol-interference (ISI)-free region. However, the multi-path uncertainty in wireless communication scenarios degrades the TS correctness. To alleviate this degradation, we propose a learning-based TS method enhanced by improving the design of the training label. In the proposed method, the classic cross-correlator extracts the initial TS feature for benefiting the following machine learning. Wherein, the network architecture unfolds one classic cross-correlation process. Against the multi-path uncertainty, a novel training label is designed by representing the ISI-free region and especially highlighting its approximate midpoint. Therein, the closer to the ISI-free region boundary, the smaller the label values are set, expecting to locate the maximum network output in the ISI-free region with a high probability. Then, to guarantee the correctness of labeling, we exploit the priori information of line-of-sight (LOS) to form a LOS-aided labeling. Numerical results confirm that the proposed training label effectively enhances the correctness of the proposed TS learner against the multi-path uncertainty.
Timing synchronization, OFDM, label design, multi-path uncertainty, machine learning.
## I Introduction
Orthogonal frequency division multiplexing (OFDM) technology has been widely applied in modern wireless and mobile communication systems, e.g., the fifth generation (5G) system [1]. At an OFDM receiver, the correct timing synchronization (TS) locates the start of the discrete Fourier transform (DFT) window per OFDM symbol within its inter-symbol-interference (ISI)-free region [2]. However, this task is hard to complete in multi-path propagation scenarios. In fact, wireless propagation scenarios exhibit considerable multi-path uncertainty, e.g., uncertain multi-path delays and complex path gains, etc. [3]. Under such uncertainty, the error probability of TS is increased.
To suppress the multi-path uncertainty, a joint TS and channel estimation method via iterative interference cancellation is proposed in [4]. However, this iterative processing requires high computational complexity and incurs a large processing delay. Machine learning, due to its powerful ability in tackling nonlinear problems [5], can be an alternative way to improve the TS correctness against the impact of multi-path interference. The authors in [6] investigate a neural network (NN)-based signal detection with impacts of TS error and multi-path uncertainty, yet this study does not focus on the TS task. In a recent study [7], a convolutional NN (CNN)-based TS is investigated, which features a specially designed network architecture to improve the TS correctness. However, this method takes a long time for training and is not conducive to practical application. In [8], the residual timing offset estimation is accomplished by assuming the achievement of coarse TS and channel estimation, and therefore it omits the impacts of multi-path uncertainty. Since training labels to be learned are usually helpful to improve the model training without fine-tuning [9], the authors in [10] improve the TS correctness against multi-path interference by specially designing training labels. However, the uncertainty of multi-path delay is hardly considered in [10], which greatly affects the correctness of label design. Thus, the incorrect labeling limits the improvement of TS correctness provided by [10]. To the best of our knowledge, there is limited literature addressing this issue by designing the training label against uncertain multi-path delay.
In this paper, a learning-based TS method aided by improved label design is proposed, which aims to improve its adaptability against the multi-path uncertainty. Specifically, we design a novel training label by assigning nonzero values to the label values indexed in the ISI-free region while setting other label values to zeros. In the designed training label, the values of these labels are set smaller when they are closer to the ISI-free region boundary, which highlights the ISI-free middle region to reduce the risk of timing error. Nevertheless, the uncertain multi-path delay may result in a variation of the ISI-free region with environmental changes, leading to the incorrect labeling and affecting the TS correctness. To overcome this issue, we further relax the labeling restrictions to line-of-sight (LOS) cases to reserve a sufficient region for accommodating the uncertain multi-path delay for non-line-of-sight (NLOS) cases. This avoids the highlighted midpoint being outside the ISI-free region and thus forms the LOS-based priori information. Without excessively increasing the computational complexity, we combine a single-hidden-layer back-propagation NN (BPNN) with the classic TS processing to form the learning-based TS (the so-called TS learner), in which the employed BPNN only unfolds one cross-correlation process. Numerical results show that the proposed training label can effectively enhance the adaptability and correctness of the TS learner against the multi-path uncertainty.
_Notations:_\([\cdot]^{T}\), \(\mathrm{E}\{\cdot\}\), \(|\cdot|\), \(\lceil\cdot\rceil\), and \((\cdot)^{*}\) denote the operations of transpose, statistical expectation, absolute value, ceiling, and complex conjugate, respectively.
## II System Model
We consider an OFDM system with \(N\) subcarriers, as shown in Fig. 1. At the transmitter, a transmitted OFDM symbol is expressed as
\[s\left(n\right)=\left\{\begin{array}{ll}\sum\limits_{k=0}^{N-1}d\left(k\right) e^{j\frac{2\pi k}{N}n},&0\leq n\leq N\\ s\left(n+N\right),&-N_{g}\leq n<0\end{array}\right., \tag{1}\]
where \(d(k)\) represents the data symbol or the element of training sequence modulated on the \(k\)th subcarrier. \(N_{g}\) is the length of cyclic prefix (CP). In (1), \(\mathrm{E}\{|s(n)|^{2}\}=P_{t}\) with \(P_{t}\) being the transmitted signal power. After transmitting \(s(n)\) over multi-path channel [11], the received sample is given by
\[y(n)=e^{j\frac{2\pi\varepsilon n}{N}}\sum\limits_{l=1}^{L}h_{l}s(n-\theta-\tau_{l})+w(n), \tag{2}\]
where \(\theta\) denotes the timing offset to be estimated, \(\varepsilon\) is the normalized carrier frequency offset, and \(w(n)\) stands for the additive white Gaussian noise with zero mean and variance \(\sigma^{2}\). In (2), \(h_{l}\) and \(\tau_{l}\) are the complex gain and normalized multi-path delay of the \(l\)-th resolvable path, respectively. Besides, the multi-path delays are shorter than the CP length to prevent ISI, i.e., \(N_{g}>\tau_{l}\), and \(\tau_{l}=l-1\) is considered in this paper.
By considering a \(N_{w}\)-length observed interval, \(N_{w}\)-samples of the received samples are buffered as the observed vector \(\mathbf{y}\in\mathbb{C}^{N_{w}\times 1}\), i.e.,
\[\mathbf{y}=\left[y\left(0\right),y\left(1\right),\cdots,y\left(n\right),\cdots,y\left(N_{w}-1\right)\right]^{T}, \tag{3}\]
where \(N_{w}=2N+N_{g}\) aims to observe a complete training sequence, and correspondingly, the length of searching range for unknown \(\theta\) is \(N_{s}=N_{w}-N=N+N_{g}\). In Section III, the proposed TS method for estimating timing offset is elaborated, in which the estimation value of \(\theta\) is denoted as \(\widehat{\theta}\).
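For illustration, the signal model in (1)-(3) can be simulated with a short NumPy sketch; the parameter values, the integer-valued delays, and the noise generation details below are illustrative assumptions rather than the exact simulation settings of this letter.

```python
# Minimal NumPy sketch of (1)-(2); parameter values are illustrative only.
import numpy as np

def ofdm_symbol(d, N=128, Ng=32):
    # IDFT of the frequency-domain symbols d(k), then cyclic-prefix extension, cf. (1).
    s = np.fft.ifft(d, N) * N
    return np.concatenate([s[-Ng:], s])

def received(s_tx, h, tau, theta, eps, N, snr_db, rng=np.random.default_rng()):
    # y(n) = e^{j 2 pi eps n / N} * sum_l h_l s(n - theta - tau_l) + w(n), cf. (2).
    total = len(s_tx) + theta + int(max(tau))
    y = np.zeros(total, dtype=complex)
    for h_l, tau_l in zip(h, tau):
        start = theta + int(tau_l)
        y[start:start + len(s_tx)] += h_l * s_tx
    y *= np.exp(1j * 2 * np.pi * eps * np.arange(total) / N)
    noise_var = np.mean(np.abs(s_tx) ** 2) / 10 ** (snr_db / 10)
    noise = rng.standard_normal(total) + 1j * rng.standard_normal(total)
    return y + np.sqrt(noise_var / 2) * noise
```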
## III Learning-based TS aided by Label Designing
### _Improvement of Label Designing_
By paying special attention to the uncertain multi-path delay, a novel training label \(\mathbf{t}\in\mathbb{R}^{N_{s}\times 1}\) is designed to enhance the TS learner, i.e.,
\[\mathbf{t}=\left[\underbrace{0,\cdots,0}_{\theta+\tau_{L}},\underbrace{\zeta \left(1\right),\cdots,\zeta\left(D\right)}_{\text{ISI-free region}},\underbrace{0, \cdots,0}_{N_{s}-\theta-N_{g}-1}\right]^{T}, \tag{4}\]
where \(D=N_{g}-\tau_{L}+1\), denoting the length of ISI-free region. For convenience, a discrete interval for searching \(\theta\) is denoted as \(\Omega\) and defined as that \(\Omega:\left\{m\left|0\leq m\leq N_{s}-1,\forall m\in\mathbb{Z}\right.\right\}\). Therein, the ISI-free region (denoted as \(\Omega_{\text{free}}\)) is defined as that \(\Omega_{\text{free}}:\left\{m\left|\tau_{L}\leq m-\theta\leq N_{g}\right.\right\}\), and the ISI region, denoted as \(\Omega_{\text{ISI}}\), corresponds to a complementary set of \(\Omega_{\text{free}}\) in \(\Omega\), i.e., \(\Omega_{\text{ISI}}\cap\Omega_{\text{free}}=\emptyset\) and \(\Omega_{\text{ISI}}\cup\Omega_{\text{free}}=\Omega\). For \(\forall m\in\Omega_{\text{free}}\), the \(m\)-th entry in \(\mathbf{t}\) is denoted as \(\zeta\left(d\right)\), \(d=1,2,\cdots,D\), i.e.,
\[\zeta\left(d\right)=\left\{\begin{array}{ll}d,&1\leq d<\lceil\left(D+1\right)/2\rceil\\ D-d+1,&\lceil\left(D+1\right)/2\rceil\leq d\leq D\end{array}\right., \tag{5}\]
where each value of \(\zeta(d)\) satisfies that the closer to the leftmost or rightmost boundary of \(\Omega_{\text{free}}\), the smaller values are set. Different from [10], the designed \(\mathbf{t}\) in (4) not only represents \(\Omega_{\text{free}}\), but also highlights its middle region. Since the minimal training loss can be achieved after BPNN training [12], the maximum value of network output will be concentrated nearby the midpoint of \(\Omega_{\text{free}}\) with a high probability. That is, the probability of correct TS is increased.
In (4), an approximate midpoint of \(\Omega_{\text{free}}\), denoted as \(\mu=\theta+\lceil\left(\tau_{L}+N_{g}\right)/2\rceil\), is considered as the ideal case of correct labeling for (4), i.e., \(\mu\in\Omega_{\text{free}}\). However, due to the uncertain multi-path delay in NLOS cases, \(\mu\notin\Omega_{\text{free}}\) may appear, which results in incorrect labeling. If \(\mu\notin\Omega_{\text{free}}\) occurs in (4), the trained TS learner will learn the incorrect label, which degrades its TS correctness significantly. To tackle this issue, the labeling restriction is relaxed to LOS cases to reserve a sufficient region for accommodating the uncertain multi-path delays of NLOS cases.
### _Labeling by LOS-based Priori Information_
By separately denoting \(\xi\), \(G_{r}\), \(G_{t}\), and \(\lambda\) as the propagation distance, received antenna gain, transmitted antenna gain, and wavelength, the received signal power in a LOS scenario, denoted as \(P_{r}(\xi)\), is given by \(P_{r}(\xi)=\frac{\lambda^{2}}{(4\pi\xi)^{2}}P_{t}G_{t}G_{r}\)[13]. With the given transmitted power \(P_{t}\) and the resolvable received power (i.e., a constant value denoted by \(P_{\text{res}}\)), the maximum propagation distance in a LOS scenario (denoted as \(\xi_{\text{LOS}}\)) is \(\xi_{\text{LOS}}=\max_{\xi}\left\{P_{r}\left(\xi\right)\geq P_{\text{res}}\right\}\). Therein, the resolvable received power is defined as the received power of the minimal resolvable path during the synchronization phase [14]. For \(\forall\xi\) with a given \(P_{t}\), relative to LOS scenarios, \(P_{r}\left(\xi\right)\) of a NLOS scenario is inevitably reduced due to obstacles [13]. Accordingly, with a given \(P_{t}\), the maximum propagation distance in a NLOS scenario, denoted as \(\xi_{\text{NLOS}}\), satisfies \(\xi_{\text{NLOS}}\leq\xi_{\text{LOS}}\). Correspondingly, \(\tau_{L}\leq\frac{\xi_{\text{LOS}}}{c\cdot T}\) is satisfied, where \(T\) and \(c\) denote the sampling interval and light speed, respectively. Usually, \(\tau_{L}\) of a NLOS scenario is difficult to obtain due to the uncertain multi-path delay, while \(\xi_{\text{LOS}}\) seems to be much easier to obtain, so that \(\frac{\xi_{\text{LOS}}}{c\cdot T}\) can be easily captured. Without loss of generality, \(\frac{\xi_{\text{LOS}}}{c\cdot T}<N_{g}\) is assumed. From [15], the CP length is much greater than the maximum propagation delay. Thus, we explore the prior information of \(\frac{\xi_{\text{LOS}}}{c\cdot T}<N_{g}\) to improve the TS correctness. In fact, this prior information is a loose upper bound, which is only utilized to reflect the heuristic idea of this letter. With the development of integrated sensing and communication (ISAC), the sensing technology (e.g., [16]) can be employed to obtain a tight bound to further improve the TS correctness. Then, in a real propagation environment, \(\tau_{L}\leq\frac{\xi_{\text{LOS}}}{c\cdot T}<N_{g}\) can be satisfied, yielding
\[\theta+\tau_{L}\leq\theta+\frac{\xi_{\text{LOS}}}{c\cdot T}<\theta+N_{g}. \tag{6}\]
Fig. 1: System model.
The above (6) can be viewed as the LOS-based priori information, obtaining a narrowed interval of ISI-free region, i.e.,
\[\Omega_{\text{affine}}:\left\{m|\frac{\xi_{\text{los}}}{c\cdot T}\leq m-\theta<N_{g}\right\}\ s.t.\ \Omega_{\text{affine}}\subseteq\Omega_{\text{free}}. \tag{7}\]
Although the uncertain multi-path delay makes it hard to directly guarantee \(\mu\in\Omega_{\text{free}}\), this can be easily achieved by ensuring \(\mu\in\Omega_{\text{affine}}\). Then, \(m\) in (7) is substituted by \(\mu\), in which \(\frac{\xi_{\text{los}}}{cT}\leq\mu-\theta<N_{g}\). By using (6), we have
\[\theta+\tau_{L}\leq\mu<\theta+N_{g}, \tag{8}\]
and thus \(\mu\in\Omega_{\text{free}}\) is satisfied. By considering \(\mu=\theta+\left\lceil\left(\tau_{L}+N_{g}\right)/2\right\rceil\), the \(\tau_{L}\) for training can be obtained via \(\frac{\xi_{\text{los}}}{cT}\). In (6), if \(\left\lceil N_{g}/2\right\rceil\leq\frac{\xi_{\text{los}}}{cT}<N_{g}\), the \(\tau_{L}\) for training can be set as \(2\frac{\xi_{\text{los}}}{cT}-N_{g}\leq\tau_{L}\leq\frac{\xi_{\text{los}}}{cT}\). Otherwise, \(1\leq\tau_{L}\leq\frac{\xi_{\text{los}}}{cT}\) can be considered. For both cases, a training option of \(\tau_{L}\) aided by the LOS priori information of (6) is expressed as
\[\left\{\begin{array}{l}\tau_{L}\sim U\left[1,\left\lceil\frac{\xi_{\text{los }}}{cT}\right\rceil\right],\quad\frac{\xi_{\text{los}}}{cT}\in\left(0,\left \lceil\frac{N_{g}}{2}\right\rceil\right)\\ \tau_{L}\sim U\left[2\left\lceil\frac{\xi_{\text{los}}}{cT}\right\rceil-N_{g}, \left\lceil\frac{\xi_{\text{los}}}{cT}\right\rceil\right],\quad\frac{\xi_{ \text{los}}}{cT}\in\left[\left\lceil\frac{N_{g}}{2}\right\rceil,N_{g}\right) \end{array}\right.. \tag{9}\]
According to (6)-(9), the \(\tau_{L}\) for training is derived to ensure the correct labeling given in (4). Since the approximate midpoint \(\mu\) of the ISI-free region is highlighted with \(\mu=\theta+\left\lceil\left(\tau_{L}+N_{g}\right)/2\right\rceil\) and \(\tau_{L}=2\left\lceil\mu-\theta\right\rceil-N_{g}\), the setting of \(\tau_{L}\) in (9) can guarantee the condition that \(\mu\in\Omega_{\text{affine}}\subseteq\Omega_{\text{free}}\). On the basis of this, the NN trained by the label designed in (4) can achieve \(\widehat{\theta}\in\Omega_{\text{free}}\) with a high probability. Furthermore, the more cases of \(\tau_{L}\) that are learned, the greater the robustness against varied \(\tau_{L}\) that can be achieved. Therefore, it is reasonable to relax \(\tau_{L}\) to a random value instead of a constant value, as done in (9).
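As a minimal sketch, the label construction in (4)-(5) and the training option of \(\tau_{L}\) in (9) can be written as follows; the function and variable names are ours, and the integer rounding of \(\xi_{\text{LOS}}/(cT)\) is an assumption.

```python
# Sketch of the training label in (4)-(5) and the tau_L training option in (9).
import numpy as np

def training_label(theta, tau_L, Ng, Ns):
    D = Ng - tau_L + 1                                   # length of the ISI-free region
    d = np.arange(1, D + 1)
    zeta = np.minimum(d, D - d + 1)                      # triangular profile of (5)
    t = np.zeros(Ns)
    t[theta + tau_L: theta + tau_L + D] = zeta           # nonzero only on the ISI-free region, cf. (4)
    return t

def sample_tau_L(xi_los_over_cT, Ng, rng=np.random.default_rng()):
    bound = int(np.ceil(xi_los_over_cT))                 # assumed integer bound on tau_L
    if bound < int(np.ceil(Ng / 2)):                     # first case of (9)
        return int(rng.integers(1, bound + 1))
    return int(rng.integers(2 * bound - Ng, bound + 1))  # second case of (9)
```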
### _Learning-based TS Method_
#### Iii-C1 Network Architecture
The architecture of the designed TS learner is illustrated in TABLE I, which has an input layer, a hidden layer and an output layer. To avoid excessively increasing complexity, the neurons of input layer, hidden layer, and output layer are set as \(N_{s}\), \(N\), and \(N_{s}\), respectively. Therein, the \(N\)-neuron hidden layer is derived from considering unfolding one cross-correlation process. Since the input scale of NN may differ from those of other layers, it is practical to employ the \(\ell_{2}\)-norm in the input layer. Besides, the hidden layer employs the sigmoid activation function, i.e., \(f(x)=1/(1+e^{-x})\), for the reason that it is easy to calculate and suitable for shallow NNs [17].
#### Iii-C2 Initial Feature Extraction
By employing the classic TS method, its timing metric is utilized as the initial \(N_{s}\) features of TS to facilitate the model learning [18], i.e.,
\[F\left(m\right)=\left|\sum_{n=0}^{N-1}x^{*}\left(n\right)y\left(m+n\right) \right|^{2},0\leq m\leq N_{s}-1, \tag{10}\]
where \(x(n)\) is a local training sequence, e.g., the Zadoff-Chu sequence in [19]. By using the \(\ell_{2}\)-norm of \(F(m)\), the network input, denoted as \(Q(m)\), is normalized to the interval \([0,1)\), i.e., \(0\leq Q(m)<1\), which is expressed as \(Q(m)=F(m)/\sqrt{\sum_{m=0}^{N_{s}-1}|F(m)|^{2}}\). The vector form of \(Q(m)\), denoted as \(\mathbf{Q}\in\mathbb{R}^{N_{s}\times 1}\), is expressed as \(\mathbf{Q}=\left[Q\left(0\right),\cdots,Q\left(N_{s}-1\right)\right]^{T}\).
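A direct, unoptimized transcription of (10) and the \(\ell_{2}\)-normalization of the network input is sketched below for illustration:

```python
# Direct transcription of the timing metric (10) and the l2-normalized input Q(m).
import numpy as np

def network_input(y, x):
    N, Ns = len(x), len(y) - len(x)
    F = np.array([np.abs(np.vdot(x, y[m:m + N])) ** 2 for m in range(Ns)])
    return F / np.linalg.norm(F)      # Q(m) = F(m) / sqrt(sum_m |F(m)|^2)
```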
#### Iii-C3 Offline Training
First, by using (1)-(10), the training data set \(\left\{\mathbf{Q}_{i},\mathbf{t}_{i}\right\}_{i=1}^{N_{t}}\) is obtained, where \(N_{t}\) denotes the number of training samples. For the \(i\)-th training sample, we employ the exponentially decayed channel model with its decay factor \(\eta_{i}\) being \(\eta_{i}\sim U[0.01,0.5]\) for combating the uncertainty of complex path gain. Meanwhile, \(\frac{\xi_{\text{los}}}{cT}=\left\lceil 7N_{g}/8\right\rceil\) is given, which derives \(\tau_{L,i}\sim U[\left\lceil 3N_{g}/4\right\rceil,\left\lceil 7N_{g}/8\right\rceil]\) for the label design in (4) to combat the uncertain multi-path delay. Besides, the SNR for generating the \(i\)-th training sample is randomly selected from \(\left\{-2\mathrm{dB},0\mathrm{dB},2\mathrm{dB},...,10\mathrm{dB}\right\}\) with a probability of 1/7.
In the developed TS learner, the optimizer employs the stochastic gradient descent (SGD) algorithm and sets the initial learning rate as \(\alpha=0.001\)[20]. In this paper, the loss function is defined as
\[L_{\boldsymbol{\Theta}}=\frac{1}{N_{t}}\sum_{i=1}^{N_{t}}\|G_{\boldsymbol{ \Theta}}\left(\mathbf{Q}_{i}\right)-\mathbf{t}_{i}\|_{2}^{2}, \tag{11}\]
where \(\boldsymbol{\Theta}\) is a set of network parameters (i.e., weights and biases) to be optimized, and \(G_{\boldsymbol{\Theta}}(\cdot)\) is a mapping function parameterized by \(\boldsymbol{\Theta}\). By setting \(J\) as the total number of iterative steps, the network optimization is defined as [21]
\[\boldsymbol{\Theta}_{j+1}\leftarrow\boldsymbol{\Theta}_{j}-\alpha\nabla L_{ \boldsymbol{\Theta}_{j}}, \tag{12}\]
where \(\boldsymbol{\Theta}_{j}\), \(j=1,2,\cdots,J\), denotes the network parameters after the \(j\)th optimization, and \(\nabla L_{\boldsymbol{\Theta}_{j}}\) is the gradient of \(L_{\boldsymbol{\Theta}_{j}}\).
#### Iii-C4 Online Deployment
According to (1)-(2) and (10), the initial TS features are first extracted, and then normalized to form the network input \(\mathbf{Q}\). With the trained \(G_{\boldsymbol{\Theta}}\), the network output \(\mathbf{O}\in\mathbb{R}^{N_{s}\times 1}\) is obtained by \(\mathbf{O}=G_{\boldsymbol{\Theta}}\left(\mathbf{Q}\right)\). Finally, the output \(\mathbf{O}\) is expressed as \(\left[O(0),\cdots,O(m),\cdots,O(N_{s}-1)\right]^{T}\) for timing offset estimation, i.e., \(\widehat{\theta}=\underset{m\in\Omega}{\arg\max}\left|O\left(m\right)\right|\).
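A compact PyTorch sketch of the TS learner, the loss in (11), the SGD update in (12), and the arg-max timing decision is given below; the layer sizes follow TABLE I, while the output activation, batching, and other training details are assumptions (the built-in MSE loss averages over elements, i.e., it equals (11) up to a constant factor):

```python
# PyTorch sketch of the single-hidden-layer TS learner, (11), (12), and the arg-max decision.
import torch
import torch.nn as nn

Ns, N = 160, 128
net = nn.Sequential(nn.Linear(Ns, N), nn.Sigmoid(), nn.Linear(N, Ns))
opt = torch.optim.SGD(net.parameters(), lr=1e-3)   # alpha = 0.001, plain SGD as in (12)
loss_fn = nn.MSELoss()                             # proportional to (11)

def train_step(Q_batch, t_batch):
    # Q_batch, t_batch: float tensors of shape (batch, Ns) built from (10) and (4).
    opt.zero_grad()
    loss = loss_fn(net(Q_batch), t_batch)
    loss.backward()
    opt.step()
    return loss.item()

def estimate_timing_offset(Q):
    with torch.no_grad():
        O = net(torch.as_tensor(Q, dtype=torch.float32))
    return int(torch.argmax(torch.abs(O)))         # theta_hat = argmax_m |O(m)|
```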
## IV Numerical Results
### _Parameter Setting_
In the simulations, we consider that \(N=128\), \(N_{g}=32\), \(N_{w}=288\), and \(N_{s}=160\). The error probability of TS is utilized to evaluate the TS correctness, which is defined as the probability of estimating the timing offset outside of the ISI-free region. The OFDM technology is primarily employed to combat the ISI caused by the multi-path propagation [22], and thereby the frequency selective fading channels are mainly considered in the simulations. We leverage the delay spread profile to quantify the uncertainty of multi-path fading [23]. To simulate the multi-path uncertainty, Fig. 3 to Fig. 5 depict the TS correctness of the proposed method for the cases where the training delay spread profile differs from the testing ones. The case of the maximum LOS propagation delay (i.e., \(\frac{\xi_{\text{los}}}{cT}\)) is set only to provide the LOS priori information (i.e., (6)) for assisting the improvement of label designing in (4). Consequently, we consider a relatively large value of \(\frac{\xi_{\text{los}}}{cT}\) to guarantee the correctness of LOS priori information, i.e., \(\frac{\xi_{\text{los}}}{cT}\) is set as \(\left\lceil 7N_{g}/8\right\rceil\). In the simulations, the exponentially decayed Rayleigh fading channel in [11] and different 5G tapped-delay-line (TDL) channel models (e.g., TDL-B and TDL-C given in
3GPP TR 38.901 [24]) are employed. Besides, the signal-to-noise ratio (SNR) is defined as \(\mathrm{SNR}=10\log_{10}(P_{t}/\sigma_{n}^{2})\)[25], and correspondingly, \(\sum_{l=1}^{L}|h_{l}|^{2}=1\) is considered [11].
For the ease of description, the TS learner with the training labels proposed in this paper and [10] are referred as to "Prop" and "Ref [10]", respectively. "Ref [7]" is the TS method given in [7]. "Ref [4]" stands for the iterative-based TS method in [4]. Besides, the classic TS method in [26], denoted as "Ref [26]", serves as the baseline.
### _Computational Complexity and Processing Delay_
TABLE II presents the expressions of computational complexity, in which the complex multiplication (CM) is used as the metric for evaluating the computational complexity. By considering the impact of searching length \(N_{s}\), Fig. 2 plots the CM of each given TS method in terms of \(N_{s}\), where \(N_{s}\) increases from 160 to 1024 with the interval being 16. All the evaluations are carried out on an Intel Core i5-11300H, 3.10GHz CPU, and \(L=28\) is considered for \(10^{4}\) experiments in both TABLE II and Fig. 2. In Fig. 2, the CM of "Prop" is smaller than those of "Ref [4]", "Ref [7]", and "Ref [10]". Similarly, TABLE II reflects that "Prop" obtains a lower computational complexity and processing delay for the case where \(N_{s}=160\). The reason is that "Prop" unfolds one cross-correlation process, while others unfold at least two iterations of the cross-correlation process. From TABLE I, the CM of the designed NN of "Prop" is \(0.5NN_{s}\), which does not exceed the CM of one cross-correlation process, i.e., \(NN_{s}\). Naturally, the CM of "Prop" will not exceed the CM of two cross-correlation processes. Thus, a relatively lightweight NN is employed by "Prop" compared with the other given TS methods.
### _Robustness Analysis_
To analyze the robustness against the training sequence length (i.e., \(N\)), Fig. 4 plots the error probability of TS, where \(N=96\), \(N=128\), and \(N=160\) are considered. Except for \(N\), other parameters remain the same as those mentioned in Section III-C. For each given value of \(N\) in Fig. 4, "Prop" reaches the smallest error probability compared with other given TS methods. This reflects the robustness of "Prop" against the change of \(N\). Meanwhile, it is worth noting that the error probability of "Prop" decreases as \(N\) increases. This is due to the fact that the anti-noise ability of the cross-correlator is enhanced (deteriorated) with the increase (decrease) of \(N\). Therefore, the proposed learning-based TS method assisted by the improved label design can robustly improve the TS correctness.
### _Generalization Analysis_
Fig. 5 presents the comparison of error probability in different channel models to analyze the generalization performance of "Prop". Except for the testing wireless channel models, other parameters are the same as those mentioned in Section IV-B. Importantly, the NN adopted in this letter does not need to be retrained when the testing channel model differs from the training one. For each given channel model in Fig. 5, "Prop" reaches the lowest error probability among the given TS methods. Furthermore, for all given SNRs, the fluctuations of the error probability of "Prop" caused by different channel models are not obvious. Therefore, the TS correctness of "Prop" is superior to those of other TS methods. This shows that the proposed TS method possesses a good generalization performance against different 5G TDL channel models.
## V Conclusion
Against the influence of uncertain multi-path delay, the proposed learning-based TS method aided by improved label design is investigated in OFDM systems. By highlighting the ISI-free region and its approximate midpoint against uncertain multi-path delay, the designed training label effectively reduces the risk that the DFT window starts at the ISI region. Meanwhile, with the LOS-based priori information, the incorrect labeling affected by the uncertain multi-path delay is further rectified. Simulation results validate the effectiveness and generalization of the designed training label in improving the TS correctness of the learning-based TS method against multi-path uncertainty. In our future works, we will investigate sensing-aided TS from the perspective of ISAC.
|
2307.06434 | Improved selective background Monte Carlo simulation at Belle II with
graph attention networks and weighted events | When measuring rare processes at Belle II, a huge luminosity is required,
which means a large number of simulations are necessary to determine signal
efficiencies and background contributions. However, this process demands high
computation costs while most of the simulated data, in particular in case of
background, are discarded by the event selection. Thus, filters using graph
neural networks are introduced at an early stage to save the resources for the
detector simulation and reconstruction of events discarded at analysis level.
In our work, we improved the performance of the filters using graph attention
and investigated statistical methods including sampling and reweighting to deal
with the biases introduced by the filtering. | Boyang Yu, Nikolai Hartmann, Luca Schinnerl, Thomas Kuhr | 2023-07-12T19:58:50Z | http://arxiv.org/abs/2307.06434v1 | Improved selective background Monte Carlo simulation at Belle II with graph attention networks and weighted events
###### Abstract
When measuring rare processes at Belle II, a huge luminosity is required, which means a large number of simulations are necessary to determine signal efficiencies and background contributions. However, this process demands high computation costs while most of the simulated data, in particular in case of background, are discarded by the event selection. Thus, filters using graph neural networks are introduced at an early stage to save the resources for the detector simulation and reconstruction of events discarded at analysis level. In our work, we improved the performance of the filters using graph attention and investigated statistical methods including sampling and reweighting to deal with the biases introduced by the filtering.
## 1 Introduction
The standard model is to date the most thoroughly tested fundamental theory that describes nature. Nonetheless, there are still many primary questions to be answered. To complement the standard model, many new physics scenarios have been proposed and high energy physics experiments have been designed to search for new particles and new processes. To enhance the chances of detecting interesting events with low cross sections, the SuperKEKB accelerator [1] is anticipated to achieve a high luminosity, with the Belle II experiment [2] producing a large volume of data for analysis. To extract physical information, Monte Carlo simulations of even larger sizes are compared with the real data.
Although the simulations are computationally expensive, most of the simulated events are eventually discarded due to the selection requirements necessary for the data analysis, called skimming (Figure 1). Therefore, it has been proposed by James Kahn [3] to use a filtering neural network to pre-select events that have the potential to pass the skim. In the first part of this study, we focused on the efficacy of attention mechanisms to improve the performance of the filter.
The pre-selection process will however introduce biases due to wrong discarding of events. To address this issue, a contribution of distance correlation to the loss term was presented to punish biases during training [4]. In this work, we investigated two further methods for dealing with this problem.
## 2 Dataset preparation
Following the previous studies mentioned in the introduction, we selected a dataset based on the Full Event Interpretation (FEI) [5] skim from the Belle II experiment. The FEI skim was chosen as the study object mainly because of its low, but sufficient retention rate (around 6%), providing enough data to train classification models. The selected skim is used by various studies, meaning that the achievement of these works can be directly used by others.
The dataset is constructed from simulated \(\Upsilon(4S)\to B^{0}\bar{B}^{0}\) events where the skim selection corresponds to the hadronic \(B^{0}\) FEI. Each instance is labeled with a _True_ or _False_ value, indicating whether it is able to pass the skim or not. For this study, a training set of \(9\times 10^{5}\) samples is used, and \(1\times 10^{5}\) and \(5\times 10^{5}\) samples are reserved for the validation set and test set, respectively. For each dataset there is an equal amount of _True_ and _False_ labeled instances. The training and validation sets are used for the learning process of the neural network models and gradient boosting decision tree (GBDT) [6] classifiers, while the test set is only used for the final evaluation of different performance metrics, including classification accuracy, speedup, and feature importance.
The information generated by Monte Carlo has a rooted tree structure in which each node represents a single particle and the connection between particles is indicated by the mother particle index on each node. For the building of data sets, a number of generator-level features of each particle are selected, which can be divided into two categories based on their usage: _Generated Variables_ and _Physics Observables_.
The _Generated Variables_ play a crucial role in constructing the input for the graph neural network structures, by providing the information to distinguish between background and signal decays. These features will be collected over nodes from graphs by the neural network to calculate global features that are decisive in the final classification. The generated variables include **PDG ID**[7], **Mother array index**, **Production Time**, **Energy**, **Momentum**, and **Position**.
The _Physics Observables_ on the other hand are not used in the training of the neural networks, since they are not available after Monte Carlo event generation. These features are important for physics analyses and it is therefore vital to study any potential biases of their distributions. Based on the previous study [4], a set of 14 physics observables that exhibit the strongest biases among 29 features defined in [3] were selected, among them is the beam-constrained mass \(\mathbf{M}_{\mathrm{bc}}\), which is a crucial variable for the FEI skim.
Decay events are represented as graphs by utilizing the mother particle array index carried by each particle, with the expectation that all decays are generated by \(\Upsilon(4S)\to B^{0}\bar{B}^{0}\) (as shown in Figure 2). The node features are the _Generated Variables_. One decay event can be fully represented by a single graph, with a boolean label indicating whether the decay satisfies the FEI skim.
## 3 Neural network filter
Graph Convolutional Networks (GCN) [8] used in the previous study [4] showed strong ability to perform the classification. An advanced version of GCN benefiting from attention mechanisms [9]
Figure 1: The use of NN filter in the data flow.
was proposed known as Graph Attention Networks (GAT) [10]. Through experiments, we demonstrated that the GAT architecture (Figure 3 left) performs better than its predecessor.
We compared different construction approaches and found an optimal version. In this approach, the PDG IDs of particles are first fed through an embedding layer to convert them to tokens represented in a 540-dimensional space. These tokenized IDs are then concatenated with the other generated variables to form the initial node features. The preprocessed node features are then fed into the GAT module. In addition to the node features, the GAT module takes the adjacency matrix representing the connections between the particles in the graph as input. The adjacency matrix remains fixed during the training and provides the GAT module with information on how particles are connected in each graph.
The GAT module is a crucial component for updating both node features and global features. After the Graph Attention (GATConv Layer in the figure), the node features on each node are updated and used as the new input node features for the next layer. Additionally, the updated node features are fed into the Global Attention Pooling layer, which sums the features from
Figure 3: (left) Final structure of the neural network filter. (right) Performance.
all nodes in the graph with inferred weights and generates a set of graph structure-independent global features. These global features are then concatenated with the global feature output from the previous layer, which is initialized to 0, and mapped to a fixed number of dimensions by a Dense layer. This process results in a new set of global features that are used to update the next layer. Finally, after passing the last layer, the global features are mapped to a single neural network output value, the score \(p\in[0,1]\).
The search for the best model and optimal hyperparameters was carried out based on several criteria, including the accuracy on validation set, loss on both validation and training sets, training time on the training set, and ROC curve and AUC values on the test set. The best AUC value was improved from 0.9083 to 0.9122.
## 4 Bias reduction
After the neural network filter, each event is assigned a score indicating the probability of this event passing the following skims. The trivial approach is to define a threshold, where the events with a higher score are kept. However, this selection will reject some samples that have been able to pass the skims, resulting in false negative events and leading to deviations, or biases (Figure 4 green), in the following analysis. The objective of this part is to identify and minimize biases by utilizing statistical tools.
The first step is to measure the biases with the help of the 14 physics observables attached to each event. The histograms of each variable of all the _True_ labeled events are compared with those of all the _True-Positive_ events, whose scores are required to be larger than the threshold. Figure 4 (red) shows the deviation of the reproduced \(M_{\mathrm{bc}}\) before any post-processing.
One approach of mitigating the biases is to use a sampling method, where events are randomly selected, and weighted by the inverse sampling probability, and no threshold is needed. However, this leads to a lower effective sample size due to higher statistical uncertainty when using weighted events and must be taken into account to estimate the performance gain. To train the neural network filter with this in mind, the loss function is chosen to be the speedup metric, which represents the time reduction of producing the same effective sample size of events. Maximizing the speedup function during training optimizes the network to assign the best sampling weight to each event. The agreement between True-Pass and sampled distributions is satisfactory, as shown in Figure 4 (red), using \(M_{\mathrm{bc}}\) as an example. Although the network is trained with speedup as the loss function, the final speedup achieved is only 2.1, indicating that only half of the production time can be saved for each event without introducing any bias.
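A minimal sketch of this sampling scheme is given below; the clipping of probabilities and the use of the Kish formula for the effective sample size are assumptions made for illustration, and the exact speedup definition optimized during training may differ:

```python
# Sketch of sampling-based filtering with inverse-probability weights.
import numpy as np

def sample_and_weight(p, rng=np.random.default_rng()):
    p = np.clip(p, 1e-6, 1.0)              # network-predicted keep probabilities
    keep = rng.random(len(p)) < p
    weights = 1.0 / p[keep]                # inverse sampling probability
    return keep, weights

def effective_sample_size(weights):
    # Kish effective sample size of a weighted event set.
    return weights.sum() ** 2 / (weights ** 2).sum()
```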
To achieve a higher speedup, the reweighting method was studied. In this approach, the neural network filter is trained to predict the probability of an event to pass. Filtered events
Figure 4: Reproduced \(M_{\mathrm{bc}}\) with trivial selection (green), sampling (red), GBDT reweighting (purple) and Histogram reweighting (yellow).
are selected by a fixed threshold on this output probability. Then, a GBDT classifier is trained to predict the probability of _True-Positive_ events in the set of all events with _True_ label. The weights, or the inverse conditional probability of _True-Positives_, are derived from the GBDT output using two methods. For _GBDT Reweighting_ the inverse of the classifier output is directly used as the weight, while for _Histogram Reweighting_ the weight is derived from histograms of the classifier output. The performance of both methods is shown in Figure 4 (purple and yellow). The bias of \(M_{\text{bc}}\) is similar to the one using sampling method since it is one of the variables used in the training of the GBDT classifier. For some other variables the biases are higher.
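The two weight derivations can be sketched as follows; binning, clipping, and variable names are assumptions made for illustration:

```python
# Sketch of deriving event weights from a classifier separating True-Positive
# events from all True-labelled events.
import numpy as np

def gbdt_weights(p_tp):
    # Direct inverse of the classifier output ("GBDT Reweighting").
    return 1.0 / np.clip(p_tp, 1e-6, 1.0)

def histogram_weights(scores_all_true, scores_true_positive, bins=50):
    # Binned ratio of the two score distributions ("Histogram Reweighting").
    edges = np.linspace(0.0, 1.0, bins + 1)
    h_all, _ = np.histogram(scores_all_true, bins=edges)
    h_tp, _ = np.histogram(scores_true_positive, bins=edges)
    ratio = np.where(h_tp > 0, h_all / np.maximum(h_tp, 1), 1.0)
    idx = np.clip(np.digitize(scores_true_positive, edges) - 1, 0, bins - 1)
    return ratio[idx]                      # weight for each selected event
```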
The performances of all the methods were compared according to their highest achieved speedup and their highest biases in the reconstructed physics observables, which are measured by the Kolmogorov-Smirnov statistic (KS test). A trade-off between the speedup and the bias minimization is presented in Table 1.
## 5 Summary
We improved the graph neural network filter by incorporating attention mechanism and synchronising the update of node and global features. To avoid the bias generated by discarding _False-Negative_ events, we introduced importance sampling, GBDT reweighting and Histogram reweighting, based on statistical uncertainty, to compensate the selected events, either randomly or through a threshold, with higher weights. The sampling method has the strongest bias-mitigation ability but achieves a speedup factor of two. The reweighting methods offer higher speedup factors of around 6 but still have small bias and require careful tuning and validation.
## 6 Acknowledgments
This work was supported by the German Federal Ministry of Education and Research (BMBF) in the collaborative project IDT-UM (Innovative Digitale Technologien zur Erforschung von Universum und Materie).
|
2305.18073 | Development of a Power Quality Based Digital Energy Meter Educational
Platform | Phasor Measurement Units (PMUs) are being used extensively for electrical
grid monitoring and control. However, their cost prohibits further adoption on
the distribution grid and easy access for educational purposes. This paper
proposes that simple and fundamental functions of a PMU can be achieved using
an energy metering IC and integrated into smart electricity meters, providing a
lower cost and more widely available method of monitoring and control of
distribution grids, and presents a proof-of-concept platform with
aforementioned functionality. The described platform's construction and
flexibility emphasizes its educational capabilities PMUs and electricity meters
as well. | Mislav Bui, Marko Jurčević | 2023-05-29T13:19:03Z | http://arxiv.org/abs/2305.18073v1 | # Development of a Power Quality Based Digital Energy Meter Educational Platform
###### Abstract
Phasor Measurement Units (PMUs) are being used extensively for electrical grid monitoring and control. However, their cost prohibits further adoption on the distribution grid and easy access for educational purposes. This paper proposes that simple and fundamental functions of a PMU can be achieved using an energy metering IC and integrated into smart electricity meters, providing a lower cost and more widely available method of monitoring and control of distribution grids, and presents a proof-of-concept platform with the aforementioned functionality. The described platform's construction and flexibility emphasize its educational capabilities regarding PMUs and electricity meters as well.
synchrophasors, phasor measurement units (PMUs), smart meters, smart grids, time synchronization
## I Introduction
Phasor Measurement Units (PMUs) are devices which have the ability to achieve high-precision measurements of voltage and current, providing synchrophasors - phasor values referenced to specific points in time [1]. PMUs can be used in various ways, with the most common application being the monitoring and control of electrical grids in real time, increasing understanding of grid instabilities and faults, as well as locating them and protecting the grid from such events. In addition to real-time monitoring, the collected data can be used for post-mortem analyses to prevent further problem occurrences. They have thus become a popular and advanced tool used to monitor the electrical grid, helping to enforce strict stability, reliability and grid protection requirements. However, their high cost is still limiting more widespread adoption and usage, necessitating scarce and highly optimized placement, for which specific algorithms have been developed. Strategic placement can be used to monitor, control and protect specific parts of infrastructure, especially of transmission grids and substations, but lacks the flexibility and wide coverage needed for distribution grid monitoring [2].
The distribution grid is projected to become more unstable and will need better monitoring and control due to the impact of distributed energy resources (DER) and active loads. Renewable energy sources are becoming more widely available, even to consumers, who are becoming producers as well (or so-called prosumers), extensively using small-scale photovoltaic panels and wind turbines. This means that unlike traditional grids, smart grids of the future will have a large number of sources, many of them connected directly to the distribution grid, not the transmission grid. All of these factors indicate that the distribution grid is becoming increasingly dynamic and large-scale deployment of PMUs will be needed regardless of cost [3].
In contrast to PMUs, electricity meters are very common devices on the distribution grid. The smart energy meter is novel type of electricity meter also becoming more common in recent years owing to its advanced features such as automated and real-time reading, measurement of additional properties, such as phase angle and frequency in addition to energy consumption, as well as two-way communication.
This paper describes a simple, low-cost, prototype proof-of-concept PMU-like platform based on a common all-in-one energy metering IC, the Cirrus Logic CS5463. The prototype platform is a device connected to a precise time source with grid voltage and current inputs and can measure many different aspects of input signals. Such a device could be deployed alongside regular PMUs (on grid locations where PMUs are not presently installed) and enhance their functionality, especially for post-mortem analyses of grid-related problems. With further development such functionality could be implemented directly in smart electricity meters, providing analysis of issues in real-time at the level of distribution grids. Application of machine-learning (ML) or tiny machine-learning algorithms (TinyML) could further advance real-time monitoring and recognition of grid-related events of interest. Additionally, a platform like this can be applied for educational purposes as well. The spread of PMU usage means that students - future engineers and technicians - will need to be familiarized with synchrophasors and related devices through education and training, both theoretical and practical. This platform could aid in education and training due to its low cost and high flexibility and adaptability, since every part of it is exposed and accessible to the user, who can modify the platform's behavior as well.
## II Platform description
The proposed measurement platform consists of the following three subsystems:
* front-end and energy measurement IC,
* microcontroller (MCU) operating the measuring process and timestamp measured data,
* server used for receiving data and providing accurate time to the MCU.
A general overview is provided in Fig. 1.
### _Front-end and energy measurement IC_
The front-end used in this proof-of-concept (PoC) prototype is a Cirrus Logic CDB5463U-Z evaluation board with an added signal conditioning circuit in the form of a voltage divider connected to the VIN+ input of the board (connector J23 [4]). The board, visible in Fig. 2, has separate differential voltage and current inputs. It already includes common-mode and differential noise RC filters on both inputs (components C9, C17 and C13 [4]), but the added voltage divider is necessary to limit the input signal amplitude to 500 mV\({}_{\text{pp}}\), the maximum differential input amplitude which the included CS5463 energy measurement IC supports. To conserve the symmetry of the common-mode filters and prevent common-mode noise being converted into differential noise, an additional resistor of similar value to the total parallel resistance in the voltage divider was added to the VIN- input, while the evaluation board's filters are already configured such that the differential cutoff frequency is much lower than the common-mode cutoff frequency as well, thus further reducing the impact of possible asymmetry [5].
### _Microcontroller_
An MCU controls the CS5463 energy measurement IC, receives measurements from it and timestamps each measured value. In this PoC the MCU used is an STMicroelectronics STM32L072CZ on the B-L072Z-LRWAN1 development board. It communicates with the CS5463 using SPI (Serial Peripheral Interface) and additional connections for the Interrupt and Reset pin, while the connection to the server is made using the UART (Universal Asynchronous Receiver-Transmitter) protocol and a UART to USB bridge.
The MCU runs a program which receives accurate time data from the server, reads data (measurements) from the CS5463, timestamps them and sends them to the server. The program has several interrupt service routines (ISRs) handling UART communications and accurate timekeeping. The flow diagram is shown in Fig. 3.
Fig. 1: A block diagram overview of the platform.
Fig. 3: MCU program flow diagram.
Fig. 2: The CDB5463U-Z evaluation board.
#### Ii-B1 Time synchronization with the server
The synchronization process follows the standard Precision Time Protocol (PTP) synchronization algorithm [6]. PTP message reception is done using an Oregano Systems syn1588 Network Interface Card (NIC, see section II-C). After starting the process, the MCU waits for a Sync message from the server which includes the current server time t1. When received, the MCU extracts the server time t1 and takes the current MCU time t2. It then sends a Delay request message to the server, taking the sending time t3 as well, and waits for a Delay response message. The Delay response message includes the server's Delay request receipt time t4. After receiving the Delay response message, the MCU has all 4 times stored and calculates the offset using the formula
\[offset=((t2-t1)-(t4-t3))/2 \tag{1}\]
and subtracts the offset from the current MCU time. As the process starts at the time of pulse reception from the syn1588 NIC, it takes the MCU time of synchronization start and MCU time of synchronization end. Using these values, it calculates the pulse reception MCU time pulse_time.
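For illustration, the two-way exchange and the offset of (1) reduce to a few lines; timestamps are treated here as plain numbers in a common unit (e.g., milliseconds), which is an assumption about representation only:

```python
# The two-way exchange of the offset calculation in (1).
def ptp_offset(t1, t2, t3, t4):
    # t1: server Sync send time, t2: MCU Sync receive time,
    # t3: MCU Delay request send time, t4: server Delay request receive time.
    return ((t2 - t1) - (t4 - t3)) / 2.0

def corrected_mcu_time(mcu_now, t1, t2, t3, t4):
    return mcu_now - ptp_offset(t1, t2, t3, t4)
```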
#### Ii-B2 Timekeeping and interrupt service routines
The resolution of time kept by the MCU is 0.1 ms. An ISR handles timekeeping using the pulse_time variable. After the first synchronization, the MCU no longer requests actual timestamps from the server and only increases the pulse_time variable by 1000.0 ms at each subsequent pulse, then sets MCU time to that time. Another ISR is responsible for increasing the internal timer by 1 every 0.1 ms, handling the interrupt created by the MCU's internal timer. Stopping measurements is done using the _user button_[7] on the development board. When the user presses this button, the program takes a final measurement, indicates to the server that a stop has been requested and stops.
#### Ii-B3 Data frame structure
The data frames sent from the MCU subsystem to the server can be completely customized according to the desired purpose. For the purpose of this paper, determining accuracy of the platform, data frames consist of 12 bytes shown in Fig. 4. In this paper only one, voltage channel, is analyzed. The first byte is an indicator byte, indicating to the server that data frames are being sent. The next 6 bytes are measured data, the first 3 being the instantaneous voltage value and the last 3 the RMS voltage value. Measured data is transmitted unchanged with the server being responsible for parsing. Instantaneous voltage values are represented using normalized 2's complement, while RMS voltage values are represented in unsigned binary notation [8]. The final 5 bytes are reserved for the timestamp. To reduce the amount of data being sent to the server and increase measurement speed, only a partial timestamp is kept and sent: a UNIX timestamp in microseconds divided by 100 (reflecting the resolution of time kept by the MCU, resulting in no data loss) and reduced by a constant (\(16\times 10^{12}\) at the time of writing).
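A sketch of parsing one such 12-byte frame on the server side is given below; the field layout follows the description above, while the byte order and the fixed-point scaling of the CS5463 register values are assumptions that would have to match the MCU firmware:

```python
# Sketch of parsing one 12-byte frame; byte order and fixed-point scaling are assumptions.
def parse_frame(frame: bytes, constant: int = 16 * 10**12):
    assert len(frame) == 12
    indicator = frame[0]
    v_inst = int.from_bytes(frame[1:4], "big")
    if v_inst & 0x800000:                                   # 24-bit two's complement
        v_inst -= 1 << 24
    v_inst /= 1 << 23                                       # assumed normalization to [-1, 1)
    v_rms = int.from_bytes(frame[4:7], "big") / (1 << 24)   # unsigned fraction in [0, 1)
    ts_units = int.from_bytes(frame[7:12], "big")           # 0.1 ms units, reduced by the constant
    timestamp_us = (ts_units + constant) * 100              # back to microseconds (assumed inverse)
    return indicator, v_inst, v_rms, timestamp_us
```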
### _Server_
The server is a general-purpose PC running Microsoft Windows 10 with a multi-core CPU and an Oregano Systems syn1588 PCIe NIC. The syn1588 receives accurate GPS time through a PTP network and provides it through its API, as well as having a precise 1 pulse per second (1PPS) TTL-compatible output [9]. The server runs a program which sets accurate time on the MCU subsystem, receives the timestamped measurements from it and saves them to a file. To decouple saving data, which can be a slow operation, from receiving data, the program is split into two threads. The program works as follows:
1. After starting, the program elevates itself to the real-time Windows scheduling priority to reduce latency as much as possible, initializes the USB COM port and syn1588 NIC, and creates a file for writing data to. It also creates two threads which execute in parallel.
2. The first thread is used for communication with the MCU subsystem. After the previous step, it takes an accurate timestamp from the syn1588 NIC using its API and synchronizes the time on the MCU using the PTP time synchronization algorithm described in section II-B. The program keeps track of the aforementioned constant and can increase it in order to not overflow the 5 bytes reserved for the timestamp. The thread then starts listening on the USB COM port for incoming data frames, which are read and passed to the second thread. If the user stops the measurement in the way described in section II-B, the thread will no longer accept data frames and will terminate.
3. The second thread is used for saving data. It receives data frames from the first thread through a buffer and writes it to the created file. After no more data frames are received from the first thread when the user stops measurements, it will terminate.
4. Finally, the program closes the USB COM port and the file and terminates.
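To make the two-thread design concrete, the following is a minimal, hypothetical Python sketch of the producer/consumer split described above. The `read_frame` callable stands in for the blocking USB COM-port read, and the syn1588 synchronization step is omitted; this is not the actual server implementation.

```python
import threading
import queue

FRAME_SIZE = 12  # bytes per data frame (see Fig. 4)

def receiver(read_frame, frames: queue.Queue, stop: threading.Event):
    """Thread 1: read raw frames from the COM port and pass them on."""
    while not stop.is_set():
        frame = read_frame(FRAME_SIZE)  # blocking read of one 12-byte frame
        if frame is None:               # measurement stop was requested
            stop.set()
            break
        frames.put(frame)

def writer(frames: queue.Queue, path: str, stop: threading.Event):
    """Thread 2: drain the buffer and append raw frames to the output file."""
    with open(path, "ab") as out:
        while not (stop.is_set() and frames.empty()):
            try:
                out.write(frames.get(timeout=0.1))
            except queue.Empty:
                continue
```

Keeping the slow file writes on a separate thread, connected to the receiver only through the buffer, mirrors the design choice of decoupling reception from storage described above.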
Data is saved in its raw binary format and must be parsed. This is done using a separate program which reads the raw data frame by frame, converts it into a readable format according to the frame description in section II-B and saves the converted measurements into a separate file.
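As an illustration of this parsing step, the sketch below decodes a single 12-byte frame following the layout of Fig. 4. The byte order and the full-scale normalization factors are assumptions made for the example; only the timestamp constant matches the value quoted in section II-B.

```python
def parse_frame(frame: bytes, ts_offset: int = 16 * 10**12):
    """Decode one 12-byte frame: indicator byte, 3-byte instantaneous voltage
    (normalized 2's complement), 3-byte RMS voltage (unsigned), 5-byte partial
    timestamp (UNIX time in units of 100 us, reduced by ts_offset)."""
    assert len(frame) == 12
    indicator = frame[0]

    inst_raw = int.from_bytes(frame[1:4], "big", signed=True)    # 2's complement
    rms_raw = int.from_bytes(frame[4:7], "big", signed=False)    # unsigned binary

    inst = inst_raw / 2**23   # assumed full-scale normalization of the 24-bit codes
    rms = rms_raw / 2**24

    partial_ts = int.from_bytes(frame[7:12], "big", signed=False)
    unix_us = (partial_ts + ts_offset) * 100  # undo the constant and the /100

    return indicator, inst, rms, unix_us
```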
## III Measurements and results
The platform's voltage input was connected directly to a Rigol DG4062 waveform generator as the signal source. A SEL-735 Power Quality and Revenue Meter that also includes PMU functions, was connected to the same source as well, in
Fig. 4: Data frame structure.
order to take measurements and determine accuracy of the described platform with respect to the PMU as the reference. The CS5463 was set to take 4000 instantaneous measurements per second with an RMS computation cycle of 80 measurements, resulting in a computation cycle frequency of 50 Hz, while the data frame was set according to the description in section II-B. Two sets of voltage measurements were taken, one of a sine and one of a triangle wave, both of \(f=50Hz\) frequency and \(V_{in}=20V_{pp}\) amplitude nominally. Each set includes about 10 minutes of measurements from both the platform and the reference PMU. A PMU 735 was connected to the local phasor data concentrator. The whole setup is synced to the UTC time using a PTP transparent Ethernet network. All measurements were done with the PMU and the platform at the same time in order to be directly comparable, which is possible due to all measurements being timestamped. From these large datasets, random measurements were taken and compared between the platform and the PMU. The comparison was done between differing numbers of measurements to assess the impact of comparison set size on calculated accuracy. For each comparison set, a mean value \(\overline{V}\) and standard error \(s\) were calculated for both the PMU and the platform using the formulas
\[\overline{V}=\frac{1}{n}\Sigma_{i=1}^{n}V_{i} \tag{2}\]
\[s=\sqrt{\frac{1}{n-1}\Sigma_{i=1}^{n}(V_{i}-\overline{V})^{2}} \tag{3}\]
Additionally, a mean percentage difference was calculated using the formula
\[|\overline{V}_{PMU}-\overline{V}_{platform}|/\overline{V}_{PMU} \tag{4}\]
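For concreteness, the statistics of Eqs. (2)-(4) can be computed as in the sketch below, assuming two lists of time-aligned values from the reference PMU and from the platform; this is only an illustration of the formulas, not the analysis code used for the paper.

```python
import math

def mean(values):
    """Eq. (2): mean value of a comparison set."""
    return sum(values) / len(values)

def std_error(values):
    """Eq. (3): the paper's standard error s of a comparison set."""
    m = mean(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))

def mean_pct_difference(v_pmu, v_platform):
    """Eq. (4): mean percentage difference, expressed here in percent."""
    return abs(mean(v_pmu) - mean(v_platform)) / mean(v_pmu) * 100.0
```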
The voltage divider's voltage ratio was determined to be
\[R_{2}/(R_{1}+R_{2})=0.02120 \tag{5}\]
using the calibrated Agilent 3458A high-precision multimeter.
### _Sine wave_
A basic error comparison between the platform and the reference PMU is provided here. Results for comparison set sizes of n = 10, 20 and 30 measurements are shown in Table I and Fig. 5. The mean percentage difference is small (less than 0.1%) but relatively consistent.
### _Triangle wave_
Results for comparison set sizes of n = 10, 20 and 30 measurements are shown in Table II and Fig. 6. The mean percentage difference is again consistent but significantly larger than for the sine wave input.
### _Discussion_
Taking the SEL-735 PMU as an accurate (calibrated) standard, the mean percentage difference calculated in sections III-A and III-B is a result of multiple possible factors which affect accuracy, particularly the CS5463's calibration, front-end frequency response and the possibly differing RMS calculation methods between the CS5463 and the PMU used. Since the standard error of the platform is small, indicating high precision, the mean percentage difference being consistent for both inputs is possibly due to an inaccuracy of calibration. However, for the triangle wave input the difference is significantly larger, although again consistent, with the standard error being small as well. In addition to calibration inaccuracies, since the triangle wave contains a large amount of harmonics, the front-end frequency response and RMS calculation methods could have a much larger impact on measurements.
## IV Conclusion
The paper describes a flexible educational platform with implemented basic PMU-like functionality. The platform has instantaneous and RMS voltage and current measurement capabilities and accurate time synchronization through PTP. Its accuracy was tested against a SEL-735 PMU as a standard and was found to be dependent on the input signal but overall a difference of less than 1% was observed. The main reasons for development of the platform were low cost (relative to a standard PMU) and high flexibility and customizability for the user's needs, making it functional as an educational platform.
With further development of this proof-of-concept, similar functionality could be implemented in regular smart meters for monitoring of the distribution grid. Even though the sampling rate of all-in-one electricity metering IC solutions like the CS5463 is relatively low, it is sufficient for monitoring slow and medium-speed faults and events like slow voltage variations, voltage dips, interruptions and breaks, and low-frequency transients [10]. Specific improvements and challenges to overcome include:
* Accurate timekeeping and simplification of the server. A smart meter cannot be connected directly to a PC with a NIC like the syn1588 used in this platform, but time of a large group of meters could be synchronized with an accurate time source through power line communication [11]. Another possible simplification is using a more specialized and smaller server computer like the Raspberry Pi Compute Module 4 which includes integrated hardware PTP support [12]. Being used as the server, such a device could provide better near-real time performance, lower power consumption, smaller size and even more system flexibility.
* High speed MCU. An MCU with a higher processor clock speed can provide a better time resolution (due to faster interrupts and more precise clock divider) as well as a higher sampling rate from the metering IC. Another option is a custom-built IC or an FPGA.
* Metering IC with a higher sampling rate or custom-built ADC and DSP.
* Better front-end based on a transformer or Hall sensor for higher accuracy.
|
2302.13181 | Data-Copying in Generative Models: A Formal Framework | There has been some recent interest in detecting and addressing memorization
of training data by deep neural networks. A formal framework for memorization
in generative models, called "data-copying," was proposed by Meehan et. al.
(2020). We build upon their work to show that their framework may fail to
detect certain kinds of blatant memorization. Motivated by this and the theory
of non-parametric methods, we provide an alternative definition of data-copying
that applies more locally. We provide a method to detect data-copying, and
provably show that it works with high probability when enough data is
available. We also provide lower bounds that characterize the sample
requirement for reliable detection. | Robi Bhattacharjee, Sanjoy Dasgupta, Kamalika Chaudhuri | 2023-02-25T22:31:01Z | http://arxiv.org/abs/2302.13181v2 | # Data-Copying in Generative Models: A Formal Framework
###### Abstract
There has been some recent interest in detecting and addressing memorization of training data by deep neural networks. A formal framework for memorization in generative models, called "data-copying," was proposed by Meehan et al. (2020). We build upon their work to show that their framework may fail to detect certain kinds of blatant memorization. Motivated by this and the theory of non-parametric methods, we provide an alternative definition of data-copying that applies more locally. We provide a method to detect data-copying, and provably show that it works with high probability when enough data is available. We also provide lower bounds that characterize the sample requirement for reliable detection.
## 1 Introduction
Deep generative models have shown impressive performance. However, given how large, diverse, and uncurated their training sets are, a big question is whether, how often, and how closely they are memorizing their training data. This question has been of considerable interest in generative modeling (Lopez-Paz and Oquab, 2016; Xu et al., 2018) as well as supervised learning (Brown et al., 2021; Feldman, 2020). However, a clean and formal definition of memorization that captures the numerous complex aspects of the problem, particularly in the context of continuous data such as images, has largely been elusive.
For generative models, (Meehan et al., 2020) proposed a formal definition of memorization called "data-copying", and showed that it was orthogonal to various prior notions of overfitting such as mode collapse (Thanh-Tung and Tran, 2020), mode dropping (Yazici et al., 2020), and precision-recall (Sajjadi et al., 2018). Specifically, their definition looks at three datasets - a training set, a set of generated examples, and an independent test set. Data-copying happens when the training points are considerably closer on average to the generated data points than to an independently drawn test sample. Otherwise, if the training points are on average farther from the generated points than from the test points, then there is underfitting. They proposed a three sample test to detect this kind of data-copying, and empirically showed that their test had good performance.
However, despite its practical success, this method may not capture even blatant cases of memorization. To see this, consider the example illustrated in Figure 1, in which a generated model for the halfmoons dataset outputs one of its training points with probability \(0.4\), and otherwise outputs a random point from an underfit distribution. When the test of (Meehan et al., 2020) is applied to this distribution, it is unable to detect any form of data copying; the generated samples drawn from the underfit distribution are sufficient to cancel out the effect of the memorized examples. Nevertheless, this generative model is clearly an egregious memorizer as shown in points \(x_{1}\) and \(x_{2}\) of Figure 1.
Figure 1: In this figure, the blue points are sampled from the halfmoons dataset (with Gaussian noise). The red points are sampled from a generated distribution that is a mixture of (40%) a blatant data copier (that outputs a random subset of the training set) and (60%) a noisy, underfit version of halfmoons. Although the generated distribution is clearly doing some form of copying at points \(x_{1}\) and \(x_{2}\), detecting this is challenging because of the canceling effect of the underfit points.
This example suggests a notion of _point-wise_ data copying, where a model \(q\) can be thought of as copying a given training point \(x\). Such a notion would be able to detect \(q\)'s behavior nearby \(x_{1}\) and \(x_{2}\) regardless of the confounding samples that appear at a global level. This stands in contrast to the more global, distance-based approach taken in Meehan et al. (2020), which is unable to detect such instances. Motivated by this, we propose an alternative point-by-point approach to defining data-copying.
We say that a generative model \(q\) data-copies an individual training point, \(x\), if it has an unusually high concentration in a small area centered at \(x\). Intuitively, this implies \(q\) is highly likely to output examples that are very similar to \(x\). In the example above, this definition would flag \(q\) as copying \(x_{1}\) and \(x_{2}\).
To parlay this definition into a global measure of data-copying, we define the overall _data-copying rate_ as the total fraction of examples from \(q\) that are copied from some training example. In the example above, this rate is \(40\%\), as this is the fraction of examples that are blatant copies of the training data.
Next, we consider how to detect data-copying according to this definition. To this end, we provide an algorithm, Data_Copy_Detect, that outputs an estimate for the overall data-copying rate. We then show that under a natural smoothness assumption on the data distribution, which we call _regularity_, Data_Copy_Detect is able to guarantee an accurate estimate of the total data-copying rate. We then give an upper bound on the amount of data needed for doing so.
We complement our algorithm with a lower bound on the minimum amount of data needed for data-copying detection. Our lower bound also implies that some sort of smoothness condition (such as regularity) is necessary for guaranteed data-copying detection; otherwise, the required amount of data can be driven arbitrarily high.
### Related Work
Recently, there has been an important and growing body of work on understanding failure modes of generative models, e.g. (Salimans et al., 2016; Richardson and Weiss, 2018; Sajjadi et al., 2018). However, much of this work has been focused on other forms of overfitting, such as mode dropping or mode collapse.
A more related notion of overfitting is _memorization_(Lopez-Paz and Oquab, 2016; Xu et al., 2018; Chatterjee, 2018), in which a model outputs exact copies of its training data. This has been studied in both supervised (Brown et al., 2021; Feldman, 2020) and unsupervised (van den Burg and Williams, 2021; Bai et al., 2021) contexts. Memorization has also been considered in language generation models (Carlini et al., 2022).
The first work to explicitly consider the more general notion of _data-copying_ is (Meehan et al., 2020), which gives a three sample test for data-copy detection. We include an empirical comparison between our methods in Section 5.2, where we demonstrate that ours is able to capture certain forms of data-copying that theirs is not.
Finally, we note that this work focuses on detecting natural forms of memorization or data-copying that likely arise out of poor generalization, and is not concerned with detecting _adversarial_ memorization or prompting, such as in (Carlini et al., 2019), that are designed to obtain sensitive information about the training set. This is reflected in our definition and detection algorithm, which look at the specific generative model, and not the algorithm that trains it. Perhaps the best approach to prevent adversarial memorization is training the model with differential privacy (Dwork, 2006), which ensures that the model does not change much when one training sample changes. However, such solutions come at a utility cost.
## 2 A Formal Definition of Data-Copying
We begin with the following question: what does it mean for a generated distribution \(q\) to copy a single training example \(x\)? Intuitively, this means that \(q\) is guilty of overfitting \(x\) in some way, and consequently produces examples that are very similar to it.
However, determining what constitutes a 'very similar' generated example must be done contextually. Otherwise the original data distribution, \(p\), may itself be considered a copier, as it will output points nearby \(x\) with some frequency depending on its density at \(x\). Thus, we posit that \(q\) data copies training point \(x\) if it has a significantly higher concentration nearby \(x\) than \(p\) does. We express this in the following definition.
**Definition 2.1**.: Let \(p\) be a data distribution, \(S\sim p^{n}\) a training sample, and \(q\) be a generated distribution trained on \(S\). Let \(x\in S\) be a training point, and let \(\lambda>1\) and \(0<\gamma<1\) be constants. A generated example \(x^{\prime}\sim q\) is said to be a \((\lambda,\gamma)\)**-copy** of \(x\) if there exists a ball \(B\) centered at \(x\) (i.e. \(\{x^{\prime}:||x^{\prime}-x||\leq r\}\)) such that the following hold:
* \(x^{\prime}\in B\).
* \(q(B)\geq\lambda p(B)\)
* \(p(B)\leq\gamma\)
Here \(q(B)\) and \(p(B)\) denote the probability mass assigned to \(B\) by \(p\) and \(q\) respectively.
The parameters \(\lambda\) and \(\gamma\) are user chosen parameters that characterize data-copying. \(\lambda\) represents the rate at which
\(q\) must overrepresent points close to \(x\), with higher values of \(\lambda\) corresponding to more egregious examples of data-copying. \(\gamma\) represents the maximum size (by probability mass) of a region that is considered to be data-copying - the ball \(B\) represents all points that are "copies" of \(x\). Together, \(\lambda\) and \(\gamma\) serve as practitioner controlled knobs that characterize data-copying about \(x\).
Our definition is illustrated in Figure 2 - the training data is shown in blue, and generated samples are shown in red. For each training point, we highlight a region (in green) about that point in which the red density is much higher than the blue density, thus constituting data-copying. The intuition for this is that the red points within any ball can be thought of as "copies" of the blue point centered in the ball.
Having defined data-copying with respect to a single training example, we can naturally extend this notion to the entire training dataset. We say that \(x^{\prime}\sim q\) is copied from training set \(S\) if \(x^{\prime}\) is a \((\lambda,\gamma)\)-copy of some training example \(x\in S\). We then define the _data-copy rate_ of \(q\) as the fraction of examples it generates that are copied from \(S\). Formally, we have the following:
**Definition 2.2**.: Let \(p,S,q,\lambda\), and \(\gamma\) be as defined in Definition 2.1. Then the **data-copy rate**, \(cr\left(q,\lambda,\gamma\right)\) of \(q\) (with respect to \(p,S\)) is the fraction of examples from \(q\) that are \((\lambda,\gamma)\)-copied. That is,
\[cr\left(q,\lambda,\gamma\right)=\Pr_{x^{\prime}\sim q}[q\left(\lambda,\gamma \right)\text{-copies }x^{\prime}].\]
In cases where \(\lambda,\gamma\) are fixed, we use \(cr_{q}=cr(q,\lambda,\gamma)\) to denote the data-copy rate.
Despite its seeming global nature, \(cr_{q}\) is simply an aggregation of the point by point data-copying done by \(q\) over its entire training set. As we will later see, estimating \(cr_{q}\) is often reduced to determining which subset of the training data \(q\) copies.
### Examples of data-copying
We now give several examples illustrating our definitions. In all cases, we let \(p\) be a data distribution, \(S\), a training sample from \(p\), and \(q\), a generated distribution that is trained over \(S\).
The uniform distribution over \(S\): In this example, \(q\) is an egregious data copier that memorizes its training set and randomly outputs a training point. This can be considered as the canonical _worst_ data copier. This is reflected in the value of \(cr_{q}\) - if \(p\) is a continuous distribution with finite probability density, then for any \(x\in S\), there exists a ball \(B\) centered at \(x\) for which \(q(B)\gg p(B)\). It follows that \(q\) \((\lambda,\gamma)\)-copies \(x\) for all \(x\in S\), which implies that \(cr_{q}=1\).
The perfect generative model, \(q=p\): In this case, \(q(B)=p(B)\) for all balls, \(B\), which implies that \(q\) does not perform any data-copying (Definition 2.1). It follows that \(cr_{q}=0\), matching the intuition that \(q\) does not data-copy at all.
Kernel Density Estimators: Finally, we consider a more general situation, where \(q\) is trained by a _kernel density estimator_ (KDE) over \(S\sim p^{n}\). Recall that a kernel density estimator outputs a generated distribution, \(q\), with pdf defined by
\[q(x)=\frac{1}{n\sigma_{n}}\sum_{x_{i}\in S}K\left(\frac{x-x_{i}}{\sigma_{n}} \right).\]
Here, \(K\) is a kernel similarity function, and \(\sigma_{n}\) is the bandwidth parameter. It is known that for \(\sigma_{n}=O(n^{-1/5})\), \(q\) converges towards \(p\) for sufficiently well behaved probability distributions.
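As a concrete (and purely illustrative) instance of such a generated distribution, a Gaussian KDE can be fit and sampled with an off-the-shelf implementation; the bandwidth below is only a placeholder for \(\sigma_{n}\), not a value used in this paper.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
S = rng.normal(size=(500, 2))                  # stand-in training sample from p

n = len(S)
kde = KernelDensity(kernel="gaussian", bandwidth=n ** (-1 / 5)).fit(S)

generated = kde.sample(1000, random_state=0)   # draws from the generated q
log_q = kde.score_samples(S[:5])               # log q(x) at a few training points
```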
Despite this guarantee, KDEs intuitively appear to perform some form of data-copying - after all they implicitly include each training point in memory as it forms a portion of their outputted pdf. However, recall that our main focus is in
understanding _overfitting_ due to data-copying. That is, we view data-copying as a function of the outputted pdf, \(q\), and not of the training algorithm used.

Figure 2: In the three panels above, the blue points are a training sample from \(p\), and the red points are generated examples from \(q\). In the middle panel, we highlight in green regions that are defined to be _data-copying regions_, as \(q\) overrepresents them in comparison to \(p\). In the third panel, we then color green all points from \(q\) that are considered to be copied.
To this end, for KDEs the question of data-copying reduces to the question of whether \(q\) overrepresents areas around its training points. As one would expect, this occurs _before_ we reach the large sample limit. This is expressed in the following theorem.
**Theorem 2.3**.: _Let \(1<\lambda\) and \(\gamma>0\). Let \(\sigma_{n}\) be a sequence of bandwidths and \(K\) be any regular kernel function. For any \(n>0\) there exists a probability distribution \(\pi\) with full support over \(\mathbb{R}^{d}\) such that with probability at least \(\frac{1}{3}\) over \(S\sim\pi^{n}\), a KDE trained with bandwidth \(\sigma_{n}\) and kernel function \(K\) has data-copy rate \(\text{cr}_{q}\geq\frac{1}{10}\)._
This theorem completes the picture for KDEs with regards to data-copying - when \(n\) is too low, it is possible for the KDE to have a significant amount of data-copying, but as \(n\) continues to grow, this is eventually smoothed out.
The Halfmoons dataset: Returning to the example given in Figure 1, observe that our definition exactly captures the notion of data-copying that occurs at points \(x_{1}\) and \(x_{2}\). For even strict choices of \(\lambda\) and \(\gamma\), Definition 2.1 indicates that the red distribution copies both \(x_{1}\) and \(x_{2}\). Furthermore, the data-copy rate, \(\text{cr}_{q}\), is \(40\%\) by construction, as this is the proportion of points that are output near \(x_{1}\) and \(x_{2}\).
### Limitations of our definition
Definition 2.1 implicitly assumes that the goal of the generator is to output a distribution \(q\) that approaches \(p\) in a mathematical sense; a perfect generator would output \(q\) so that \(q(M)=p(M)\) for all measurable sets. In particular, instances where \(q\) outputs examples that are far away from the training data are considered completely irrelevant in our definition.
This restriction prevents our definition from capturing instances in which \(q\) memorizes its training data and then applies some sort of transformation to it. For example, consider an image generator that applies a color filter to its training data. This would not be considered a data-copier as its output would be quite far from the training data in pixel space. Nevertheless, such a generated distribution can be very reasonably considered as an egregious data copier, and a cursory investigation between its training data and its outputs would reveal as much.
The key difference in this example is that the generative algorithm is no longer trying to closely approximate \(p\) with \(q\) - it is rather trying to do so in some kind of transformed space. Capturing such interactions is beyond the scope of our paper, and we firmly restrict ourselves to the case where a generator is evaluated based on how close \(q\) is to \(p\) with respect to their measures over the input space.
## 3 Detecting data-copying
Having defined \(\text{cr}_{q}\), we now turn our attention towards _estimating it_. To formalize this problem, we will require a few definitions. We begin by defining a generative algorithm.
**Definition 3.1**.: A **generative algorithm**, \(A\), is a potentially randomized algorithm that outputs a distribution \(q\) over \(\mathbb{R}^{d}\) given an input of training points, \(S\subset\mathbb{R}^{d}\). We denote this relationship by \(q\sim A(S)\).
This paradigm captures most typical generative algorithms including both non-parametric methods such as KDEs and parametric methods such as variational autoencoders.
As an important distinction, in this work we define data-copying as a property of the generated distribution, \(q\), rather than the generative algorithm, \(A\). This is reflected in our definition which is given solely with respect to \(q,S,\) and \(p\). For the purposes of this paper, \(A\) can be considered an arbitrary process that takes \(S\) and outputs a distribution \(q\). We include it in our definitions to emphasize that while \(S\) is an i.i.d sample from \(p\), it is _not_ independent from \(q\).
Next, we define a _data-copying detector_ as an algorithm that estimates \(\text{cr}_{q}\) based on access to the training sample, \(S\), along with the ability to draw any number of samples from \(q\). The latter assumption is quite typical as sampling from \(q\) is a purely computational operation. We do not assume any access to \(p\) beyond the training sample \(S\). Formally, we have the following definition.
**Definition 3.2**.: A **data-copying detector** is an algorithm \(D\) that takes as input a training sample, \(S\sim p^{n}\), and access to a sampling oracle for \(q\sim A(S)\) (where \(A\) is an arbitrary generative algorithm). \(D\) then outputs an estimate, \(D(S,q)=\hat{\text{cr}}_{q}\), for the data-copy rate of \(q\).
Naturally, we assume \(D\) has access to \(\lambda,\gamma>0\) (as these are practitioner chosen values), and by convention don't include \(\lambda,\gamma\) as formal inputs into \(D\).
The goal of a data-copying detector is to provide accurate estimates for \(\text{cr}_{q}\). However, the precise definition of \(\text{cr}_{q}\) poses an issue: data-copy rates for varying values of \(\lambda\) and \(\gamma\) can vastly differ. This is because \(\lambda,\gamma\) act as thresholds with everything above the threshold being counted, and everything below it being discarded. Since \(\lambda,\gamma\) cannot be perfectly accounted for, we will require some tolerance in dealing with them. This motivates the following.
**Definition 3.3**.: Let \(0<\epsilon\) be a tolerance parameter. Then the **approximate data-copy rates**, \(\text{cr}_{q}^{-\epsilon}\) and \(\text{cr}_{q}^{\epsilon}\), are defined as the values of \(\text{cr}_{q}\) when the parameters \((\lambda,\gamma)\) are shifted by a factor of \((1+\epsilon)\) to respectively decrease and
increase the copy rate. That is,
\[cr_{q}^{-\epsilon}=cr\left(q,\lambda(1+\epsilon),\gamma(1+\epsilon)^{-1} \right),\] \[cr_{q}^{\epsilon}=cr\left(q,\lambda(1+\epsilon)^{-1},\gamma(1+ \epsilon)\right).\]
The shifts in \(\lambda\) and \(\gamma\) are chosen as above because increasing \(\lambda\) and decreasing \(\gamma\) both reduce \(cr_{q}\) seeing as both result in more restrictive conditions for what qualifies as data-copying. Conversely, decreasing \(\lambda\) and increasing \(\gamma\) has the opposite effect. It follows that
\[cr_{q}^{-\epsilon}\leq cr_{q}\leq cr_{q}^{\epsilon},\]
meaning that \(cr_{q}^{-\epsilon}\) and \(cr_{q}^{\epsilon}\) are lower and upper bounds on \(cr_{q}\).
In the context of data-copying detection, the goal is now to estimate \(cr_{q}\) in comparison to \(cr_{q}^{\pm\epsilon}\). We formalize this by defining _sample complexity_ of a data-copying detector as the amount of data needed for accurate estimation of \(cr_{q}\).
**Definition 3.4**.: Let \(D\) be a data-copying detector and \(p\) be a data distribution. Let \(\epsilon,\delta>0\) be standard tolerance parameters. Then \(D\) has **sample complexity**, \(m_{p}(\epsilon,\delta)\), with respect to \(p\) if for all \(n\geq m_{p}(\epsilon,\delta)\), \(\lambda>1\), \(0<\gamma<1\), and generative algorithms \(A\), with probability at least \(1-\delta\) over \(S\sim p^{n}\) and \(q\sim A(S)\),
\[cr_{q}^{-\epsilon}-\epsilon\leq D(S,q)\leq cr_{q}^{\epsilon}+\epsilon.\]
Here the parameter \(\epsilon\) takes on a somewhat expanded role, as it is both used to additively bound our estimation of \(cr_{q}\) and to multiplicatively bound \(\lambda\) and \(\gamma\).
Observe that there is no mention of the number of calls that \(D\) makes to its sampling oracle for \(q\). This is because samples from \(q\) are viewed as _purely computational_, as they don't require any natural data source. In most cases, \(q\) is simply some type of generative model (such as a VAE or a GAN), and thus sampling from \(q\) is a matter of running the corresponding neural network.
## 4 Regular Distributions
Our definition of data-copying (Definition 2.1) motivates a straightforward point by point method for data-copying detection, in which for every training point, \(x_{i}\), we compute the largest ball \(B_{i}\) centered at \(x_{i}\) for which \(q(B_{i})\geq\lambda p(B_{i})\) and \(p(B_{i})\leq\gamma\). Assuming we compute these balls accurately, we can then query samples from \(q\) to estimate the total rate at which \(q\) outputs within those balls, giving us our estimate of \(cr_{q}\).
The key ingredient necessary for this idea to work is to be able to reliably estimate the masses, \(q(B)\) and \(p(B)\) for any ball in \(\mathbb{R}^{d}\). The standard approach to doing this is through _uniform convergence_, in which large samples of points are drawn from \(p\) and \(q\) (in \(p\)'s case we use \(S\)), and then the mass of a ball is estimated by counting the proportion of sampled points within it. For balls with a sufficient number of points (typically \(O(d\log n)\)), standard uniform convergence arguments show that these estimates are reliable.
However, this method has a major pitfall for our purpose - in most cases the balls \(B_{i}\) will be very small because data-copying intrinsically deals with points that are very close to a given training point. While one might hope that we can simply ignore all balls below a certain threshold, this does not work either, as the sheer number of balls being considered means that their union could be highly non-trivial.
To circumvent this issue, we will introduce an interpolation technique that estimates the probability mass of a small ball by scaling down the mass of a sufficiently large ball with the same center. While obtaining a general guarantee is impossible - there exist pathological distributions that drastically change their behavior at small scales - it turns out there is a relatively natural condition under which such interpolation will work. We refer to this condition as _regularity,_ which is defined as follows.
**Definition 4.1**.: Let \(k>0\) be an integer. A probability distribution \(p\) is \(k\)**-regular** if the following holds. For all \(\epsilon>0\), there exists a constant \(0<p_{\epsilon}\leq 1\) such that for all \(x\) in the support of \(p\), if \(0<s<r\) satisfies \(p(B(x,r))\leq p_{\epsilon}\), then
\[\left(1+\frac{\epsilon}{3}\right)^{-1}\frac{r^{k}}{s^{k}}\leq\frac{p(B(x,r))}{ p(B(x,s))}\leq\left(1+\frac{\epsilon}{3}\right)\frac{r^{k}}{s^{k}}.\]
Finally, a distribution is **regular** if it is \(k\)-regular for some integer \(k>0\).
Here we let \(B(x,r)=\{x^{\prime}:||x-x^{\prime}||\leq r\}\) denote the closed \(\ell_{2}\) ball centered at \(x\) with radius \(r\).
The main intuition for a \(k\)-regular distribution is that at a sufficiently small scale, its probability mass scales with distance according to a power law, determined by \(k\). The parameter \(k\) dictates how the probability density behaves with respect to the distance scale. In most common examples, \(k\) will equal the _intrinsic dimension_ of \(p\).
As a technical note, we use an error factor of \(\frac{\epsilon}{3}\) instead of \(\epsilon\) for technical details that enable cleaner statements and proofs in our results (presented later).
### Distributions with Manifold Support
We now give an important class of \(k\)-regular distributions.
**Proposition 4.2**.: _Let \(p\) be a probability distribution with support precisely equal to a compact \(k\) dimensional submanifold (with or without boundary) of \(\mathbb{R}^{d}\), \(M\). Additionally,
suppose that \(p\) has a continuous density function over \(M\). Then it follows that \(p\) is \(k\)-regular._
Proposition 4.2 implies that most data distributions that adhere to some sort of manifold-hypothesis will also exhibit regularity, with the regularity constant, \(k\), being the intrinsic dimension of the manifold.
### Estimation over regular distributions
We now turn our attention towards designing estimation algorithms over regular distributions, with our main goal being to estimate the probability mass of arbitrarily small balls. We begin by first addressing a slight technical detail - although the data distribution \(p\) may be regular, this does not necessarily mean that the regularity constant, \(k\), is known. Knowledge of \(k\) is crucial because it determines how to properly interpolate probability masses from large radius balls to smaller ones.
Luckily, estimating \(k\) turns out to be an extremely well studied task, as for most probability distributions, \(k\) is a measure of the _intrinsic dimension_. Because there is a wide body of literature in this topic, we will assume from this point that \(k\) has been correctly estimated from \(S\) using any known algorithm for doing so (for example (Block et al., 2022)). Nevertheless, for completeness, we provide an algorithm with provable guarantees for estimating \(k\) (along with a corresponding bound on the amount of needed data) in Appendix B.
We now return to the problem of estimating \(p(B(x,r))\) for a small value of \(r\), and present an algorithm, \(Est(x,r,S)\) (Algorithm 1), that estimates \(p(B(x,r))\) from an i.i.d sample \(S\sim p^{n}\).
```
1\(n\leftarrow|S|\)
2\(b\gets O\left(\frac{d\ln\frac{n}{\epsilon^{2}}}{\epsilon^{2}}\right)\)
3\(r_{*}=\min\{s>0,|S\cap B(x,s)|=b\}\).
4if\(r_{*}>r\)then
5 Return \(\frac{br^{k}}{nr_{*}^{k}}\)
6else
7 Return \(\frac{|S\cap B(x,r)|}{n}\)
```
**Algorithm 1**\(Est(x,r,S)\)
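A direct Python transcription of Algorithm 1 is sketched below. The constant \(b\) (for which the pseudocode only gives an order of magnitude) and the regularity constant \(k\) (the estimated intrinsic dimension) are passed in as assumed inputs for illustration.

```python
import numpy as np

def est(x, r, S, b, k):
    """Estimate p(B(x, r)) from the training sample S (Algorithm 1).

    If the ball of radius r already holds at least b training points, the
    empirical fraction is returned; otherwise the mass of the smallest ball
    containing b points is scaled down by (r / r_star)**k, which is justified
    by the k-regularity assumption on p.
    """
    n = len(S)
    dists = np.linalg.norm(S - x, axis=1)
    r_star = np.sort(dists)[b - 1]          # smallest radius containing b points
    if r_star > r:
        return (b / n) * (r / r_star) ** k
    return float(np.mean(dists <= r))
```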
We now leverage our subroutine, \(Est\), to construct a data-copying detector, \(Data\_Copy\_Detect\) (Algorithm 2), that has bounded sample complexity when \(p\) is a regular distribution. Like all data-copying detectors (Definition 3.2), \(Data\_Copy\_Detect\) takes as input the training sample \(S\), along with the ability to sample from a generated distribution \(q\) that is trained from \(S\). It then performs the following steps:
1. (line 1) Draw an i.i.d sample of \(m=O\left(\frac{dn^{2}\ln\frac{md}{\epsilon^{4}}}{\epsilon^{4}}\right)\) points from \(q\).
2. (lines 6 - 10) For each training point, \(x_{i}\), determine the largest radius \(r_{i}\) for which \[\frac{|B(x_{i},r_{i})\cap T|}{m}\geq\lambda Est(x_{i},r_{i},S),\] \[Est(x_{i},r_{i},S)\leq\gamma.\]
3. (lines 12 - 13) Draw a fresh sample of points from \(U\sim q^{O(1/\epsilon^{2})}\), and use it to estimate the probability mass under \(q\) of \(\cup_{i=1}^{m}B(x_{i},r_{i})\).
In the first step, we draw a _large_ sample from \(q\). While this is considerably larger than the amount of training data we have, we note that samples from \(q\) are considered free, and thus do not affect the sample complexity. The reason we need this many samples is simple - unlike \(p\), \(q\) is not necessarily regular, and consequently we need enough points to properly estimate \(q\) around every training point in \(S\).
The core technical details of \(Data\_Copy\_Detect\) are contained within step 2, in which data-copying regions surrounding each training point, \(x_{i}\), are found. We use \(Est(x,r,S)\) and \(\frac{|B(x,r)\cap T|}{m}\) as proxies for \(p\) and \(q\) in Definition 2.1, and then search for the maximal radius \(r_{i}\) over which the desired criteria of data-copying are met for these proxies.
The only difficulty in doing this is that it could potentially require checking an infinite number of radii, \(r_{i}\). Fortunately, this turns out not to be needed because of the following observation - we only need to check radii at which a new point from \(T\) is included in the estimate \(q_{i}(r)\). This is because our estimate of \(q_{i}(r)\) does not change between them, meaning that our estimate of the ratio between \(q\) and \(p\) is maximal near these points.
Once we have computed \(r_{i}\), all that is left is to estimate the data-copy rate by sampling \(q\) once more to find the total mass of the data-copying region, \(\cup_{i=1}^{n}B(x_{i},r_{i})\). A simplified sketch of the overall procedure is given below.
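The sketch mirrors steps 1-3 in simplified form, reusing the `est` function from the previous sketch. The candidate radii are restricted to the distances from \(x_{i}\) to the generated points, as argued above; the sample sizes and the constants \(b\) and \(k\) are placeholders rather than the values prescribed by the analysis, and this direct implementation is meant only to make the steps concrete, not to be efficient.

```python
import numpy as np

def data_copy_rate(S, sample_q, lam, gamma, b, k, m=20000, m_check=5000):
    """Rough estimate of the data-copy rate cr_q (Algorithm 2, simplified)."""
    T = sample_q(m)                              # step 1: large sample from q
    radii = []
    for x in S:                                  # step 2: maximal radius per training point
        d_T = np.sort(np.linalg.norm(T - x, axis=1))
        best = 0.0
        for j, r in enumerate(d_T):              # only radii where a new point of T enters
            p_hat = est(x, r, S, b, k)
            q_hat = (j + 1) / m
            if q_hat >= lam * p_hat and p_hat <= gamma:
                best = r                         # keep the largest qualifying radius
        radii.append(best)
    radii = np.array(radii)

    U = sample_q(m_check)                        # step 3: fresh sample to measure the union
    dists = np.linalg.norm(U[:, None, :] - S[None, :, :], axis=2)
    covered = (dists <= radii[None, :]).any(axis=1)
    return covered.mean()
```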
### Performance of Algorithm 2
We now show that given enough data, \(Data\_Copy\_Detect\) provides a close approximation of \(cr_{q}\).
**Theorem 5.1**.: \(Data\_Copy\_Detect\) _is a data-copying detector (Definition 3.2) with sample complexity at most_
\[m_{p}(\epsilon,\delta)=O\left(\frac{d\ln\frac{d}{\delta\epsilon p_{*}}}{\epsilon^{2}p_{*}}\right),\]
_for all regular distributions, \(p\)._
Theorem 5.1 shows that our algorithm's sample complexity has standard relationships with the tolerance parameters, \(\epsilon\) and \(\delta\), along with the input space dimension \(d\). However, it includes an additional factor of \(\frac{1}{p_{*}}\), which is a distribution specific factor measuring the regularity of the probability distribution. Thus, our bound cannot be used to give a bound on the amount of data needed without having a bound on \(p_{*}\).
We consequently view our upper bound as more akin to a convergence result, as it implies that our algorithm is guaranteed to converge as the amount of data goes towards infinity.
### Applying Algorithm 2 to Halfmoons
We now return to the example presented in Figure 3 and empirically investigate the following question: is our algorithm able to outperform the one given in (Meehan et al., 2020) over this example?
To investigate this, we test both algorithms over a series of distributions by varying the parameter \(\rho\), which is the proportion of points that are "copied." Figure 3 demonstrates a case in which \(\rho=0.4\). Additionally, we include a parameter, \(c\), for (Meehan et al., 2020)'s algorithm which represents the number of clusters the data is partitioned into (with \(c\)-means clustering) prior to running their test. Intuitively, a larger number of clusters means a better chance of detecting more localized data-copying.
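To make the setup concrete, a generated distribution of the kind used in these experiments can be mimicked as follows: with probability \(\rho\) a memorized training point is returned, and otherwise a point from a noisier, underfit halfmoons distribution. The specific noise levels below are illustrative assumptions, not the experimental settings of the paper.

```python
import numpy as np
from sklearn.datasets import make_moons

rng = np.random.default_rng(0)
S, _ = make_moons(n_samples=2000, noise=0.1, random_state=0)   # training sample

def sample_generated(m, rho=0.4):
    """Mixture generator: rho -> copy a training point, 1 - rho -> underfit noise."""
    copy = rng.random(m) < rho
    idx = rng.integers(0, len(S), size=m)
    copies = S[idx]                                             # blatant data-copying
    underfit, _ = make_moons(n_samples=m, noise=0.35)           # noisy, underfit halfmoons
    return np.where(copy[:, None], copies, underfit)
```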
The results are summarized in the following table where we indicate whether the algorithm determined a statistically significant amount of data-copying over the given generated distribution and corresponding training dataset. Full experimental details can be found in Sections A and A.3 of the appendix.
Table 1: Statistical Significance of data-copying Rates over Halfmoons

| **Algo** | \(q=p\) | \(\rho=0.1\) | \(\rho=0.2\) | \(\rho=0.3\) | \(\rho=0.4\) |
| --- | --- | --- | --- | --- | --- |
| **Ours** | no | yes | yes | yes | yes |
| \(c=1\) | no | no | no | no | no |
| \(c=5\) | no | no | no | no | yes |
| \(c=10\) | no | no | no | no | yes |
| \(c=20\) | no | no | no | yes | yes |

As the table indicates, our algorithm is able to detect statistically significant data-copying rates in all cases where it exists. By contrast, (Meehan et al., 2020)'s test is only capable of doing so when there is a large data-copy rate and when the number of clusters, \(c\), is quite large.

## 6 Is smoothness necessary for data copying detection?

Algorithm 2's performance guarantee requires that the input distribution, \(p\), be regular (Definition 4.1). This condition is essential for the algorithm to successfully estimate the probability mass of arbitrarily small balls. Additionally, the parameter, \(p_{*}\), plays a key role as it serves as a measure of how "smooth" \(p\) is, with larger values implying a higher degree of smoothness.

This motivates a natural question - can data copying detection be done over unsmooth data distributions? Unfortunately, the answer turns out to be no. In the following result, we show that if the parameter \(p_{*}\) is allowed to be arbitrarily small, then for any data-copy detector there exists \(p\) for which the sample complexity is arbitrarily large.
**Theorem 6.1**.: _Let \(B\) be a data-copying detector. Let \(\epsilon=\delta=\frac{1}{3}\). Then, for all integers \(\kappa>0\), there exists a probability distribution \(p\) such that \(\frac{1}{9\kappa}\leq p_{\epsilon}\leq\frac{1}{\kappa}\), and \(m_{p}(\epsilon,\delta)\geq\kappa\), implying that_
\[m_{p}(\epsilon,\delta)\geq\Omega\left(\frac{1}{p_{\epsilon}}\right).\]
Although Theorem 6.1 is restricted to regular distributions, it nevertheless demonstrates that a bound on smoothness is essential for data copying detection. In particular, non-regular distributions (with no bound on smoothness) can be thought of as a degenerate case in which \(p_{\epsilon}=0\).
Additionally, Theorem 6.1 provides a lower bound that complements Algorithm 2's performance guarantee (Theorem 5.1). Both bounds have the same dependence on \(p_{\epsilon}\), implying that our algorithm is optimal at least with regard to \(p_{\epsilon}\). However, our upper bound is significantly larger in its dependence on \(d\), the ambient dimension, and \(\epsilon\), the tolerance parameter itself.
While closing this gap remains an interesting direction for future work, we note that the existence of a gap isn't too surprising for our algorithm, \(Data\_Copy\_Detect\). This is because \(Data\_Copy\_Detect\) essentially relies on manually finding the entire region in which data-copying occurs, and doing this requires precise estimates of \(p\) at all points in the training sample.
Conversely, detecting data-copying only requires an _overall_ estimate for the data-copying rate, and doesn't necessarily require finding all of the corresponding regions. It is plausible that more sophisticated techniques might be able to estimate the data-copy rate _without_ directly finding these regions.
## 7 Conclusion
In conclusion, we provide a new modified definition of "data-copying" or generating memorized training samples for generative models that addresses some of the failure modes of previous definitions (Meehan et al., 2020). We provide an algorithm for detecting data-copying according to our definition, establish performance guarantees, and show that at least some smoothness conditions are needed on the data distribution for successful detection.
With regards to future work, one important direction is in addressing the limitations discussed in section 2.2. Our definition and algorithm are centered around the assumption that the goal of a generative model is to output \(q\) that is close to \(p\) in a mathematical sense. As a result, we are unable to handle cases where the generator tries to generate _transformed_ examples that lie outside the support of the training distribution. For example, a generator restricted to outputting black and white images (when trained on color images) would remain completely undetected by our algorithm regardless of the degree with which it copies its training data. To this end, we are very interested in finding generalizations of our framework that are able to capture such broader forms of data-copying.
## Acknowledgments
We thank NSF under CNS 1804829 for research support.
|
2305.01522 | Safe Deployment for Counterfactual Learning to Rank with Exposure-Based
Risk Minimization | Counterfactual learning to rank (CLTR) relies on exposure-based inverse
propensity scoring (IPS), a LTR-specific adaptation of IPS to correct for
position bias. While IPS can provide unbiased and consistent estimates, it
often suffers from high variance. Especially when little click data is
available, this variance can cause CLTR to learn sub-optimal ranking behavior.
Consequently, existing CLTR methods bring significant risks with them, as
naively deploying their models can result in very negative user experiences. We
introduce a novel risk-aware CLTR method with theoretical guarantees for safe
deployment. We apply a novel exposure-based concept of risk regularization to
IPS estimation for LTR. Our risk regularization penalizes the mismatch between
the ranking behavior of a learned model and a given safe model. Thereby, it
ensures that learned ranking models stay close to a trusted model, when there
is high uncertainty in IPS estimation, which greatly reduces the risks during
deployment. Our experimental results demonstrate the efficacy of our proposed
method, which is effective at avoiding initial periods of bad performance when
little data is available, while also maintaining high performance at
convergence. For the CLTR field, our novel exposure-based risk minimization
method enables practitioners to adopt CLTR methods in a safer manner that
mitigates many of the risks attached to previous methods. | Shashank Gupta, Harrie Oosterhuis, Maarten de Rijke | 2023-04-26T15:54:23Z | http://arxiv.org/abs/2305.01522v1 | # Safe Deployment for Counterfactual Learning to Rank
###### Abstract.
Counterfactual learning to rank (CLTR) relies on exposure-based inverse propensity scoring (IPS), a LTR-specific adaptation of IPS to correct for position bias. While IPS can provide unbiased and consistent estimates, it often suffers from high variance. Especially when little click data is available, this variance can cause CLTR to learn sub-optimal ranking behavior. Consequently, existing CLTR methods bring significant risks with them, as naively deploying their models can result in very negative user experiences.
We introduce a novel risk-aware CLTR method with theoretical guarantees for safe deployment. We apply a novel exposure-based concept of risk regularization to IPS estimation for LTR. Our risk regularization penalizes the mismatch between the ranking behavior of a learned model and a given safe model. Thereby, it ensures that learned ranking models stay close to a trusted model, when there is high uncertainty in IPS estimation, which greatly reduces the risks during deployment. Our experimental results demonstrate the efficacy of our proposed method, which is effective at avoiding initial periods of bad performance when little date is available, while also maintaining high performance at convergence. For the CLTR field, our novel exposure-based risk minimization method enables practitioners to adopt CLTR methods in a safer manner that mitigates many of the risks attached to previous methods.
Learning to Rank; Counterfactual Learning to Rank; Safety
Existing CRM seems a great fit for CLTR but, unfortunately, it is based on action propensities that do not generalize well to the very large action spaces in CLTR. Therefore, there is a need for a conservative generalization bound that is practical and effective in the CLTR setting.
**Safe CLTR.** To address this gap, we propose an exposure-based counterfactual risk minimization (CRM) method that is specifically designed for safe CLTR. Similar to how exposure-based IPS deals with the large action spaces in ranking settings, our method is based on an exposure-based alternative to action-based generalization bounds. We first introduce a divergence measure based on differences between the distributions of exposure of a new policy and a safe logging policy. Then we provide a novel generalization bound and prove that it is a high-confidence lower-bound on the performance of a learned policy. When uncertain, this bound defaults to preferring the logging policy and thus avoids decreases in performance due to variance. In other words, with high-confidence, ranking models optimized with this bound are guaranteed to never deteriorate the user experience, even when little data is available.
**Main contributions.** We are the first to address CRM for CLTR and contribute a novel exposure-based CRM method for safe CLTR. Our experimental results show that our proposed method is effective at avoiding initial periods of bad performance when little data is available, while also maintaining high performance at convergence. Our novel exposure-based CRM method thus enables safe CLTR that can mitigate many of the risks attached to previous methods. We hope that our contribution makes the adoption of CLTR methods more attractive to practitioners working on real-world search and recommendation systems.
## 2. Related Work
We review related work on CLTR and CRM in off-policy learning.
### Counterfactual learning to rank
LTR deals with learning optimal rankings to maximize a pre-defined notion of utility (Kolmogorov, 1958). Traditionally, LTR systems were optimized using supervised learning on manually-created relevance judgements (Beng et al., 2015). But the manual curation of relevance judgements is a time-consuming and costly process (Beng et al., 2015; Li et al., 2017). Also, manually-graded relevance signals do not always align well with actual user preferences.
## 3. Background
### Learning to rank
Formally, this objective can be expressed as the maximization of the following utility function:
\[U(\pi)=\mathbb{E}_{q}\!\left[\sum_{d\in\mathcal{D}_{q}}\rho(d\mid q,\pi)P(R=1\mid d,q)\right]\!. \tag{1}\]
where \(\rho(d\mid q,\pi)\) is the weight \(\pi\) gives to document \(d\) for query \(q\). The choice of \(\rho\) determines what metric is optimized, for instance, the well-known normalized discounted cumulative gain (NDCG) metric (Krishna et al., 2017):
\[\rho_{\text{DCG}}(d\mid q,\pi)=\mathbb{E}_{y\sim\pi(\cdot\mid q)}\left[(\log_{ 2}(\text{rank}(d\mid y)+1))^{-1}\right]\!. \tag{2}\]
where \(y\) is a ranking sampled from the policy \(\pi\). For this paper, the aim is to optimize the expected number of clicks; the next subsection explains how we choose \(\rho\) accordingly.
### Counterfactual learning to rank
**Position bias in clicks.** Optimizing the LTR objective in Eq. 1 requires access to the true relevance labels (\(P(R=1\mid d,q)\)), which is often impossible in real-world ranking settings. As an alternative, CLTR uses clicks, since they are present in abundance as logged user interactions. However, clicks are a biased indicator of relevance; for this paper, we will assume the relation between clicks and relevance is determined by a position-based click model (Swaminathan and Joachims, 2016; Koshino et al., 2017). For a document \(d\) displayed in ranking \(y\) for query \(q\), this means the click probability can be decomposed into a rank-based examination probability and a document-based relevance probability:
\[P(C=1\mid d,q,y)=P(E=1\mid\text{rank}(d\mid y))P(R=1\mid d,q). \tag{3}\]
The key characteristic of the position-based click model is that the probability of examination only depends on the rank at which a document is displayed: \(P(E=1\mid d,q,y)=P(E=1\mid\text{rank}(d\mid y))\). Furthermore, this model assumes that clicks only take place when a document is both relevant to a user and examined by them. Consequently, the click signal is an indication of both the relevance and examination of documents. Thus, the position at which a document is displayed can have a stronger effect on its click probability than its actual relevance (Koshino et al., 2017).
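To make the decomposition above concrete, the following NumPy sketch samples clicks for one displayed ranking under the position-based model of Eq. 3; the examination and relevance probabilities, as well as the array layout, are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def sample_clicks(ranking, relevance_prob, exam_prob, rng):
    """Sample clicks for one displayed ranking under the position-based model (Eq. 3).

    ranking        : array of document ids, ordered by displayed rank
    relevance_prob : relevance_prob[d] = P(R=1 | d, q)
    exam_prob      : exam_prob[k] = P(E=1 | rank k+1)
    """
    p_click = exam_prob[:len(ranking)] * relevance_prob[ranking]  # examination * relevance
    return (rng.random(len(ranking)) < p_click).astype(int)

rng = np.random.default_rng(0)
relevance_prob = np.array([0.9, 0.6, 0.3, 0.2, 0.1])  # illustrative P(R=1 | d, q) values
exam_prob = 1.0 / np.arange(1, 6)                     # examination probability decays with rank
clicks = sample_clicks(np.array([2, 0, 1, 4, 3]), relevance_prob, exam_prob, rng)
```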
**Inverse-propensity-scoring for CLTR.** We assume a setting where \(N\) interactions have been logged using the logging policy \(\pi_{0}\), for each interaction \(i\) the query \(q_{i}\), the displayed ranking \(y_{i}\), and the clicks \(c_{i}\) are logged:
\[\mathcal{D}=\left\{q_{i},y_{i},c_{i}\right\}_{i=1}^{N}\!. \tag{4}\]
We will use \(c_{i}(d)\in\{0,1\}\) to denote whether document \(d\) was clicked at interaction \(i\). Furthermore, we choose \(\rho\) to match the examination probabilities under \(\pi\):
\[\rho(d\mid q,\pi)=\mathbb{E}_{y\sim\pi(\cdot\mid q)}\left[P(E=1\mid\text{ rank}(d\mid y))\right]=\rho(d). \tag{5}\]
Hence, our optimization objective \(U(\pi)\) is equal to the expected number of clicks (cf. Eq. 1 and 3).
In order to apply IPS, we need the propensity of each document (Krishna et al., 2017); following Oosterhuis and de Rijke (Oosterhuis and de Rijke, 2017), we use:
\[\begin{split}\rho(d\mid q,\pi_{0})&=P(E=1\mid\pi_{ 0},d,q)\\ &=\mathbb{E}_{y\sim\pi_{0}(\cdot\mid q)}\left[P(E=1\mid\text{ rank}(d\mid y))\right]=\rho_{0}(d).\end{split} \tag{6}\]
Thus, the exposure of \(d\) represents how likely it is to be examined when using \(\pi_{0}\) for logging. Thereby, it indicates how much the clicks on \(d\) underrepresent its relevance. For the sake of brevity, we drop \(q\), \(\pi\) and \(\pi_{0}\) from our notation when their values are clear from the context: i.e., \(\rho(d\mid q,\pi)=\rho(d)\) and \(\rho(d\mid q,\pi_{0})=\rho_{0}(d)\).
The exposure-based IPS estimator takes each click in \(\mathcal{D}\) and weights it inversely to \(\rho_{0}(d)\) to correct for position-bias (Koshino et al., 2017; Oosterhuis and de Rijke, 2017):
\[\hat{U}(\pi)=\frac{1}{N}\sum_{i=1}^{N}\sum_{d\in\mathcal{D}_{q_{i}}}\frac{ \rho(d)}{\rho_{0}(d)}c_{i}(d). \tag{7}\]
In other words, to compensate for the fact that position bias lowers the click probability of a document by a factor of \(\rho_{0}(d)\), clicks are weighted by \(1/\rho_{0}(d)\) to correct for this effect in expectation. As a result, clicks on documents that \(\pi_{0}\) is likely to show at positions with low examination probabilities (i.e., the bottom of a ranking) receive a higher IPS weight to compensate.
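A minimal sketch of this estimator is shown below; the data layout (a list of logged interactions and per-query exposure arrays) is our own assumption about how the quantities in Eq. 7 could be stored.

```python
import numpy as np

def exposure_ips_estimate(logs, rho_new, rho_log):
    """Exposure-based IPS estimate of the expected number of clicks (Eq. 7).

    logs    : list of (query_id, clicked_doc_ids) tuples, one per logged interaction
    rho_new : rho_new[q][d] = exposure of document d under the new policy (Eq. 5)
    rho_log : rho_log[q][d] = exposure of document d under the logging policy (Eq. 6)
    """
    total = 0.0
    for q, clicked_docs in logs:
        for d in clicked_docs:
            total += rho_new[q][d] / rho_log[q][d]  # each click is weighted by 1 / rho_0(d)
    return total / len(logs)
```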
**Statistical properties of the IPS estimator.** The IPS estimator \(\hat{U}(\pi)\) (Eq. 7) is an unbiased and consistent estimate of our LTR objective \(U(\pi)\) (Eq. 1) (Li et al., 2017). It is _unbiased_ since its expected value is equal to our objective:
\[\mathbb{E}_{q,y,c}\big{[}\hat{U}(\pi)\big{]}=U(\pi), \tag{8}\]
and it is _consistent_ because this equivalence also holds in the limit of infinite data:
\[\lim_{N\to\infty}\hat{U}(\pi)=U(\pi). \tag{9}\]
For proofs of these properties, we refer to previous work (Koshino et al., 2017; Oosterhuis and de Rijke, 2017; Oosterhuis and de Rijke, 2017).
Importantly, the unbiasedness and consistency properties do not indicate that the actual IPS estimates will be reliable. This is because the estimates produced by IPS are also affected by its variance:
\[\text{Var}_{y,c}\big{[}\hat{U}(\pi)\mid q\big{]}=\sum_{d\in\mathcal{D}_{q}} \frac{\rho(d)^{2}}{\rho_{0}(d)^{2}}\text{Var}_{y,c}[c(d)\mid\pi_{0},q]. \tag{10}\]
The variance is large when some propensities are small, due to the \(\rho_{0}(d)^{-2}\) term. Hence, the actual estimates that IPS produces may contain large errors, especially when \(N\) is relatively small or clicks are very noisy. Thus, \(\hat{U}(\pi)\) may be far removed from the true \(U(\pi)\), and optimization with IPS may be unsafe and lead to unpredictable results.
### Counterfactual risk minimization for offline bandit learning
The foundational work by Swaminathan and Joachims (Swaminathan and Joachims, 2016) introduced the idea of counterfactual risk minimization (CRM) for off-policy learning in a contextual bandit setup. To avoid the negative effects of high variance in IPS estimation during bandit optimization, they utilize a generalization bound through the addition of a risk term (Swaminathan and Joachims, 2016). With a probability of \(1-\delta\), the IPS estimate minus the risk term is a lower bound on the true utility of the policy:
\[P\big{(}U(\pi)\geq\hat{U}(\pi)-\text{Risk}(\delta)\big{)}>1-\delta. \tag{11}\]
Therefore, optimization of the lower bound can be more reliable than solely optimizing the IPS estimate (\(\hat{U}(\pi)\)), since it provides a high-confidence guarantee that a lower bound on the _true_ utility of the policy is maximized. Swaminathan and Joachims (Swaminathan and Joachims, 2016) propose
using the sample variance as the risk factor:
\[\hat{U}_{\text{action-CRM}}(\pi)=\hat{U}_{\text{action}}(\pi)-\lambda\sqrt{\frac{1}{ N}\text{Var}\big{[}\hat{U}_{\text{action}}(\pi)\big{]}}, \tag{12}\]
where \(\lambda\in\mathbb{R}^{>0}\) is an alternative to the \(\delta\) parameter that also determines how likely it is that the bound on the true utility holds. Importantly, this bound is based on an action-based IPS estimator. For our LTR setting this would translate to:
\[\hat{U}_{\text{action}}(\pi)=\frac{1}{N}\sum_{i=1}^{N}\frac{\pi(y_{i}\mid q_{i} )}{\pi_{0}(y_{i}\mid q_{i})}\sum_{d\in D_{qi}}c_{i}(d). \tag{13}\]
Action-based IPS estimation does not work well in the LTR setting because the large number of possible rankings results in extremely small action propensities: \(\pi_{0}(y_{i}\mid q_{i})\), creating a high-variance problem. As discussed in Section 3.2, for this reason CLTR uses exposure-based propensities instead (Eq. 6 and 7), as they effectively avoid extremely small values. As a result, the CRM approach from (Srivastava et al., 2017) is not effective for CLTR, since the high variance of its action-based IPS makes the method impractical in the ranking setting.
Another downside of the CRM approach is that the computation of the sample-variance requires a full-pass over the training dataset, which is computationally costly for large-scale datasets. As a solution, Wu and Wang (2019) introduce variational CRM (VCRM) which uses an upper bound on the variance term based on the Renyi divergence between the new policy and the logging policy (Renyi, 2017). This Renyi divergence is approximated via random sampling, thus making the VCRM method suitable for stochastic gradient descent-based training methods (Zhou et al., 2018). Nevertheless, this CRM approach still relies on action-based propensities, and therefore, does not provide an effective solution for the high-variance problem in CLTR.
## 4. A Novel Exposure-Based Generalization Bound for CLTR
To develop a CRM method for CLTR with safety guarantees, we aim to find a risk term that gives us a generalization bound as in Eq. 11. Importantly, this bound has to be effective in the LTR setting; therefore, our approach should avoid action-based propensities. We take inspiration from work by Wu and Wang (2019), who use the fact that the Renyi divergence is an upper bound on the variance of an IPS estimator:
\[\text{Var}\big{[}\hat{U}_{\text{action}}(\pi)\big{]}\leq d_{2}(\pi\mid\pi_{0}), \tag{14}\]
where \(d_{2}\) is the exponentiated Renyi divergence between the new policy and the logging policy (Renyi, 2017):
\[d_{2}(\pi\parallel\pi_{0})=\mathbb{E}_{q}\Bigg{[}\sum_{g}\Bigg{(}\frac{\pi(y \mid q)}{\pi_{0}(y\mid q)}\Bigg{)}^{2}\pi_{0}(y\mid q)\Bigg{]}. \tag{15}\]
In other words, the dissimilarity between the logging policy and a new policy can be used to bound the variance of the IPS estimate of the new policy's performance. However, because this divergence is based on action propensities, it is not effective in the LTR setting.
Below, we introduce an exposure-based measure of divergence that can produce a desired generalization bound for LTR optimization. Section 4.1 introduces the concept of normalized exposure that treats rankings as exposure distributions. Section 4.2 proves that Renyi divergence based on normalized exposure can bound the variance of an exposure-based IPS estimator. Section 4.3 uses this variance bound to construct a generalization bound for CLTR.
### Normalized expected exposure
Renyi divergence is only valid for probability distributions, e.g., \(d_{2}(\pi\parallel\pi_{0})\) with \(\pi(y\mid q)\) and \(\pi_{0}(y\mid q)\). However, expected exposure is not a probability distribution, i.e., the values of \(\rho(d)\) (Eq. 5) or \(\rho_{0}(d)\) (Eq. 6) do not necessarily sum to one over all documents to be ranked. This is because users generally examine more than a single item in a single displayed ranking (Bordord et al., 2017); as a result, expected exposure can be seen as a distribution of multiple examinations. Our insight is that a valid probability distribution can be obtained by normalizing the expected exposure:
\[\rho^{\prime}(d)=\frac{\rho(d)}{\sum_{d^{\prime}\in D}\rho(d^{\prime})}=\frac{ \rho(d)}{\text{Z}}, \tag{16}\]
where the normalization factor is a constant that only depends on \(K\), the (truncated) ranking length:
\[\begin{split}\text{Z}=\sum_{d\in D}\rho(d)&=\sum_{ d\in D}\mathbb{E}_{y\sim\pi}\big{[}P(E=1\mid\text{rank}(d\mid y))\big{]}\\ &=\mathbb{E}_{y\sim\pi}\Big{[}\sum_{d\in D}P\big{(}E=1\mid \text{rank}(d\mid y)\big{)}\Big{]}\\ &=\mathbb{E}_{y\sim\pi}\Big{[}\sum_{k=1}^{K}P\big{(}E=1\mid k \big{)}\Big{]}=\sum_{k=1}^{K}P\big{(}E=1\mid k\big{)}.\end{split} \tag{17}\]
In this way, Z can be seen as the expected amount of examination that any ranking will receive, and \(\rho^{\prime}\) as the probability distribution that indicates how it is expected to spread over documents.
An important property is that the ratio between two propensities is always equal to the ratio between their normalized counterparts:
\[\frac{\rho(d)}{\rho_{0}(d)}=\frac{\rho^{\prime}(d)}{\rho^{\prime}_{0}(d)}. \tag{18}\]
This is relevant to IPS estimation since it only requires the ratios between propensities; the proofs in the remainder of this paper make use of this property.
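The sketch below estimates the expected exposure of Eq. 5 by sampling rankings and then normalizes it as in Eq. 16; the uniform toy policy and the examination probabilities are placeholders chosen only for illustration.

```python
import numpy as np

def expected_exposure(sample_ranking, exam_prob, n_docs, n_samples, rng):
    """Monte-Carlo estimate of rho(d) = E_{y ~ pi}[ P(E=1 | rank(d|y)) ]  (Eq. 5)."""
    rho = np.zeros(n_docs)
    for _ in range(n_samples):
        y = sample_ranking(rng)          # a ranking: array of document ids
        rho[y] += exam_prob[:len(y)]     # document y[k] receives the examination prob. of rank k+1
    return rho / n_samples

def uniform_policy(rng):
    return rng.permutation(5)            # toy policy: uniform over rankings of 5 documents

exam_prob = 1.0 / np.arange(1, 6)        # illustrative examination probabilities
Z = exam_prob.sum()                      # normalization factor of Eq. 17
rho = expected_exposure(uniform_policy, exam_prob, n_docs=5, n_samples=2000,
                        rng=np.random.default_rng(0))
rho_prime = rho / Z                      # normalized expected exposure (Eq. 16), sums to 1
```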
Figure 1. Three rankings and their normalized expected exposure distributions (Eq. 16) based on DCG weights (Eq. 2). According to our exposure-based divergence, ranking 1 and ranking 2 are quite similar despite only agreeing on the placing of document A. In contrast, ranking 1 and ranking 3 also agree on the placement of a single document (C) but have the highest possible dissimilarity, due to their highly mismatched exposure distributions.
Finally, using the normalized expected exposure, we can introduce the exponentiated exposure-based Renyi divergence:
\[d_{2}(\rho\parallel\rho_{0})=\mathbb{E}_{q}\bigg{[}\sum_{d\in D_{q}}\rho_{0}^{ \prime}(d)\bigg{(}\frac{\rho^{\prime}(d)}{\rho_{0}^{\prime}(d)}\bigg{)}^{2} \bigg{]}. \tag{19}\]
The key difference between our exposure-based divergence and action-based divergence is that it allows policies to be very similar, even when they have no overlap in the rankings they produce. As an intuitive example, Figure 1 displays three different rankings and their associated normalized expected exposure distributions; these are the distributions for deterministic policies that give 100% probability to one of the rankings. Under action-based divergence, these policies would have the highest possible dissimilarity since they have no overlap in their possible actions, i.e., the rankings they give non-zero probability. In contrast, exposure-based divergence gives high similarity between ranking 1 and ranking 2, since the differences in their exposure distribution are minor. We note that these rankings still disagree on the placement of all documents except one. Conversely, for ranking 1 and ranking 3, which also only agree on a single document placement, exposure-based divergence gives the lowest possible similarity score because their exposure distributions are highly mismatched. Importantly, by solely considering differences in exposure distributions, exposure-based divergence naturally weighs differences at the bottom of rankings as less impactful than changes that affect the top. As a result, exposure-based divergence more closely corresponds with common ranking metrics (Eq. 1) than existing action-based divergences.
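The following sketch reproduces this intuition numerically: it computes normalized exposure distributions of deterministic rankings under DCG weights (Eqs. 2 and 16) and their exposure-based divergence for a single query (Eq. 19). The three-document example is our own illustration in the spirit of Figure 1, not data from the paper.

```python
import numpy as np

def dcg_exposure(ranking, n_docs):
    """Normalized expected exposure of a deterministic ranking with DCG weights (Eqs. 2, 16)."""
    rho = np.zeros(n_docs)
    rho[ranking] = 1.0 / np.log2(np.arange(2, len(ranking) + 2))
    return rho / rho.sum()

def exposure_divergence(rho_new, rho_log):
    """Exponentiated exposure-based Renyi divergence for a single query (Eq. 19)."""
    return np.sum(rho_log * (rho_new / rho_log) ** 2)

r1 = dcg_exposure(np.array([0, 1, 2]), 3)   # documents 0, 1, 2 displayed in this order
r2 = dcg_exposure(np.array([0, 2, 1]), 3)   # agrees with r1 only on the top position
r3 = dcg_exposure(np.array([2, 1, 0]), 3)   # reverses r1
print(exposure_divergence(r2, r1))          # close to 1: similar exposure distributions
print(exposure_divergence(r3, r1))          # larger: highly mismatched exposure distributions
```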
### Exposure-divergence bound on variance
We now provide proof that exposure-based divergence is an upper bound on the variance of IPS estimators for CLTR.
**Theorem 4.1**: _Given a ranking policy \(\pi\) and logging policy \(\pi_{0}\), with the expected exposures \(\rho(d)\) and \(\rho_{0}(d)\) respectively, the variance of the exposure-based IPS estimate \(\hat{U}(\pi)\) is upper-bounded by exposure-based divergence:_
\[\operatorname{Var}_{q,y,c}\big{[}\hat{U}(\pi)\big{]}\leq\frac{\mathbb{Z}}{N} \mathbb{E}_{q}\bigg{[}\sum_{d\in D_{q}}\rho_{0}^{\prime}(d)\bigg{(}\frac{\rho ^{\prime}(d)}{\rho_{0}^{\prime}(d)}\bigg{)}^{2}\bigg{]}. \tag{20}\]
Proof.: From the definition of \(\hat{U}(\pi)\) (Eq. 7) and the assumption that queries \(q\) are independent and identically distributed (i.i.d), the variance of the counterfactual estimator can be rewritten as an expectation over queries (Zhu et al., 2017):
\[\operatorname{Var}_{q,y,c}\big{[}\hat{U}(\pi)\big{]}=\frac{1}{N}\mathbb{E}_{q}\big{[}\operatorname{Var}_{y,c}\big{[}\hat{U}(\pi)\mid q\big{]}\big{]}. \tag{21}\]
Since we have assumed a rank-based examination model (Section 3.2), the examinations of documents are independent. This allows us to rewrite the variance conditioned on a single query:
\[\operatorname{Var}_{y,c}\big{[}\hat{U}(\pi)\mid q\big{]}=\operatorname{Var}_{y,c}\Bigg{[}\sum_{d\in D_{q}}\frac{\rho(d)}{\rho_{0}(d)}c(d,q)\Bigg{]}\] \[=\sum_{d\in D_{q}}\operatorname{Var}_{y,c}\Bigg{[}\frac{\rho(d)}{\rho_{0}(d)}c(d,q)\Bigg{]}\leq\sum_{d\in D_{q}}\mathbb{E}_{c,y}\Bigg{[}\bigg{(}\frac{\rho(d)}{\rho_{0}(d)}c(d,q)\bigg{)}^{2}\Bigg{]}. \tag{22}\]
Since: \(c(d,q)^{2}=c(d,q)\), we can further rewrite to:
\[\sum_{d\in D_{q}}\mathbb{E}_{c,y}\Bigg{[}\bigg{(}\frac{\rho(d)}{\rho_{0}(d)}c(d,q)\bigg{)}^{2}\Bigg{]} =\sum_{d\in D_{q}}\mathbb{E}_{c,y}\Bigg{[}\bigg{(}\frac{\rho(d)}{\rho_{0}(d)}\bigg{)}^{2}c(d,q)\Bigg{]}\] \[=\sum_{d\in D_{q}}\bigg{(}\frac{\rho(d)}{\rho_{0}(d)}\bigg{)}^{2}P(C=1\mid d,q,\pi_{0}). \tag{23}\]
Next, we use Eq. 3 and 6 to substitute the click probability; subsequently, we replace the examination propensities with normalized counterparts using Eq. 16 and 18; and lastly, we upper bound the result using the fact that \(P(R=1|d,q)\leq 1\):
\[\sum_{d\in D_{q}}\mathbb{E}_{c,y}\Bigg{[}\bigg{(}\frac{\rho(d)}{ \rho_{0}(d)}c(d,q)\bigg{)}^{2}\Bigg{]}\] \[=\sum_{d\in D_{q}}\rho_{0}(d)\bigg{(}\frac{\rho^{\prime}(d)}{\rho _{0}^{\prime}(d)}\bigg{)}^{2}P(R=1|d,q)\leq Z\sum_{d\in D_{q}}\rho_{0}^{\prime}( d)\bigg{(}\frac{\rho^{\prime}(d)}{\rho_{0}^{\prime}(d)}\bigg{)}^{2}. \tag{24}\]
Finally, we place this upper bound for a single query back into the expectation over all queries (Eq. 20):
\[\frac{1}{N}\,\mathbb{E}_{q}\big{[}\operatorname{Var}_{y,c}\big{[}\hat{U}(\pi )\mid q\big{]}\big{]}\leq\frac{Z}{N}\mathbb{E}_{q}\bigg{[}\sum_{d\in D_{q}} \rho_{0}^{\prime}(d)\bigg{(}\frac{\rho^{\prime}(d)}{\rho_{0}^{\prime}(d)} \bigg{)}^{2}\bigg{]}. \tag{25}\]
Therefore, by Eq. 21, 25 and the definition of exposure-based divergence in Eq. 19, it is a proven upper bound of the variance.
### Exposure-divergence bound on performance
Using the upper bound on the variance of an CLTR IPS estimator that was proven in Theorem 4.1, we can now introduce a generalization bound for the CLTR estimator.
**Theorem 4.2**: _Given the true utility \(U(\pi)\) (Eq. 1) and its exposure-based IPS estimate \(\hat{U}(\pi)\) (Eq. 7), for the ranking policy \(\pi\) and the logging policy \(\pi_{0}\) with expected exposures \(\rho(d)\) and \(\rho_{0}(d)\), respectively, the following generalization bound holds with probability \(1-\delta\):_
\[U(\pi)\geq\hat{U}(\pi)-\sqrt{\frac{Z}{N}\Big{(}\frac{1-\delta}{\delta}\Big{)}d_{2 }(\rho\parallel\rho_{0})}. \tag{26}\]
Proof.: As per Cantelli's inequality (Cantelli, 2017), given an estimator \(\hat{X}\) with expected value \(\mathbb{E}[\hat{X}]\) and variance \(\operatorname{Var}[\hat{X}]\), the following tail-bound holds:
\[P(\hat{X}-\mathbb{E}[\hat{X}]\geq\lambda)\leq\frac{\operatorname{Var}[\hat{X}]}{ \operatorname{Var}[\hat{X}]+\lambda^{2}}. \tag{27}\]
Since \(\lambda>0\) is a free parameter, we can define \(\delta\) such that:
\[\delta\coloneqq\frac{\operatorname{Var}[\hat{X}]}{\operatorname{Var}[\hat{X}]+ \lambda^{2}},\qquad\lambda=\sqrt{\frac{1-\delta}{\delta}\operatorname{Var}[\hat{X}]}. \tag{28}\]
Consequently, the following inequality holds:
\[P(\mathbb{E}[\hat{X}]\geq\hat{X}-\lambda)\geq 1-\delta. \tag{29}\]
Building on this inequality, the following inequality must hold with probability \(1-\delta\):
\[U(\pi)\geq\hat{U}(\pi)-\sqrt{\frac{1-\delta}{\delta}\mathrm{Var}_{q,y,c}\big{[}\hat{U}(\pi)\big{]}}. \tag{30}\]
Finally, we can replace the variance with the upper bound from Theorem 4.1, which completes the proof.
**Risk in CLTR.** The generalization bound in Theorem 4.2 implies the following measure of risk: \(\mathrm{Risk}(\delta)=\sqrt{\frac{Z}{N}\big{(}\frac{1-\delta}{\delta}\big{)}d_{2}(\rho\parallel\rho_{0})}\) (cf. Eq. 11). Clearly, this risk is mostly determined by the exposure-based divergence between the new policy and the logging policy: the greater the difference between how exposure is spread over documents by the logging policy and the new policy, the higher the risk involved. Therefore, to optimize this lower bound, one has to balance the maximization of the estimated utility \(\hat{U}(\pi)\) and the minimization of risk by not letting \(\pi\) differ too much from \(\pi_{0}\) in terms of exposure.
Furthermore, we see that our measure of risk diminishes as \(N\) increases. As a result, the risk term will overwhelm the IPS term when \(N\) is very low, as there is much risk involved when estimating based on a few interactions. Conversely, when \(N\) is very large, the risk term mostly disappears, as the IPS estimate is more reliable when based on large numbers of interactions. Thus, during optimization, the generalization bound is expected to mostly help with avoiding initial decreases in performance, while still converging at the same place as the standard IPS estimator.
Lastly, the \(\delta\) parameter determines the _safety_ that is provided by the risk, where a lower \(\delta\) makes it more likely that the generalization bound holds. Accordingly, as \(\delta\) increases the risk term becomes smaller and will thus have less effect on optimization.
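A small numerical sketch makes the combined effect of \(N\) and \(\delta\) on the lower bound of Theorem 4.2 visible; all input values here are illustrative placeholders.

```python
import numpy as np

def lower_bound(ips_estimate, divergence, Z, n, delta):
    """High-confidence lower bound of Theorem 4.2: IPS estimate minus the risk term (Eq. 26)."""
    risk = np.sqrt((Z / n) * ((1.0 - delta) / delta) * divergence)
    return ips_estimate - risk

# The risk shrinks as 1/sqrt(n): with few interactions the bound stays near (or below)
# the logging policy, with many interactions it approaches the plain IPS estimate.
for n in (10**2, 10**4, 10**6):
    print(n, lower_bound(ips_estimate=0.7, divergence=1.5, Z=2.28, n=n, delta=0.01))
```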
To the best of our knowledge, this is the first exposure-based generalization bound, which makes it the first method designed for safe optimization in the CLTR setting.
**Illustrative comparison.** To emphasize the working and novelty of our exposure-based risk, a comparison of the optimal policies for action-based risk, exposure-based risk, and no risk is shown in Figure 2. We see that IPS without a risk term places the once-clicked document at the first position, with 100% probability. This is very risky, as it greatly impacts the ranking while only being based on a single observation. The action-based risk tries to mitigate this risk with a probabilistic policy that gives most probability to the logging policy ranking (90%) and the remainder to the IPS ranking (10%). In contrast, with exposure-based risk, the optimal policy makes the risk and utility trade-off in a single ranking that mostly follows the logging policy but places the clicked document slightly higher.
This example illustrates that because action-based risk does not have a similarity measure between rankings, it can only produce a probabilistic interpolation between the logging policy and IPS rankings. Alternatively, because exposure-based risk does have such a measure, it produces a ranking that is neither the logging ranking nor the IPS ranking, but one with an exposure distribution that is similar to both. Thereby, exposure-based risk has a more elegant and natural method of balancing utility maximization and risk minimization in the CLTR setting.
## 5. A Novel Counterfactual Risk Minimization Method for LTR
Now that we have the proven generalization bound described in Section 4.3 (Theorem 4.2), we can propose a novel risk-aware CLTR method for optimizing it. The aim of our method is to find the policy that maximizes this high-confidence lower bound on the true performance. In formal terms, we have the following optimization problem:
\[\max_{\pi}\hat{U}(\pi)-\sqrt{\frac{Z}{N}\big{(}\frac{1-\delta}{\delta}\big{)}d _{2}(\rho\parallel\rho_{0})}. \tag{31}\]
We propose to train a stochastic policy \(\pi\) via stochastic gradient descent; therefore, we need to derive the gradient and find a method of computing it. For the computation of the gradient w.r.t. the utility \(\hat{U}(\pi)\), the first part of Eq. 31, we refer to prior work that discusses this topic extensively (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018). Thus, we can focus our attention on the second part of Eq. 31:
\[\nabla_{\pi}\sqrt{\frac{Z}{N}\big{(}\frac{1-\delta}{\delta}\big{)}d_{2}(\rho \parallel\rho_{0})}=\sqrt{\frac{Z(1-\delta)}{4N\delta d_{2}(\rho\parallel\rho _{0})}}\nabla_{\pi}d_{2}(\rho\parallel\rho_{0}). \tag{32}\]
To derive the gradient of the exposure-based divergence function, we use the relation between \(\rho\) and \(\rho^{\prime}\) from Eq. 17 and 18:
\[\nabla_{\pi}d_{2}(\rho\parallel\rho_{0}) =\nabla_{\pi}\mathbb{E}_{q}\bigg{[}\sum_{d\in D_{q}}\rho_{0}^{ \prime}(d)\bigg{(}\frac{\rho^{\prime}(d)}{\rho_{0}^{\prime}(d)}\bigg{)}^{2} \bigg{]}\] \[=\frac{2}{Z}\,\mathbb{E}_{q}\bigg{[}\sum_{d\in D_{q}}\frac{\rho(d )}{\rho_{0}(d)}\nabla_{\pi}\rho(d)\bigg{]}. \tag{33}\]
Thus, we only need the gradient w.r.t. the exposure of a document (\(\nabla_{\pi}\rho(d)\)) to complete our derivation. If \(\pi\) is a Plackett-Luce (PL) ranking model, one can make use of the specialized gradient computation algorithm from (Zhou et al., 2017). However, for this work, we will not make further assumptions about \(\pi\) and apply the more general log-derivate trick from the REINFORCE algorithm (Zhou et al., 2017):
\[\nabla_{\pi}\rho(d)=\mathbb{E}_{y\sim\pi}\big{[}P\big{(}E=1\mid\mathrm{rank}(d\mid y)\big{)}\nabla_{\pi}\log\pi(y)\big{]}. \tag{34}\]
Putting all of the previous elements back together, gives us the gradient w.r.t. the exposure-based risk function:
\[\sqrt{\frac{1-\delta}{NZ\delta\,d_{2}(\rho\parallel\rho_{0})}}\,\mathbb{E}_{q,y\sim\pi}\Bigg{[}\Bigg{(}\sum_{k=1}^{K}\frac{\rho(y_{k})}{\rho_{0}(y_{k})}P(E=1\mid k)\Bigg{)}\nabla_{\pi}\log\pi(y)\Bigg{]}, \tag{35}\]
where \(y_{k}\) is the document at rank \(k\) in ranking \(y\). For a close approximation of this gradient, we replace the expectations with the queries from the given dataset and the rankings sampled from \(\pi\) during optimization (Zhou et al., 2017; Wang et al., 2018).
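One possible implementation of this gradient estimate for a single query is sketched below with a Plackett-Luce policy; the parameterization, the coefficient (obtained by combining Eqs. 32 and 33), and all variable names are our own assumptions rather than the paper's implementation.

```python
import numpy as np

def pl_sample_and_grad(scores, K, rng):
    """Sample a top-K ranking from a Plackett-Luce policy and return grad of log pi(y) w.r.t. scores."""
    remaining = np.ones(len(scores), dtype=bool)
    grad, ranking = np.zeros(len(scores)), []
    for _ in range(K):
        logits = np.where(remaining, scores, -np.inf)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        d = rng.choice(len(scores), p=probs)
        grad -= probs                   # gradient of -logsumexp over the remaining documents
        grad[d] += 1.0                  # gradient of the chosen document's score
        ranking.append(d)
        remaining[d] = False
    return np.array(ranking), grad

def risk_gradient_estimate(scores, rho, rho0, exam_prob, d2, Z, N, delta, n_samples, rng):
    """Sample-based REINFORCE estimate of the risk gradient (Eqs. 32-35), for a single query."""
    coef = np.sqrt(Z * (1 - delta) / (4 * N * delta * d2)) * (2.0 / Z)  # Eq. 32 combined with Eq. 33
    grad = np.zeros_like(scores)
    for _ in range(n_samples):
        y, grad_log_pi = pl_sample_and_grad(scores, len(exam_prob), rng)
        weight = np.sum(rho[y] / rho0[y] * exam_prob)  # sum_k rho(y_k)/rho_0(y_k) * P(E=1|k)
        grad += weight * grad_log_pi
    return coef * grad / n_samples
```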
Figure 2. Example comparison of the optimal policy for a single logged click according to three different risk estimators.
Similarly, since the exact computation of \(d_{2}(\rho\parallel\rho_{0})\) is infeasible in practice, we introduce a sample-based empirical divergence estimator:
\[\hat{d}_{2}(\rho\parallel\rho_{0})=\frac{1}{N}\sum_{i=1}^{N}\sum_{d\in D_{q_{i} }}\rho_{0}^{\prime}(d)\left(\frac{\rho^{\prime}(d)}{\rho_{0}^{\prime}(d)} \right)^{2}. \tag{36}\]
This is an unbiased estimate of the true divergence given that the sampling process is truly Monte Carlo (Luo, 2017).
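A direct implementation of this estimator takes only a few lines; the per-query normalized exposure arrays are assumed to be precomputed (for example, with the sampling sketch shown earlier).

```python
import numpy as np

def empirical_divergence(logged_queries, rho_new, rho_log):
    """Sample-based estimate of the exposure-based divergence (Eq. 36).

    logged_queries : query ids q_1 .. q_N of the logged interactions
    rho_new[q], rho_log[q] : normalized expected exposure per document for query q
    """
    total = 0.0
    for q in logged_queries:
        ratio = rho_new[q] / rho_log[q]
        total += np.sum(rho_log[q] * ratio ** 2)
    return total / len(logged_queries)
```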
## 6. Experimental Setup
For our experiments, we follow the semi-synthetic experimental setup that is common in the CLTR literature (Kang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020). We make use of the three largest publicly available LTR datasets: Yahoo! Webscope (Yamaguchi et al., 2017), MSLR-WEB30k (Yamaguchi et al., 2017), and Istella (Istella, 2018). The datasets consist of queries, a preselected list of documents per query, query-document feature vectors, and manually-graded relevance judgements for each query-document pair. To generate clicks, we follow previous work (Wang et al., 2018; Wang et al., 2019; Wang et al., 2020) and train a logging policy on a 3% fraction of the relevance judgements. This simulates a real-world setting, where a production ranker trained on manual judgements is used to collect click logs, which can then be used for subsequent click-based optimization. Typically, in real-world ranking settings, given that the production ranker is used on live traffic, it is deemed a safe policy that can be trusted with real users.
We simulate a top-\(K\) ranking setup (Kang et al., 2017) where five documents are presented at once. Clicks are generated with our assumed click model (Eq. 3) and the following rank-based position-bias:
\[P(E=1\mid q,d,y)=\begin{cases}\left(\frac{1}{\text{rank}(d\mid y)}\right)^{2}& \text{if}\:\text{rank}(d\mid y)\leq 5,\\ 0&\text{otherwise}.\end{cases} \tag{37}\]
In real-world click data, the observed CTR is typically very low (Kang et al., 2017; Wang et al., 2020; Wang et al., 2020); hence, to simulate such sparse click settings, we apply the following transformation from relevance judgements to relevance probabilities:
\[P(R=1\mid q,d)=0.025*rel(q,d)+0.2, \tag{38}\]
where \(rel(q,d)\in\{0,1,2,3,4\}\) is the relevance judgement for the query-document pair and 0.2 is added as click noise. During training, the only available data consists of clicks generated on the training and validation sets; no baseline method has access to the underlying relevance judgements (except the skyline).
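These two simulation choices translate directly into code; the sketch below assumes ranks are integers starting at 1 and relevance judgements lie in \(\{0,\dots,4\}\).

```python
import numpy as np

def examination_probability(rank):
    """Position bias of the simulations (Eq. 37): 1/rank^2 for the top five positions, else zero."""
    return np.where(rank <= 5, (1.0 / rank) ** 2, 0.0)

def relevance_probability(rel):
    """Relevance judgements {0,...,4} mapped to click-relevance probabilities (Eq. 38)."""
    return 0.025 * rel + 0.2

print(examination_probability(np.arange(1, 8)))    # [1. 0.25 0.111 0.0625 0.04 0. 0.]
print(relevance_probability(np.array([0, 2, 4])))  # [0.2 0.25 0.3]
```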
Furthermore, we assume a setting where the exact logging policy is not available during training. As a result, the \(\hat{\rho}_{0}\) propensities have to be estimated; we use a simple frequency estimate following (Wang et al., 2018):
\[\hat{\rho}_{0}(d)=\sum_{i=1}^{N}\frac{\mathds{1}\left[q=q_{i}\right]}{\sum_{j= 1}^{N}\mathds{1}\left[q=q_{j}\right]}P(E=1\mid\text{rank}(d\mid y_{i})). \tag{39}\]
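The frequency estimate of Eq. 39 can be computed in a single pass over the click log; the data layout below is an assumption, and in practice the resulting propensities are clipped (as described later in this section) before being used as IPS weights.

```python
import numpy as np

def estimate_logging_exposure(logs, exam_prob, n_docs):
    """Frequency-based estimate of rho_0(d) per query (Eq. 39).

    logs      : list of (query_id, displayed_ranking) tuples from the click log
    exam_prob : exam_prob[k] = P(E=1 | rank k+1)
    """
    sums, counts = {}, {}
    for q, ranking in logs:
        rho = sums.setdefault(q, np.zeros(n_docs))
        rho[ranking] += exam_prob[:len(ranking)]   # exposure received in this impression
        counts[q] = counts.get(q, 0) + 1
    return {q: sums[q] / counts[q] for q in sums}  # average over the impressions of query q
```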
For the action-based baselines, the action propensities \(\hat{\pi}_{0}(y\mid q)\) are similarly estimated based on observed frequencies:
\[\hat{\pi}_{0}(y\mid q)=\prod_{k=1}^{K-1}\hat{\pi}_{0}(y_{k}\mid q),\ \hat{\pi}_{0}(y_{k}\mid q)=\frac{\sum_{i=1}^{N}\mathds{1}\left[q=q_{i}\right]\mathds{1}\left[y_{k}=y_{i,k}\right]}{\sum_{j=1}^{N}\mathds{1}\left[q=q_{j}\right]}, \tag{40}\]
where \(\hat{\pi}_{0}(y_{k}\mid q)\) is the estimated probability of document \(y_{k}\) appearing at rank \(k\) for query \(q\), and \(y_{i,k}\) denotes the document displayed at rank \(k\) in the logged ranking \(y_{i}\). As is common in CLTR (Kang et al., 2017; Wang et al., 2020; Wang et al., 2020), we clip propensities by \(10/\sqrt{N}\) in the training set, to reduce variance, but not in the validation set.
We optimize neural PL ranking models (Kang et al., 2017) with early stopping based on validation clicks to prevent overfitting. For the REINFORCE policy-gradient, we follow (Wang et al., 2019) and use the average reward per query as a control-variate for variance reduction.
As our evaluation metric, we compute NDCG@5 using the relevance judgements on the test split of each dataset (Kang et al., 2017). All reported results are averages over ten independent runs; significance testing is performed with a two-sided student-t test.
Finally, the following methods are included in our comparisons:
1. _Naive._ As the most basic baseline, we train on the generated clicks without any correction (equivalent to \(\forall d,\ \rho_{0}(d)=1\)).
2. _Skyline._ To compare with the highest possible performance, this baseline is trained on the actual relevance judgements.
3. _Action-based IPS._ Standard IPS estimation (Eq. 13) that is not designed for ranking and thus uses action-based propensities.
4. _Action-based CRM._ Standard CRM (Eq. 12) that is also not designed for ranking; for the risk function we use the action-based divergence function in Eq. 15.
5. _Exposure-based IPS._ The IPS estimator designed for CLTR with exposure-based propensities (Eq. 7). The most important baseline, as it is the prevalent approach in the field (Kang et al., 2017; Wang et al., 2019).
6. _Exposure-based CRM._ Our proposed CRM method (Eq. 31) using a risk function based on exposure-based divergence.
| | Yahoo! \(4\cdot10^{2}\) | Yahoo! \(4\cdot10^{7}\) | Yahoo! \(10^{9}\) | MSLR \(4\cdot10^{2}\) | MSLR \(4\cdot10^{7}\) | MSLR \(10^{9}\) | Istella \(4\cdot10^{2}\) | Istella \(4\cdot10^{7}\) | Istella \(10^{9}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Logging | 0.677 | 0.677 | 0.677 | 0.435 | 0.435 | 0.435 | 0.635 | 0.635 | 0.635 |
| Skyline | 0.727 | 0.727 | 0.727 | 0.479 | 0.479 | 0.479 | 0.714 | 0.714 | 0.714 |
| Naive | 0.652 (0.021)* | 0.694 (0.000)* | 0.695 (0.000)* | 0.353 (0.003)* | 0.448 (0.000)* | 0.448 (0.001)* | 0.583 (0.007)* | 0.661 (0.001)* | 0.661 (0.001)* |
| Action IPS | 0.656 (0.008)* | 0.701 (0.001)* | 0.701 (0.001)* | 0.359 (0.007)* | 0.448 (0.001)* | 0.448 (0.001)* | 0.578 (0.004)* | 0.671 (0.001)* | 0.671 (0.002)* |
| Action CRM | 0.617 (0.004)* | 0.698 (0.001)* | 0.700 (0.001)* | 0.359 (0.005)* | 0.448 (0.001)* | 0.449 (0.001)* | 0.449 (0.013)* | 0.468 (0.002)* | 0.672 (0.001)* |
| Exp. IPS | 0.659 (0.010)* | 0.723 (0.001)* | 0.730 (0.001)* | 0.389 (0.014)* | 0.474 (0.001)* | 0.481 (0.001)* | 0.576 (0.010)* | **0.696 (0.001)** | **0.706 (0.001)** |
| Exp. CRM | **0.677 (0.001)** | **0.723 (0.001)** | **0.730 (0.000)** | **0.434 (0.001)** | **0.473 (0.001)** | **0.480 (0.001)** | **0.635 (0.001)** | **0.695 (0.001)** | **0.706 (0.001)** |

Table 1. NDCG@5 performance under different settings and datasets for several values of \(N\), the number of logged interactions in the simulated training set (column headers give \(N\) per dataset). Reported numbers are averages over 10 independent runs evaluated on the held-out test sets; bold numbers indicate the highest performance. Statistical significance for differences with the exposure-based CRM is measured via a two-sided student-t test; * indicates methods with significantly lower NDCG with \(p<0.01\), and * no significant difference.
## 7. Results and Discussion
### Comparison with baseline methods
The main results of our experimental comparison are presented in Figure 3 and Table 1. Figure 3 displays the performance curves of the different methods as the number of logged interactions (\(N\)) increases. Table 1 presents performance at \(N\in\{4\cdot 10^{2},4\cdot 10^{7},10^{9}\}\) and indicates whether the observed differences with our exposure-based CRM method are statistically significant.
We start by considering the performance curves in Figure 3. We see that both the action-based and exposure-based IPS baselines have an initial period of very similar performance that is far below the logging policy. Around \(N\approx 10^{4}\) their performance is comparable to the logging policy, and finally at \(N=10^{9}\) the exposure-based IPS has reached optimal performance, while the performance of action-based IPS is still far from optimal. We can attribute this initial poor performance to the high variance problem of IPS estimation; when \(N\) is small, variance is at its highest, resulting in risky and sub-optimal optimization by the IPS estimators. However, even when \(N=10^{9}\), the variance of the action-based IPS estimator is too high to reach optimal performance, due to its extremely small propensities. This illustrates why the introduction of exposure-based propensities was so important to the CLTR field, and that even exposure-based IPS produces unsafe optimization when little data is available or variance from interactions is high.
Next, we consider whether action-based CRM is able to mitigate the high variance problem of action-based IPS. Despite being a proven generalization bound, Figure 3 clearly shows us that action-based CRM only leads to decreases in performance compared to its IPS counterpart. It appears that this happens because the logging policy is not available in our setup, and the propensities have to be estimated from logged data. Consequently, the action-based risk pushes the optimization to mimic the exact rankings that were observed during logging. Thus, due to the variance introduced from the sampling of rankings from the logging policy, it appears that action-based CRM has an even higher variance problem than action-based IPS. As expected, our results thus clearly indicate that action-based CRM is also unsuited for the CLTR setting; to our surprise, it is substantially worse than its IPS counterpart.
Finally, we examine the performance of our novel exposure-based CRM method. Similar to the other methods, there is an initial period of low performance, but in stark contrast, this period ends very quickly; on Yahoo!, logging policy performance is reached when \(N\approx 125\), on MSLR-WEB30k when \(N\approx 350\) and on Istella when \(N\approx 400\). For comparison, exposure-based IPS needs \(N\approx 1100\) on Yahoo!, \(N\approx 10^{4}\) on MSLR-WEB30k and \(N\approx 1.1\cdot 10^{4}\) on Istella to do the same, meaning that our CRM method needs roughly 89%, 97% and 97% fewer interactions, respectively. In addition, Table 1 indicates that the logging policy performance is matched on all datasets when \(N=400\) by exposure-based CRM, where it also outperforms all baseline methods. We note that there is still an initial period of low performance, because the logging policy is unavailable at training, and thus, its behavior still has to be estimated from logged interactions. It is possible that in settings where the logging policy is fully known during training, this initial period is eliminated entirely. Nevertheless, our results show that exposure-based CRM reduces the initial periods of poor performance due to variance by an enormous magnitude.
Furthermore, while the initial period is clearly improved, we should also consider whether there is a trade-off with the rate of convergence. Surprisingly, Figure 3 does not display any noticeable decrease in performance when compared with exposure-based IPS. Moreover, Table 1 shows the differences between exposure-based IPS and CRM are barely measurable and not statistically significant when \(N\in\{4\cdot 10^{7},10^{9}\}\). We know from the risk formulation in Eq. 31 that the weight of the risk term decreases as \(N\) increases at a rate of \(1/\sqrt{N}\). In other words, the more data is available, the more optimization is able to diverge from the logging policy. It appears that this balances utility maximization and risk minimization so well that we are unable to observe any downside of applying exposure-based CRM instead of IPS. Therefore, we conclude that, compared to all baseline methods and across all datasets, exposure-based CRM drastically reduces the initial period of low performance,
Figure 3. Performance in NDCG@5 of various IPS and CRM methods for CLTR. The top-row presents the results when the size of the training data is varied from extremely small (\(10^{2}\)) to extremely high (\(10^{9}\)). The bottom-row is a zoomed-in view, focusing on the low-data region from \(10^{2}\) to \(10^{5}\). Results are averages over 10 runs; shaded areas indicate 80% confidence intervals.
matches the best rate of convergence of all baselines, and has optimal performance at convergence.
### Ablation study on the confidence parameter
To gain insights into how the confidence parameter \(\delta\) affects the trade-off between safety and utility, an ablation study over various \(\delta\) values was performed for both CRM methods.
The top-row of Figure 4 shows us the performance of action-based CRM, and contrary to expectation, a decrease in \(\delta\) corresponds to considerably worse performance. To be clear, in theory \(\delta\) is inversely tied to safety: a lower \(\delta\) should result in less divergence from the safe logging policy (Srivastava et al., 2017). Conversely, we see that action-based CRM displays the opposite trend. We think this further confirms our hypothesis that a frequency estimate of action-based divergence has an even higher variance problem than action-based IPS. Consequently, a higher weight on the risk function results in worse performance. This further confirms our previous conclusion that action-based CRM is unsuited for the CLTR setting, regardless of how the \(\delta\) parameter is tuned.
In contrast, the bottom-row of Figure 4 displays the expected trend for exposure-based CRM; as \(\delta\) decreases the resulting performance gets closer to the logging policy. With \(\delta=0.1\), CRM performs extremely close to its IPS counterpart, as optimization is less constrained to mimic the logging policy here. Decreasing \(\delta\) appears to have diminishing returns, as the difference between \(\delta=10^{-4}\) and \(\delta=10^{-5}\) is marginal. Importantly, we do not observe any downsides to setting \(\delta=10^{-5}\), thus we have not reached a point in our experiments where \(\delta\) is set too conservatively. This suggests that exposure-based CRM is very robust to the setting of the \(\delta\) parameter, and that a sufficiently low \(\delta\) does not require fine-tuning. Therefore, this shows that the improvements we observed when comparing with baseline methods, did not stem from a fine-tuning of \(\delta\). Thus, we can conclude that this robustness further increases the safety that is provided by exposure-based CRM, as there is also little risk involved in the tuning of the \(\delta\) parameter.
## 8. Conclusion
In this paper, we introduced the first counterfactual risk minimization (CRM) method designed for CLTR, which relies on a novel exposure-based divergence function. In contrast with existing action-based CRM methods, exposure-based divergence avoids the problem of the enormous combinatorial action space when ranking, by measuring the dissimilarity between policies based on how they distribute exposure to documents. As a result, exposure-based CRM optimization produces policies that rank similarly to the logging policy when it is risky to follow IPS, i.e., when little data is available or variance is very high. Consequently, our experimental results show that it almost completely removes initial periods of detrimental performance; to be precise, our method needed 89% to 97% fewer interactions than state-of-the-art IPS to match production system performance. Importantly, we observed no downsides in its application, as it maintained the same rate and point of convergence as IPS, in all tested experimental settings. Therefore, we conclude that our exposure-based CRM method provides the safest CLTR method so far, as it almost completely alleviates the risk of decreasing the performance of a production system.
These improvements have large implications for practitioners who work on ranking systems in real-world settings, since the almost complete reduction of initial detrimental performance removes the main risks involved in applying CLTR. In other words, when applying our novel exposure-based CRM, practitioners can have significantly less worry that the resulting policy will perform worse than their production system and hurt user experience.
We hope future work will further research the promising potential applications of exposure-based CRM, for instance, in settings with fast turn-around times in deployment, or large numbers of tail-queries (Srivastava et al., 2017; Srivastava et al., 2017), where interaction data is limited.
## Acknowledgements
This research was supported by Huawei Finland and by the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, https://hybrid-intelligence-centre.nl.
Figure 4. Performance of CRM methods with varying confidence parameters (\(\delta\)). Top-row: action-based CRM baseline; bottom-row: our exposure-based CRM method. Results are averages of 10 runs; shaded areas indicate 80% confidence intervals.
This work used the Dutch national e-infrastructure with the support of the SURF Cooperative using grant no. EINF-4963. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.
## Reproducibility
All experimental results in this work were obtained using publicly available data. Our implementation is publicly available at [https://github.com/shashankg7/crm_ultr](https://github.com/shashankg7/crm_ultr).
|
2304.11418 | AC Power Flow Feasibility Restoration via a State Estimation-Based
Post-Processing Algorithm | This paper presents an algorithm for restoring AC power flow feasibility from
solutions to simplified optimal power flow (OPF) problems, including convex
relaxations, power flow approximations, and machine learning (ML) models. The
proposed algorithm employs a state estimation-based post-processing technique
in which voltage phasors, power injections, and line flows from solutions to
relaxed, approximated, or ML-based OPF problems are treated similarly to noisy
measurements in a state estimation algorithm. The algorithm leverages
information from various quantities to obtain feasible voltage phasors and
power injections that satisfy the AC power flow equations. Weight and bias
parameters are computed offline using an adaptive stochastic gradient descent
method. By automatically learning the trustworthiness of various outputs from
simplified OPF problems, these parameters inform the online computations of the
state estimation-based algorithm to both recover feasible solutions and
characterize the performance of power flow approximations, relaxations, and ML
models. Furthermore, the proposed algorithm can simultaneously utilize combined
solutions from different relaxations, approximations, and ML models to enhance
performance. Case studies demonstrate the effectiveness and scalability of the
proposed algorithm, with solutions that are both AC power flow feasible and
much closer to the true AC OPF solutions than alternative methods, often by
several orders of magnitude in the squared two-norm loss function. | Babak Taheri, Daniel K. Molzahn | 2023-04-22T14:26:05Z | http://arxiv.org/abs/2304.11418v2 | # AC Power Flow Feasibility Restoration via a State Estimation-Based Post-Processing Algorithm
###### Abstract
This paper presents an algorithm for restoring AC power flow feasibility from solutions to simplified optimal power flow (OPF) problems, including convex relaxations, power flow approximations, and machine learning (ML) models. The proposed algorithm employs a state estimation-based post-processing technique in which voltage phasors, power injections, and line flows from solutions to relaxed, approximated, or ML-based OPF problems are treated similarly to noisy measurements in a state estimation algorithm. The algorithm leverages information from various quantities to obtain feasible voltage phasors and power injections that satisfy the AC power flow equations. Weight and bias parameters are computed offline using an adaptive stochastic gradient descent method. These parameters inform the online computations of the state estimation-based algorithm to recover feasible solutions. Furthermore, the proposed algorithm can simultaneously utilize combined solutions from different relaxations, approximations, and ML models to enhance performance. Case studies demonstrate the effectiveness and scalability of the proposed algorithm, with numerical results showing several orders of magnitude improvement over traditional methods for restoring AC power flow feasible solutions.
Optimal power flow (OPF), state estimation (SE), operating point restoration, machine learning (ML), and adaptive stochastic gradient descent (ASGD).
## I Introduction
Optimal power flow (OPF) problems seek steady-state operating points for a power system that minimize a specified objective function, such as generation costs, losses, voltage deviations, etc., while satisfying equalities that model power flows and inequalities that impose engineering limits. The OPF problem forms the basis for many power system applications. Algorithmic improvements have the potential to save billions of dollars annually in the U.S. alone [1].
The AC power flow equations accurately model how a power system behaves during steady-state conditions by relating the complex power injections and line flows to the voltage phasors. Optimal power flow problems that utilize an AC power flow model are not easily solvable and are considered computationally challenging (NP-hard) [2, 3, 4].
Since first being formulated by Carpentier in 1962 [5], OPF solution methods have been extensively researched [6, 7]. With formulations based on the Karush-Kuhn-Tucker (KKT) conditions, many solution methods only seek a local optimum due to the nonconvex nature of the OPF problem. This nonconvexity is due to the nonlinearity of the relationships between active and reactive power injections and voltage magnitudes and angles [8]. Challenges from power flow nonconvexities are further compounded when solving OPF problems that consider, e.g., discreteness and uncertainty [9, 10].
To overcome these challenges, it is common to simplify OPF problems to convex formulations via relaxation and approximation techniques, often resulting in semidefinite programs (SDP) [11], second-order cone programs (SOCP) [12, 13], and linear programs [14]; see [15] for a survey. A wide range of emerging machine learning (ML) models are also under intense study [16, 17, 18, 19, 20]; see [21] for a survey. In this paper, we collectively refer to relaxed, approximated, or ML-based OPF formulations as _simplified_ OPF problems.
Relaxed OPF problems bound the optimal objective value, can certify infeasibility, and, when the relaxation is tight, provide globally optimal decision variables [15]. With potential advantages in computational tractability and accuracy when deployed appropriately, power flow approximations are also used in a wide range of operational and design tasks [15]. Whether developed via relaxations or approximations, the convexity of these formulations is valuable in many applications, enabling, for instance, convergence guarantees for distributed optimization algorithms [22] and tractability for robust and chance-constrained formulations that consider uncertainties [23, 24]. Similarly, machine learning models hold substantial promise in certain applications, such as quickly characterizing many power injection scenarios with fluctuating load demands and renewable generator outputs [21].
The computational advantages of simplifying OPF problems via relaxation, approximation, and machine learning techniques come from replacing the nonconvex AC power flow equations with some other model. As a result, all of these simplified OPF problems may suffer from a key deficiency in _accuracy_ that is the motivation for this paper. Specifically, the outputs of a simplified OPF problem may not satisfy the AC power flow equations, meaning that the power injections and line flows may be inconsistent with the voltage phasors [10, 15, 25, 26]. This is problematic since many practical applications for OPF problems require solutions that satisfy the power flow equations to high accuracy.1
Footnote 1: Prior research has identified sufficient conditions which ensure the tightness of certain relaxed OPF problems, but the assumptions underlying these conditions make them inapplicable for many practical situations [15, 27].
Due to these inaccuracies, there is a need to develop restoration methods that obtain voltage phasors and power injections conforming to the AC power flow equations from the outputs of relaxed, approximated, and ML-based OPF models. There are three main types of methods in the literature that address this. The first type adds penalty terms to the objective function of a relaxed OPF problem. Appropriate
choices for these penalty terms can result in the relaxation being tight for the penalized problem, yielding feasible and near-optimal solutions for some OPF problems; see, e.g., [28]. However, determining the appropriate penalty parameters can be challenging and is often done in a trial-and-error fashion, which can be time-consuming [29]. The second type updates the power flow relaxations and approximations within an algorithm that tries to find a local optimum; see, e.g., [30], in which the OPF problem is formulated as a difference-of-convex programming problem and iteratively solved by a penalized convex-concave procedure. As another example, the algorithm in [31] utilizes a power flow approximation to generate an initial operating point and then seeks small adjustments to the outputs of a select number of generators to obtain an operating point that satisfies both the equality and inequality constraints of an OPF problem. These methods can find local optima, but they require good starting points and multiple evaluations of the relaxed or approximate problem, which can be computationally expensive. See [15, Ch. 6] for a survey of these first two types of methods.
This paper is most closely related to a third type of method that is faster and more direct. This third type of method fixes selected values from the solution of the simplified OPF problem to formulate a power flow problem that has the same number of variables and equations. Solving this power flow problem then yields values for the remaining quantities that satisfy the AC power flow equations. There are various ways to formulate this power flow problem. For instance, one method fixes active power injections and voltage magnitudes at non-slack generator buses to create a power flow problem that can be solved using traditional Newton-based methods [29]. One could instead fix the active and reactive power injections at non-slack generator buses and then solve a power flow problem. As another alternative, one could directly substitute the voltage magnitudes and angles from the simplified OPF solution into the power flow equations to obtain consistent values for active and reactive power generation. This third type of method for restoring AC power flow feasibility is commonly used in prior literature, usually as part of a larger algorithm [17, 19, 32, 33, 34]. Note that this third type of method may result in variable values that do not align with the solution of the simplified problem and may also violate the OPF problem's inequality constraints. Nevertheless, quickly obtaining an AC power flow feasible point that is close to the true OPF solution is often of paramount importance.
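As a minimal illustration of the last variant (substituting voltage phasors from a simplified OPF solution directly into the power flow equations), the NumPy sketch below recovers power injections that are consistent with a given voltage profile; the two-bus admittance matrix and voltage values are placeholders, not data from the paper.

```python
import numpy as np

def injections_from_voltages(Y_bus, V):
    """Complex power injections implied by the voltage phasors: S = V * conj(Y_bus @ V)."""
    return V * np.conj(Y_bus @ V)

# Placeholder two-bus example (admittances and voltages are illustrative only).
Y_bus = np.array([[10 - 30j, -10 + 30j],
                  [-10 + 30j, 10 - 30j]])
V = np.array([1.00 * np.exp(1j * 0.00),     # slack bus voltage phasor
              0.98 * np.exp(1j * -0.02)])   # voltage phasor taken from a simplified OPF solution
S = injections_from_voltages(Y_bus, V)
print(S.real, S.imag)  # active and reactive injections that satisfy the AC power flow equations
```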
Our proposed algorithm for restoring power flow solutions is inspired by ideas from state estimation [35] and offers significantly improved accuracy over alternatives from this third type of method. Instead of fixing some subset of values from a simplified OPF solution, we instead formulate an optimization problem that incorporates additional information from the voltage phasors, power injections, and line flows from relaxed, approximated, and ML-based solutions. Our algorithm addresses inaccuracies in power flow models similarly to how measurement errors are handled in state estimation. The use of additional information from simplified solutions allows for a more accurate restoration of the true OPF solution while leveraging mathematical machinery from state estimation. Additionally, our algorithm can simultaneously consider outputs from multiple simplified OPF problems to both improve the quality of the restored solution and automatically characterize the accuracy of different power flow simplifications.
Our proposed algorithm also allows for flexibility in selecting weights, which are comparable to the variances of sensor noise levels in state estimation algorithms, and biases. These weights and biases are parameters that are chosen based on inconsistencies in the solutions to the simplified OPF problems. Determining the best values for these weights and biases is challenging as the inconsistencies in the solutions are not known beforehand. Inspired by ML approaches, we solve this issue via offline training of the weights and biases using the solutions to many actual OPF problems and the corresponding outputs of the simplified OPF problems. To do so, we employ an adaptive stochastic gradient descent (ASGD) algorithm to iteratively update the weights and biases based on information from our proposed restoration algorithm. This minimizes a loss function that compares the actual OPF solutions and the restored points. The trained weights and biases are then used during online calculations to recover the solutions. We demonstrate the proposed restoration algorithm using various convex relaxations, an approximation, and an ML-based model. The results show an improvement of several orders of magnitude in accuracy for some instances.
In summary, this paper presents an algorithm that addresses AC power flow infeasibility in simplified OPF solutions by:
* Proposing an AC feasibility restoration algorithm relevant to multiple types of simplified OPF problems (relaxations, approximations, and ML models).
* Jointly considering outputs from multiple simplified OPF problems to exploit more information from each.
* Developing an adaptive stochastic gradient descent method to determine optimal weight and bias parameters.
* Illustrating the effectiveness and scalability of the proposed algorithm through numerical experiments using various power flow relaxations, an approximation, and an ML model on multiple test cases.
The paper is structured as follows. Section II reviews the OPF problem and various simplifications. Section III presents the proposed restoration algorithm. Section IV provides numerical results to demonstrate the algorithm's performance. Section V concludes the paper and discusses potential avenues for future research. Note that a preliminary version of the proposed algorithm was published in [36].
## II OPF Formulation and Simplifications
This section overviews the OPF problem and discusses several common relaxations and approximations as well as emerging machine learning models that simplify the OPF problem to improve tractability at the cost of accuracy. For more detail, see [15] for a survey on power flow relaxations and approximations and [21] for a survey on machine learning.
We first establish notation. The sets of buses and lines are represented by \(\mathcal{N}\) and \(\mathcal{E}\), respectively. Each bus \(i\in\mathcal{N}\) has a voltage phasor \(V_{i}\) with phase angle \(\theta_{i}\), a complex power demand \(S_{i}^{d}\), a shunt admittance \(Y_{i}^{S}\), and a complex
power generation \(S_{i}^{g}\). Buses without generators are modeled as having zero generation limits. Complex power flows into each terminal of each line \((j,k)\in\mathcal{E}\) are denoted as \(S_{jk}\) and \(S_{kj}\). Each line \((j,k)\in\mathcal{E}\) has admittance parameters \(Y_{jk}\) and \(Y_{kj}\). The real and imaginary parts of a complex number are denoted as \(\Re(\,\cdot\,)\) and \(\Im(\,\cdot\,)\), respectively. The complex conjugate of a number is represented by \((\,\cdot\,)^{*}\) and the transpose of a matrix is represented by \((\,\cdot\,)^{T}\). Upper and lower bounds are denoted as \((\,\overline{\cdot}\,)\) and \((\,\underline{\cdot}\,)\), interpreted as separate bounds on real and imaginary parts for complex variables. The OPF problem is:
\[\min\sum_{i\in N}c_{2i}\left(\Re\left(S_{i}^{g}\right)\right)^{2}+ c_{1i}\Re\left(S_{i}^{g}\right)+c_{0i}\] (1a) s.t. \[\quad(\forall i\in\mathcal{N},\ \forall(j,k)\in\mathcal{E})\] \[\mathbf{W}_{jk}=V_{j}V_{k}^{*},\ \mathbf{W}_{kj}=V_{k}V_{j}^{*},\ \mathbf{W}_{ii}=V_{i}V_{i}^{*} \tag{1b}\] \[\quad(\underline{V}_{i})^{2}\leq\mathbf{W}_{ii}\leq\left( \overline{V}_{i}\right)^{2}\] (1c) \[\quad\underline{S}_{i}^{g}\leq S_{i}^{g}\leq\overline{S}_{i}^{g}\] (1d) \[\quad|S_{jk}|\leq\overline{S}_{jk},\ |S_{kj}|\leq\overline{S}_{jk}\] (1e) \[\quad S_{i}^{g}-S_{i}^{d}-\left(Y_{i}^{S}\right)^{*}\mathbf{W}_{ ii}=\sum_{(i,j)\in\mathcal{E}}S_{ij}+\sum_{(k,i)\in\mathcal{E}}S_{ki}\] (1f) \[\quad S_{jk}=Y_{jk}^{*}\mathbf{W}_{jj}-Y_{jk}^{*}\mathbf{W}_{jk}\] (1g) \[\quad S_{kj}=Y_{kj}^{*}\mathbf{W}_{kk}-Y_{kj}^{*}\mathbf{W}_{kj}^ {*}\] (1h) \[\quad\tan\left(-\overline{\theta}_{jk}\right)\Re\left(\mathbf{W} _{jk}\right)\leq\Im\left(\mathbf{W}_{jk}\right)\leq\tan\left(\overline{\theta} _{jk}\right)\Re\left(\mathbf{W}_{jk}\right). \tag{1i}\]
The OPF problem (1) minimizes an objective, in this case, the generation cost as shown by (1a). The objective has quadratic coefficients \(c_{2i}\), \(c_{1i}\), and \(c_{0i}\). The voltage phasor products are collected in a Hermitian matrix \(\mathbf{W}\), as described in (1b). The OPF problem also imposes voltage magnitude limits (1c), generator output limits (1d), apparent power flow limits (1e), and complex power balance at each bus (1f). Power flows for each line are defined in (1g) and (1h) and limits on phase angle differences across lines are imposed in (1i).
All the nonconvexities in the OPF problem are associated with the products in (1b), which, in combination with (1f)-(1h), form the power flow equations. Power flow relaxations convexify the power flow equations by replacing (1b) with less stringent conditions. The SDP relaxation requires that \(\mathbf{W}\) is positive semidefinite [11], which is implied by (1b). The SOCP relaxation requires that \(\mathbf{W}\) has non-negative principal minors [12], which is implied by positive semidefiniteness of \(\mathbf{W}\). The QC relaxation strengthens the SOCP relaxation with additional variables and constraints corresponding to phase angle differences that are restricted to convex envelopes [13].
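As a small illustration (not part of the paper's formulation), the 2x2 principal-minor inequality underlying the SOCP relaxation of a Hermitian \(\mathbf{W}\) could be checked with a sketch like the following; the function name and tolerance are placeholders.

```python
def socp_minor_holds(W_jj, W_kk, W_jk, tol=1e-9):
    """Check the 2x2 principal-minor condition used by the SOCP relaxation:
    W_jj, W_kk >= 0 and W_jj * W_kk - |W_jk|^2 >= 0 for a Hermitian W."""
    return (W_jj >= -tol and W_kk >= -tol
            and W_jj * W_kk - abs(W_jk) ** 2 >= -tol)
```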
Power flow approximations replace (1b), (1f)-(1h) with alternative (usually linear) expressions relating the line flows and voltage phasors. For instance, the LPAC approximation linearizes a polar representation of the power flow equations [37]. This is accomplished by linearizing sine functions using a first-order Taylor approximation around zero phase angle differences and replacing cosine functions with lifted variables restricted to a convex polytope.
Some emerging ML approaches for OPF problems replace the power flow equations (1b), (1f)-(1h) with a surrogate model. For instance, the approach in [20] replaces the power flow equations with the piecewise linear function corresponding to a trained neural network with rectified linear unit (ReLU) activation functions to obtain a mixed-integer linear programming formulation. Other ML approaches such as [16, 17, 18, 19] directly estimate OPF solutions by training neural networks or other ML models to predict values for the optimal voltage phasors, power injections, and/or line flows.
Since these simplified OPF formulations do not enforce the true AC power flow equations (1b), (1f)-(1h), their outputs may have inconsistencies between the power injections, power flows, and voltage phasors. Thus, all would potentially benefit from our proposed restoration algorithm. We next present our proposed restoration algorithm considering a generic simplified OPF problem. In Section IV, we illustrate the method's performance using the SDP, SOCP, and QC relaxations, the LPAC approximation, and the ML model from [16].
## III Restoring AC Power Flow Feasibility
Solutions to any of the simplified OPF problems discussed in Section II may suffer from voltage phasors, power injections, and line flows that do not satisfy the AC power flow equations. To restore operating points that are AC power flow feasible, this section introduces a restoration algorithm inspired by state estimation techniques where the voltage phasors, power injections, and line flows from the simplified solution are analogous to noisy measurements. This algorithm improves on previous methods as it does not fix any variables to specific values, thus enabling the restoration of higher-quality solutions. Note that this algorithm _does not rely on actual measurements_ from the physical system. Instead, this algorithm finds the AC power flow feasible voltage phasors that most closely match the voltage phasors, power injections, and line flows resulting from the solution to a simplified OPF problem in a similar manner by which state estimation algorithms resolve inconsistencies among noisy measurements.
Analogous to how state estimation algorithms use variations in the amount of sensor noise to weight measured quantities, the proposed algorithm includes weight and bias parameters associated with the outputs of each quantity from the simplified OPF solution. However, unlike state estimation algorithms, these weight parameters are not determined by the physical characteristics of a sensor, but rather by the inconsistencies (with regard to the AC power flow equations) among various quantities in the solution to the simplified OPF problem. To determine the optimal values for these weight and bias parameters, we propose an ASGD-based method that is executed offline, with the results used online for restoring AC power flow feasibility. Fig. 1 shows both the algorithm for determining the weights and biases and the solution restoration algorithm. Furthermore, Table I summarizes the analogy between the proposed algorithm and state estimation.
### _AC Feasibility Restoration Algorithm_
In this section, we introduce our proposed algorithm for restoring AC power flow feasible points from the solutions of simplified OPF problems (convex relaxations, approximations, and ML-based models). This method aims to find the voltage
phasors that are close to the true OPF solution's voltage phasors based on the voltage phasors, power injections, and line flows from a simplified OPF solution.
This section introduces notation based on the typical representation of state estimation algorithms [35] to show how this mathematical machinery is leveraged in our method. We emphasize that we do not use any actual measurements from physical sensors, but rather use the information from the simplified OPF solution. The goal is to find the voltage magnitudes and angles (denoted as \(\mathbf{x}\)) that are most consistent with the voltage magnitudes, phase angles, power flows, and power injections from the simplified OPF solution, which we gather into a vector \(\mathbf{z}\). We denote the number of these quantities, i.e., the length of \(\mathbf{z}\), by \(m\) and let \(n\) be the number of voltage magnitudes plus the number of (non-slack) voltage angles, i.e., the length of \(\mathbf{x}\).
We also define a length-\(m\) vector of bias parameters \(\mathbf{b}\) for the simplified solution.2 While measurement errors in state estimation are typically only characterized by their variations, the errors in simplified OPF solutions may be _biased_, i.e., consistently overestimate or underestimate the true values of some quantities. We account for this using a bias term \(\mathbf{b}\) that represents the systematic errors in the simplified OPF solutions. We also have an error term \(\mathbf{e}\) that captures variations that are modeled as being random. By considering both bias and error terms, our model is better equipped to handle both systematic and random deviations from the true values. We use an AC power flow model denoted as \(\mathbf{h}(\mathbf{x})\) to relate \(\mathbf{x}\) and \(\mathbf{z}\), parameterized by the bias \(\mathbf{b}\):
Footnote 2: As we will discuss in Section IV-D, we can generalize our algorithm to jointly consider solutions from multiple simplified OPF problems. In this case, the elements of \(\mathbf{z}\) and \(\mathbf{b}\) correspond to quantities from multiple simplified OPF problems stacked together, but the size of \(\mathbf{x}\) remains the same.
\[z_{i}+b_{i}=h_{i}(\mathbf{x})+e_{i},\quad i=1,\ldots,m, \tag{2}\]
where \(\mathbf{h}(\mathbf{x})=[\mathbf{V}(\mathbf{x})^{T}\;\mathbf{P}(\mathbf{x})^{ T}\;\mathbf{Q}(\mathbf{x})^{T}\;\mathbf{P}^{f}(\mathbf{x})^{T}\;\mathbf{Q} ^{f}(\mathbf{x})^{T}\;\boldsymbol{\theta}(\mathbf{x})^{T}]^{T}\) denotes the AC power flow equations relating the vectors of voltage magnitudes \(\mathbf{V}\), active and reactive power injections \(\mathbf{P}\) and \(\mathbf{Q}\), active and reactive line flows \(\mathbf{P}^{f}\) and \(\mathbf{Q}^{f}\), and voltage angles \(\boldsymbol{\theta}\) to the vector \(\mathbf{x}=[\boldsymbol{\theta}^{T}\;\mathbf{V}^{T}]^{T}\). The first and last entries of \(\mathbf{h}(\mathbf{x})\), namely, \(\mathbf{V}(\mathbf{x})\) and \(\boldsymbol{\theta}(\mathbf{x})\), are obtained using the identity function. The remaining entries of the \(\mathbf{h}(\mathbf{x})\) are:
\[P_{i}= \sum_{(i,j)\in\mathcal{E}}P_{ij}^{f},\qquad Q_{i}=\sum_{(i,j)\in \mathcal{E}}Q_{ij}^{f}, \tag{3a}\] \[P_{ij}^{f}= V_{i}^{2}\left(\Re(Y_{ij})+\Re(Y_{ij}^{sh})\right)-V_{i}V_{j} \Re(Y_{ij})\cos(\theta_{i}-\theta_{j})\] \[-V_{i}V_{j}\Im(Y_{ij})\sin(\theta_{i}-\theta_{j}),\] (3b) \[Q_{ij}^{f}= -V_{i}^{2}\left(\Im(Y_{ij})+\Im(Y_{ij}^{sh})\right)-V_{i}V_{j} \Re(Y_{ij})\sin(\theta_{i}-\theta_{j})\] \[+V_{i}V_{j}\Im(Y_{ij})\cos(\theta_{i}-\theta_{j}). \tag{3c}\]
Hence, the error \(\mathbf{e}\) is the difference between the simplified solution (offset by the bias parameter) and the value corresponding to the restored point \(\mathbf{x}\).
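For concreteness, a minimal NumPy sketch of the branch flow expressions (3b)-(3c) could look as follows; the function and argument names are illustrative placeholders rather than part of any particular implementation. The bus injections in (3a) are then simply the sums of these flows over the lines incident to each bus.

```python
import numpy as np

def branch_flows(V_i, V_j, theta_i, theta_j, Y_ij, Ysh_ij):
    """Active/reactive power flowing into terminal i of line (i, j), cf. (3b)-(3c).

    V_i, V_j         : voltage magnitudes at the two terminals
    theta_i, theta_j : voltage angles in radians
    Y_ij, Ysh_ij     : complex series and shunt admittances of the line
    """
    g, b = Y_ij.real, Y_ij.imag
    gsh, bsh = Ysh_ij.real, Ysh_ij.imag
    dth = theta_i - theta_j
    P_f = V_i**2 * (g + gsh) - V_i * V_j * (g * np.cos(dth) + b * np.sin(dth))
    Q_f = -V_i**2 * (b + bsh) - V_i * V_j * (g * np.sin(dth) - b * np.cos(dth))
    return P_f, Q_f
```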
As we will discuss below in Section III-B, the bias \(\mathbf{b}\) is computed offline based on the characteristics of many simplified OPF solutions to reflect the systematic offsets of the simplified solutions from the true values. Once the bias \(\mathbf{b}\) is determined, the error \(\mathbf{e}\) represents the remaining inconsistencies that are not accounted for by the systematic bias.
To address these remaining inconsistencies, our proposed restoration algorithm uses a weighted least squares formulation similar to typical state estimation algorithms. The goal is to choose the voltage magnitudes and angles in \(\mathbf{x}\) that minimize a cost function, denoted as \(J(\mathbf{x})\), that is the sum of the squared inconsistencies between the simplified OPF solution (offset by \(\mathbf{b}\)) and the true OPF solution, i.e., the difference between \(\mathbf{h}(\mathbf{x})\) and \(\mathbf{z}+\mathbf{b}\). These inconsistencies are represented by the vector \(\mathbf{e}\) and are weighted by a specified diagonal matrix \(\boldsymbol{\Sigma}\) with weight parameters associated with the measurements from the simplified OPF solution:
\[\min J(\mathbf{x})=\mathbf{e}^{T}\boldsymbol{\Sigma}\mathbf{e}. \tag{4}\]
Figure 1: Flowchart of the proposed algorithm.
In a state estimation application, \(\mathbf{\Sigma}\) would be the covariance matrix for the sensor noise. Conversely, we permit \(\mathbf{\Sigma}\) to be any diagonal matrix with values computed offline using the algorithm described in Section III-B.
We solve (4) by considering the optimality conditions:
\[\mathbf{g}(\mathbf{x})=\frac{\partial J(\mathbf{x})}{\partial\mathbf{x}}=- \mathbf{H}(\mathbf{x})^{T}\mathbf{\Sigma}(\mathbf{z}+\mathbf{b}-\mathbf{h}(\mathbf{ x}))=\mathbf{0}, \tag{5}\]
where the Jacobian matrix \(\mathbf{H}(\mathbf{x})\) of AC power flow equations \(\mathbf{h}(\mathbf{x})\) is \(\mathbf{H}(\mathbf{x})=\frac{\partial\mathbf{h}(\mathbf{x})}{\partial\mathbf{ x}}\):
\[\mathbf{H}(\mathbf{x})=\begin{bmatrix}\mathbf{0}&\frac{\partial\mathbf{P}}{\partial\boldsymbol{\theta}}&\frac{\partial\mathbf{Q}}{\partial\boldsymbol{\theta}}&\frac{\partial\mathbf{P}^{f}}{\partial\boldsymbol{\theta}}&\frac{\partial\mathbf{Q}^{f}}{\partial\boldsymbol{\theta}}&\mathbf{1}\\ \mathbf{1}&\frac{\partial\mathbf{P}}{\partial\mathbf{V}}&\frac{\partial\mathbf{Q}}{\partial\mathbf{V}}&\frac{\partial\mathbf{P}^{f}}{\partial\mathbf{V}}&\frac{\partial\mathbf{Q}^{f}}{\partial\mathbf{V}}&\mathbf{0}\end{bmatrix}^{T}. \tag{6}\]
To compute the solution to (5), we apply the Newton-Raphson method described in Algorithm 1 that solves, at the \(k\)-th iteration, the following linear system:
\[\mathbf{x}^{(k+1)}=\mathbf{x}^{(k)}-(\mathbf{G}(\mathbf{x}))^{-1}\mathbf{g}( \mathbf{x}), \tag{7}\]
where
\[\mathbf{G}(\mathbf{x})=\frac{\partial\mathbf{g}(\mathbf{x})}{\partial\mathbf{ x}}=\mathbf{H}(\mathbf{x})^{T}\mathbf{\Sigma}\mathbf{H}(\mathbf{x}) \tag{8}\]
Algorithm 1 uses a convergence tolerance of \(\epsilon\) and takes a user-specified initialization of \(\mathbf{x}^{(0)}\). When available, the voltage magnitudes and angles from the simplified OPF solution often provide reasonable initializations \(\mathbf{x}^{(0)}\). Otherwise, a flat start provides an alternative initialization. The output of this algorithm is the restored solution, denoted as \(\mathbf{x}_{R}\).
```
Input : Simplified OPF solution \(\mathbf{z}\), Initialization \(\mathbf{x}^{(0)}\), Parameters \(\mathbf{\Sigma}\), \(\mathbf{b}\), \(\epsilon\) Output : Restored AC power flow feasible solution \(\mathbf{x}_{R}\)
1 Initialize\(k\gets 0\)
2while\(||\mathbf{\Delta x}^{(k)}||>\epsilon\)do
3\(k\gets k+1\);
4 Calculate\(\mathbf{h}(\mathbf{x}^{(k)})\), \(\mathbf{H}(\mathbf{x}^{(k)})\), \(\mathbf{G}(\mathbf{x}^{(k)})\), and \(\mathbf{g}(\mathbf{x}^{(k)})\) using \(\mathbf{\Sigma}\), \(\mathbf{z}\), \(\mathbf{b}\), and \(\mathbf{x}^{(k)}\);
5\(\mathbf{\Delta x}^{(k)}=\mathbf{G}(\mathbf{x}^{(k)})^{-1}\mathbf{H}(\mathbf{x}^{(k)})^{T}\mathbf{\Sigma}\left(\mathbf{z}+\mathbf{b}-\mathbf{h}(\mathbf{x}^{(k)})\right);\)
6\(\mathbf{x}^{(k+1)}\leftarrow\mathbf{x}^{(k)}+\mathbf{\Delta x}^{(k)}\);
7\(\mathbf{x}_{R}=\mathbf{x}^{(k+1)}\)
```
**Algorithm 1** Newton-Raphson Algorithm for AC Power Flow Feasibility Restoration
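A compact NumPy version of this iteration might read as follows; `h_and_jacobian` is a placeholder for a user-supplied routine that evaluates \(\mathbf{h}(\mathbf{x})\) and its Jacobian \(\mathbf{H}(\mathbf{x})\) for the network at hand, and the sketch is not taken from the paper's implementation.

```python
import numpy as np

def restore_ac_feasibility(z, b, sigma, x0, h_and_jacobian, tol=1e-8, max_iter=50):
    """Algorithm 1: Newton-Raphson restoration of an AC power flow feasible point.

    z     : simplified OPF outputs (length m)
    b     : bias parameters (length m)
    sigma : diagonal of the weight matrix Sigma (length m)
    x0    : initial voltage magnitudes/angles (length n)
    """
    x = x0.copy()
    for _ in range(max_iter):
        h, H = h_and_jacobian(x)                    # h(x) and H(x) = dh/dx (m x n)
        r = z + b - h                               # residual (z + b - h(x))
        G = H.T @ (sigma[:, None] * H)              # gain matrix H^T Sigma H, cf. (8)
        dx = np.linalg.solve(G, H.T @ (sigma * r))  # Newton step, cf. (7)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```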
### _Determining the Weight Parameters_
The weight parameters \(\mathbf{\Sigma}\) play a crucial role in determining the accuracy of the solution obtained from Algorithm 1. Ideally, larger values of \(\mathbf{\Sigma}_{ii}\) should be chosen for the quantities \(\mathbf{z}_{i}\) from the simplified OPF solution that more closely represent the solution to the true OPF problem. However, it is not straightforward to predict or estimate the accuracy of a particular \(\mathbf{z}_{i}\) in relation to the true OPF solution. Choosing values for the bias parameters \(\mathbf{b}\) poses similar challenges. As shown in Fig. 1, we therefore develop an approach inspired by the training of machine learning models to determine these parameters. This approach involves solving a set of randomly generated OPF problems along with the corresponding simplified OPF problems to create a training dataset. As presented in Algorithm 2, we then employ an ASGD method that iteratively solves the proposed restoration algorithm and updates the weight parameters in a way that minimizes the difference between the restored solution and the true OPF solution across the training dataset. In this offline training, solutions to each of the true and simplified OPF problems are computed in parallel.
The ASGD method relies on the sensitivities of the restored point \(\mathbf{x}_{R}\) with respect to the parameters \(\mathbf{\Sigma}\), i.e., \(\frac{\partial\mathbf{x}_{R}}{\partial\mathbf{\Sigma}}\):
\[\frac{\text{vec}(\partial\mathbf{x}_{R})}{\text{vec}(\partial\mathbf{ \Sigma})}=\left((\mathbf{z}+\mathbf{b}-\mathbf{h}(\mathbf{x}))-\left( \mathbf{H}(\mathbf{x})(\mathbf{H}(\mathbf{x})^{T}\mathbf{\Sigma}\mathbf{H}( \mathbf{x}))^{-1}\right.\right.\] \[\left.\left.\times\mathbf{H}(\mathbf{x})^{T}\mathbf{\Sigma}(\mathbf{z} +\mathbf{b}-\mathbf{h}(\mathbf{x}))\right)\right)\] \[\otimes\left((\mathbf{H}(\mathbf{x})^{T}\mathbf{\Sigma}\mathbf{H}( \mathbf{x}))^{-1}\mathbf{H}(\mathbf{x})^{T}\right)^{T}. \tag{9}\]
The expression (9) gives the sensitivities of the restored point \(\mathbf{x}_{R}\) with respect to the weight parameters \(\mathbf{\Sigma}\), where \(\otimes\) denotes the Kronecker product and \(\text{vec}\left(\,\cdot\,\right)\) denotes the vectorization of a matrix. With length-\(m\) vectors \(\mathbf{z}\) and \(\mathbf{h}(\mathbf{x})\) and an \(m\times n\) matrix \(\mathbf{H}\), the sensitivities \(\frac{\partial\mathbf{x}_{R}}{\partial\mathbf{\Sigma}}\) are represented by an \(n\times m^{2}\) matrix. Appendix A provides the derivation of (9).
Note that directly applying (9) can be computationally expensive for large-scale systems since this expression computes sensitivities with respect to all entries of \(\mathbf{\Sigma}\). While possibly relevant for variants of the proposed formulation, sensitivities for the off-diagonal terms in \(\mathbf{\Sigma}\) are irrelevant in our formulation since these terms are fixed to zero due to the diagonal structure of \(\mathbf{\Sigma}\). Exploiting this structure can enable faster computations; see Appendix B for further details.
### _Determining the Bias Parameters_
Like the weight parameters \(\mathbf{\Sigma}\), the bias parameters \(\mathbf{b}\) significantly impact the accuracy of Algorithm 1's outputs. Similar to the weight parameters \(\mathbf{\Sigma}\), Algorithm 2 also computes the bias parameters \(\mathbf{b}\) via an ASGD method. The sensitivities of the restored point \(\mathbf{x}_{R}\) with respect to the bias parameters \(\mathbf{b}\) are:
\[\frac{\partial\mathbf{x}_{R}}{\partial\mathbf{b}}=\left(\mathbf{H}(\mathbf{x})^ {T}\mathbf{\Sigma}\mathbf{H}(\mathbf{x})\right)^{-1}\mathbf{H}(\mathbf{x})^{T}\mathbf{ \Sigma}. \tag{10}\]
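In NumPy terms, and exploiting the diagonal structure of \(\mathbf{\Sigma}\), this \(n\times m\) sensitivity matrix could be evaluated roughly as below; this is only a sketch, with `H` the Jacobian at the restored point and `sigma` the vector of diagonal weights.

```python
import numpy as np

def bias_sensitivities(H, sigma):
    """Sensitivities of the restored point with respect to the biases, cf. (10):
    dx_R/db = (H^T Sigma H)^{-1} H^T Sigma  (an n x m matrix)."""
    G = H.T @ (sigma[:, None] * H)                 # H^T Sigma H
    return np.linalg.solve(G, H.T * sigma[None, :])
```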
### _Loss Function_
To evaluate the accuracy of the restored solution obtained from our proposed algorithm, we need to define a quantitative measure, i.e., a _loss function_, that compares the restored solution to the true solution of the OPF problem. There are several possible ways to define a loss function in this context, such as comparing the voltage magnitudes, phase angles, power injections, line flows, etc. from the restored solution to those from the true OPF solution.
Following typical approaches for training ML models, we formulate a loss function as the squared difference between the voltage magnitudes and angles from the restored solution \(\mathbf{x}_{R}\) and the true solution \(\mathbf{x}_{AC}\). To achieve this, we introduce new vectors \(\mathbf{X}_{R}=[\mathbf{x}_{R}^{(1)T},\mathbf{x}_{R}^{(2)T},\dots,\mathbf{x}_{R}^{(S)T}]^{T}\) and \(\mathbf{X}_{AC}=[\mathbf{x}_{AC}^{(1)T},\mathbf{x}_{AC}^{(2)T},\dots,\mathbf{x}_{AC}^{(S)T}]^{T}\), where \(\mathbf{x}_{R}^{(i)}\) and \(\mathbf{x}_{AC}^{(i)}\) denote the
vectors of voltage magnitudes and angles at each bus for the restored and actual OPF solutions of the \(i\)-th sampled load scenario and \(S\) represents the number of scenarios. Consequently, we define the loss function across samples as:
\[F(\mathbf{\Sigma},\mathbf{b})=\frac{1}{n}\|\mathbf{X}_{R}(\mathbf{\Sigma},\mathbf{b})- \mathbf{X}_{AC}\|_{2}^{2}, \tag{11}\]
where the constant \(\frac{1}{n}\) normalizes this function based on the system size.
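As a point of reference, this loss could be computed with a one-liner of the following form, where the restored and true solutions of all \(S\) scenarios are stacked into single vectors as described above; the function name is illustrative.

```python
import numpy as np

def loss(X_R, X_AC, n):
    """Loss (11): normalized squared mismatch between restored and true solutions."""
    return np.sum((X_R - X_AC) ** 2) / n
```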
### _Adaptive Stochastic Gradient Descent (ASGD) Algorithm_
The optimal weight and bias parameters, \(\mathbf{\Sigma}\) and \(\mathbf{b}\), are obtained using the ASGD method described in Algorithm 2. After the offline execution of Algorithm 2, the resulting weights and biases are applied online to restore AC power flow feasibility for a particular problem via Algorithm 1 (see Fig. 1).
To compute optimal weight and bias parameters, Algorithm 2 first creates a set of sampled load scenarios representing the range of conditions expected during real-time operations. Next, the algorithm solves (in parallel) both the actual and simplified OPF problems and saves the results in \(\mathbf{x}_{AC}^{(i)}\) and \(\mathbf{z}^{(i)}\), respectively, for each load scenario \(i=1,\ldots,S\). Using the information from the simplified solutions and Algorithm 1, the algorithm computes the restored solutions \(\mathbf{x}_{R}^{(i)}\) for each load scenario \(i=1,\ldots,S\). The algorithm then iteratively updates the weight and bias parameters based on the discrepancies between the actual and restored solutions along with their respective partial derivatives in order to minimize the loss function. The optimal weight and bias parameters, \(\mathbf{\Sigma}^{opt}\) and \(\mathbf{b}^{opt}\), are returned as outputs after reaching a maximum number of iterations or satisfying some other termination criteria (e.g., negligible changes from one iteration to the next).
The ASGD algorithm uses the gradient of the loss function with respect to the weight parameters, denoted as \(\mathbf{q}^{var}\):
\[\mathbf{q}^{var}=\frac{2}{n}\sum_{i=1}^{S}\left.\frac{\partial\mathbf{x}_{R}}{ \partial\mathbf{\Sigma}}\right|_{\mathbf{x}_{R}^{(i)}}\Big{(}\mathbf{x}_{R}^{(i)} (\mathbf{\Sigma},\mathbf{b})-\mathbf{x}_{AC}^{(i)}\Big{)}. \tag{12}\]
There are many variants of gradient descent algorithms, such as batch gradient, momentum, AdaGrad, Adam, etc., each of which has their own advantages and disadvantages. We use the Adam algorithm since we empirically found it to perform best for this application [36]. The Adam algorithm is commonly used for training machine learning models and involves the following steps at each iteration [38]:
\[\mathbf{m} \leftarrow\beta_{1}\mathbf{m}+(1-\beta_{1})\mathbf{q}, \tag{13a}\] \[\mathbf{\tau} \leftarrow\beta_{2}\mathbf{\tau}+(1-\beta_{2})(\mathbf{q})^{2},\] (13b) \[\hat{\mathbf{m}} \leftarrow\frac{\mathbf{m}}{1-\beta_{1}^{k}},\] (13c) \[\hat{\mathbf{\tau}} \leftarrow\frac{\mathbf{\tau}}{1-\beta_{2}^{k}},\] (13d) \[\mathbf{\Sigma} \leftarrow\mathbf{\Sigma}-\eta\frac{\hat{\mathbf{m}}}{\sqrt{\hat{\mathbf{\tau}}}+\epsilon}, \tag{13e}\]
where \(\mathbf{m}\) and \(\mathbf{\tau}\) are the first and second moments of the gradients at iteration \(k\), \(\eta\) is a learning rate (step size), \(\beta_{1}\) and \(\beta_{2}\) are exponentially decaying hyperparameters for the first and second moments, and \(\epsilon\) is a small constant.
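For reference, one Adam update of the weight vector (using the diagonal entries of \(\mathbf{\Sigma}\)) might be sketched as follows; the default hyperparameter values are common choices in the literature, not values prescribed by the paper, and the same routine applies to the bias parameters with the gradient (14).

```python
import numpy as np

def adam_step(param, q, m, tau, k, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam iteration, cf. (13); `param` holds the diagonal of Sigma (or b),
    `q` the corresponding gradient (12) or (14), and k the iteration counter."""
    m = beta1 * m + (1 - beta1) * q
    tau = beta2 * tau + (1 - beta2) * q**2
    m_hat = m / (1 - beta1**k)
    tau_hat = tau / (1 - beta2**k)
    param = param - eta * m_hat / (np.sqrt(tau_hat) + eps)
    return param, m, tau
```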
```
Input : ASGD parameters \(\eta,\epsilon,\beta_{1},\beta_{2},\mathbf{m},\mathbf{\tau}\), batch size, max_iter; Initial weight parameters \(\mathbf{\Sigma}^{init}\); Initial bias parameters \(\mathbf{b}^{init}\) Output : Optimal weight parameters \(\mathbf{\Sigma}^{opt}\); Optimal bias parameters \(\mathbf{b}^{opt}\)
1 Generate \(S\) sampled load scenarios;
2 Solve (in parallel) the actual and simplified OPF problems for each scenario and store \(\mathbf{x}_{AC}^{(i)}\) and \(\mathbf{z}^{(i)}\), \(i=1,\ldots,S\);
3 while termination criteria not satisfied (e.g., max_iter not reached) do
4 Compute the restored solutions \(\mathbf{x}_{R}^{(i)}\), \(i=1,\ldots,S\), via Algorithm 1 using the current \(\mathbf{\Sigma}\) and \(\mathbf{b}\);
5 Evaluate the gradients \(\mathbf{q}^{var}\) and \(\mathbf{q}^{bias}\) from (12) and (14);
6 Update \(\mathbf{\Sigma}\) and \(\mathbf{b}\) via the Adam steps (13);
7 \(\mathbf{\Sigma}^{opt}\leftarrow\mathbf{\Sigma}\), \(\mathbf{b}^{opt}\leftarrow\mathbf{b}\)
```
**Algorithm 2** Computing Weight and Bias Parameters
In addition, the gradient of the objective function with respect to the bias parameters is represented by \(\mathbf{q}^{bias}\):
\[\mathbf{q}^{bias}=\frac{2}{n}\sum_{i=1}^{S}\left.\frac{\partial\mathbf{x}_{R}^{( i)}}{\partial\mathbf{b}}\right|_{\mathbf{x}_{R}^{(i)}}\Big{(}\mathbf{x}_{R}^{(i)}( \mathbf{\Sigma},\mathbf{b})-\mathbf{x}_{AC}^{(i)}\Big{)}. \tag{14}\]
Using this gradient, one can find the optimal bias parameters \(\mathbf{b}\) using the Adam algorithm in the same fashion as in (13).
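Combining the pieces above, the offline training loop of Algorithm 2 could be organized roughly as in the sketch below. The routines `restore_ac_feasibility` and `adam_step` refer to the earlier sketches, while `sens_sigma` and `sens_b` stand for user-supplied evaluations of the \(n\times m\) sensitivity matrices in (30) and (10); all names and the loop structure are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def train_weights_and_biases(scenarios, x0, h_and_jacobian,
                             sens_sigma, sens_b, n, max_iter=200):
    """Offline computation of (Sigma, b) in the spirit of Algorithm 2.

    scenarios : list of dicts with keys "z" (simplified OPF outputs)
                and "x_AC" (true OPF voltage magnitudes/angles)
    x0        : common initialization for Algorithm 1 (e.g., a flat start)
    n         : system size used to normalize the loss (11)
    """
    m_dim = scenarios[0]["z"].size
    sigma, b = np.ones(m_dim), np.zeros(m_dim)
    m_s, tau_s = np.zeros(m_dim), np.zeros(m_dim)   # Adam moments for sigma
    m_b, tau_b = np.zeros(m_dim), np.zeros(m_dim)   # Adam moments for b
    for k in range(1, max_iter + 1):
        q_var, q_bias = np.zeros(m_dim), np.zeros(m_dim)
        for sc in scenarios:
            x_R = restore_ac_feasibility(sc["z"], b, sigma, x0, h_and_jacobian)
            err = x_R - sc["x_AC"]
            q_var += (2.0 / n) * sens_sigma(x_R, sc["z"], sigma, b).T @ err   # cf. (12)
            q_bias += (2.0 / n) * sens_b(x_R, sigma).T @ err                  # cf. (14)
        sigma, m_s, tau_s = adam_step(sigma, q_var, m_s, tau_s, k)
        b, m_b, tau_b = adam_step(b, q_bias, m_b, tau_b, k)
    return sigma, b
```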
## IV Experimental Results and Discussion
This section evaluates the proposed algorithm's performance using numerical results from restoring AC power flow feasibility for solutions obtained from the SOCP [12], QC [13], and SDP [11] relaxations, the LPAC approximation [37], and the ML-based OPF model from [16].
### _Experiment Setup_
We generated 10,000 scenarios (8,000 for training and 2,000 for testing) for each of the PJM 5-bus, IEEE 14-bus, IEEE 57-bus, IEEE 118-bus, Illinois
200-Bus [39], IEEE 300-bus, and Pegase 1354-bus systems [40]. These scenarios were created by multiplying the nominal load demands by a normally distributed random variable with zero mean and standard deviation of \(10\%\). Solutions to the OPF problems and the relaxations and approximations were computed using PowerModels.jl[41] with the solvers Ipopt [42] and Mosek on a computing node of the Partnership for an Advanced Computing Environment (PACE) cluster at Georgia Tech. This computing node has a 24-core CPU and 32 GB of RAM. We also imported the ML results for available test cases from [16]. The restoration algorithm was implemented in Python 3.0 using a Jupyter Notebook.
### _Benchmarking Approach_
We consider three alternate restoration methods as comparisons to our proposed algorithm. The first method simply compares the voltage magnitudes and angles from the relaxed, approximated, or ML-based solution directly to the OPF solution. However, it is important to note that this method typically does not yield an AC power flow feasible point and is thus unsuitable for many practical applications. Additionally, this method is not applicable to the SDP and SOCP relaxations as they do not have variables corresponding to the voltage phase angles. The second method, referred to as the "benchmark" method, solves the power flow problem obtained from fixing the voltage magnitudes at all generator buses and the active power injections at non-slack generator buses to the outputs of the simplified OPF problem as discussed in [29] and used in a variety of papers such as [17, 19, 32, 33, 34, 35]. The third method is the proposed restoration algorithm with the initial weight and bias parameters \(\mathbf{\Sigma}_{ii}=1\) and \(\mathbf{b}_{i}=0\), \(i=1,\ldots,m\). The fourth method is the proposed restoration algorithm with weight and bias parameters computed using Algorithm 2.
### _Performance Evaluation_
Fig. 2 shows the weight parameters (i.e., the diagonal elements of \(\mathbf{\Sigma}\)) obtained by applying Algorithm 2 to the 5-bus system with the SOCP, QC, and SDP relaxations as well as the LPAC approximation. Observe that certain quantities receive significantly higher weights than others. For instance, in this test case, the algorithm allocates more weight to voltage magnitudes at buses 1 and 5, suggesting that these quantities are superior predictors of the actual OPF solutions. Larger weights imply that the algorithm considers these quantities more reliable when reconstructing the AC feasible points.
Moreover, Fig. 3 shows a geographic representation of the weight parameters \(\mathbf{\Sigma}\) for the voltage magnitudes in the Illinois 200-bus system. The SOCP, QC, and SDP relaxations and the LPAC approximation each assign different weights to various parts of the system, with some clustering evident. Additionally, the QC relaxation has larger weights on the voltage magnitudes overall compared to the other OPF simplifications. These distinct weight assignments will be leveraged later in Section IV-D to combine multiple simplified OPF solutions for improved accuracy and performance, as our proposed algorithm can exploit the strengths of each method while compensating for individual inaccuracies.
We evaluate the efficacy of the suggested restoration algorithm using the test dataset of 2,000 scenarios that were not used during the calculation of weight and bias parameters in Algorithm 2. Table II presents the loss function values for each solution recovery method. As shown in Table II, the proposed restoration algorithm successfully produces high-quality AC power flow feasible points from simplified OPF solutions. The loss functions resulting from the proposed algorithm are considerably smaller than those of other methods, including the benchmark approach. Furthermore, the application of optimized weight and bias parameters substantially enhances the performance of the loss function compared to using the initial weight and bias parameters \(\mathbf{\Sigma}_{ii}=1\) and \(\mathbf{b}_{i}=0\). Note that incorporating bias parameters into the updated algorithm further improves our initial findings presented in [36].
Additionally, we compare the restoration methods by analyzing the difference in density distributions, which represent how frequently a particular voltage magnitude or angle value appears in a restored solution relative to how often it appears in the true OPF solution for the 5-bus system, considering all samples in the test dataset. Fig. 4 demonstrates the performance of our proposed algorithm versus the benchmark, highlighting the superiority of the proposed algorithm when solving SDP relaxations of OPF problems. The figure has two subplots: (a) for voltage magnitudes and (b) for voltage angles. The vertical axes represent the difference in density relative to the true OPF solution, with positive values indicating higher density and negative values indicating lower density. Good performance is indicated by a line that is nearly horizontal at zero, which suggests that the restoration method accurately represents the true OPF solution across all voltage magnitudes and angles. This plot is useful for evaluating the overall performance of the restoration method, complementing the aggregate metrics in Table II by assessing performance across voltage magnitudes and angles. We observe that the restored solutions to SDP relaxations (denoted by the solid purple line) outperform all other methods, including SOCP, QC, and LPAC, for this problem. Moreover, the proposed algorithm with SOCP, QC, SDP, and LPAC exhibits better performance than their respective benchmark counterparts, as indicated by the smaller differences relative to the true OPF solution.
### _Combining Solutions_
The results presented above show that the restoration algorithm's outcomes vary with the choice of relaxation, approximation, or ML model, with none consistently dominating the others in every aspect. This suggests that there may be advantages in simultaneously considering _multiple_ simplified OPF solutions by combining their outputs using our proposed algorithm. With the flexibility to individually assign weights and biases for each quantity in a merged set of simplified OPF solutions, our proposed algorithm can naturally exploit the most accurate aspects of each simplified OPF solution while counterbalancing their individual inaccuracies.
In other words, merging solutions from multiple simplified OPF problems supplies additional information for reconstructing even higher quality solutions via our proposed
algorithm. For example, if we simultaneously use the results of the SOCP, QC, and SDP relaxations as well as the LPAC approximation, Algorithm 2 will consider \(\mathbf{z}=[\mathbf{V}_{SOCP}^{T}\;\mathbf{V}_{QC}^{T}\;\mathbf{V}_{SDP}^{T}\;\mathbf{V}_{LPAC}^{T}\;\mathbf{P}_{SOCP}^{T}\;\mathbf{P}_{QC}^{T}\;\mathbf{P}_{SDP}^{T}\;\mathbf{P}_{LPAC}^{T}\;\mathbf{Q}_{SOCP}^{T}\;\mathbf{Q}_{QC}^{T}\;\mathbf{Q}_{SDP}^{T}\;\mathbf{Q}_{LPAC}^{T}\;\mathbf{P}_{SOCP}^{fT}\;\mathbf{P}_{QC}^{fT}\;\mathbf{P}_{SDP}^{fT}\;\mathbf{P}_{LPAC}^{fT}\;\mathbf{Q}_{SOCP}^{fT}\;\mathbf{Q}_{QC}^{fT}\;\mathbf{Q}_{SDP}^{fT}\;\mathbf{Q}_{LPAC}^{fT}\;\boldsymbol{\theta}_{QC}^{T}\;\boldsymbol{\theta}_{LPAC}^{T}]^{T}\), i.e., the stacked voltage magnitudes, power injections, line flows, and (where available) voltage angles from each simplified solution. Algorithm 2 automatically identifies the accuracy for each quantity in all simplified OPF solutions by suitably assigning the corresponding \(\mathbf{\Sigma}_{ii}\) and \(\mathbf{b}_{i}\) values to optimally utilize all available information, thus enabling recovery of high-quality solutions even when individual simplified OPF solutions might falter. For instance, the loss function for the 5-bus system with all simplified OPF solutions improves by an order of magnitude relative to the restoration achievable with the best individual solution
Figure 4: Differences in the density distributions for the voltage magnitudes and angles for the restored points relative to the true OPF solutions. The vertical axis represents the difference in density from the true OPF solution, with positive values indicating higher density and negative values indicating lower density compared to true OPF solution for the 5-bus system over the test set of 2000 scenarios. (a) Voltage magnitudes. (b) Voltage angles (in degrees).
Figure 5: Cumulative proportion of the absolute error in the voltage magnitudes and angles for various restoration methods for the 5-bus system. The vertical axis displays the cumulative proportion of the absolute error and the horizontal axis shows differing levels of the absolute error. Both the horizontal and vertical axes are on logarithmic scales. Each curve shows the cumulative proportion of errors up to a certain level, with higher curves thus indicating a larger proportion of smaller errors (i.e., better performance). (a) Voltage magnitudes. (b) Voltage angles (in degrees).
Figure 3: Contour plot of weight parameters for voltage magnitudes in the Illinois 200-bus system for SOCP, QC, SDP, and LPAC.
Figure 2: Diagonal elements of the weight parameters \(\Sigma\) for the 5-bus system as obtained using Algorithm 2. Higher values (in blue) indicate more trustworthy quantities for reconstructing AC feasible points. (a) QC and LPAC (left). (b) SOCP and SDP (right).
(\(1\times 10^{-5}\) for the merged solutions versus \(1\times 10^{-4}\) for the solution restored from the SDP solution alone).
Fig. 5 compares the absolute errors in voltage magnitudes and angles, expressed as \(|\mathbf{X}_{AC}-\mathbf{X}_{R}|\), for various methods applied to the 5-bus system. The horizontal axis denotes the absolute error and the vertical axis represents the cumulative proportion of errors less than or equal to the respective value on the horizontal axis. Ideal performance would correspond to a curve situated in the upper-left corner, as this indicates a high proportion of small errors. With the steepest rise in Fig. 5, using multiple relaxations and approximations consistently leads to superior performance relative to using any single relaxation or approximation. Furthermore, Fig. 6 visualizes the weight parameters for each of the simplified solutions for the 5-bus system. The figure presents a comprehensive view of the optimal weight parameters for the combined SOCP, QC, and SDP relaxations and the LPAC approximation. Observe that each of the simplified models contributes quantities with non-negligible weight parameters when considered jointly.
Table III gives Algorithm 1's per-scenario run time with optimized weights and biases. The online execution is comparable to a power flow solve. For instance, PowerModels.jl performs the power flow calculations used by the benchmark restoration method in an average of 1.18 seconds for each scenario with the Pegase 1354-bus system, which is similar to the 1.02 to 1.21 seconds for Algorithm 1.
## V Conclusion
In this paper, we present a solution restoration algorithm that significantly improves AC power flow feasibility restoration from the solutions of simplified (relaxed, approximated, and ML-based) OPF problems. Our algorithm, which is based on state estimation techniques with the adjustment of weight and bias parameters through an ASGD method, can be several orders of magnitude more accurate than alternate methods.
In future research, we plan to utilize the insights gained from the trained weights and biases to improve the accuracy of power flow relaxations, approximations, and machine learning models. We also intend to incorporate the proposed restoration process into the training of machine learning models, thus closing the loop between training ML models and restoring AC feasible solutions. Additionally, future research will aim to devise related self-supervised restoration techniques that do not depend on the availability of accurate OPF solutions. Instead, these techniques would compute the sensitivities of a solution quality metric with respect to the weight and bias parameters without requiring an optimal solution. The ability
to restore high-quality AC power flow feasible operating points without the need for true OPF solutions would further increase the range of practical applications for the proposed algorithm.
## Appendix A Derivation of Sensitivities
The sensitivities of the voltage phasors \(\mathbf{x}_{R}\) obtained from the state estimation-inspired algorithm in relation to the weight matrix \(\mathbf{\Sigma}\) are calculated using (9), which is derived as follows:
\[Y=(\mathbf{H}^{T}\mathbf{\Sigma H})^{-1}\mathbf{H}^{T}\mathbf{\Sigma}(\mathbf{z }+\mathbf{b}-\mathbf{h}), \tag{15}\]
\[dY=\overbrace{d\Big{(}\left(\mathbf{H}^{T}\mathbf{\Sigma H} \right)^{-1}\mathbf{H}^{T}\Big{)}\mathbf{\Sigma}\left(\mathbf{z}+\mathbf{b}- \mathbf{h}\right)}^{I}\] \[+\underbrace{\left(\mathbf{H}^{T}\mathbf{\Sigma H} \right)^{-1}\mathbf{H}^{T}d\Big{(}\mathbf{\Sigma}\left(\mathbf{z}+\mathbf{b}- \mathbf{h}\right)\Big{)}}_{U}, \tag{16}\]
\[U=\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1}\mathbf{H}^{T}d\mathbf{ \Sigma}\left(\mathbf{z}+\mathbf{b}-\mathbf{h}\right), \tag{17}\]
\[I=d\Big{(}\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1}\Big{)}\mathbf{H}^ {T}\mathbf{\Sigma}\left(\mathbf{z}+\mathbf{b}-\mathbf{h}\right), \tag{18}\]
\[I=\Big{(}-\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1}d \left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)\left(\mathbf{H}^{T}\mathbf{\Sigma H }\right)^{-1}\Big{)}\] \[\times\Big{(}\mathbf{H}^{T}\mathbf{\Sigma}\left(\mathbf{z}+ \mathbf{b}-\mathbf{h}\right)\Big{)}, \tag{19}\]
\[I=\Big{(}-\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1} \mathbf{H}^{T}d\mathbf{\Sigma H}\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{ -1}\Big{)}\] \[\times\Big{(}\mathbf{H}^{T}\mathbf{\Sigma}\left(\mathbf{z}+ \mathbf{b}-\mathbf{h}\right)\Big{)}, \tag{20}\]
\[dY= -\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1}\mathbf{H}^{T}d \mathbf{\Sigma H}\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1}\mathbf{H}^ {T}\mathbf{\Sigma}(\mathbf{z}+\mathbf{b}\] \[-\mathbf{h}\right)+\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^ {-1}\mathbf{H}^{T}d\mathbf{\Sigma}\left(\mathbf{z}+\mathbf{b}-\mathbf{h}\right), \tag{21}\]
\[\text{vec}(dY)=\text{vec}\Big{[}-\left(\mathbf{H}^{T}\mathbf{ \Sigma H}\right)^{-1}\mathbf{H}^{T}d\mathbf{\Sigma H}\left(\mathbf{H}^{T} \mathbf{\Sigma H}\right)^{-1}\] \[\times\mathbf{H}^{T}\mathbf{\Sigma}\left(\mathbf{z}+\mathbf{b}- \mathbf{h}\right)\Big{]}+\text{vec}\Big{[}\left(\mathbf{H}^{T}\mathbf{\Sigma H }\right)^{-1}\] \[\times\mathbf{H}^{T}d\mathbf{\Sigma}\left(\mathbf{z}+\mathbf{b}- \mathbf{h}\right)\Big{]}, \tag{22}\]
\[\text{vec}(dY)=\Big{[}-\left(\mathbf{H}\left(\mathbf{H}^{T} \mathbf{\Sigma H}\right)^{-1}\mathbf{H}^{T}\mathbf{\Sigma}\left(\mathbf{z}+ \mathbf{b}-\mathbf{h}\right)\Big{)}^{T}\] \[\otimes\Big{(}\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1} \mathbf{H}^{T}\Big{)}\Big{]}\text{vec}(d\mathbf{\Sigma})+\Big{[}(\mathbf{z}+ \mathbf{b}-\mathbf{h})^{T}\] \[\otimes\Big{(}\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1} \mathbf{H}^{T}\Big{)}\Big{]}\text{vec}(d\mathbf{\Sigma}), \tag{23}\]
\[\text{vec}(dY)=\Big{[}(\mathbf{z}+\mathbf{b}-\mathbf{h})^{T} \otimes\Big{(}\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1}\mathbf{H}^{T} \Big{)}\] \[-\Big{(}\mathbf{H}\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{- 1}\mathbf{H}^{T}\mathbf{\Sigma}\left(\mathbf{z}+\mathbf{b}-\mathbf{h}\right) \Big{)}^{T}\] \[\otimes\Big{(}\left(\mathbf{H}^{T}\mathbf{\Sigma H}\right)^{-1} \mathbf{H}^{T}\Big{)}\Big{]}\text{vec}(d\mathbf{\Sigma}), \tag{24}\]
Figure 6: Trained diagonal elements of weight matrices for combined SOCP, QC, SDP, and LPAC solutions in the 5-bus system. (a) Voltage magnitude \(\mathbf{V}\). (b) Active power injection \(\mathbf{P}\). (c) Reactive power injection \(\mathbf{Q}\). (d) Active power flow \(\mathbf{P}^{f}\). (e) Reactive power flow \(\mathbf{Q}^{f}\). (f) Voltage angle \(\mathbf{\theta}\)
\[\frac{\text{vec}(\partial Y)}{\text{vec}(\partial\mathbf{\Sigma})}= \left((\mathbf{z}+\mathbf{b}-\mathbf{h})-(\mathbf{H}(\mathbf{H}^{T} \mathbf{\Sigma}\mathbf{H})^{-1}\mathbf{H}^{T}\mathbf{\Sigma}\right.\] \[\left.\times(\mathbf{z}+\mathbf{b}-\mathbf{h})\right)\otimes \left((\mathbf{H}^{T}\mathbf{\Sigma}\mathbf{H})^{-1}\mathbf{H}^{T}\right)^{T}. \tag{25}\]
## Appendix B Exploiting the Diagonal Structure of \(\Sigma\)
We observe that the computation of (4) can be optimized by leveraging the diagonal structure of the \(\mathbf{\Sigma}\) matrix. Instead of using a matrix, we can represent the weight terms as a vector \(\mathbf{\sigma}\) with \(\mathbf{\sigma}_{i}=\mathbf{\Sigma}_{ii}\), \(i=1,\ldots,m\). This approach allows us to focus on computing sensitivities exclusively for the diagonal entries, leading to a more efficient calculation. We will next demonstrate this approach and its computational advantages.
First, we rewrite (4) as:
\[J(\mathbf{x}) =(\mathbf{z}+\mathbf{b}-\mathbf{h}(\mathbf{x}))^{T}\mathbf{\Sigma}( \mathbf{z}+\mathbf{b}-\mathbf{h}(\mathbf{x})), \tag{26a}\] \[=(\mathbf{z}+\mathbf{b}-\mathbf{h}(\mathbf{x}))^{T}\mathbf{\sigma} \odot(\mathbf{z}+\mathbf{b}-\mathbf{h}(\mathbf{x})), \tag{26b}\]
where \(\odot\) denotes the Hadamard (element-wise) product. Following the same procedure as in (5)-(7), we first compute the derivative of (26b) with respect to \(\mathbf{x}\) as follows:
\[\mathbf{g}(\mathbf{x})=\frac{\partial J(\mathbf{x})}{\partial\mathbf{x}}=- \Big{(}\mathbf{\sigma}\odot\mathbf{H}(\mathbf{x})\Big{)}^{T}(\mathbf{z}+\mathbf{ b}-\mathbf{h}(\mathbf{x})), \tag{27}\]
where \(\mathbf{H}(\mathbf{x})=\frac{\partial\mathbf{h}(\mathbf{x})}{\partial\mathbf{ x}}\) is the Jacobian matrix of the function \(\mathbf{h}(\mathbf{x})\). The derivative of \(\mathbf{g}(\mathbf{x})\) with respect to \(\mathbf{x}\) is:
\[\mathbf{G}(\mathbf{x})=\frac{\partial\mathbf{g}(\mathbf{x})}{\partial\mathbf{ x}}=\Big{(}\frac{\partial\mathbf{h}(\mathbf{x})}{\partial\mathbf{x}}\Big{)}^{T} \mathbf{\sigma}\odot\mathbf{H}(\mathbf{x}). \tag{28}\]
To solve \(\mathbf{g}(\mathbf{x})=0\), we apply the Newton-Raphson method described in Algorithm 1 with modified \(\mathbf{g}(\mathbf{x})\) and \(\mathbf{G}(\mathbf{x})\) that performs, at the \(k\)-th iteration, the following steps:
\[\mathbf{x}^{k+1} =\mathbf{x}^{k}-(\mathbf{G}(\mathbf{x}))^{-1}g(\mathbf{x}), \tag{29a}\] \[\Delta\mathbf{x}^{k} =\left(\mathbf{H}^{T}\mathbf{\sigma}\odot\mathbf{H}\right)^{-1} \left(\mathbf{\sigma}\odot\mathbf{H}\right)^{T}(\mathbf{z}+\mathbf{b}-\mathbf{h}),\] (29b) \[\mathbf{x}^{k+1} =\mathbf{x}^{k}+\Delta\mathbf{x}^{k}. \tag{29c}\]
Now, we can derive the sensitivities of the state vector \(\mathbf{x}_{R}\) with respect to the \(\mathbf{\sigma}\) in the same fashion as in Appendix A:
\[\frac{\partial\mathbf{x}_{R}}{\partial\mathbf{\sigma}}= \Big{(}(\mathbf{z}+\mathbf{b}-\mathbf{h})-(\mathbf{H}(\mathbf{H}^{ T}\mathbf{\sigma}\odot\mathbf{H})^{-1}\mathbf{H}^{T}\mathbf{\sigma}\] \[\odot(\mathbf{z}+\mathbf{b}-\mathbf{h}))\Big{)}\odot\Big{(}( \mathbf{H}^{T}\mathbf{\sigma}\odot\mathbf{H})^{-1}\mathbf{H}^{T}\Big{)}^{T}. \tag{30}\]
The expression (30) gives the sensitivities of the restored point \(\mathbf{x}_{R}\) with respect to the vector of weight parameters \(\mathbf{\sigma}\). With length-\(m\) vectors \(\mathbf{z}\) and \(\mathbf{h}(\mathbf{x})\) and an \(m\times n\) matrix \(\mathbf{H}(\mathbf{x})\), the sensitivities \(\frac{\partial\mathbf{x}_{R}}{\partial\mathbf{\sigma}}\) are represented by an \(n\times m\) matrix. Accordingly, the size of the sensitivity matrix in (9) reduces from \(n\times m^{2}\) to \(n\times m\); therefore, this approach results in a more efficient implementation than considering the sensitivities of all entries of the \(\mathbf{\Sigma}\) matrix.
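As a rough NumPy translation of (30), under the same diagonal-weight convention as before and with `H`, `h`, `z`, `b`, and `sigma` evaluated at the restored point, the \(n\times m\) sensitivity matrix could be assembled as follows; this is a sketch, not the paper's implementation.

```python
import numpy as np

def weight_sensitivities(H, h, z, b, sigma):
    """Sensitivities dx_R/dsigma as an n x m matrix, cf. (30)."""
    G = H.T @ (sigma[:, None] * H)                 # H^T Sigma H (n x n)
    A = np.linalg.solve(G, H.T)                    # (H^T Sigma H)^{-1} H^T (n x m)
    r = z + b - h                                  # residual (length m)
    s = r - H @ (A @ (sigma * r))                  # bracketed term in (30)
    return A * s[None, :]                          # column j scaled by s_j
```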
## Acknowledgement
The authors would like to thank M. Klamkin and P. Van Hentenryck for sharing the outputs of the machine learning models in [16].
|
2304.09257 | A structure-preserving upwind DG scheme for a degenerate phase-field
tumor model | In this work, we present a modification of the phase-field tumor growth model
given in [26] that leads to bounded, more physically meaningful, volume
fraction variables. In addition, we develop an upwind discontinuous Galerkin
(DG) scheme preserving the mass conservation, pointwise bounds and energy
stability of the continuous model. Finally, some computational tests in
accordance with the theoretical results are introduced. In the first test, we
compare our DG scheme with the finite element (FE) scheme related to the same
time approximation. The DG scheme shows a well-behavior even for strong
cross-diffusion effects in contrast with FE where numerical spurious
oscillations appear. Moreover, the second test exhibits the behavior of the
tumor-growth model under different choices of parameters and also of mobility
and proliferation functions. | Daniel Acosta-Soba, Francisco Guillén-González, J. Rafael Rodríguez Galván | 2023-04-18T19:41:23Z | http://arxiv.org/abs/2304.09257v2 | # A structure-preserving upwind DG scheme for a degenerate phase-field tumor model
###### Abstract
In this work, we present a modification of the phase-field tumor growth model given in [26] that leads to bounded, more physically meaningful, volume fraction variables. In addition, we develop an upwind discontinuous Galerkin (DG) scheme preserving the mass conservation, pointwise bounds and energy stability of the continuous model. Finally, some computational tests in accordance with the theoretical results are introduced. In the first test, we compare our DG scheme with the finite element (FE) scheme related to the same time approximation. The DG scheme behaves well even for strong cross-diffusion effects, in contrast with the FE scheme, where spurious numerical oscillations appear. Moreover, the second test exhibits the behavior of the tumor-growth model under different choices of parameters and also of mobility and proliferation functions.
Keywords:Degenerate Cahn-Hilliard. Degenerate proliferation. Cross-diffusion. Discrete point-wise bounds. Discrete energy stability.
## 1 Introduction
Lately, significant work on the mathematical modeling of tumor growth has been carried out. As a result, many different models have arisen, some of which have even been applied to predict the response of the tumor to its surrounding environment and possible medical treatments. Most of these models can be classified into micro-scale discrete models, macro-scale continuum models or hybrid models, [29, 12]. Regarding the continuum models, different approaches have been developed, among which we can find models using both ODEs, for instance, [8, 30], and PDEs, for example, [18, 34].
In this sense, phase field models such as the Cahn-Hilliard (CH) equation have become a very popular tool. This model describes the evolution of a thin, diffuse interface between two different phases or states of a process [7, 33] through a so-called phase-field variable, which minimizes an adequate free energy. Sometimes, this CH model is coupled with a degenerate mobility to impose phase-related pointwise bounds on this variable.
In particular, in the context of tumor modeling, the phase-field variable \(u\) is usually interpreted as a tumor volume-fraction (with \(0\leq u\leq 1\)) and this model is coupled with other equations describing the interaction between the tumor and the surrounding environment. Some examples of these tumor models can be found in [4, 19, 21, 22, 26, 39, 42] and the references therein. Often, the solutions of these models inherit certain physical properties from the Cahn-Hilliard equation, such as mass conservation of a biological substance, pointwise bounds on some of the variables and some sort of energy dissipation.
In this work, we consider the model (1) carefully derived from mixture theory by Hawkins-Daarud et al. in [26], which describes the interaction between a tumor and the nutrients in the extracellular water. To this aim, the CH equation for \(u\) is coupled with a diffusion equation for the nutrients \(n\) by means of some reaction and cross-diffusion terms. Although this model does not take into account some of the complex processes involved in the surrounding environment of the tumor, it allowed the authors to capture some irregular growth patterns that are typically associated with these processes. However, while this model is mass-conservative and energy-dissipative, it does not implicitly impose the necessary pointwise bounds on the volume fraction variables.
Therefore, we propose a modification of the aforementioned tumor model, see (2) below, in accordance with its physical interpretation. As a result of this modification, we obtain pointwise bounds on the tumor and the nutrient volume fractions (\(0\leq u,n\leq 1\)) which are consistent with the physical meaning of the variables. This modification may facilitate a future application of this model (or a variant of it) to real tumor growth prediction.
This phase-field tumor model (2) consists of a system of coupled nonlinear equations where reaction and cross-diffusion effects appear. Thus, dealing with this model is really challenging from both a theoretical and a computational point of view.
In the case of the Cahn-Hilliard equation itself, several advances have been published regarding the existence and regularity of solution, most of which can be found in [31] and the references therein. Also, one can find several results regarding the existence, regularity and long-time behavior of the solution of the tumor model (1) and variants in the literature in the case without cross-diffusion, see [9, 10, 11, 20]. Recently, the well-posedness and the long-time behavior of the model (1) with cross-diffusion have been addressed in the work by H. Garcke and S. Yayla, [23].
Regarding the numerical approximation of these equations, significant advances have been done both with respect to the time and the spatial discretizations.
On the one hand, the classical approach for the time discretization of the phase-field models is the convex-splitting decomposition introduced in [17] which preserves the energy stability. Nonetheless,
other time-discrete schemes have been introduced in the literature (see, for instance, [24, 25, 41]). Among these time approximations we find the idea of introducing a Lagrange multiplier in the potential term in [5] which was extended in [24, 25, 41]. This idea led to the popular energy quadratization (EQ) schemes [44, 45, 46], later extended to the scalar auxiliary variable (SAV) approach [37].
On the other hand, in the case of the Cahn-Hilliard equation with degenerate mobility, designing a suitable spatial discretization consistent with the physical properties of the model, specially the pointwise bounds, is a difficult task and only a few works have been published in this regard. Among the currently available structure-preserving schemes we can find some schemes based on finite volumes, [6, 27], and on finite elements, [25]. Moreover, the authors have published a recent work, [2], where the pointwise bounds of the CH model in the case with convection are preserved using an upwind discontinuous Galerkin (DG) approximation. To the best knowledge of the authors, no previous work has been published defining a fully discrete DG scheme preserving the mass conservation, pointwise bounds and energy stability of the CH model with degenerate mobility.
The difficulties of the discretization are emphasized in the case of phase-field tumor models. In particular, in [26] an energy-stable finite element scheme with a first-order convex-splitting scheme in time is proposed for (1) and extended in [43] to a second-order time discretization. Other types of approximations of this model (1) using meshless collocation methods, [14], stabilized element-free Galerkin method, [32], and SAV Fourier-spectral method, [38], can be found in the literature. However, no bounds are imposed on the discrete variables whatsoever.
In this sense, we introduce a well-suited convex-splitting DG scheme for the proposed model (2), based on the previous works [1, 2, 28]. This approximation preserves the physical properties of the phase-field tumor model (mass conservation, pointwise bounds and energy stability) and prevents spurious numerical oscillations. This scheme can be applied, in particular, to the simpler CH model with degenerate mobility itself, preserving all of the aforementioned properties.
This paper is organized as follows: in Section 2 we discuss the tumor model (1), which was derived in [26], and we introduce our modified version of this model, (2), showing its physical properties. In Section 3 we develop our numerical approximation of the tumor model (2). We introduce the convex-splitting time-discrete scheme (8) in Subsection 3.1. Moreover, we present the DG space approximation, (12), in Subsection 3.2, defining the upwind form (13) in Subsection 3.2.1. Then, we analyze the properties of the fully discrete scheme in Subsection 3.2.2. Finally, we present a couple of numerical experiments in Section 4. Specifically, in Subsection 4.1, we present a numerical comparison between the robust DG scheme (12) and a FE discretization of (8), the latter of which fails in the case of strong cross-diffusion. In Subsection 4.2, we show the behavior of the model (2) under different choices of parameters and mobility/proliferation functions.
## 2 Modified tumor model
Let \(\Omega\subset\mathbb{R}^{d}\) be a bounded polygonal domain and \(T>0\) the final time.
The following tumor-growth model was introduced in [26] and further studied in [43]:
\[\partial_{t}u =\nabla\cdot(M_{u}\nabla\mu_{u})+\delta P(u)(\mu_{n}-\mu_{u}) \text{in }\Omega\times(0,T), \tag{1a}\] \[\mu_{u} =F^{\prime}(u)-\varepsilon^{2}\Delta u-\chi_{0}n \text{in }\Omega\times(0,T),\] (1b) \[\partial_{t}n =\nabla\cdot(M_{n}\nabla\mu_{n})-\delta P(u)(\mu_{n}-\mu_{u}) \text{in }\Omega\times(0,T),\] (1c) \[\mu_{n} =\frac{1}{\delta}n-\chi_{0}u \text{in }\Omega\times(0,T),\] (1d) \[\nabla u\cdot\mathbf{n} =(M_{n}\nabla\mu_{n})\cdot\mathbf{n}=(M_{u}\nabla\mu_{u})\cdot \mathbf{n}=0 \text{on }\partial\Omega\times(0,T),\] (1e) \[u(0) =u_{0},\quad n(0)=n_{0} \text{in }\Omega, \tag{1f}\]
where \(u_{0},n_{0}\in L^{2}(\Omega)\). In this model, \(u\) and \(n\) represent the tumor cells and the nutrient-rich extracellular water volume fractions, respectively. Therefore, these variables are assumed to be bounded in \([0,1]\). Moreover, \(\mu_{u}\) and \(\mu_{n}\) are the (chemical) potentials of \(u\) and \(n\), respectively.
The behavior of the cells is modeled using a Cahn-Hilliard equation, where \(F(u)=\frac{1}{4}u^{2}(1-u)^{2}\) is the Ginzburg-Landau double well potential and \(M_{u}\) is the mobility of the tumor, which is taken either constant or degenerate at \(u=0\), for instance, \(M_{u}(u)=\widehat{M}u^{2}\) with \(\widehat{M}>0\). The parameter \(\varepsilon\geq 0\) is related to the thickness of the interface between the tumor phases \(u=1\) (fully saturated) and \(u=0\) (fully unsaturated).
On the other hand, the nutrients are modeled using a diffusion equation where the function \(M_{n}\) is the mobility of the nutrients, which is taken as constant in practice.
These equations are coupled by cross diffusion terms (multiplied by the coefficient \(\chi_{0}\geq 0\)) introduced in (1b) and (1d) that model the attraction between tumor cells and nutrients. In addition, reaction terms modeling the consumption of nutrients by the tumor cells appear in (1a) and (1c), where \(P(u)\) is a proliferation term that vanishes when \(u\leq 0\) in [26] or when \(u\not\in(0,1)\) in [43]. These reaction terms depend on the difference between the potentials, which is assumed to be positive as the parameter \(\delta>0\) is very small, because one has the approximation
\[\delta P(u)(\mu_{n}-\mu_{u})=P(u)(n-\delta(\chi_{0}u-\mu_{u}))\approx P(u)n \quad\text{if }\delta\approx 0.\]
The well-posedness and long-time behavior of the model (1) and some variants have been considered in the case without cross-diffusion (\(\chi_{0}=0\)), see [9, 10, 11, 20], and only recently in the case with cross diffusion (\(\chi_{0}>0\)) in [23].
In this work, taking into account the previous considerations, we introduce the following modified
phase-field tumor model
\[\partial_{t}u =C_{u}\nabla\cdot(M(u)\nabla\mu_{u})+\delta P_{0}P(u,n)(\mu_{n}-\mu_{ u})_{\oplus} \text{in }\Omega\times(0,T), \tag{2a}\] \[\mu_{u} =F^{\prime}(u)-\varepsilon^{2}\Delta u-\chi_{0}n \text{in }\Omega\times(0,T),\] (2b) \[\partial_{t}n =C_{n}\nabla\cdot(M(n)\nabla\mu_{n})-\delta P_{0}P(u,n)(\mu_{n}- \mu_{u})_{\oplus} \text{in }\Omega\times(0,T),\] (2c) \[\mu_{n} =\frac{1}{\delta}n-\chi_{0}u \text{in }\Omega\times(0,T),\] (2d) \[\nabla u\cdot\mathbf{n} =(M(n)\nabla\mu_{n})\cdot\mathbf{n}=(M(u)\nabla\mu_{u})\cdot \mathbf{n}=0 \text{on }\partial\Omega\times(0,T),\] (2e) \[u(0) =u_{0},\quad n(0)=n_{0} \text{in }\Omega, \tag{2f}\]
where \(u_{0},n_{0}\in L^{2}(\Omega)\), \(F(u)=\frac{1}{4}u^{2}(1-u)^{2}\) and all the parameters above are nonnegative with \(\delta,C_{u},C_{n}>0\) and \(\varepsilon,\chi_{0},P_{0}\geq 0\). Also, we have introduced the following notation for the positive and negative parts of a scalar function \(v\):
\[v_{\oplus}\coloneqq\frac{|v|+v}{2}=\max\{v,0\},\quad v_{\ominus}\coloneqq \frac{|v|-v}{2}=-\min\{v,0\},\quad v=v_{\oplus}-v_{\ominus}\,.\]
Moreover, we define the following family of degenerate mobilities
\[M(v):=h_{p,q}(v), \tag{3}\]
for certain \(p,q\in\mathbb{N}\) where
\[h_{p,q}(v)\coloneqq K_{p,q}v_{\oplus}^{p}(1-v)_{\oplus}^{q}=\begin{cases}K_{p, q}v^{p}(1-v)^{q},&v\in[0,1],\\ 0,&\text{elsewhere},\end{cases}\]
with \(K_{p,q}>0\) a constant so that \(\max_{x\in\mathbb{R}}h_{p,q}(v)=1\), hence \(M(v)\) is a degenerate and normalized mobility. In addition, we define the proliferation function depending on both cells and nutrients as
\[P(u,n)\coloneqq h_{r,s}(u)n_{\oplus}, \tag{4}\]
for certain \(r,s\in\mathbb{N}\).
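For illustration, the following minimal Python sketch (NumPy) evaluates the normalized mobility \(h_{p,q}\) and the proliferation function (4). It uses the fact that the maximum of \(v^{p}(1-v)^{q}\) on \([0,1]\) is attained at \(v^{*}=p/(p+q)\), which determines \(K_{p,q}\); the sketch assumes \(p,q,r,s\geq 1\) and is not part of the actual finite element implementation.

```python
import numpy as np

def pos(v):
    """Positive part v_plus = max(v, 0)."""
    return np.maximum(v, 0.0)

def h_pq(v, p, q):
    """Normalized degenerate mobility h_{p,q}(v) = K_{p,q} v_+^p (1 - v)_+^q,
    assuming p, q >= 1 so that the maximizer on [0, 1] is v* = p / (p + q)."""
    v_star = p / (p + q)
    K_pq = 1.0 / (v_star**p * (1.0 - v_star)**q)   # normalization: max h_{p,q} = 1
    return K_pq * pos(v)**p * pos(1.0 - v)**q

def mobility(v, p=1, q=1):
    """M(v) = h_{p,q}(v) as in (3); p = q = 1 is the symmetric choice used later in (25)."""
    return h_pq(v, p, q)

def proliferation(u, n, r=1, s=1):
    """P(u, n) = h_{r,s}(u) n_+ as in (4)."""
    return h_pq(u, r, s) * pos(n)

v = np.linspace(-0.5, 1.5, 9)
print(mobility(v))               # vanishes outside [0, 1] and equals 1 at v = 1/2
print(proliferation(0.5, 0.3))   # 0.3 for the default r = s = 1
```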
Notice that the mobility functions, defined in (3), for the tumor and for the nutrients do not necessarily need to be identical. One may consider the tumor mobility as \(M_{u}(u)=h_{p,q}(u)\) with \(p,q\in\mathbb{N}\) and the nutrients mobility as \(M_{n}(n)=h_{p^{\prime},q^{\prime}}(n)\) with \(p^{\prime},q^{\prime}\in\mathbb{N}\) and all the results below equally hold. However, for simplicity, we will assume that \(M_{u}=M_{n}\) and denote the mobility function as \(M\).
**Remark 2.1**.: _This model introduces several changes with respect to the previous model (1) studied in [26, 43]. These modifications, described next, involve significant improvements. However, they also lead to a more complex model, with degenerate mobility and proliferation functions and major difficulties regarding the analysis of the existence, regularity and long-time behavior of solutions._
_Specifically:_
* _The difference between the potentials,_ \(\mu_{n}-\mu_{u}\)_, is assumed to be positive since_ \(\delta\) _is set to be a very small parameter. This difference could possibly be negative in the regions where_ \(n\simeq 0\) _but, in this case, the reaction terms vanish due to the proliferation function_ \(P(u,n)\) _defined in (_4_). Therefore, the positive part of_ \((\mu_{n}-\mu_{u})\) _is taken in (_2a_) and (_2c_)._
* _When_ \(\delta\to 0\)_, the reaction terms in equations (_2a_) and (_2c_) are assumed to grow with the square of the nutrients volume fraction. In fact_ \[\delta P_{0}P(u,n)(\mu_{n}-\mu_{u})_{\oplus}=P_{0}P(u,n)(n-\delta(\chi_{0}u-\mu _{u}))_{\oplus}\simeq P_{0}P(u,n)n=P_{0}h_{r,s}(u)(n_{\oplus})^{2}.\]
* _A degenerate mobility, (_3_), is considered for both the phase-field function_ \(u\) _and the volume fraction of nutrients_ \(n\)_._
* _The aforementioned modifications imply that_ \(u\) _and_ \(n\) _must be bounded in the interval_ \([0,1]\) _(see Proposition_ 2.3_), which matches the physical assumptions of the model since_ \(u\) _and_ \(n\) _are assumed to be volume fractions. This is a clear improvement over previous approaches, such as the ones considered in_ _[_43, 26_]__, where the solution does not necessarily satisfy these bounds._
**Remark 2.2**.: _In practice, \(C_{n}=\delta D\) with \(D>0\) so that, when \(\delta\to 0\), the \(n\)-equation is approached by_
\[\partial_{t}n\approx D\nabla\cdot(M(n)\nabla n)-P_{0}P(u,n)n.\]
Considering that \(\mu_{n}\) is explicitly determined by (2d), we can reduce the number of unknowns and define the weak formulation of (2) as: find \((u,\mu_{u},n)\) such that \(u,n\in L^{2}(0,T;H^{1}(\Omega))\), \(\partial_{t}u,\partial_{t}n\in L^{2}(0,T;H^{1}(\Omega)^{\prime})\) and \(\mu_{u}\in L^{2}(0,T;H^{1}(\Omega))\), which satisfies the following variational problem a.e. \(t\in(0,T)\)
\[\langle\partial_{t}u(t),\overline{u}\rangle =-C_{u}\left(M(u(t))\nabla\mu_{u}(t),\nabla\overline{u}\right)\] \[\quad+\delta P_{0}\left(P(u(t),n(t))(\mu_{n}(t)-\mu_{u}(t))_{ \oplus},\overline{u}\right), \forall\overline{u}\in H^{1}(\Omega), \tag{5a}\] \[(\mu_{u}(t),\overline{\mu}_{u}) =\varepsilon^{2}\left(\nabla u(t),\nabla\overline{\mu}_{u}\right) +\left(F^{\prime}(u(t))-\chi_{0}n(t),\overline{\mu}_{u}\right), \forall\overline{\mu}_{u}\in H^{1}(\Omega),\] (5b) \[\langle\partial_{t}n(t),\overline{n}\rangle =-C_{n}\left(M(n(t))\nabla\mu_{n}(t),\nabla\overline{n}\right)\] \[\quad-\delta P_{0}\left(P(u(t),n(t))(\mu_{n}(t)-\mu_{u}(t))_{ \oplus},\overline{n}\right), \forall\overline{n}\in H^{1}(\Omega),\] (5c) \[\mu_{n}(t) =\frac{1}{\delta}n(t)-\chi_{0}u(t), \tag{5d}\]
where \(u(0)=u_{0}\), \(n(0)=n_{0}\) and \((\cdot,\cdot)\), \(\langle\cdot,\cdot\rangle\) denote the usual scalar product in \(L^{2}(\Omega)\) and the dual product over \(H^{1}(\Omega)\), respectively.
Since \(u,n\in L^{2}(0,T,H^{1}(\Omega))\) with \(\partial_{t}u,\partial_{t}n\in L^{2}(0,T,H^{1}(\Omega)^{\prime})\), it is known, see for instance [13, 16], that \(u,n\in\mathcal{C}^{0}([0,T],L^{2}(\Omega))\) and that \(\langle\partial_{t}u(t),\overline{u}\rangle=\frac{d}{dt}\left(u(t),\overline{ u}\right)\), \(\langle\partial_{t}n(t),\overline{n}\rangle=\frac{d}{dt}\left(n(t),\overline{ n}\right)\) for a.e. \(t\in(0,T)\) and every \(\overline{u},\overline{n}\in H^{1}(\Omega)\).
**Proposition 2.3**.: _Given \(u_{0},n_{0}\in[0,1]\), any solution \((u,\mu_{u},n)\) of the model (5) satisfies that \(u(t)\) and \(n(t)\) are bounded in \([0,1]\) for a.e. \(t\in(0,T)\)._
Proof.: Let \((u,\mu_{u},n)\) be a solution of the model (5) and \(u_{0},n_{0}\in[0,1]\).
* First, we prove that \(u,n\geq 0\). Notice that \(u_{\ominus}\in L^{2}(0,T,H^{1}(\Omega))\) and take \(\overline{u}=u(t)_{\ominus}\) for a.e. \(t\in(0,T)\) in (5a). We arrive at \(\frac{1}{2}\frac{d}{dt}\left\|u(t)_{\ominus}\right\|_{L^{2}(\Omega)}^{2}=0\), hence \(\left\|u(t)_{\ominus}\right\|_{L^{2}(\Omega)}=\left\|u(0)_{\ominus}\right\|_{ L^{2}(\Omega)}=0\). Similarly, \(\left\|n(t)_{\ominus}\right\|_{L^{2}(\Omega)}=0\) for a.e. \(t\in(0,T)\).
* Now, we prove that \(u,n\leq 1\). Notice that \((1-u)_{\ominus}\in L^{2}(0,T,H^{1}(\Omega))\), \(\partial_{t}u=\partial_{t}(u-1)\) and take \(\overline{u}=(u(t)-1)_{\oplus}\) for a.e. \(t\in(0,T)\) in (5a). We arrive at \(\frac{1}{2}\frac{d}{dt}\left\|(u(t)-1)_{\oplus}\right\|_{L^{2}(\Omega)}^{2}=0\), hence \(\left\|(u(t)-1)_{\oplus}\right\|_{L^{2}(\Omega)}=\left\|(u(0)-1)_{\oplus} \right\|_{L^{2}(\Omega)}=0\). Similarly, \(\frac{1}{2}\frac{d}{dt}\left\|(n(t)-1)_{\oplus}\right\|_{L^{2}(\Omega)}^{2} \leq 0\), which implies \(\left\|(n(t)-1)_{\oplus}\right\|_{L^{2}(\Omega)}\leq\left\|(n(0)-1)_{\oplus} \right\|_{L^{2}(\Omega)}=0\) for a.e. \(t\in(0,T)\).
**Proposition 2.4**.: _Let \((u,\mu_{u},n)\) be a solution of the problem (5). Then, this solution conserves the total mass of tumor cells and nutrients in the sense of_
\[\frac{d}{dt}\int_{\Omega}(u(x,t)+n(x,t))dx=0.\]
Proof.: It is enough to take \(\overline{u}=\overline{n}=1\) in (5a) and (5c) and add the resulting expressions.
**Proposition 2.5**.: _If \((u,\mu_{u},n)\) is a solution of the problem (5) with \(\partial_{t}u\in L^{2}(0,T,H^{1}(\Omega))\), then it satisfies the following energy law_
\[\frac{dE(u(t),n(t))}{dt} +C_{u}\int_{\Omega}M(u(x,t))|\nabla\mu_{u}(x,t)|^{2}dx+C_{n}\int_{\Omega}M(n(x,t))|\nabla\mu_{n}(x,t)|^{2}dx\] \[+\delta P_{0}\int_{\Omega}P(u(x,t),n(x,t))(\mu_{n}(x,t)-\mu_{u}(x,t))_{\oplus}^{2}dx=0, \tag{6}\]
_where_
\[E(u(t),n(t))\coloneqq\int_{\Omega}\left(\frac{\varepsilon^{2}}{2}|\nabla u(x,t)|^{2}+F(u(x,t))-\chi_{0}u(x,t)n(x,t)+\frac{1}{2\delta}\left(n(x,t)\right)^ {2}\right)dx. \tag{7}\]
_Therefore, the solution is energy stable in the sense_
\[\frac{d}{dt}E(u(t),n(t))\leq 0.\]
Proof.: Take \(\overline{u}=\mu_{u}(t)\), \(\overline{\mu}_{u}=\partial_{t}u(t)\), \(\overline{n}=\mu_{n}(t)\) in (5a)-(5c) and test (5d) with \(\partial_{t}n(t)\). Adding the resulting expressions we arrive at
\[\varepsilon^{2}\left(\nabla u(t),\nabla(\partial_{t}u(t))\right) +\left(F^{\prime}(u(t)),\partial_{t}u(t)\right)-\chi_{0}\left[(n(t),\partial_{t}u(t))+(u(t),\partial_{t}n(t))\right]+\frac{1}{\delta}\left(n(t),\partial_{t}n(t)\right)\] \[+C_{u}\int_{\Omega}M(u(x,t))|\nabla\mu_{u}(t)|^{2}dx+C_{n}\int_{\Omega}M(n(x,t))|\nabla\mu_{n}(t)|^{2}dx\] \[+\delta P_{0}\int_{\Omega}P(u(x,t),n(x,t))(\mu_{n}(x,t)-\mu_{u}(x,t))_{\oplus}(\mu_{n}(x,t)-\mu_{u}(x,t))dx=0.\]
Therefore, it is straightforward to check that (6) holds.
## 3 Numerical approximation
In this section we will develop a well-suited approximation of the tumor model (2) which preserves the physical properties presented in the previous section.
### 3.1 Time-discrete scheme
Regarding the time discretization, we take an equispaced partition \(0=t_{0}<t_{1}<\cdots<t_{N}=T\) of the time domain \([0,T]\) with \(\Delta t=t_{m+1}-t_{m}\) the time step. Also, given a scalar function \(v\) defined on \([0,T]\) we will denote \(v^{m}\simeq v(t_{m})\) and \(\delta_{t}v^{m+1}=(v^{m+1}-v^{m})/\Delta t\) the time-discrete derivative.
Now, we define a convex splitting of the double well potential \(F(u)\) as follows:
\[F(u)\coloneqq F_{i}(u)+F_{e}(u),\quad F_{i}(u)\coloneqq\frac{3}{8}u^{2}, \quad F_{e}(u)\coloneqq\frac{1}{4}u^{4}-\frac{1}{2}u^{3}-\frac{1}{8}u^{2}, \quad u\in[0,1],\]
where we are going to treat the convex term, \(F_{i}(u)\), implicitly and the concave term, \(F_{e}(u)\), explicitly (see, for instance, [2, 17, 25], for more details). For this, we define
\[f(u^{m+1},u^{m})\coloneqq F_{i}^{\prime}(u^{m+1})+F_{e}^{\prime}(u^{m})= \frac{1}{4}\left(3u^{m+1}+4(u^{m})^{3}-6(u^{m})^{2}-u^{m}\right).\]
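The following short sketch (for illustration only) checks numerically that the splitting above reproduces \(F\) and that \(f(u,u)=F^{\prime}(u)\), which is the consistency property of the semi-implicit term.

```python
import numpy as np

def F(u):
    """Ginzburg-Landau double-well potential F(u) = u^2 (1 - u)^2 / 4."""
    return 0.25 * u**2 * (1.0 - u)**2

def F_i(u):
    """Convex part of the splitting, treated implicitly."""
    return 0.375 * u**2

def F_e(u):
    """Concave part of the splitting (on [0, 1]), treated explicitly."""
    return 0.25 * u**4 - 0.5 * u**3 - 0.125 * u**2

def f_split(u_new, u_old):
    """f(u^{m+1}, u^m) = F_i'(u^{m+1}) + F_e'(u^m)."""
    return 0.25 * (3.0 * u_new + 4.0 * u_old**3 - 6.0 * u_old**2 - u_old)

u = np.linspace(0.0, 1.0, 11)
print(np.allclose(F(u), F_i(u) + F_e(u)))                       # the splitting reproduces F
print(np.allclose(f_split(u, u), u**3 - 1.5 * u**2 + 0.5 * u))  # f(u, u) = F'(u)
```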
We propose the following time-discrete scheme: given \((u^{m},\mu_{u}^{m},n^{m})\in H^{1}(\Omega)^{3}\) with \(u^{m},n^{m}\in[0,1]\) find \((u^{m+1},\mu_{u}^{m+1},n^{m+1})\in H^{1}(\Omega)^{3}\) such that
\[\left(\delta_{t}u^{m+1},\overline{u}\right) =-C_{u}\left(M(u^{m+1})\nabla\mu_{u}^{m+1},\nabla\overline{u}\right)\] \[\quad+\delta P_{0}\left(P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\mu_{u}^ {m+1})_{\oplus},\overline{u}\right), \forall\overline{u}\in H^{1}(\Omega), \tag{8a}\] \[\left(\mu_{u}^{m+1},\overline{\mu}_{u}\right) =\varepsilon^{2}\left(\nabla u^{m+1},\nabla\overline{\mu}_{u}\right)\] \[\quad+\left(f(u^{m+1},u^{m})-\chi_{0}n^{m+1},\overline{\mu}_{u} \right), \forall\overline{\mu}_{u}\in H^{1}(\Omega),\] (8b) \[\left(\delta_{t}n^{m+1},\overline{n}\right) =-C_{n}\left(M(n^{m+1})\nabla\mu_{n}^{m+1},\nabla\overline{n}\right)\] \[\quad-\delta P_{0}\left(P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\mu_{u}^ {m+1})_{\oplus},\overline{n}\right), \forall\overline{n}\in H^{1}(\Omega),\] (8c) \[\mu_{n}^{m+1} =\frac{1}{\delta}n^{m+1}-\chi_{0}u^{m}, \tag{8d}\]
where \(u^{0}=u_{0}\) and \(n^{0}=n_{0}\).
Notice that the proposed scheme (8) is just a variation of the backward Euler method where we have treated explicitly the concave part of the splitting of \(F(u)\) in (8b) and a part of the cross-diffusion in (8d).
The proofs of the following results are analogous to those of Propositions 2.3 and 2.4.
**Proposition 3.1**.: _Any solution \((u^{m+1},\mu_{u}^{m+1},n^{m+1})\) of the time-discrete scheme (8) satisfies that \(u^{m+1}\) and \(n^{m+1}\) are bounded in \([0,1]\)._
**Proposition 3.2**.: _Any solution \((u^{m+1},\mu_{u}^{m+1},n^{m+1})\) of the time-discrete scheme (8) conserves the total mass of tumor cells and nutrients in the sense of_
\[\delta_{t}\int_{\Omega}(u^{m+1}(x)+n^{m+1}(x))dx=0.\]
**Proposition 3.3**.: _Any solution \((u^{m+1},\mu_{u}^{m+1},n^{m+1})\) of the time-discrete scheme (8) satisfies the following discrete energy law_
\[\delta_{t}E(u^{m+1},n^{m+1}) +C_{u}\int_{\Omega}M(u^{m+1})|\nabla\mu_{u}^{m+1}|^{2}+C_{n}\int_{\Omega}M(n^{m+1})|\nabla\mu_{n}^{m+1}|^{2}\] \[+\delta P_{0}\int_{\Omega}P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\mu_{u}^{m+1})_{\oplus}^{2}\leq 0, \tag{9}\]
_where \(E(u,n)\) is defined in (7)._
_Therefore, the solution is energy stable in the sense_
\[\delta_{t}E(u^{m+1},n^{m+1})\leq 0.\]
Proof.: Take \(\overline{u}=\mu_{u}^{m+1}\), \(\overline{\mu}_{u}=\delta_{t}u^{m+1}\), \(\overline{n}=\mu_{n}^{m+1}\) in (8a)-(8c) and test (8d) by \(\delta_{t}n^{m+1}\). Adding the resulting expressions we arrive at
\[\varepsilon^{2}\left(\nabla u^{m+1},\nabla(\delta_{t}u^{m+1})\right)+\left(f(u^{m+1},u^{m}),\delta_{t}u^{m+1}\right)-\chi_{0}\left[\left(n^{m+1},\delta_{t}u^{m+1}\right)+\left(u^{m},\delta_{t}n^{m+1}\right)\right]\\ +\frac{1}{\delta}\left(n^{m+1},\delta_{t}n^{m+1}\right)+C_{u}\int_{\Omega}M(u^{m+1})|\nabla\mu_{u}^{m+1}|^{2}+C_{n}\int_{\Omega}M(n^{m+1})|\nabla\mu_{n}^{m+1}|^{2}\\ +\delta P_{0}\int_{\Omega}P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\mu_{u}^{m+1})_{\oplus}(\mu_{n}^{m+1}-\mu_{u}^{m+1})=0.\]
Now, using that
\[\left(n^{m+1},\delta_{t}u^{m+1}\right)+\left(u^{m},\delta_{t}n^{m+1}\right)= \delta_{t}\int_{\Omega}u^{m+1}n^{m+1}\]
we obtain
\[\delta_{t}E(u^{m+1},n^{m+1}) +C_{u}\int_{\Omega}M(u^{m+1})|\nabla\mu_{u}^{m+1}|^{2}+C_{n}\int_{\Omega}M(n^{m+1})|\nabla\mu_{n}^{m+1}|^{2}\] \[+\delta P_{0}\int_{\Omega}P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\mu_{u}^{m+1})_{\oplus}^{2}\] \[\leq\delta_{t}\int_{\Omega}F(u^{m+1})-\left(f(u^{m+1},u^{m}),\delta_{t}u^{m+1}\right).\]
Finally, it is straightforward to check that
\[\delta_{t}\int_{\Omega}F(u^{m+1})-\left(f(u^{m+1},u^{m}),\delta_{t}u^{m+1}\right)\leq 0\]
using the standard convex splitting technique (see [2, 17, 25]), which yields (9).
### 3.2 Fully discrete scheme
For the space discretization, we consider a shape-regular triangular mesh \(\mathcal{T}_{h}=\{K\}_{K\in\mathcal{T}_{h}}\) of size \(h\) over \(\Omega\), and we denote by \(\mathcal{E}_{h}\) the set of the edges of \(\mathcal{T}_{h}\), where \(\mathcal{E}_{h}^{\mathrm{i}}\) denotes the _interior edges_ and \(\mathcal{E}_{h}^{\mathrm{b}}\) denotes the _boundary edges_ such that \(\mathcal{E}_{h}=\mathcal{E}_{h}^{\mathrm{i}}\cup\mathcal{E}_{h}^{\mathrm{b}}\).
We set the following mesh orientation: the unit normal vector \(\mathbf{n}_{e}\) associated to an interior edge \(e\in\mathcal{E}_{h}^{\mathrm{i}}\) shared by the elements \(K,L\in\mathcal{T}_{h}\), i.e. \(e=\partial K\cap\partial L\), is exterior to \(K\) pointing to \(L\). Moreover, for the boundary edges \(e\in\mathcal{E}_{h}^{\mathrm{b}}\), the unit normal vector \(\mathbf{n}_{e}\) points outwards of the domain \(\Omega\).
In addition, we assume the following hypothesis:
**Hypothesis 1**.: The line between the barycenters of any adjacent triangles \(K\) and \(L\) is orthogonal to the interface \(e=K\cap L\in\mathcal{E}_{h}^{\mathrm{i}}\).
One example of a mesh satisfying Hypothesis 1 is plotted in Figure 1. For other examples and further insight into this property we refer the reader to [1].
We define the _average_ \(\{\!\{\cdot\}\!\}\) and the _jump_ \(\llbracket\cdot\rrbracket\) of a scalar function \(v\) on an edge \(e\in\mathcal{E}_{h}\) as follows:
\[\{\!\{v\}\!\}\coloneqq\begin{cases}\dfrac{v_{K}+v_{L}}{2}&\text{if }e\in\mathcal{E}_{h}^{\mathrm{i}}\\ v_{K}&\text{if }e\in\mathcal{E}_{h}^{\mathrm{b}}\end{cases},\qquad\llbracket v\rrbracket\coloneqq\begin{cases}v_{K}-v_{L}&\text{if }e\in\mathcal{E}_{h}^{\mathrm{i}}\\ v_{K}&\text{if }e\in\mathcal{E}_{h}^{\mathrm{b}}\end{cases}.\]
Let \(\mathbb{P}_{k}^{\mathrm{disc}}(\mathcal{T}_{h})\) and \(\mathbb{P}_{k}^{\mathrm{cont}}(\mathcal{T}_{h})\) be the spaces of discontinuous and continuous finite element functions, respectively, whose restriction to the elements \(K\) of \(\mathcal{T}_{h}\) are polynomials of degree \(k\geq 0\). Also, we define the projection \(\Pi_{0}\colon L^{1}(\Omega)\to\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\) and the regularization \(\Pi_{1}^{h}\colon L^{1}(\Omega)\to\mathbb{P}_{1}^{\mathrm{cont}}(\mathcal{T}_{h})\) of a function \(g\in L^{1}(\Omega)\) as the functions satisfying the following:
\[(g,\overline{w}) =(\Pi_{0}g,\overline{w})\,, \forall\,\overline{w} \in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h}), \tag{10}\] \[\big{(}g,\overline{\phi}\big{)} =\left(\Pi_{1}^{h}g,\overline{\phi}\right)_{h}, \forall\,\overline{\phi} \in\mathbb{P}_{1}^{\mathrm{cont}}(\mathcal{T}_{h}), \tag{11}\]
where \(\left(\cdot,\cdot\right)_{h}\) is the mass-lumping scalar product in \(\mathbb{P}_{1}^{\mathrm{cont}}(\mathcal{T}_{h})\). The operators \(\Pi_{0}\) and \(\Pi_{1}^{h}\) are well defined, using the Lax-Milgram Theorem.
For further insight into discontinuous Galerkin methods we refer the reader to [15].
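To illustrate why the mass-lumped regularization \(\Pi_{1}^{h}\) preserves both the total mass and the pointwise bounds, the following sketch works out its one-dimensional analogue on a uniform mesh, where the nodal value is simply \((g,\phi_{i})/\int_{\Omega}\phi_{i}\); this is only an illustration of the lumping mechanism, not the two-dimensional implementation on \(\mathcal{T}_{h}\).

```python
import numpy as np

def lumped_P1_projection_1d(g_cells):
    """1D analogue of Pi_1^h on a uniform mesh: project a piecewise-constant (P0)
    function with cell values g_cells onto continuous P1 via mass lumping,
    i.e. nodal value_i = (g, phi_i) / int(phi_i)."""
    vals = np.empty(len(g_cells) + 1)
    vals[0], vals[-1] = g_cells[0], g_cells[-1]        # boundary nodes see one cell
    vals[1:-1] = 0.5 * (g_cells[:-1] + g_cells[1:])    # interior nodes: average of neighbors
    return vals

g = np.array([0.0, 1.0, 0.2, 0.8])          # P0 values in [0, 1]
h = 0.25                                    # uniform cell size
v = lumped_P1_projection_1d(g)
print(v.min() >= 0.0 and v.max() <= 1.0)    # the pointwise bounds are preserved
m = np.full(len(v), h); m[[0, -1]] = h / 2  # lumped P1 masses int(phi_i)
print(np.isclose(np.dot(m, v), h * g.sum()))   # the total mass is conserved
```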
We propose the following fully discrete scheme for the model (2): given \(u^{m},n^{m}\in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\) with \(u^{m},n^{m}\in[0,1]\) and \(\mu_{u}^{m}\in\mathbb{P}_{1}^{\mathrm{cont}}(\mathcal{T}_{h})\), find \(u^{m+1},n^{m+1}\in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\) and \(\mu_{u}^{m+1}\in\mathbb{P}_{1}^{\mathrm{cont}}(\mathcal{T}_{h})\), such that
\[\big{(}\delta_{t}u^{m+1},\overline{u}\big{)} =-C_{u}a_{h}^{\mathrm{upw}}(\Pi_{0}\mu_{u}^{m+1};M(u^{m+1}), \overline{u})\] \[\quad+\delta P_{0}\left(P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\Pi_{0} \mu_{u}^{m+1})_{\oplus},\overline{u}\right), \forall\overline{u}\in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h}), \tag{12a}\] \[\big{(}\mu_{u}^{m+1},\overline{\mu}_{u}\big{)}_{h} =\varepsilon^{2}\left(\nabla\Pi_{1}^{h}u^{m+1},\nabla\overline{ \mu}_{u}\right)+\left(f(\Pi_{1}^{h}u^{m+1},\Pi_{1}^{h}u^{m}),\overline{\mu}_{ u}\right)\] \[\quad-\chi_{0}\left(n^{m+1},\overline{\mu}_{u}\right), \forall\overline{\mu}_{u}\in\mathbb{P}_{1}^{\mathrm{cont}}( \mathcal{T}_{h}),\] (12b) \[\big{(}\delta_{t}n^{m+1},\overline{n}\big{)} =-C_{n}a_{h}^{\mathrm{upw}}(\mu_{n}^{m+1};M(n^{m+1}),\overline{n})\] \[\quad-\delta P_{0}\left(P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\Pi_{0} \mu_{u}^{m+1})_{\oplus},\overline{n}\right), \forall\overline{n}\in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h}),\] (12c) \[\mu_{n}^{m+1} =\frac{1}{\delta}n^{m+1}-\chi_{0}\Pi_{0}(\Pi_{1}^{h}u^{m}), \tag{12d}\]
where \(u^{0}=u_{0}\), \(n^{0}=n_{0}\) and \(a_{h}^{\mathrm{upw}}(\cdot;\cdot,\cdot)\) is an upwind form defined in Subsection 3.2.1 below. To ease the notation, we denote the solution of this fully discrete scheme in the same way as that of the time-discrete scheme (8). From now on we will refer to the solution of the fully discrete scheme unless otherwise specified.
Notice that we have introduced the regularization of \(u^{m+1}\), \(\Pi_{1}^{h}u^{m+1}\), to preserve the diffusion term in (12b). In fact, this regularized variable will be regarded as our approximation of the tumor cells volume fraction as, according to the results in Subsection 3.2.2, it preserves the maximum principle and satisfies a discrete energy law. Moreover, in order to preserve the maximum principle and the dissipation of the energy, we consider mass lumping in the term \(\left(\mu_{u}^{m+1},\overline{\mu}_{u}\right)_{h}\).
**Remark 3.4**.: _The homogeneous Neumann boundary conditions on \(u^{m}\) and \(n^{m}\) have been implicitly imposed in the definition of \(a_{h}^{\text{upw}}(\cdot;\cdot,\cdot)\), see (13). In addition, the boundary condition \(\nabla\Pi_{1}^{h}u^{m}\cdot\mathbf{n}=0\) on \(\partial\Omega\times(0,T)\) is imposed implicitly by the term \(\left(\nabla\Pi_{1}^{h}u^{m},\nabla\overline{\mu}\right)\) in (12b)._
**Remark 3.5**.: _The scheme (12) is nonlinear, so we will have to use an iterative procedure, such as Newton's method, to approximate its solution._
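For illustration, the outer Newton loop mentioned in Remark 3.5 can be sketched as follows. The `residual` and `jacobian` callbacks are hypothetical placeholders for the algebraic system produced by the assembly of (12) at one time step; in practice one would use a sparse solver and an assembled Jacobian.

```python
import numpy as np

def newton_solve(residual, jacobian, x0, tol=1e-10, max_iter=30):
    """Newton iteration for R(x) = 0, the nonlinear algebraic system obtained after
    assembling the coupled scheme (12) at one time step. `residual` and `jacobian`
    are hypothetical callbacks provided by the FE/DG assembly."""
    x = np.array(x0, dtype=float)
    for it in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x, it
        x = x + np.linalg.solve(jacobian(x), -r)   # dense solve; sparse in practice
    raise RuntimeError("Newton iteration did not converge")

# toy usage on a 1x1 "system": R(x) = x^3 - 2, whose root is 2^(1/3)
x, iters = newton_solve(lambda x: x**3 - 2.0,
                        lambda x: np.diag(3.0 * x**2),
                        x0=[1.0])
print(x, iters)
```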
#### 3.2.1 Definition of \(a_{h}^{\text{upw}}(\cdot;\cdot,\cdot)\)
First of all, following the ideas in [2], in order to preserve the maximum principle using an upwind approximation, we will split the mobility function into its increasing and decreasing parts as follows:
\[M^{\uparrow}(v)=\begin{cases}M(v),&v\leq v^{*},\\ M(v^{*}),&v>v^{*},\end{cases}\quad M^{\downarrow}(v)=\begin{cases}0,&v\leq v^{ *},\\ M(v)-M(v^{*}),&v>v^{*},\end{cases}\]
where \(v^{*}\in\mathbb{R}\) is the point where the maximum of \(M(v)\) is attained, which can be obtained by simple algebraic computations. Note that \(M(v)=M^{\uparrow}(v)+M^{\downarrow}(v)\).
Now, we define the following upwind form for \(v,\overline{v},\mu\in\mathbb{P}_{0}(\mathcal{T}_{h})\):
\[a_{h}^{\text{upw}}(\mu;M(v),\overline{v}):=\\ \sum_{e\in\mathcal{E}_{h}^{i},e=K\cap L}\int_{e}\left(\left(-\nabla_{\mathbf{n}_{e}}^{0}\mu\right)_{\oplus}\left(M^{\uparrow}(v_{K})+M^{\downarrow}(v_{L})\right)_{\oplus}-(-\nabla_{\mathbf{n}_{e}}^{0}\mu)_{\ominus}\left(M^{\uparrow}(v_{L})+M^{\downarrow}(v_{K})\right)_{\oplus}\right)\llbracket\overline{v}\rrbracket \tag{13}\]
with
\[\nabla_{\mathbf{n}_{e}}^{0}\mu=\frac{-\llbracket\mu\rrbracket}{\mathcal{D}_{e}(\mathcal{T}_{h})}=\frac{\mu_{L}-\mu_{K}}{\mathcal{D}_{e}(\mathcal{T}_{h})}, \tag{14}\]
a reconstruction of the normal gradient using \(\mathbb{P}_{0}(\mathcal{T}_{h})\) functions for every \(e\in\mathcal{E}_{h}^{i}\) with \(e=K\cap L\) (see [1] for more details). We have denoted by \(\mathcal{D}_{e}(\mathcal{T}_{h})\) the distance between the barycenters of the triangles \(K\) and \(L\) of the mesh \(\mathcal{T}_{h}\) that share \(e\in\mathcal{E}_{h}^{i}\). This way, we can rewrite (13) as
\[a_{h}^{\text{upw}}(\mu;M(v),\overline{v}):=\\ \sum_{e\in\mathcal{E}_{h}^{i},e=K\cap L}\frac{1}{\mathcal{D}_{e}(\mathcal{T}_{h})}\int_{e}\left(\llbracket\mu\rrbracket_{\oplus}\left(M^{\uparrow}(v_{K})+M^{\downarrow}(v_{L})\right)_{\oplus}-\llbracket\mu\rrbracket_{\ominus}\left(M^{\uparrow}(v_{L})+M^{\downarrow}(v_{K})\right)_{\oplus}\right)\llbracket\overline{v}\rrbracket\,. \tag{15}\]
**Remark 3.6**.: _The form \(a_{h}^{\text{upw}}(\mu;M(v),\overline{v})\) is an upwind approximation of the convective term_
\[-\left(M(v)\nabla\mu,\nabla\overline{v}\right),\quad\overline{v}\in H^{1}( \Omega),\]
_taking into consideration that the orientation of the flux is determined by both the orientation of \(\nabla\mu\) and the sign of \(M^{\prime}(v)\) as follows:_
\[\nabla\cdot(M(v)\nabla\mu)=M^{\prime}(v)\nabla v\nabla\mu+M(v)\Delta\mu.\]
_In order to develop this approximation we have followed the ideas of [1, 2, 28] and we have considered an approximation of \(M(v)\) in (13) by means of its increasing and decreasing parts, \(M^{\uparrow}(v)\) and \(M^{\downarrow}(v)\), whose derivatives are positive and negative, respectively. However, unlike in [2], we have also truncated the mobility \(M(v)\) to avoid negative approximations of \(M(v)\) that may lead to a loss of energy stability._
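To make the upwind choice in (13)-(15) explicit, the following sketch computes the contribution of a single interior edge \(e=K\cap L\) for the symmetric mobility \(M=h_{1,1}\), working with plain per-element scalar values; it is only an illustration of the flux selection, not the actual DG assembly over \(\mathcal{E}_{h}^{i}\).

```python
def pos(x):
    return max(x, 0.0)

def neg(x):
    return max(-x, 0.0)

# mobility M = h_{1,1}(v) = 4 v_+ (1 - v)_+ with maximizer v* = 1/2,
# and its truncated increasing/decreasing parts M_up, M_down
def M(v):
    return 4.0 * pos(v) * pos(1.0 - v)

def M_up(v):
    return M(min(v, 0.5))

def M_down(v):
    return 0.0 if v <= 0.5 else M(v) - M(0.5)

def upwind_edge(mu_K, mu_L, v_K, v_L, vbar_K, vbar_L, edge_length, D_e):
    """Contribution of one interior edge e shared by elements K and L to
    a_h^upw(mu; M(v), vbar) as written in (15): the sign of the jump of mu selects
    the upwind element, and the mobility enters through positive parts of M_up/M_down."""
    jump_mu = mu_K - mu_L          # [[mu]]
    jump_vbar = vbar_K - vbar_L    # [[vbar]]
    flux = (pos(jump_mu) * pos(M_up(v_K) + M_down(v_L))
            - neg(jump_mu) * pos(M_up(v_L) + M_down(v_K)))
    return (edge_length / D_e) * flux * jump_vbar

print(upwind_edge(mu_K=1.0, mu_L=0.3, v_K=0.9, v_L=0.1,
                  vbar_K=1.0, vbar_L=0.0, edge_length=0.1, D_e=0.08))
```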
#### 3.2.2 Properties of the fully discrete scheme
**Proposition 3.7**.: _The scheme (12) conserves the total mass of cells and nutrients in the following sense: for all \(m\geq 0\),_
\[\int_{\Omega}(u^{m+1}+n^{m+1})=\int_{\Omega}(u^{m}+n^{m})\quad\text{and}\quad \int_{\Omega}(\Pi_{1}^{h}u^{m+1}+n^{m+1})=\int_{\Omega}(\Pi_{1}^{h}u^{m}+n^{m}).\]
Proof.: We just need to take \(\overline{u}=1\) in (12a) and \(\overline{n}=1\) in (12c) and add both expressions to obtain:
\[\int_{\Omega}(u^{m+1}+n^{m+1})=\int_{\Omega}(u^{m}+n^{m}).\]
Moreover, due to the definition of the regularization \(\Pi_{1}^{h}\), we have that \(\int_{\Omega}u^{m+1}=\int_{\Omega}\Pi_{1}^{h}u^{m+1}\) and \(\int_{\Omega}u^{m}=\int_{\Omega}\Pi_{1}^{h}u^{m}\), which yields
\[\int_{\Omega}(\Pi_{1}^{h}u^{m+1}+n^{m+1})=\int_{\Omega}(\Pi_{1}^{h}u^{m}+n^{m }).\]
**Theorem 3.8** (DG scheme (12) preserves the maximum principle).: _Let \((u^{m+1},\mu_{u}^{m+1},n^{m+1})\) be a solution of (12), then \(u^{m+1},n^{m+1}\in[0,1]\) provided \(u^{m},n^{m}\in[0,1]\)._
Proof.: Firstly, we prove that \(u^{m+1},n^{m+1}\geq 0\).
To prove that \(u^{m+1}\geq 0\) we may take the following \(\mathbb{P}_{0}^{\text{disc}}(\mathcal{T}_{h})\) test function
\[\overline{u}^{*}=\begin{cases}(u_{K^{*}}^{m+1})_{\ominus}&\text{in }K^{*}\\ 0&\text{out of }K^{*}\end{cases},\]
where \(K^{*}\) is an element of \(\mathcal{T}_{h}\) such that \(u_{K^{*}}^{m+1}=\min_{K\in\mathcal{T}_{h}}u_{K}^{m+1}\). Then, by definition of \(P(u,n)\) in (4),
\[\delta P_{0}\left(P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\mu_{u}^{m+1})_{\oplus}, \overline{u}^{*}\right)=0,\]
and equation (12a) becomes
\[|K^{*}|\delta_{t}u_{K^{*}}^{m+1}(u_{K^{*}}^{m+1})_{\ominus}=-C_{u}a_{h}^{\text{upw}}(\Pi_{0}\mu_{u}^{m+1};M(u^{m+1}),\overline{u}^{*}). \tag{16}\]
Now, since \(u_{L}^{m+1}\geq u_{K^{*}}^{m+1}\) we can assure that
\[M^{\uparrow}(u_{L}^{m+1})\geq M^{\uparrow}(u_{K^{*}}^{m+1})\quad\text{and} \quad M^{\downarrow}(u_{L}^{m+1})\leq M^{\downarrow}(u_{K^{*}}^{m+1}).\]
Hence, using that the positive part is an increasing function, we obtain
\[a_{h}^{\text{upw}}(\Pi_{0}\mu_{u}^{m+1};M(u^{m+1}),\overline{u}^{*})\leq 0,\]
which yields \(|K^{*}|\delta_{t}u_{K^{*}}^{m+1}(u_{K^{*}}^{m+1})_{\ominus}\geq 0\).
Consequently,
\[0\leq|K^{*}|(\delta_{t}u_{K^{*}}^{m+1})(u_{K^{*}}^{m+1})_{\ominus}=-\frac{|K^{* }|}{\Delta t}\left((u_{K^{*}}^{m+1})_{\ominus}^{2}+u_{K^{*}}^{m}(u_{K^{*}}^{m+1 })_{\ominus}\right)\leq 0,\]
which implies, since \(u_{K^{*}}^{m}\geq 0\), that \((u_{K^{*}}^{m+1})_{\ominus}=0\). Hence \(u^{m+1}\geq 0\).
Similarly, taking the following \(\mathbb{P}_{0}^{\text{disc}}(\mathcal{T}_{h})\) test function
\[\overline{n}^{*}=\begin{cases}(n_{K^{*}}^{m+1})_{\ominus}&\text{in }K^{*}\\ 0&\text{out of }K^{*}\end{cases}\]
in (12c), where \(K^{*}\) is an element of \(\mathcal{T}_{h}\) such that \(n_{K^{*}}^{m+1}=\min_{K\in\mathcal{T}_{h}}n_{K}^{m+1}\) we get that \(n^{m+1}\geq 0\).
Secondly, we prove that \(u^{m+1},n^{m+1}\leq 1\).
To prove that \(u^{m+1}\leq 1\), taking the following test function in (12a),
\[\overline{u}^{*}=\begin{cases}(u_{K^{*}}^{m+1}-1)_{\oplus}&\text{in }K^{*}\\ 0&\text{out of }K^{*}\end{cases},\]
where \(K^{*}\) is an element of \(\mathcal{T}_{h}\) such that \(u_{K^{*}}^{m+1}=\max_{K\in\mathcal{T}_{h}}u_{K}^{m+1}\), and using arguments similar to those above, we arrive at
\[|K^{*}|\delta_{t}u_{K^{*}}^{m+1}(u_{K^{*}}^{m+1}-1)_{\oplus}\leq 0.\]
Therefore, it is satisfied that
\[0 \geq|K^{*}|\delta_{t}u_{K^{*}}^{m+1}(u_{K^{*}}^{m+1}-1)_{\oplus}= \frac{|K^{*}|}{\Delta t}\left((u_{K^{*}}^{m+1}-1)+(1-u_{K^{*}}^{m})\right)(u_ {K^{*}}^{m+1}-1)_{\oplus}\] \[=\frac{|K^{*}|}{\Delta t}\left((u_{K^{*}}^{m+1}-1)_{\oplus}^{2}+ (1-u_{K^{*}}^{m})(u_{K^{*}}^{m+1}-1)_{\oplus}\right)\geq 0,\]
which yields \((u_{K^{*}}^{m+1}-1)_{\oplus}=0\) and, therefore, \(u^{m+1}\leq 1\).
Finally, taking the test function
\[\overline{n}^{*}=\begin{cases}(n_{K^{*}}^{m+1}-1)_{\oplus}&\text{in }K^{*}\\ 0&\text{out of }K^{*}\end{cases}\]
in (12c), where \(K^{*}\) is an element of \(\mathcal{T}_{h}\) such that \(n_{K^{*}}^{m+1}=\max_{K\in\mathcal{T}_{h}}n_{K}^{m+1}\) we obtain, similarly, that \(n^{m+1}\leq 1\).
The following result is a direct consequence of the previous Theorem 3.8 and the definition of the regularization \(\Pi_{1}^{h}\).
**Corollary 3.9**.: _The regularized approximation of the phase-field variable satisfies \(\Pi_{1}^{h}u^{m+1}\in[0,1]\) provided \(u^{m+1}\in[0,1]\)._
Now, we focus on the existence of solutions of the scheme (12). For this, we consider the following well-known result.
**Theorem 3.10** (Leray-Schauder fixed point theorem).: _Let \(\mathcal{X}\) be a Banach space and let \(T\colon\mathcal{X}\longrightarrow\mathcal{X}\) be a continuous and compact operator. If the set_
\[\{x\in\mathcal{X}\colon x=\alpha\,T(x)\quad\text{for some }0\leq\alpha\leq 1\}\]
_is bounded (with respect to \(\alpha\)), then \(T\) has at least one fixed point._
**Theorem 3.11** (Existence).: _There is at least one solution of the scheme (12)._
Proof.: Given two functions \(z_{u},z_{n}\in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\) with \(0\leq z_{u},z_{n}\leq 1\), we define the map
\[T\colon\mathbb{P}_{0}^{\mathrm{disc}}\times\mathbb{P}_{1}^{\mathrm{cont}} \times\mathbb{P}_{0}^{\mathrm{disc}}\longrightarrow\mathbb{P}_{0}^{\mathrm{ disc}}\times\mathbb{P}_{1}^{\mathrm{cont}}\times\mathbb{P}_{0}^{\mathrm{disc}}\]
such that
\[T(\widehat{u},\widehat{\mu}_{u},\widehat{n})=(u,\mu_{u},n)\in\mathbb{P}_{0}^{ \mathrm{disc}}(\mathcal{T}_{h})\times\mathbb{P}_{1}^{\mathrm{cont}}( \mathcal{T}_{h})\times\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\]
is the unique solution of the linear (and decoupled, computing first \(\mu_{n}\), next \(n\), then \(u\) and finally \(\mu_{u}\)) scheme:
\[\frac{1}{\Delta t}\,(u-z_{u},\overline{u}) =-C_{u}a_{h}^{\mathrm{upw}}(\Pi_{0}\widehat{\mu};M(\widehat{u}), \overline{u})+\delta P_{0}\left(P(\widehat{u},\widehat{n})(\mu_{n}-\Pi_{0} \widehat{\mu}_{u})_{\oplus},\overline{u}\right), \forall\overline{u}\in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h}), \tag{17a}\] \[(\mu_{u},\overline{\mu}_{u})_{h} =\varepsilon^{2}\left(\nabla\Pi_{1}^{h}u,\nabla\overline{\mu}_{u} \right)+\left(f(\Pi_{1}^{h}u,\Pi_{1}^{h}z_{u}),\overline{\mu}_{u}\right)- \chi_{0}\left(n,\overline{\mu}_{u}\right), \forall\overline{\mu}_{u}\in\mathbb{P}_{1}^{\mathrm{cont}}( \mathcal{T}_{h}),\] (17b) \[\frac{1}{\Delta t}\,(n-z_{n},\overline{n}) =-C_{n}a_{h}^{\mathrm{upw}}(\mu_{n};M(\widehat{n}),\overline{n} )-\delta P_{0}\left(P(\widehat{u},\widehat{n})(\mu_{n}-\Pi_{0}\widehat{\mu}_{ u})_{\oplus},\overline{n}\right), \forall\overline{n}\in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h}),\] (17c) \[\mu_{n} =\frac{1}{\delta}\widehat{n}-\chi_{0}\Pi_{0}(\Pi_{h}^{1}z_{u}), \tag{17d}\]
It is straightforward to check that, for any given \((\widehat{u},\widehat{\mu}_{u},\widehat{n})\in\mathbb{P}_{0}^{\mathrm{disc}} (\mathcal{T}_{h})\times\mathbb{P}_{1}^{\mathrm{cont}}(\mathcal{T}_{h})\times \mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\), there is a unique solution \((u,\mu_{u},n)\in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\times\mathbb{ P}_{1}^{\mathrm{cont}}(\mathcal{T}_{h})\times\mathbb{P}_{0}^{\mathrm{disc}}( \mathcal{T}_{h})\). Therefore, \(T\) is well defined.
Now we will prove that the operator \(T\) satisfies the hypotheses of the Leray-Schauder fixed point theorem 3.10.
First, we check that \(T\) is continuous. Let \(\{(\widehat{u}_{j},\widehat{\mu}_{u_{j}},\widehat{n}_{j})\}_{j\in\mathbb{N}}\subset\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\times\mathbb{P}_{1}^{\mathrm{cont}}(\mathcal{T}_{h})\times\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\) be a sequence such that \(\lim_{j\rightarrow\infty}(\widehat{u}_{j},\widehat{\mu}_{u_{j}},\widehat{n}_{j})=(\widehat{u},\widehat{\mu}_{u},\widehat{n})\). Taking into account that all norms are equivalent in \(\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\) since it is a finite-dimensional space, the convergences \(\widehat{u}_{j}\rightarrow\widehat{u}\) and \(\widehat{n}_{j}\rightarrow\widehat{n}\) are equivalent to the elementwise convergences \((\widehat{u}_{j})_{K}\rightarrow\widehat{u}_{K}\) and \((\widehat{n}_{j})_{K}\rightarrow\widehat{n}_{K}\) for every \(K\in\mathcal{T}_{h}\) (this may be seen, for instance, by using the norm \(\left\|\cdot\right\|_{L^{\infty}(\Omega)}\)). Moreover, since \(\Pi_{0}\) is continuous and \(\Pi_{0}\widehat{\mu}_{u}\in\mathbb{P}_{0}^{\mathrm{disc}}(\mathcal{T}_{h})\), the convergence \(\Pi_{0}\widehat{\mu}_{u_{j}}\rightarrow\Pi_{0}\widehat{\mu}_{u}\) is also equivalent to the elementwise convergence \((\Pi_{0}\widehat{\mu}_{u_{j}})_{K}\rightarrow(\Pi_{0}\widehat{\mu}_{u})_{K}\) for every \(K\in\mathcal{T}_{h}\). Finally, taking limits when \(j\rightarrow\infty\) in (17) (with
\(\widehat{u}\coloneqq\widehat{u}_{j}\), \(\widehat{\mu}_{u}\coloneqq\widehat{\mu}_{u_{j}}\), \(\widehat{n}\coloneqq\widehat{n}_{j}\) and \((u_{j},\mu_{u_{j}},n_{j})\coloneqq T(\widehat{u}_{j},\widehat{\mu}_{u_{j}},\widehat{n}_{j})\)), and using the notion of elementwise convergence, we get that
\[\lim_{j\to\infty}T(\widehat{u}_{j},\widehat{\mu}_{u_{j}},\widehat{n}_{j})=T(\widehat{u},\widehat{\mu}_{u},\widehat{n})=T\left(\lim_{j\to\infty}(\widehat{u}_{j},\widehat{\mu}_{u_{j}},\widehat{n}_{j})\right),\]
hence \(T\) is continuous. Therefore, \(T\) is also compact since \(\mathbb{P}^{\rm disc}_{0}(\mathcal{T}_{h})\) and \(\mathbb{P}^{\rm cont}_{1}(\mathcal{T}_{h})\) have finite dimension.
Finally, let us prove that the set
\[B=\{(u,\mu_{u},n)\in\mathbb{P}^{\rm disc}_{0}(\mathcal{T}_{h})\times\mathbb{P }^{\rm cont}_{1}(\mathcal{T}_{h})\times\mathbb{P}^{\rm disc}_{0}(\mathcal{T}_{ h})\colon(u,\mu_{u},n)=\alpha T(u,\mu_{u},n)\text{ for some }0\leq\alpha\leq 1\}\]
is bounded (independent of \(\alpha\)). The case \(\alpha=0\) is trivial so we will assume that \(\alpha\in(0,1]\).
If \((u,\mu_{u},n)\in B\), then \((u,\mu_{u},n)\in\mathbb{P}^{\rm disc}_{0}(\mathcal{T}_{h})\times\mathbb{P}^{ \rm cont}_{1}(\mathcal{T}_{h})\times\mathbb{P}^{\rm disc}_{0}(\mathcal{T}_{h})\) is the solution of
\[\frac{1}{\Delta t}\,(u-\alpha z_{u},\overline{u}) =-\alpha C_{u}\,a_{h}^{\rm upw}(\Pi_{0}\mu_{u};M(u),\overline{u} )+\alpha\,\delta P_{0}\left(P(u,n)(\mu_{n}-\Pi_{0}\mu_{u})_{\oplus},\overline {u}\right), \forall\overline{u}\in\mathbb{P}^{\rm disc}_{0}(\mathcal{T}_{h}), \tag{18}\] \[(\mu_{u},\overline{\mu}_{u})_{h} =\varepsilon^{2}\left(\nabla\Pi_{1}^{h}u,\nabla\overline{\mu}_{u }\right)+\left(f(\Pi_{1}^{h}u,\Pi_{1}^{h}z_{u}),\overline{\mu}_{u}\right)- \chi_{0}\,(n,\overline{\mu}_{u})\,, \forall\overline{\mu}_{u}\in\mathbb{P}^{\rm cont}_{1}(\mathcal{T}_{h}),\] (19) \[\frac{1}{\Delta t}\,(n-\alpha z_{n},\overline{n}) =-\alpha C_{n}\,a_{h}^{\rm upw}(\mu_{n};M(n),\overline{n})- \alpha\,\delta P_{0}\left(P(u,n)(\mu_{n}-\Pi_{0}\mu_{u})_{\oplus},\overline{n} \right), \forall\overline{n}\in\mathbb{P}^{\rm disc}_{0}(\mathcal{T}_{h})\] (20) \[\mu_{n} =\frac{1}{\delta}n-\chi_{0}\Pi_{0}(\Pi_{h}^{1}z_{u}). \tag{21}\]
Now, testing (18) by \(\overline{u}=1\) and (20) by \(\overline{n}=1\), we obtain
\[\int_{\Omega}(u+n)=\alpha\int_{\Omega}(z_{u}+z_{n}).\]
Moreover, since \(0\leq z_{u},z_{n}\leq 1\), it can be proved that \(0\leq u,n\leq 1\) using the same arguments as in Theorem 3.8. Therefore, we arrive at
\[\|u\|_{L^{1}(\Omega)}+\|n\|_{L^{1}(\Omega)}\leq\|z_{u}\|_{L^{1}(\Omega)}+\|z_ {n}\|_{L^{1}(\Omega)}\,,\]
thus,
\[\|u\|_{L^{1}(\Omega)}\,,\|n\|_{L^{1}(\Omega)}\leq\|z_{u}\|_{L^{1}(\Omega)}+\|z_ {n}\|_{L^{1}(\Omega)}\,.\]
Also, \(\Pi_{1}^{h}u\in\mathbb{P}^{\rm cont}_{1}(\mathcal{T}_{h})\) is the solution of the equation
\[\left(\Pi_{1}^{h}u,\overline{\phi}\right)_{h} =\left(u,\overline{\phi}\right), \forall\overline{\phi}\in\mathbb{P}^{\rm cont}_{1}(\mathcal{T}_{h}). \tag{22}\]
Hence, \(0\leq u\leq 1\) implies \(0\leq\Pi_{1}^{h}u\leq 1\), and taking \(\overline{\phi}=1\) we arrive at
\[\left\|\Pi_{1}^{h}u\right\|_{L^{1}(\Omega)}=\|u\|_{L^{1}(\Omega)}\leq\|z_{u}\| _{L^{1}(\Omega)}+\|z_{n}\|_{L^{1}(\Omega)}\,.\]
Now, we will check that \(\mu_{u}\) is bounded. Testing (19) with \(\overline{\mu}_{u}=\mu_{u}\) we obtain that
\[\left\|\mu_{u}\right\|_{L^{2}(\Omega)}^{2} \leq\varepsilon^{2}\left\|\nabla\Pi_{1}^{h}u\right\|_{(L^{2}(\Omega ))^{d}}\left\|\nabla\mu_{u}\right\|_{(L^{2}(\Omega))^{d}}+\left\|f(\Pi_{1}^{h}u, \Pi_{1}^{h}z_{u})\right\|_{L^{2}(\Omega)}\left\|\mu_{u}\right\|_{L^{2}(\Omega) }+\left\|n\right\|_{L^{2}(\Omega)}\left\|\mu_{u}\right\|_{L^{2}(\Omega)}\] \[\leq\varepsilon^{2}\left\|\Pi_{1}^{h}u\right\|_{H^{1}(\Omega)} \left\|\mu_{u}\right\|_{H^{1}(\Omega)}+\left\|f(\Pi_{1}^{h}u,\Pi_{1}^{h}z_{u} )\right\|_{L^{2}(\Omega)}\left\|\mu_{u}\right\|_{L^{2}(\Omega)}+\left\|n \right\|_{L^{2}(\Omega)}\left\|\mu_{u}\right\|_{L^{2}(\Omega)}.\]
The norms are equivalent in the finite-dimensional space \(\mathbb{P}_{1}^{\text{cont}}(\mathcal{T}_{h})\), therefore, there are \(K_{1},K_{2}\geq 0\) such that
\[\left\|\mu_{u}\right\|_{L^{2}(\Omega)}\leq\varepsilon^{2}K_{1}\left\|\Pi_{1}^{ h}u\right\|_{L^{1}(\Omega)}+\left\|f(\Pi_{1}^{h}u,\Pi_{1}^{h}z_{u})\right\|_{L^{2} (\Omega)}+K_{2}\left\|n\right\|_{L^{1}(\Omega)}.\]
Consequently, since \(\left\|f(\Pi_{1}^{h}u,\Pi_{1}^{h}z_{u})\right\|_{L^{2}(\Omega)}\) is bounded due to \(0\leq\Pi_{1}^{h}u,\Pi_{1}^{h}z_{u}\leq 1\) and \(\left\|\Pi_{1}^{h}u\right\|_{L^{1}(\Omega)}\) and \(\left\|n\right\|_{L^{1}(\Omega)}\) are also bounded, we conclude that \(\left\|\mu_{u}\right\|_{L^{2}(\Omega)}\) is bounded.
Since \(\mathbb{P}_{0}^{\text{disc}}(\mathcal{T}_{h})\) and \(\mathbb{P}_{1}^{\text{cont}}(\mathcal{T}_{h})\) are finite-dimensional spaces where all the norms are equivalent, we have proved that \(B\) is bounded.
Thus, using the Leray-Schauder fixed point theorem 3.10, there is a solution \((u,\mu_{u},n)\) of the scheme (12).
**Theorem 3.12**.: _Any solution of the scheme (12) satisfies the following **discrete energy law**_
\[\delta_{t}E(\Pi_{1}^{h}u^{m+1},n^{m+1}) +C_{u}a_{h}^{\text{upw}}(\Pi_{0}\mu_{u}^{m+1};M(u^{m+1}),\Pi_{0}\mu_{u}^{m+1})+C_{n}a_{h}^{\text{upw}}(\mu_{n}^{m+1};M(n^{m+1}),\mu_{n}^{m+1})\] \[+\frac{\Delta t\,\varepsilon^{2}}{2}\int_{\Omega}|\delta_{t}\nabla\Pi_{1}^{h}u^{m+1}|^{2}+\frac{\Delta t}{2\delta}\int_{\Omega}|\delta_{t}n^{m+1}|^{2}\] \[+\delta P_{0}\int_{\Omega}P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\Pi_{0}\mu_{u}^{m+1})_{\oplus}^{2}\leq 0, \tag{23}\]
_where \(E(\Pi_{1}^{h}u,n)\) is defined in (7)._
Proof.: Observe that, by (10)-(11),
\[\left(\delta_{t}\Pi_{1}^{h}u^{m+1},\mu_{u}^{m+1}\right) =\left(\delta_{t}u^{m+1},\mu_{u}^{m+1}\right),\] \[\left(\delta_{t}u^{m+1},\mu_{u}^{m+1}\right) =\left(\delta_{t}u^{m+1},\Pi_{0}\mu_{u}^{m+1}\right).\]
Consequently
\[(\delta_{t}u^{m+1},\Pi_{0}\mu_{u}^{m+1})=(\delta_{t}\Pi_{1}^{h}u^{m+1},\mu_{u} ^{m+1}).\]
Hence, taking \(\overline{u}=\Pi_{0}\mu_{u}^{m+1}\), \(\overline{\mu}_{u}=\delta_{t}\Pi_{1}^{h}u^{m+1}\), \(\overline{n}=\mu_{n}^{m+1}\) in (12a)-(12c) and testing (12d) by
\(\delta_{t}n^{m+1}\) we arrive at
\[\left(\delta_{t}\Pi_{1}^{h}u^{m+1},\Pi_{0}\mu_{u}^{m+1}\right) +C_{u}a_{h}^{\text{upw}}(\Pi_{0}\mu_{u}^{m+1};M(u^{m+1}),\Pi_{0}\mu _{u}^{m+1})\] \[=\delta P_{0}\left(P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\Pi_{0}\mu_{u} ^{m+1})_{\oplus},\Pi_{0}\mu_{u}^{m+1}\right), \tag{24a}\] \[\left(\mu_{u}^{m+1},\delta_{t}\Pi_{1}^{h}u^{m+1}\right) =\varepsilon^{2}\left(\nabla\Pi_{1}^{h}u^{m+1},\delta_{t}\nabla\Pi _{1}^{h}u^{m+1}\right)+\left(f(\Pi_{1}^{h}u^{m+1},\Pi_{1}^{h}u^{m}),\delta_{t} \Pi_{1}^{h}u^{m+1}\right)\] \[\quad-\chi_{0}\left(n^{m+1},\delta_{t}\Pi_{1}^{h}u^{m+1}\right),\] (24b) \[\left(\delta_{t}n^{m+1},\mu_{n}\right) +C_{n}a_{h}^{\text{upw}}(\mu_{n}^{m+1};M(n^{m+1}),\mu_{n}^{m+1})\] \[=-\delta P_{0}\left(P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\Pi_{0}\mu_ {u}^{m+1})_{\oplus},\mu_{n}^{m+1}\right),\] (24c) \[\left(\mu_{n}^{m+1},\delta_{t}n^{m+1}\right) =\frac{1}{\delta}\left(n^{m+1},\delta_{t}n^{m+1}\right)-\chi_{0} \left(\Pi_{1}^{h}u^{m},\delta_{t}n^{m+1}\right), \tag{24d}\]
By adding (24a)-(24d),
\[C_{u}a_{h}^{\text{upw}}(\Pi_{0}\mu_{u}^{m+1}; M(u^{m+1}),\Pi_{0}\mu_{u}^{m+1})+C_{n}a_{h}^{\text{upw}}(\mu_{n}^{m+1};M (n^{m+1}),\mu_{n}^{m+1})\] \[+\varepsilon^{2}\left(\nabla\Pi_{1}^{h}u^{m+1},\delta_{t}\nabla \Pi_{1}^{h}u^{m+1}\right)+\left(f(\Pi_{1}^{h}u^{m+1},\Pi_{1}^{h}u^{m}),\delta_ {t}\Pi_{1}^{h}u^{m+1}\right)\] \[+\delta P_{0}\left(P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\Pi_{0}\mu_{u }^{m+1})_{\oplus},\mu_{n}^{m+1}-\Pi_{0}\mu_{u}^{m+1}\right)\] \[+\frac{1}{\delta}\left(n^{m+1},\delta_{t}n^{m+1}\right)-\chi_{0} \left(n^{m+1},\delta_{t}\Pi_{1}^{h}u^{m+1}\right)-\chi_{0}\left(\Pi_{1}^{h}u^{ m},\delta_{t}n^{m+1}\right)=0.\]
Taking into account that
\[\varepsilon^{2}\left(\nabla\Pi_{1}^{h}u^{m+1},\delta_{t}\nabla\Pi_{1}^{h}u^{m+1}\right) =\frac{\varepsilon^{2}}{2}\delta_{t}\int_{\Omega}|\nabla\Pi_{1}^{h}u^{m+1}|^{2}+\frac{\Delta t\,\varepsilon^{2}}{2}\int_{\Omega}|\delta_{t}\nabla\Pi_{1}^{h}u^{m+1}|^{2},\] \[\frac{1}{\delta}\left(n^{m+1},\delta_{t}n^{m+1}\right) =\frac{1}{2\delta}\delta_{t}\int_{\Omega}|n^{m+1}|^{2}+\frac{\Delta t}{2\delta}\int_{\Omega}|\delta_{t}n^{m+1}|^{2},\] \[\chi_{0}\delta_{t}\int_{\Omega}\Pi_{1}^{h}u^{m+1}n^{m+1} =\chi_{0}\left(n^{m+1},\delta_{t}\Pi_{1}^{h}u^{m+1}\right)+\chi_{0}\left(\Pi_{1}^{h}u^{m},\delta_{t}n^{m+1}\right),\] \[\int_{\Omega}P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\Pi_{0}\mu_{u}^{m+1})_{\oplus}^{2} =\left(P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\Pi_{0}\mu_{u}^{m+1})_{\oplus},\mu_{n}^{m+1}-\Pi_{0}\mu_{u}^{m+1}\right),\]
and by adding and subtracting \(\delta_{t}\int_{\Omega}F(\Pi_{1}^{h}u^{m+1})\), we get the following equality
\[\delta_{t}E(\Pi_{1}^{h}u^{m+1},n^{m+1}) +C_{u}a_{h}^{\text{upw}}(\Pi_{0}\mu_{u}^{m+1};M(u^{m+1}),\Pi_{0}\mu_{u}^{m+1})+C_{n}a_{h}^{\text{upw}}(\mu_{n}^{m+1};M(n^{m+1}),\mu_{n}^{m+1})\] \[+\frac{\Delta t\,\varepsilon^{2}}{2}\int_{\Omega}|\delta_{t}\nabla\Pi_{1}^{h}u^{m+1}|^{2}+\frac{\Delta t}{2\delta}\int_{\Omega}|\delta_{t}n^{m+1}|^{2}\] \[+\delta P_{0}\int_{\Omega}P(u^{m+1},n^{m+1})(\mu_{n}^{m+1}-\Pi_{0}\mu_{u}^{m+1})_{\oplus}^{2}\] \[=\delta_{t}\int_{\Omega}F(\Pi_{1}^{h}u^{m+1})-\left(f(\Pi_{1}^{h}u^{m+1},\Pi_{1}^{h}u^{m}),\delta_{t}\Pi_{1}^{h}u^{m+1}\right).\]
Finally, using the standard convex splitting technique (see [2, 17, 25]), we can prove that
\[\left(f(\Pi_{1}^{h}u^{m+1},\Pi_{1}^{h}u^{m}),\delta_{t}(\Pi_{1}^{h}u^{m+1}) \right)-\int_{\Omega}\delta_{t}F(\Pi_{1}^{h}u^{m+1})\geq 0,\]
which implies (23).
**Corollary 3.13**.: _The scheme (12) is unconditionally energy stable in the sense_
\[E(\Pi_{1}^{h}u^{m+1},n^{m+1})\leq E(\Pi_{1}^{h}u^{m},n^{m}).\]
Proof.: It is straightforward to check (see [1]) that
\[a_{h}^{\mathrm{upw}}(\Pi_{0}\mu_{u}^{m+1};M(u^{m+1}),\Pi_{0}\mu_{u}^{m+1})\geq 0\quad\text{and}\quad a_{h}^{\mathrm{upw}}(\mu_{n}^{m+1};M(n^{m+1}),\mu_{n}^{m+1})\geq 0.\]
Hence, using (23) we conclude that \(\delta_{t}E(\Pi_{1}^{h}u^{m+1},n^{m+1})\leq 0\).
## 4 Numerical experiments
Now, we will present several numerical experiments that illustrate the results presented in the previous section. We assume that \(\Omega=[-10,10]^{2}\), \(\varepsilon=0.1\), \(\delta=0.01\) and we consider the mesh shown in Figure 1, which satisfies Hypothesis 1. The nonlinear coupled scheme (12) is solved by means of Newton's method.
These results have been computed using the Python interface of the library FEniCSx, [3, 35, 36], and the figures have been plotted using PyVista, [40].
Notice that, as mentioned in Subsection 3.2, \(\Pi_{1}^{h}u^{m}\) is considered the approximation of the phase-field variable \(u\) by the scheme (12). Therefore, all the results shown in this section correspond to this approximation. On the other hand, although \(n^{m}\) is taken as the approximation of the nutrients variable \(n\), for ease of visualization, \(\Pi_{1}^{h}n^{m}\) has been plotted in Figures 2, 3, 4, 8 and 9.
Figure 1: Mesh used for domain discretization.
### 4.1 Three tumors aggregation
We define the following initial conditions, which are of the same type as those in [43]:
\[u_{0} =\frac{1}{2}\left[\tanh\left(\frac{1-\sqrt{(x-2)^{2}+(y-2)^{2}}}{ \sqrt{2}\varepsilon}\right)+\tanh\left(\frac{1-\sqrt{(x-3)^{2}+(y+5)^{2}}}{ \sqrt{2}\varepsilon}\right)\right.\] \[\quad\left.+\tanh\left(\frac{1.73-\sqrt{(x+1.5)^{2}+(y+1.5)^{2}}} {\sqrt{2}\varepsilon}\right)+3\right],\] \[n_{0} =1.0-u_{0}.\]
These initial conditions are shown in Figure 2. As one may observe, we assume that, at the beginning, the nutrients are fully consumed in the area occupied by the initial tumor.
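As an illustration, these initial data can be evaluated directly from their analytical expressions, for example on a uniform sampling grid of \(\Omega=[-10,10]^{2}\) (the values obtained are essentially within \([0,1]\), in agreement with Figure 2):

```python
import numpy as np

eps = 0.1

def circle(x, y, r, x0, y0):
    """tanh interface profile of a circular region of radius r centered at (x0, y0)."""
    return np.tanh((r - np.sqrt((x - x0)**2 + (y - y0)**2)) / (np.sqrt(2.0) * eps))

def u0(x, y):
    return 0.5 * (circle(x, y, 1.0, 2.0, 2.0)
                  + circle(x, y, 1.0, 3.0, -5.0)
                  + circle(x, y, 1.73, -1.5, -1.5) + 3.0)

def n0(x, y):
    return 1.0 - u0(x, y)

x, y = np.meshgrid(np.linspace(-10.0, 10.0, 201), np.linspace(-10.0, 10.0, 201))
U0, N0 = u0(x, y), n0(x, y)
print(U0.min(), U0.max())   # essentially within [0, 1]
```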
Moreover, we set \(C_{u}=100\), \(C_{n}=100\cdot 10^{-4}\), \(P_{0}=125\), \(h\approx 0.14\) and we use the following symmetric mobility and proliferation functions:
\[M(v)=h_{1,1}(v),\quad P(u,n)=h_{1,1}(u)n_{\oplus}. \tag{25}\]
We are going to compare the upwind DG scheme (12) and the \(\mathbb{P}_{1}^{\text{cont}}(\mathcal{T}_{h})\)-FE approximation of the time discrete scheme (8). We consider two different cases: \(\chi_{0}=0\) and \(\chi_{0}=10\), i.e. without and with cross-diffusion, respectively.
On the one hand, the experiment without cross-diffusion (\(\chi_{0}=0\) and \(\Delta t=10^{-5}\)) is plotted in Figure 3. As one may notice, both schemes provide a similar approximation. The approximations preserve, approximately in the case of FE, the pointwise bounds of the variables \(u\) and \(n\) and the energy stability, see Figures 5 and 7 (left).
On the other hand, the test with cross-diffusion (\(\chi_{0}=10\) and \(\Delta t=5\cdot 10^{-6}\)) is plotted in Figure 4. In this case, one may notice that, while the DG scheme provides a good approximation of the solution, the FE solution shows many spurious oscillations. These numerical instabilities lead to a loss of the maximum principle, while it is preserved by the DG scheme, see Figure 6. In both cases, the schemes preserve the energy stability of the model as expected, see Figure 7 (right).
Figure 2: Initial conditions for test 4.1 (\(u_{0}\) left, \(n_{0}\) right).
Figure 3: Tumor and nutrients for test (4.1) with \(\chi_{0}=0\) at different time steps.
Figure 4: Tumor and nutrients for test (4.1) with \(\chi_{0}=10\) at different time steps.
Figure 5: Pointwise bounds of the approximations for test 4.1 with \(\chi_{0}=0\) (\(u\) left, \(n\) right).
Figure 6: Pointwise bounds of the approximations for test 4.1 with \(\chi_{0}=10\) (\(u\) left, \(n\) right).
Furthermore, it is worth emphasizing that the convergence of Newton's method for the FE scheme requires a very small time step. In this sense, the previous tests were run with a time step small enough so that Newton's method converges for both schemes. Conversely, the upwind DG scheme (12) does converge for larger time steps. In practice, we have been able to compute the approximation given by the DG scheme for this test with time steps up to \(\Delta t=10^{-4}\).
### 4.2 Irregular tumor growth
In this test, we show the irregular growth of a tumor due to the irregular distribution of the nutrients over the domain. It is important to notice the good behavior of the scheme (12), which allows us to capture different irregular growth processes even in the cases with strong cross-diffusion, in which we cannot expect FE approximations to work, as shown in Subsection 4.1.
In particular, we consider the following initial conditions for tumor cells and nutrients:
\[u_{0} =\frac{1}{2}\left[\tanh\left(\frac{1.75-\sqrt{x^{2}+y^{2}}}{\sqrt {2}\varepsilon}\right)+1\right],\] \[n_{0} =\frac{1}{2}(1-u_{0})+\frac{1}{4}\left[\tanh\left(\frac{1-\sqrt{ (x-2.45)^{2}+(y-1.45)^{2}}}{\sqrt{2}\varepsilon}\right)\right.\] \[\quad+\tanh\left(\frac{1.75-\sqrt{(x+3.75)^{2}+(y-1)^{2}}}{\sqrt{ 2}\varepsilon}\right)+\tanh\left(\frac{2.5-\sqrt{x^{2}+(y+5)^{2}}}{\sqrt{2} \varepsilon}\right)+3\right],\]
which are shown in Figure 8.
We represent the behavior of the solution of the model under different sets of parameters, see Figures 9-15. We set \(C_{u}=2.8\), \(C_{n}=2.8\cdot 10^{-4}\), \(h\approx 0.28\) for every experiment and we vary the rest of the parameters with respect to the reference test in Figure 9 (\(P_{0}=0.5\), \(\chi_{0}=0.1\) and \(\Delta t=0.1\)). For the sake of brevity, we only show the nutrients variable for the reference test.
In fact, we have considered two different types of mobility and proliferation functions. On the one hand, we have used the typical symmetric functions (25) from the previous experiment (see the top rows of Figures 9-15).
Figure 8: Initial conditions for test 4.2 (\(u_{0}\) left, \(n_{0}\) right).
On the other hand, we have considered the following non-symmetric choice of the mobility and proliferation functions
\[M(v)=h_{5,1}(v),\quad P(u,n)=h_{1,3}(u)n_{\oplus}, \tag{26}\]
whose associated results are plotted in the bottom row of Figures 9-15.
The proliferation function in (26) has been chosen to model a very quick tumor growth and nutrient consumption in the non-saturated state (\(u\simeq 0\)) that decays until the tumor is fully saturated (\(u\simeq 1\)). Moreover, the choice of the mobility function in (26) is intended to prevent the dissemination of the tumor and the nutrients in a non-saturated state (\(u,n\simeq 0\)), leading to a more local tumor/nutrient interaction due to the proliferation term.
Of course, the choice of these functions is not limited to those in (26), and other degenerate mobility and proliferation functions can be considered. In this sense, we would like to emphasize that the choice of these functions may be motivated by different types of tumors, which might exhibit particular growth behaviors and interactions with the nutrients.
Indeed, we can observe in Figures 9-15 the different expected behaviors of the solution for both choices of mobility and proliferation functions. On the one hand, we may notice a local growth of the tumor, where a proliferation area appears around the fully saturated tumor, due to (26). Conversely, we can observe an eventual dissemination of the tumor all over the domain using (25) in the cases where the proliferation term is more significant than the cross-diffusion, allowing the tumor to grow by consuming nutrients.
## Acknowledgments
The first author has been supported by _UCA FPU contract UCA/REC14VPCT/2020 funded by Universidad de Cadiz_ and by a _Graduate Scholarship funded by the University of Tennessee at Chattanooga_. The second and third authors have been supported by _Grant US-4931381261 (US/JUNTA/FEDER, UE)_.
|
2307.10256 | Hidden Markov Models with Random Restarts vs Boosting for Malware
Detection | Effective and efficient malware detection is at the forefront of research
into building secure digital systems. As with many other fields, malware
detection research has seen a dramatic increase in the application of machine
learning algorithms. One machine learning technique that has been used widely
in the field of pattern matching in general-and malware detection in
particular-is hidden Markov models (HMMs). HMM training is based on a hill
climb, and hence we can often improve a model by training multiple times with
different initial values. In this research, we compare boosted HMMs (using
AdaBoost) to HMMs trained with multiple random restarts, in the context of
malware detection. These techniques are applied to a variety of challenging
malware datasets. We find that random restarts perform surprisingly well in
comparison to boosting. Only in the most difficult "cold start" cases (where
training data is severely limited) does boosting appear to offer sufficient
improvement to justify its higher computational cost in the scoring phase. | Aditya Raghavan, Fabio Di Troia, Mark Stamp | 2023-07-17T13:21:58Z | http://arxiv.org/abs/2307.10256v1 | # Hidden Markov Models with Random Restarts vs Boosting for Malware Detection
###### Abstract
Effective and efficient malware detection is at the forefront of research into building secure digital systems. As with many other fields, malware detection research has seen a dramatic increase in the application of machine learning algorithms. One machine learning technique that has been used widely in the field of pattern matching in general--and malware detection in particular--is hidden Markov models (HMMs). HMM training is based on a hill climb, and hence we can often improve a model by training multiple times with different initial values. In this research, we compare boosted HMMs (using AdaBoost) to HMMs trained with multiple random restarts, in the context of malware detection. These techniques are applied to a variety of challenging malware datasets. We find that random restarts perform surprisingly well in comparison to boosting. Only in the most difficult "cold start" cases (where training data is severely limited) does boosting appear to offer sufficient improvement to justify its higher computational cost in the scoring phase.
## 1 Introduction
As of 2017, about 54% of households worldwide had access to the Internet [21]. In terms of raw numbers, the count of Internet users has increased from around 1 billion in 2005 to almost 3.6 billion in 2017 [45]. This trend of digitalization is sure to continue over the coming years, and soon virtually the entire world will be connected to the Internet.
The proliferation of computers and the widespread use of the Internet have resulted in the digitalization of many services. Business applications are obvious, but highly digitized services also include essentials such as the power grid, dams, traffic lights, and so on. The Internet of Things (IoT) promises to connect nearly every aspect of life to the Internet--as of 2018, there are
about 23 billion IoT connected devices [44]. This reliance on digitalization and automation brings with it a set of challenges, and chief among these challenges is digital security.
Today, bad actors can exploit our reliance on technology for financial gain, and there are credible predictions of cyber-warfare being a leading mode of attack in future conflicts [35]. Malicious software, or malware, is the driving force behind a vast array of digital security issues.
According to [29], one in three computers worldwide is affected by malware. While the cost of malware is notoriously difficult to quantify, estimates are that financial losses due to cyber crime will reach a staggering $6 trillion annually by the year 2021 [1]. Not surprisingly, spending on cyber defenses is also expected to increase--it is estimated that such expenditures will exceed $1 trillion in 2021 [26].
A wide variety of different types of malware affect computing systems. Such malware includes adware, spyware, Trojans, viruses, worms, ransomware, and many others [16]. Antivirus software, firewalls, and intrusion detection systems are used in attempts to keep systems secure. Antivirus software generally relies primarily on signature detection (i.e., pattern matching) to detect malware. However, there are many advanced forms of malware that can evade signature-based detection [5].
Machine learning techniques can be used to improve on signature-based detection [4]. Hidden Markov models (HMMs) are one popular machine learning technique that has been successfully applied to the malware detection problem [2; 4; 33; 43], as well as a wide variety of other information security problems [3; 10; 14; 19; 27; 28; 30; 34; 38; 40; 41]. In this research, we consider the effectiveness of malware detection based on HMMs with multiple random restarts. We compare this random restarts approach to combining HMMs using the well-known AdaBoost algorithm [42]. Interestingly, it appears that boosted HMMs have not previously received much attention in the information security domain [13].
We consider a variety of experiments to compare multiple random restarts to boosted HMMs. Our experiments include the so-called "cold start" problem, where limited training data is available. We believe that all of our experiments provide realistic and challenging test cases for comparing the techniques under consideration.
The remainder of this paper is organized as follows. In Section 2, we provide relevant background information, including brief introductions to hidden Markov models and AdaBoost. Section 3 describes our experimental setup and results. In this section, we also provide some discussion and analysis of our results. Finally, in Section 4, we present our conclusion and mention possible future work.
## 2 Background
Machine learning can be viewed as a form of statistical discrimination where a "machine" or algorithm does the hard work, rather than a human analyst [43]. Complex problems such as character recognition and voice identification can be effectively solved using various machine learning techniques [11, 24]. In the field of information security, machine learning has become a fundamental tool in many areas of research, including malware detection and analysis, and intrusion detection [4, 13, 20].
Next, we briefly introduce hidden Markov models and AdaBoost. These are both popular machine learning techniques that have found widespread application to problems in information security.
### Hidden Markov Model
In a Markov process of order one, the current state depends only on the previous state, and the state transition probabilities are based on fixed, discrete probability distributions. As the name suggests, in a hidden Markov model, we cannot directly observe the state sequence, but we do have access to a series of observations that are related to the hidden states via discrete probability distributions. A generic view of an HMM is given in Figure 1, with the relevant notation defined in Table 1.
Using the notation in Table 1, the state transition matrix \(A\) is of size \(N\times N\), the observation probability matrix \(B\) is \(N\times M\), and the initial distribution matrix \(\pi\) is \(1\times N\). These matrices are all row stochastic, and they define an HMM; thus, we denote an HMM as \(\lambda=(A,B,\pi)\).
The HMM training process is a hill climb, and we typically initialize the elements of \(A\), \(B\), and \(\pi\) to approximately uniform, that is, \(a_{i,j}\approx 1/N\)
Figure 1: Hidden Markov model
\(b_{i,j}\approx 1/M\) and \(\pi_{1,j}\approx 1/N\), with the row stochastic requirement enforced. Since HMM training is a hill climb, we might obtain better results by training multiple models with different initial values, and selecting the best of the resulting models. Such an "HMM with random restarts" approach has been used, for example, in the analysis of classic substitution ciphers [9, 46]. However, as far as the authors are aware, such an approach has not been explicitly applied in the field of malware detection.
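For concreteness, a minimal Python sketch of this random-restarts strategy is given below. It assumes a discrete-observation HMM trainer such as hmmlearn's `CategoricalHMM` (an assumption; any Baum-Welch implementation exposing a log-likelihood `score()` would serve) and simply keeps the model with the highest training log-likelihood.

```python
# Train `restarts` HMMs from different random initial values (Baum-Welch is a
# hill climb) and keep the model with the highest training log-likelihood.
import numpy as np
from hmmlearn import hmm  # assumed library (hmmlearn >= 0.2.8 for CategoricalHMM)

def best_of_restarts(opcode_indices, n_states=2, restarts=1000, n_iter=100):
    X = np.asarray(opcode_indices).reshape(-1, 1)   # one discrete symbol per row
    best_model, best_ll = None, -np.inf
    for seed in range(restarts):
        model = hmm.CategoricalHMM(n_components=n_states, n_iter=n_iter,
                                   random_state=seed)  # random (re)start
        model.fit(X)
        ll = model.score(X)                            # log-likelihood of the training data
        if ll > best_ll:
            best_model, best_ll = model, ll
    return best_model, best_ll
```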
Previous research employing hidden Markov models includes a wide array of pattern matching problems. For example, HMMs have been used to distinguish handwritten letters with high accuracy [24]. Voice recognition is another area where HMMs are a reliable and strong performer [22]. And, as previously noted, HMMs are often used in malware research.
In this paper, we consider the malware detection problem and analyze the effectiveness of generating 1000 HMMs with random restarts. We compare this random restarts approach to the average case, where a single HMM is trained. We also consider the case where the 1000 HMMs are combined using AdaBoost.
Next, we briefly introduce the key ideas behind the AdaBoost algorithm. While AdaBoost is often used with decision trees, the technique will work with any type of classifiers--in this paper, the boosted classifiers are based on HMMs.
### AdaBoost
Boosting is the process of combining multiple (weak) classifiers to obtain a stronger classifier. Any classifier that performs better than a coin flip can be used, and if a sufficient number of such classifiers are available, adaptive boosting, or AdaBoost, can generate an arbitrarily strong classifier [42]. There are
\begin{table}
\begin{tabular}{c l} \hline \hline Notation & Description \\ \hline \(T\) & Length of the observation sequence \\ \(N\) & Number of states in the model \\ \(M\) & Number of observation symbols \\ \(Q\) & Distinct states of the Markov process, \(q_{0},q_{1},\ldots,q_{N-1}\) \\ \(V\) & Possible observations, assumed to be \(0,1,\ldots,M-1\) \\ \(A\) & State transition probabilities \\ \(B\) & Observation probability matrix \\ \(\pi\) & Initial state distribution \\ \(\mathcal{O}\) & Observation sequence, \(\mathcal{O}_{0},\mathcal{O}_{1},\ldots,\mathcal{O}_{T-1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: HMM notation
other boosting techniques, including extreme gradient boosting (XGBoost), but AdaBoost is certainly the best known boosting technique.
At each iteration, AdaBoost selects the "best" classifier from those available (i.e., unused), where "best" is defined as the classifier that most improves on the overall accuracy of the new, combined classifier. That is, AdaBoost greedily selects a classifier that does the most to improve on the current iteration of the constructed classifier. In AdaBoost the selected classifiers are combined as a weighted linear combination, where an optimal weight is calculated at each iteration, with all previously computed weights fixed. While AdaBoost has many desirable properties, one inherent problem is that errors in the training data tend to grow, due to the iterative nature of the algorithm.
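A minimal sketch of the textbook discrete AdaBoost loop is shown below. It selects, at each round, the unused classifier with the smallest weighted error (the standard surrogate for the accuracy-improvement criterion described above), computes its combination weight, and re-weights the training samples; the matrix `preds` and the label vector `y` are illustrative inputs.

```python
# Discrete AdaBoost over a fixed pool of weak classifiers.
# preds: (L, n) matrix of +/-1 predictions of L candidate classifiers on n samples.
# y: (n,) vector of +/-1 labels.
import numpy as np

def adaboost(preds, y, n_rounds):
    L, n = preds.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    used = np.zeros(L, dtype=bool)
    alphas, chosen = [], []
    for _ in range(n_rounds):
        errs = np.where(used, np.inf, (preds != y) @ w)   # weighted error of each unused classifier
        l_star = int(np.argmin(errs))
        err = min(max(errs[l_star], 1e-12), 1 - 1e-12)
        if err >= 0.5:                           # no unused classifier beats a coin flip
            break
        alpha = 0.5 * np.log((1 - err) / err)    # weight in the linear combination
        w *= np.exp(-alpha * y * preds[l_star])  # boost the misclassified samples
        w /= w.sum()
        used[l_star] = True
        alphas.append(alpha)
        chosen.append(l_star)
    return alphas, chosen                        # final classifier: sign(sum_t alpha_t * h_t(x))
```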
Although AdaBoost is a greedy algorithm, it is worth noting that it is not a hill climb. Hence, at any given iteration, it is possible that the resulting classifier will be worse than at the previous iteration. Figure 2 shows the accuracy of AdaBoost as a function of the iteration number in three different cases. In each case, the same set of \(n=100\) labeled samples was used, with the number of (extremely weak) classifiers being \(L=250\), \(L=500\), and \(L=1000\) for the three different cases, which appear as the red, green, and blue graphs in Figure 2, respectively. The dips in the accuracy can be fairly substantial, and if we do not have a sufficient number of classifiers available, the results will suffer, as can be seen by the \(L=250\) case in Figure 2. The tutorial [42] discusses these and related issues in more detail.
Figure 2: Correct classifications vs iteration [42]
There have been many successful applications of boosting algorithms. One such example uses AdaBoost to improve the selection of features in a vision based application [17]. Security-related applications of AdaBoost can be found in [13, 20], where several classifiers based on Gaussian mixture models are combined into a stronger classifier for network intrusion detection.
### Evaluation Criteria
We use accuracy as an evaluation metric for some of the experimental results that we present in Section 3. For an experiment on a labeled dataset,
\[\text{accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+ \text{FN}}\]
where
\[\text{TP}= \text{true positives},\text{TN}=\text{true negatives}\] \[\text{FP}= \text{false positives},\text{FN}=\text{false negatives}\]
Accuracy is an intuitive measure, as it is simply the ratio of correct classifications to the total number of classifications.
We employ receiver operating characteristic (ROC) curve analysis in all of our experiments. An ROC curve is obtained from a scatterplot by graphing the true positive rate (TPR) versus the false positive rate (FPR) as the threshold varies through the range of values. These rates are computed as
\[\text{TPR}=\frac{\text{TP}}{\text{TP}+\text{FN}}\ \ \text{and}\ \ \text{FPR}=\frac{\text{FP}}{\text{FP}+\text{TN}}\]
The area under the ROC curve (AUC) ranges from 0 to 1, with 1 indicating ideal separation, that is, a threshold exists for which no misclassifications occur. An AUC of 0.5 indicates that the underlying binary classifier is no better than flipping a coin. Also, note that an AUC of \(x<0.5\) will yield an AUC of \(1-x>0.5\) if we simply reverse the sense of the binary classifier. The AUC can be interpreted as the probability that a randomly selected positive instance scores higher than a randomly selected negative instance [12].
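For reference, both metrics can be computed directly from a vector of scores and 0/1 labels; the sketch below uses the rank-based characterization of the AUC mentioned above (ties counted as one half).

```python
# Accuracy at a fixed threshold, and AUC via its probabilistic interpretation
# (probability that a random positive sample scores higher than a random negative).
import numpy as np

def accuracy(scores, labels, threshold):
    # scores: (n,) array of classifier scores; labels: (n,) array of 0/1 ground truth
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1)); tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0)); fn = np.sum(~pred & (labels == 1))
    return (tp + tn) / (tp + tn + fp + fn)

def auc(scores, labels):
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]           # all positive-vs-negative score comparisons
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size
```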
## 3 Experiments and Results
In this section, we give detailed experimental results comparing HMMs with multiple random restarts to boosted HMMs. First, we discuss the basic parameters of the experiments; then, we present three sets of experiments.
### Dataset and Features
All of the experiments here are based on malware samples from the Malicia dataset [25], along with a representative collection of benign samples. The benign samples consist of Windows system 32 executables collected from a fresh install, while the malicious families are the following.
**Cridex**: is a Trojan that creates a backdoor and collects sensitive information, such as details related to online banking. The resulting information can then be transmitted to a third party [15].
**Harebot**: is a backdoor that can yield remote access to an infected system. Due to its large number of features, Harebot is also sometimes considered to be a rootkit [18].
**Security Shield**: is a spyware Trojan that claims to be anti-virus software and reports fake virus detection results to the user. Security Shield attempts to coerce the user into purchasing software [36].
**Zbot**: is a Trojan horse that compromises a system by downloading configuration files or updates. Zbot, which is also known as Zeus, is stealthy malware that attempts to hide in the file system [47].
**ZeroAccess**: is a Trojan horse that makes use of an advanced rootkit to hide its presence. ZeroAccess can create a new (hidden) file system, install a backdoor, and download additional malware, among other features [48].
Table 2 lists the number of samples from each malware family considered, as well as the number of benign samples. Note that the number of benign samples is larger than the number of malware samples for three of the five malware families under consideration. All of our subsequent experiments and analysis are conducted on a per-family basis.
Good detection results can be obtained for some of the larger families (e.g., ZeroAccess) in the Malicia dataset, but the smaller families (e.g., Cridex and Harebot) have been shown to be challenging [6, 7, 31, 32]. In addition, HMMs have generally been found to be competitive with many other proposed
\begin{table}
\begin{tabular}{l l r} \hline \hline Family & Type & Samples \\ \hline Cridex & Trojan & 74 \\ Harebot & Backdoor & 53 \\ Security Shield & Spyware & 58 \\ Zbot & Trojan & 2316 \\ ZeroAccess & Trojan & 1305 \\ \hline Benign & — & 107 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dataset
malware detection techniques [8, 23, 37, 39]. Our initial experimental results, which appear below in Section 3.2, are consistent with this previous work. However, the main contribution of this paper is to be found in the relative differences between boosting and random restarts in the most challenging test cases, which are discussed in Sections 3.3 and 3.4.
The feature used in all of our experiments is the mnemonic opcode sequence. Following previous work, for each family, we use the top 30 most common opcodes, and group all remaining opcodes together as "other" [37]. Note that this will generally give us a different set of opcodes for each family, but the overlap between families is significant. For a typical set of experiments, the distinct top 30 opcodes and number of families that each appears in is given in Figure 3. In this case, we see that 22 of the top 30 opcodes are common to all five malware families.
Executables are disassembled and mnemonic opcodes are extracted. Then the top 30 opcodes are determined, and any opcodes outside of the top 30 are replaced with "other." The percentage of opcodes in each family that are found among the top 30 is given in Table 3. We see that in each case, the vast majority of opcodes lie within the top 30.
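A sketch of this preprocessing step is given below; the disassembly itself is not reproduced, only the mapping from raw mnemonics to the 31-symbol alphabet (the top 30 plus "other").

```python
# Map each family's opcode sequences onto 31 symbols: the 30 most common mnemonics
# plus a single "other" symbol for everything else.
from collections import Counter

def build_vocab(opcode_seqs, top=30):
    counts = Counter(op for seq in opcode_seqs for op in seq)
    return {op: i for i, (op, _) in enumerate(counts.most_common(top))}

def encode(seq, vocab, top=30):
    return [vocab.get(op, top) for op in seq]   # rare / unseen opcodes map to symbol `top`
```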
For all of the experiments reported below, we use 5-fold cross validation. Cross validation serves to smooth any bias in the data, and also provides us with the maximum possible number of independent test cases [43].
Again, all HMMs are trained on extracted opcode sequences. To score a given sample against a specific HMM, we extract the opcode sequence from the sample under analysis, and score the resulting sequence against the model, then normalize the score by the length of the opcode sequence. This gives us a log-likelihood per opcode (LLPO) score. Since an HMM score is length dependent, the LLPO score enables us to directly compare samples with differing numbers of opcodes.
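The LLPO score then amounts to a one-line normalization of the HMM log-likelihood, as sketched below (the `model` is assumed to expose a `score()` method, as in hmmlearn).

```python
# Log-likelihood per opcode (LLPO): the HMM log-likelihood of a sample's opcode
# sequence, normalized by the sequence length so that samples of different sizes
# can be compared directly.
import numpy as np

def llpo(model, opcode_indices):
    X = np.asarray(opcode_indices).reshape(-1, 1)
    return model.score(X) / len(opcode_indices)
```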
Figure 3: Number of families for each of top 30 opcodes
### Initial Experiments
For our initial set of experiments, we trained models on each of the malware families listed in Table 3 and computed the AUC statistic. In each case, we obtained results based on HMMs with 1000 random restarts, and also applied AdaBoost to classifiers based on these same 1000 HMM models. The results for all of these experiments are summarized in the form of a bar graph in Figure 4, where the "average HMM" is the average model over the 1000 HMMs. Thus, if we trained a single HMM, we would expect to obtain the results given by the average HMM case.
\begin{table}
\begin{tabular}{l c} \hline \hline \multirow{2}{*}{Family} & Top 30 opcodes \\ & (percentage) \\ \hline Cridex & 95 \\ Harebot & 94 \\ Security Shield & 96 \\ Zbot & 94 \\ ZeroAccess & 96 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Top opcodes frequency
Figure 4: Initial experiments
From Figure 4 we see that multiple random restarts generally yield a significantly stronger model, and hence would typically be well worth the additional (one-time) work during the training phase. However, in most cases, boosting offered minimal improvement over random restarts. Specifically, boosting had little effect on the results for Harebot, Zbot, and ZeroAccess. On the other hand, for Cridex and Security Shield, boosting does provide some measurable improvement. Perhaps not surprisingly, it appears that the cases where boosting has something to offer are those where the non-boosted models are the weakest.
It is also worth noting that boosting can significantly increase the work factor at the scoring phase, since multiple HMMs are used in the boosted classifier. This additional work depends on the number of HMM classifiers used, and this can vary, as we select the optimal boosted classifiers (i.e., the intermediate boosted model that achieves the best results). For random restarts, we simply select the best model, and hence the scoring phase is no more costly than a single HMM. This points to an inherent advantage of the random restart case, and hence we would generally only select the boosted model in cases where the improvement is significant. As noted above, only Cridex and Security Shield offer any improvement due to boosting, and the improvement in both cases is modest. Therefore, we would argue that random restarts is the better choice in all cases considered in this section.
In subsequent experiments we compare boosting and random restarts in more challenging classification problems. Specifically, we consider the situation where the malware code is morphed, for the purpose of making detection more difficult. Then we consider the so-called cold start problem, where the training data is limited.
### Morphing Experiments
For our next set of experiments, we simulate code morphing, that is, we simulate the case where the malware writer modifies the code in an attempt to make detection more difficult. Recall that we are considering detection based on differences in statistical properties of malware and benign opcode sequences. Therefore, we simulate the code morphing process by inserting opcode sequences extracted from benign samples into malware samples. This should have the desired effect of making the malware samples statistically more similar to the benign samples, and hence make them far more difficult to distinguish from benign.
We consider three morphing cases. First, we insert benign code equivalent to 10% of the code, then we apply 50% morphing and, finally, we use 100% morphing. Note that, for example, in the 100% morphing case, the size of the opcode sequence doubles. Our experimental results for these three cases are summarized in Figure 5.
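The sketch below illustrates one way to carry out this simulated morphing; where exactly the benign block is spliced into the malware sequence is an assumption, since only the insertion percentage is specified above.

```python
# Simulated morphing: splice a block of benign opcodes into a malware opcode
# sequence so that the inserted code is a given fraction of the original length
# (fraction=1.0 doubles the sequence, as in the 100% morphing case).
import random

def morph(malware_ops, benign_ops, fraction, seed=0):
    rng = random.Random(seed)
    n_ins = int(round(fraction * len(malware_ops)))
    start = rng.randrange(max(1, len(benign_ops) - n_ins + 1))
    block = benign_ops[start:start + n_ins]
    pos = rng.randrange(len(malware_ops) + 1)   # single splice point (an assumption)
    return malware_ops[:pos] + block + malware_ops[pos:]
```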
As with the previous experiments, the differences between the average HMM and the best of the random restarts models are generally significant. On the other hand, the differences between the random restarts and the boosted model are not large in most cases, but there are some cases where the differences are significant. For example, Cridex at 10% morphing and ZeroAccess at 50% morphing both show substantial improvement for the boosted models, as compared to random restarts. Interestingly, there does not seem to be a clear trend as to when boosting is likely to offer more than a marginal improvement. Again, we would likely choose random restarts in all of these cases, as the improvement provided by boosting is insufficient to justify the additional work required when scoring via boosted models.
Figure 5: Morphing experiments
### Cold Start Problem
In machine learning, the cold start problem deals with the case where the training data is severely limited. This is of practical concern when attempting to detect malware, as initially we might only have a small number of samples available for training. In such cases, it would be important to know how much data is needed before reliable models can be generated. And, the cold start problem is particularly relevant to the research here, as we are considering methods to improve our classification results by training a large number of models. Intuitively, these techniques are likely to be most needed in marginal cases, and the cold start problem provides just such a case.
For our cold start experiments, we varied the training data size from 5 to 25 samples, in increments of 5. For the families with a large number of samples available (Zbot and ZeroAccess), we tested models on 200 malware samples in each case, so that the malware and benign sets are more in balance. For the remaining families (Cridex, Harebot, and Security Shield), which have few samples, we used all of the non-training samples for testing. Recall that we have 107 samples in our benign set.
The results of our cold start experiments are summarized in Figures 6 and 7, in terms of AUC and accuracy, respectively. Note that each bar graph in both of these figures includes results for the typical HMM case (the red bar), as well as the result for the case where we select the best model from the random restarts (the green bar), and the case where we apply AdaBoost to the models (blue bar). All of these results are based on 1000 models, each generated with a random initialization for the HMM training.
The general trends for the AUC and accuracy results are similar, so we discuss only the accuracy graphs in Figure 7; similar comments hold for the AUC graphs in Figure 6. For Cridex, Harebot, and Security Shield, we see a generally upward trend in Figures 7 (a), (b), and (c), respectively, as the number of training samples increases. For the ZeroAccess family in Figure 7 (e), we see little change, indicating that 5 samples is apparently sufficient to obtain essentially optimal results. The Zbot family in Figure 7 (d) is somewhat anomalous--and somewhat surprising--as we see a downward trend in the accuracy with an increase in the number of training samples. The Zbot results in Figure 7 (d) seem to indicate that for this particular family, the models generated are unstable, in the sense that the models depend heavily on the specific samples selected for training.
With respect to the three approaches considered in this section (i.e., typical hidden Markov model, multiple random restarts, and boosted HMMs), the results in Figure 7 show a significant advantage for multiple random restarts over the average HMM in almost every case. The advantage of boosting over multiple random restarts is certainly less pronounced, but is significant in some cases--and this advantage is generally greatest for the cases where
Figure 6: Cold start AUC results by family
Figure 7: Cold start accuracy results by family
classification is the most challenging. These results indicate that boosting is likely to only be worthwhile in extremely challenging cases. Taking into account the additional work required for scoring using a boosted model only serves to further emphasize this point.
## 4 Conclusion and Future Work
With the increasing threat of malware, it is critically important to have the most efficient and effective malware detection techniques possible. In this paper, we have explored improved classifier methods based on hidden Markov models, using both random restarts and boosting. We found that training multiple HMMs with different initial values will generally yield significant improvement over generating a single HMM. Multiple random restarts adds work in the training phase, but the scoring phase is no more costly, since we use only a single HMM. Since training is one-time work, in many cases it would be reasonable to train a large number of models using random restarts.
For our boosting experiments, we used AdaBoost, which is straightforward and inexpensive in the training phase. The improvement offered by boosting over multiple random restarts was, in general, surprisingly small, but in some of the most challenging cases, boosting did offer a significant improvement. However, boosted classifiers are significantly more costly in the scoring phase, as a large number of models are typically used in the final boosted classifier. Furthermore, boosting is not particularly robust, in the sense that errors in the training data tend to grow when training the boosted classifier [42]. Consequently, we would likely only use boosting in situations where the improvement is significant, as compared to the non-boosted case.
For future work, it would be worthwhile to consider boosting for malware detection, based on machine learning models other than HMMs. It would also be interesting to consider data contamination attacks, which would tend to have a larger negative impact on boosted models than on non-boosted models. Under these and other scenarios, it would be valuable, albeit challenging, to determine conditions under which boosting is likely to yield a classifier that is sufficiently strong to justify the additional cost and risk.
|
2306.08730 | Wireless Point Cloud Transmission | 3D point cloud is a three-dimensional data format generated by LiDARs and
depth sensors, and is being increasingly used in a large variety of
applications. This paper presents a novel solution called SEmantic Point cloud
Transmission (SEPT), for the transmission of point clouds over wireless
channels with limited bandwidth. At the transmitter, SEPT encodes the point
cloud via an iterative downsampling and feature extraction process. At the
receiver, SEPT reconstructs the point cloud with latent reconstruction and
offset-based upsampling. Extensive numerical experiments confirm that SEPT
significantly outperforms the standard approach with octree-based compression
followed by channel coding. Compared with a more advanced benchmark that
utilizes state-of-the-art deep learning-based compression techniques, SEPT
achieves comparable performance while eliminating the cliff and leveling
effects. Thanks to its improved performance and robustness against channel
variations, we believe that SEPT can be instrumental in collaborative sensing
and inference applications among robots and vehicles, particularly in the
low-latency and high-mobility scenarios. | Chenghong Bian, Yulin Shao, Deniz Gunduz | 2023-06-14T20:20:39Z | http://arxiv.org/abs/2306.08730v1 | # Wireless Point Cloud Transmission
###### Abstract
3D point cloud is a three-dimensional data format generated by LiDARs and depth sensors, and is being increasingly used in a large variety of applications. This paper presents a novel solution called SEmantic Point cloud Transmission (SEPT), for the transmission of point clouds over wireless channels with limited bandwidth. At the transmitter, SEPT encodes the point cloud via an iterative downsampling and feature extraction process. At the receiver, SEPT reconstructs the point cloud with latent reconstruction and offset-based upsampling. Extensive numerical experiments confirm that SEPT significantly outperforms the standard approach with octree-based compression followed by channel coding. Compared with a more advanced benchmark that utilizes state-of-the-art deep learning-based compression techniques, SEPT achieves comparable performance while eliminating the cliff and leveling effects. Thanks to its improved performance and robustness against channel variations, we believe that SEPT can be instrumental in collaborative sensing and inference applications among robots and vehicles, particularly in the low-latency and high-mobility scenarios.
Joint source-channel coding, neural networks, point cloud, semantic communication.
## I Introduction
3D point clouds are collections of three-dimensional data points and their associated attributes, such as color, temperature, and normals [1, 2, 3, 4, 5]. Generated through technologies such as light detection and ranging (LiDAR), depth camera, and structured light scanning, point clouds are non-ordered and non-uniformly distributed within space.
Wireless transmission plays a vital role in facilitating the mobility and accessibility of 3D point clouds, empowering industries and applications reliant on this expressive data format. Examples include autonomous driving, medical imaging, augmented reality (AR), robotics, etc. However, it is essential to acknowledge and address the inherent challenges that arise in wireless communication, including potential data loss, latency, and bandwidth limitations. These constraints necessitate a careful and dedicated design of wireless technologies to meet the specific requirements of point cloud transmission.
The standard approach for point cloud transmission consists of four main steps [1]: octree decomposition, quantization, entropy coding, channel coding and modulation. Octree is a canonical representation of point clouds. It recursively partitions the space into eight equal sized octants or cubes, and each node of the octree contains a point or a set of points in the corresponding cube [2]. The standard approach faces several challenges in achieving accurate and reliable transmission of 3D point cloud data:
* **Inefficient feature extraction**. The octree representation cannot efficiently extract contextual features from the 3D point cloud data and does not yield good energy compaction [6]. This can result in a large dynamic range during quantization.
* **The cliff and leveling effects**. Two inherent issues of digital transmission are the cliff and leveling effects [7, 8]. The cliff effect is characterized by a sharp decline in transmission rate when the channel quality falls below a certain threshold. The leveling effect, on the other hand, refers to the phenomenon that the transmission rate fails to improve despite an improvement in the channel quality, unless the coding rate and modulation order are reconfigured adaptively.
Overcoming the above challenges requires the development of more efficient feature extraction modules and communication protocols. In this paper, we leverage the recent advances in deep joint source-channel coding (DeepJSCC) [7] and develop a deep learning-based encoding and decoding framework for wireless point cloud transmission. Our main contributions can be summarized as follows:
1. We present SEPT (SEmantic Point cloud Transmission), a tailored framework for efficient delivery of 3D point cloud over additive white Gaussian noise (AWGN) channels. To the best of our knowledge, this is the first work to utilize the autoencoder approach in designing communication systems specifically for point cloud transmission.
2. To efficiently extract semantic features and avoid the cliff and leveling effects, we develop novel DeepJSCC encoder and decoder architectures for 3D point cloud: At the transmitter, SEPT encodes the point cloud directly into a latent vector without voxelization. A flexible power normalization that judiciously assigns power to different point clouds is applied. At the receiver, as opposed to feeding the noisy latent vector directly into the up-sampling layer, we introduce a refinement layer that uses the Point Transformer [5] as the backbone to first denoise the latent vector. Finally, offset-based up-sampling layers are employed for point cloud reconstruction.
3. Extensive simulations are conducted to verify the reconstruction performance of SEPT. Comparisons with the octree-based digital scheme demonstrate significant performance gains achieved by SEPT. When compared with a more advanced benchmark that combines state-of-the-art deep learning-based compression [9] with Polar code, SEPT shows comparable reconstruction performance while simultaneously eliminating the cliff and leveling effects.
_Related work_: There have been many efforts in process
ing and understanding 3D point clouds using deep learning. The authors in [4] proposed PointNet that uses permutation-invariant operations, such as pointwise multilayer perceptrons (MLPs) and max-pooling, to extract features for point cloud classification and segmentation. The follow-up works improved the performance by using more advanced operations such as 3D convolution [10] and self-attention [5, 11, 12]. The most related line of work to ours is point cloud compression, for which different deep learning-based techniques have been proposed recently [2, 9, 13, 14]. Among them, [2] used deep neural networks to predict the occupancy probability for a certain node in the octree, exploiting the information from its parent node and sibling nodes. Then, an entropy model is used to generate the bit stream. Ref. [14] used multiple KPConv [10] and downsampling layers to progressively reduce the number of points and extract information from the points of previous layers, and an offset-based up-sampling algorithm was proposed to reconstruct the point clouds at the decoder. The authors in [9] used the point cloud transformer [11] as the backbone to enhance the compression performance.
There is also growing interest in utilizing DeepJSCC to develop semantic communication systems [7, 15]. It is shown in [7] that by end-to-end optimizing the DeepJSCC system, both the cliff and leveling effects can be eliminated. With such merits, researchers have actively applied DeepJSCC to different wireless channels, e.g., multi-path fading [16], MIMO [17], and relay channels [18], as well as different data sources, e.g., text [19], image [7, 20], speech [21, 22], video [23, 24], or even wireless channel state information [25].
For different channels and data sources, it is crucial to employ appropriate methods to harness their characteristics to maximize the potential of DeepJSCC. While convolutional neural network (CNN) based autoencoders have been successfully applied to many of these sources in the aforementioned works, three-dimensional point clouds constitute a much more challenging data source as they are unstructured and can be represented in an arbitrary coordinate system. Moreover, the points can be presented in any arbitrary order, which makes it difficult to apply any specified filter to capture the structure among neighbouring points. The main objective of semantic-oriented joint source-channel coding approach to wireless signal delivery is to extract the relevant features of the signal for the specified task, and to map similar features to similar channel inputs so that the reconstruction is robust against channel noise. However, the lack of structure in point clouds makes it highly challenging to apply CNN-based DeepJSCC techniques, despite their recent success in the wireless transmission of image and video sources.
## II System Model
We consider transmitting a 3D point cloud over an AWGN channel. A point cloud can be expressed as \(\mathcal{P}=(\mathbf{X},\mathbf{F})\), where \(\mathbf{X}\triangleq\{\mathbf{x}_{i}\in\mathbb{R}^{3}\}\), \(i\in[N]\), is a set of \(N\) points in space, and \(\mathbf{F}\triangleq\{\mathbf{f}_{i}\in\mathbb{R}^{d}\},i\in[N]\), is a set of features associated with each point in \(\mathbf{X}\). In particular, this paper considers point clouds with no input attributes1 and focuses on transmitting only the coordinates \(\mathbf{X}\). Following the convention, we set the input attributes \(\mathbf{F}\) to an all-ones vector with \(d=1\).
Footnote 1: Nevertheless, the point clouds in the intermediate layers of SEPT can have non-trivial attributes/features. For example, the neighboring information is contained in the attributes of the downsampled points.
The detailed architecture of SEPT is presented in Fig. 1, where we denote the DeepJSCC encoder and decoder by \(h(\cdot)\) and \(g(\cdot)\), respectively. In the big picture, the encoder \(h(\cdot)\) first maps the input 3D point cloud \(\mathcal{P}\) to a latent vector \(\mathbf{\tilde{z}}\in\mathbb{R}^{n}\) where \(n\) is the available channel bandwidth. Then, we power normalize the latent vector to obtain a codeword \(\mathbf{z}\in\mathbb{R}^{n}\) and transmit it to the receiver via discrete-time analog transmission (DTAT) [26]. In particular, instead of imposing a stringent power constraint such that the power of each codeword \(\mathbf{z}\) is bounded by a power budget \(nP/2\), we adopt a more flexible average power constraint [27]: \(\mathbb{E}||\mathbf{z}||^{2}_{2}\leq nP/2\), whereby the power of codewords associated with different point clouds can be adjusted judiciously. To achieve this, we record the moving
Figure 1: The detailed architecture of SEPT with an encoder \(h(\cdot)\) and a decoder \(g(\cdot)\). For clarity, we label the dimension information of the point cloud after each processing module. Take \((\frac{N}{4},d_{f})\) after the first point transformer at the encoder for an example, \(\frac{N}{4}\) is the number of points after the point transformer and \(d_{f}\) is the dimension of feature vectors.
mean \(\mu\) and deviation \(\sigma\) of \(\mathbf{\tilde{z}}\) during the training phase. Then, in the inference phase, the latent vector \(\mathbf{\tilde{z}}\) is normalized using \((\mu,\sigma)\), yielding
\[\mathbf{z}=(\mathbf{\tilde{z}}-\mu)/\sigma. \tag{1}\]
It is worth noting that the codeword \(\mathbf{z}\) is converted to a complex vector, \(\mathbf{z}^{\prime}\in\mathbb{C}^{n/2}\), when passing through the complex AWGN channel. The channel use per point (CPP) is given by \(\frac{n}{2N}\).
At the receiver, the received signal \(\mathbf{y}^{\prime}\in\mathbb{C}^{n/2}\) is a noisy version of \(\mathbf{z}^{\prime}\):
\[\mathbf{y}^{\prime}=\mathbf{z}^{\prime}+\mathbf{w}, \tag{2}\]
where \(\mathbf{w}\in\mathbb{C}^{n/2}\) denotes a complex AWGN vector with independent and identically distributed elements, \(\mathbf{w}\sim\mathcal{CN}(0,N_{0})\). The channel signal-to-noise ratio (SNR) is defined as \(\mathrm{SNR}=10\log_{10}\frac{P}{N_{0}}\). Without loss of generality, we assume \(P=1\) in the sequel. Upon receiving \(\mathbf{y}^{\prime}\), we first convert it to a real vector \(\mathbf{y}\in\mathbb{R}^{n}\) and feed it into the decoder \(g(\cdot)\) to obtain a reconstructed point cloud \(\hat{\mathcal{P}}\).
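A minimal sketch of this transmission chain is given below; the way the \(n\) real symbols are paired into \(n/2\) complex symbols is one possible choice and is not specified above (\(n\) is assumed even).

```python
# Transmission chain of Eqs. (1)-(2): normalize the latent vector with the moving
# statistics recorded during training, map it to n/2 complex symbols, and add
# complex AWGN at the given channel SNR (P = 1).
import numpy as np

def transmit(z_tilde, mu, sigma, snr_db, seed=0):
    rng = np.random.default_rng(seed)
    z = (z_tilde - mu) / sigma                    # average power normalization, Eq. (1)
    half = len(z) // 2
    z_c = z[:half] + 1j * z[half:]                # real -> complex mapping (one possible choice)
    n0 = 10.0 ** (-snr_db / 10.0)                 # SNR = 10 log10(P / N0) with P = 1
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(half) + 1j * rng.standard_normal(half))
    y_c = z_c + noise                             # AWGN channel, Eq. (2)
    return np.concatenate([y_c.real, y_c.imag])   # back to a real vector y for the decoder
```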
## III Methodology
This section details our design of the encoder and decoder functions using neural network architectures, and explains how features are extracted from the original point cloud \(\mathcal{P}\) via downsampling and self-attention layers and how the point cloud \(\hat{\mathcal{P}}\) is reconstructed from the noisy latent vector via refinement and up-sampling layers.
### _SEPT encoder_
The encoder of SEPT consists of three main modules: downsampling, self-attention, and max pooling, as shown in Fig. 1.
**Downsampling.** The objective of the downsampling module is to reduce the number of points in the input point cloud. Let \(\mathcal{P}_{1}=(\mathbf{X}_{1},\mathbf{F}_{1})\) and \(\mathcal{P}_{2}=(\mathbf{X}_{2},\mathbf{F}_{2})\) denote the input and output point clouds of a downsampling module, respectively, where \(\mathbf{X}_{2}\subset\mathbf{X}_{1}\). In particular,
* In order to achieve a more representative point cloud, it is crucial to disperse the selected points, i.e., \(\mathbf{X}_{2}\), as widely as possible, ensuring sufficient coverage across \(\mathbf{X}_{1}\).
* The clipped points of \(\mathcal{P}_{1}\) will be embedded into the features of \(\mathcal{P}_{2}\) to facilitate reconstruction at the receiver.
To the above ends, SEPT uses the farthest point sampling (FPS) algorithm to generate \(\mathbf{X}_{2}\), as shown in Fig. 2(a). To generate the feature vector of the \(i\)-th point in \(\mathbf{X}_{2}\), denoted by \(\mathbf{f}_{i}^{(2)}\in\mathbb{R}^{d_{f_{2}}}\), we first find its \(k\)-nearest neighbors within a given radius \(r\) in \(\mathbf{X}_{1}\) and denote them by \(\mathbb{N}(i)\).2 Then, we concatenate the feature vectors with their coordinates of the points in \(\mathbb{N}(i)\) and organise them into a tensor, denoted by \(\mathbf{T}_{\mathbb{N}(i)}\in\mathbb{R}^{(d_{f_{1}}+3)\times k\times N_{1}}\) where \(N_{1}\) denotes the cardinality of \(\mathbf{X}_{1}\), and feed this tensor to a 2D convolutional layer followed by batch normalization, ReLU, and max pooling. In each downsampling module, we set the cardinality of \(\mathbf{X}_{2}\) to be \(1/4\) of that of \(\mathbf{X}_{1}\), i.e., \(|\mathbf{X}_{2}|=|\mathbf{X}_{1}|/4\).
Footnote 2: If the \(i\)-th (\(i<k\)) neighbor has a distance larger than \(r\) from the sampled point, we will use the nearest neighbor to replace it.
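For reference, a plain NumPy sketch of FPS is given below: starting from an arbitrary point, it greedily adds the point farthest from the set selected so far, which yields the wide coverage required above.

```python
# Farthest point sampling (FPS): greedily pick the point farthest from all points
# selected so far, producing m well-spread indices into the input cloud.
import numpy as np

def farthest_point_sampling(xyz, m):
    # xyz: (N, 3) coordinates; returns indices of m selected points
    n = xyz.shape[0]
    chosen = np.zeros(m, dtype=int)
    dist = np.full(n, np.inf)
    chosen[0] = 0                                   # start from an arbitrary point
    for i in range(1, m):
        d = np.sum((xyz - xyz[chosen[i - 1]]) ** 2, axis=1)
        dist = np.minimum(dist, d)                  # distance to the nearest chosen point
        chosen[i] = int(np.argmax(dist))            # farthest from the chosen set
    return chosen
```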
**Self-Attention.** In SEPT, each downsampling module is followed by a self-attention module [5], a point cloud processing technique that is capable of extracting rich neighboring information. Denote the input and output point clouds of the self-attention layer by \(\mathcal{P}_{1}=(\mathbf{X}_{1},\mathbf{F}_{1})\) and \(\mathcal{P}_{2}=(\mathbf{X}_{1},\mathbf{F}_{2})\), respectively. As shown in Fig. 2(b), the self-attention layer refines the features of each point in \(\mathcal{P}_{1}\). The inner operations can be written as
\[\mathbf{f}_{i}^{(2)}\!=\!\!\sum_{j\in\mathbb{N}(i)}\!\!\text{Softmax}\big{(}\gamma (\phi(\mathbf{f}_{i}^{(1)})\!-\!\phi(\mathbf{f}_{j}^{(1)}))\!+\!\delta\big{)}\!\odot \!(\alpha(\mathbf{f}_{j}^{(1)})\!+\!\delta),\]
where \(\mathbf{f}_{i}^{(2)}\) denotes the refined feature vector of the \(i\)-th point; \(\gamma,\phi,\alpha:\mathbb{R}^{d_{f}}\rightarrow\mathbb{R}^{d_{f}}\) are realized by MLPs; \(\delta=\theta(\mathbf{x}_{i}-\mathbf{x}_{j})\) is the positional information and \(\theta:\mathbb{R}^{3}\rightarrow\mathbb{R}^{d_{f}}\) is an MLP layer for positional embedding; \(\odot\) denotes the element-wise product. That is, we adopt vector attention [5], as opposed to the standard scalar dot-product attention used in language and vision transformer models, for better performance.
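A compact PyTorch sketch of this vector attention is given below; the MLP sizes and the \(k\)-nearest-neighbor construction of \(\mathbb{N}(i)\) are illustrative assumptions rather than the exact configuration of the point transformer used in SEPT.

```python
# Vector attention over each point's k nearest neighbors: attention weights come
# from gamma applied to feature differences plus a positional term delta, and are
# applied elementwise to (alpha(f_j) + delta).
import torch
import torch.nn as nn

class VectorAttention(nn.Module):
    def __init__(self, d_f=256, k=16):
        super().__init__()
        self.k = k
        self.phi = nn.Linear(d_f, d_f)                                  # shared query/key map
        self.alpha = nn.Linear(d_f, d_f)                                # value map
        self.gamma = nn.Sequential(nn.Linear(d_f, d_f), nn.ReLU(), nn.Linear(d_f, d_f))
        self.theta = nn.Sequential(nn.Linear(3, d_f), nn.ReLU(), nn.Linear(d_f, d_f))

    def forward(self, x, f):
        # x: (N, 3) coordinates, f: (N, d_f) features
        idx = torch.cdist(x, x).topk(self.k, largest=False).indices     # (N, k) neighbor ids
        xj, fj = x[idx], f[idx]                                          # (N, k, 3), (N, k, d_f)
        delta = self.theta(x.unsqueeze(1) - xj)                          # positional encoding
        attn = torch.softmax(
            self.gamma(self.phi(f).unsqueeze(1) - self.phi(fj)) + delta, dim=1)
        return (attn * (self.alpha(fj) + delta)).sum(dim=1)              # refined (N, d_f) features
```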
As shown in Fig. 1, after two consecutive downsampling and self-attention module pairs, the final downsized point cloud is obtained, which we denote by \(\mathcal{P}^{*}=(\mathbf{X}^{*},\mathbf{F}^{*})\), where \(\mathbf{X}^{*}\in\mathbb{R}^{(N/16)\times 3}\) and \(\mathbf{F}^{*}\in\mathbb{R}^{(N/16)\times n}\). We emphasize that the features \(\mathbf{F}^{*}\) are generated by neural networks and can be optimized to be robust to noise, thanks to end-to-end learning.
Figure 2: The inner architectures of (a) the downsampling module, (b) the point transformer layer, and (c) the \(\ell\)-th block of the offset-based up-sampling module, \(\ell\in[1,L]\).
On the other hand, the coordinates \(\mathbf{X}^{*}\) are obtained from FPS and are susceptible to noise. Our empirical results indicate that \(\mathbf{X}^{*}\) has to be transmitted to the receiver reliably via digital communications. Failure to do so results in a substantial degradation in the reconstruction performance of the point cloud. Digital transmission of \(\mathbf{X}^{*}\), however, results in two problems: 1) the cliff and leveling effects; 2) excessive channel usage (detailed later in Section IV-B). In this context, SEPT eliminates the need for coordinate (\(\mathbf{X}^{*}\)) transmission and instead focuses solely on transmitting the features (\(\mathbf{F}^{*}\)) to the receiver. To be precise, SEPT learns to encode the global features in \(\mathbf{F}^{*}\) and the decoder is trained to reconstruct the entire point cloud from the global features without the aid of the coordinates. By doing so, SEPT significantly reduces the amount of data that needs to be transmitted, leading to more efficient and streamlined communication.
**Max Pooling.** The last step at the encoder is to transform \(\mathbf{F}^{*}\in\mathbb{R}^{(N/16)\times n}\) to the latent vector \(\tilde{\mathbf{z}}\in\mathbb{R}^{n}\). A natural solution is to use an MLP for each \(\mathbf{f}^{*}_{i}\in\mathbb{R}^{n}\). In contrast, SEPT applies max pooling over the \(N/16\) points to generate the \(n\)-dimensional vector \(\tilde{\mathbf{z}}\) where \(n\) is the available channel bandwidth. The advantage of max pooling will be demonstrated in Section IV-B via an ablation study.
### _SEPT decoder_
The decoder of SEPT consists of two modules: latent reconstruction and refinement, and offset-based up-sampling.
**Latent Reconstruction and Refinement.** The latent reconstruction module takes the noisy latent vector as input to reconstruct \(\mathcal{P}^{*}\). As shown in Fig. 1, given the received signal \(\mathbf{y}\), we first use a TransConv layer, which is essentially a 1D deconvolution with a unit stride, to generate the initial estimate of \(\mathbf{F}^{*}\), denoted by \(\mathbf{F}^{\prime}\in\mathbb{R}^{(N/16)\times n}\). Then, we employ a coordinate reconstruction layer \(\Psi^{\prime}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{3}\), which is composed of MLP layers and a ReLU function, operates on each row of \(\mathbf{F}^{\prime}\) to generate an initial estimate of the coordinates:
\[\mathbf{X}^{\prime}=\Psi^{\prime}(\mathbf{F}^{\prime}), \tag{3}\]
where \(\mathbf{X}^{\prime}\in\mathbb{R}^{N/16\times 3}\). The initial estimates \((\mathbf{X}^{\prime},\mathbf{F}^{\prime})\) can be erroneous due to noise. Therefore, we further use a self-attention module, denoted as SA, to refine the features:
\[\mathbf{F}^{\prime\prime}=\text{SA}(\mathbf{X}^{\prime},\mathbf{F}^{\prime}). \tag{4}\]
Next, a new coordinate reconstruction layer, \(\Psi^{\prime\prime}\), is applied to \(\mathbf{F}^{\prime\prime}\) to produce a refined estimation of coordinates, \(\mathbf{X}^{\prime\prime}\). Our refinement module is shown to be very effective in denoising the coordinates and features. An ablation study will be provided in Section IV-B.
**Offset-Based Up-Sampling.** Finally, we employ an offset-based up-sampling module [14] on \((\mathbf{X}^{\prime\prime},\mathbf{F}^{\prime\prime})\) for point cloud reconstruction. For the \(i\)-th point in the input point cloud, whose coordinates and features are denoted by \((\mathbf{x}_{i},\mathbf{f}_{i})\), this module generates \(L\) new points \(\{(\mathbf{x}^{\ell}_{i},\mathbf{f}^{\ell}_{i}),\ell\in[1,L]\}\) as:
\[\mathbf{x}^{\ell}_{i}=\mathbf{x}_{i}+s\cdot O_{\ell}(\mathbf{f}_{i}), \tag{5}\]
\[\mathbf{f}^{\ell}_{i}=G_{\ell}(\mathbf{f}_{i}), \tag{6}\]
where \(O_{\ell}:\mathbb{R}^{d_{f}}\rightarrow[-1,1]^{3}\) is an MLP layer followed by a \(\tanh\) function that aims to generate an offset vector; \(G_{\ell}\) is comprised of MLPs and a ReLU function that maps the input feature to a new one with the same dimension. In particular, \(s\) is a scaling factor for the offsets. Compared with [14], SEPT uses a relatively large \(s=0.1\) to give the up-sampling module more freedom for better performance, considering the additional noise introduced by the wireless channel. The detailed architectures for \(O_{\ell}\) and \(G_{\ell}\) are shown in Fig. 2(c).
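The sketch below illustrates one such up-sampling block; the internal layer sizes of \(O_{\ell}\) and \(G_{\ell}\) are placeholders for the blocks of Fig. 2(c).

```python
# One offset-based up-sampling block implementing Eqs. (5)-(6): each input point
# spawns L children whose coordinates add a learned offset (bounded by tanh and
# scaled by s) and whose features are re-mapped by a per-child MLP.
import torch
import torch.nn as nn

class OffsetUpsample(nn.Module):
    def __init__(self, d_f=256, L=4, s=0.1):
        super().__init__()
        self.s = s
        self.offset = nn.ModuleList([
            nn.Sequential(nn.Linear(d_f, d_f), nn.ReLU(), nn.Linear(d_f, 3), nn.Tanh())
            for _ in range(L)])                                    # O_l: features -> [-1, 1]^3
        self.remap = nn.ModuleList([
            nn.Sequential(nn.Linear(d_f, d_f), nn.ReLU())
            for _ in range(L)])                                    # G_l: features -> features

    def forward(self, x, f):
        # x: (N, 3), f: (N, d_f)  ->  (L*N, 3), (L*N, d_f)
        new_x = torch.cat([x + self.s * O(f) for O in self.offset], dim=0)   # Eq. (5)
        new_f = torch.cat([G(f) for G in self.remap], dim=0)                 # Eq. (6)
        return new_x, new_f
```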
In SEPT, we use two up-sampling modules, and \(L\) is set to \(4\) in each module. Denoting by \(\hat{\mathcal{P}}=(\hat{\mathbf{X}},\hat{\mathbf{F}})\) the final output of the up-sampling modules, the Chamfer distance between \(\mathbf{X}\) and \(\hat{\mathbf{X}}\), denoted by \(d_{c\text{d}}^{2}(\mathbf{X},\hat{\mathbf{X}})\), is used as the loss function:
\[\mathcal{L}_{\text{CD}}=\frac{1}{N}\sum_{\mathbf{x}\in\mathcal{P}}\min_{\mathbf{y}\in\hat{\mathcal{P}}}||\mathbf{x}-\mathbf{y}||_{2}^{2}+\frac{1}{N}\sum_{\mathbf{y}\in\hat{\mathcal{P}}}\min_{\mathbf{x}\in\mathcal{P}}||\mathbf{y}-\mathbf{x}||_{2}^{2}. \tag{7}\]
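For reference, a plain NumPy implementation of this loss is given below; each term is averaged over its own point set, which coincides with Eq. (7) when both clouds contain \(N\) points.

```python
# Chamfer distance of Eq. (7): sum of mean nearest-neighbor squared distances
# in both directions between the original and reconstructed clouds.
import numpy as np

def chamfer(P, Q):
    # P: (N, 3) original points, Q: (M, 3) reconstructed points
    d = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)   # (N, M) squared distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```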
## IV Numerical Experiments
This section presents the results of our numerical experiments to evaluate the reconstruction performance of SEPT. We consider the point cloud data from ShapeNet [28], which contains about \(51000\) different shapes, and we sample each point cloud to \(N=2048\) points using the FPS algorithm. In both the SEPT encoder and decoder, the dimension of the intermediate attributes is set to \(d_{f}=256\) and the number of neurons in the MLPs of the coordinate reconstruction layer is set to \(128\). During training, we adopt the Adam optimizer with a varying learning rate, which is initialized to 0.001 and reduced by a factor of \(0.5\) every \(20\) epochs. We set the number of epochs to \(200\) and the batch size to \(32\). Two conventional peak signal-to-noise ratio (PSNR) measures [29], D1 and D2, are adopted to evaluate the reconstruction quality. Specifically, \(\text{D1}(\mathcal{A},\mathcal{B})\) measures the average point-to-point geometric distance between each point in point cloud \(\mathcal{A}\) and its nearest neighbor in point cloud \(\mathcal{B}\). To be precise, we first calculate the mean squared error, \(e_{\mathcal{A},\mathcal{B}}^{D1}\):
\[e_{\mathcal{A},\mathcal{B}}^{D1} =\frac{1}{N}\sum_{\mathbf{a}_{i}\in\mathcal{A}}||\mathbf{a}_{i}-\mathbf{b}_{ k}||_{2}^{2}\] \[k =\operatorname*{arg\,min}_{j\in[N]}||\mathbf{a}_{i}-\mathbf{b}_{j}||_{2}^{2}. \tag{8}\]
then, D1 is calculated as [29]:
\[\text{D1}(\mathcal{A},\mathcal{B})=10\log_{10}\frac{3\gamma^{2}}{\max(e_{\mathcal{A},\mathcal{B}}^{D1},e_{\mathcal{B},\mathcal{A}}^{D1})}, \tag{9}\]
where factor \(3\) in the numerator is due to the 3D coordinates used in the representation, and the peak, \(\gamma\), is set to unity due to the fact that the input points are normalized within the range \([0,1]\). Similarly, D2 evaluates the point-to-plane distance between \(\mathcal{A}\) and \(\mathcal{B}\) and the error term for D2 is defined as:
\[e_{\mathcal{A},\mathcal{B}}^{D2}=\frac{1}{N}\sum_{\mathbf{a}_{i}\in\mathcal{A}}\left((\mathbf{a}_{i}-\mathbf{b}_{k})\cdot\mathbf{n}_{i}\right)^{2}, \tag{10}\]
where \(\mathbf{n}_{i}\) is the normal vector corresponding to \(\mathbf{a}_{i}\) and \(\mathbf{b}_{k}\) is the nearest neighbor of \(\mathbf{a}_{i}\). After obtaining (10), we follow the same formula in (9) to calculate \(\text{D2}(\mathcal{A},\mathcal{B})\).
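For reference, the D1 computation of Eqs. (8)-(9) can be sketched as follows; D2 is obtained analogously by projecting the residuals onto the normals before squaring.

```python
# D1 (point-to-point) PSNR of Eqs. (8)-(9); the peak gamma is 1 because the input
# coordinates are normalized to [0, 1].
import numpy as np

def d1_psnr(A, B, gamma=1.0):
    # A, B: (N, 3) point clouds
    d = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    e_ab = d.min(axis=1).mean()        # each point of A to its nearest neighbor in B
    e_ba = d.min(axis=0).mean()        # and vice versa
    return 10 * np.log10(3 * gamma ** 2 / max(e_ab, e_ba))
```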
### _The reconstruction performance_
We first evaluate the reconstruction performance of SEPT with various CPP and channel SNR values. Two separate source-channel coding schemes are considered as benchmarks. For source coding, the first benchmark uses the standard octree-based point cloud compression scheme: MPEG G-PCC [3]. The second benchmark uses the state-of-the-art deep learning-based point cloud compression scheme, which we refer to as DPCC [9]. Both schemes are protected by Polar codes with rate \(\{1/2,3/4\}\) and modulated by BPSK, QPSK, or 16QAM for transmission. We also provide the results for DPCC delivered at the finite block length converse bound [30] for a block error rate of \(\epsilon=10^{-3}\).
The simulation results are presented in Fig. 3 (a) and (b), where we fix the channel bandwidth to \(n=200\). In the simulations, a specific SEPT model is trained for each channel SNR value. As shown, SEPT is significantly better than the separation based scheme with MPEG G-PCC3, demonstrating its efficacy in feature extraction and robustness to channel noise. SEPT also outperforms the separation-based scheme with DPCC [9], especially in the low-SNR regime. Note that DPCC can achieve comparable performance to SEPT at certain SNRs, if proper coded modulation schemes are selected.
Footnote 3: To obtain the results of G-PCC, we use an average \(n=860\) in the simulations. Despite the much larger \(n\) compared with that used in SEPT, MPEG G-PCC is still much worse than deep learning-based schemes. This observation is also reported in [9].
Next, we show how SEPT eliminates the cliff and leveling effects. To this end, Fig. 3(c) evaluates the performance of the SEPT model trained with \(\mathrm{SNR}_{\text{train}}=5\) dB under various channel conditions, \(\mathrm{SNR}_{\text{test}}=\{0,2.5,5,7.5,10\}\) dB. For these simulations, we set \(n=50\). As shown, the single SEPT model trained with \(\mathrm{SNR}_{\text{train}}=5\) dB is robust to channel variations and performs well under various test SNRs. Importantly, SEPT eliminates the cliff and leveling effects: its performance degrades gracefully with the decrease in the test SNR and improves when the test SNR increases, while the digital benchmarks suffer from both the cliff and leveling effects if the modulation order remains unchanged, as shown in Fig. 3(a) and (b).
The performance of the proposed SEPT with respect to different numbers of channel uses \(n\) is shown in Fig. 4. Two SNR values, \(\{0,10\}\) dB, are considered, and we compare SEPT with the DPCC baseline delivered at finite length capacity. We can observe that D1 and D2 almost saturate when \(n\geq 200\). This might be because the max pooling operation at the transmitter focuses more on the global features, so some of the fine details may be lost.
We also provide a visualization of the reconstructed point clouds with \(n=300\) and \(\mathrm{SNR}\in\{0,5\}\) dB. As shown in Fig. 5, SEPT yields visually pleasing results even when the \(\mathrm{SNR}\) is as low as 0 dB.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline SNR & **0 dB** & **5 dB** & **10 dB** \\ \hline Max Pooling & 34.33 & 35.27 & 35.63 \\ Linear Projection & 26.41 & 30.22 & 31.27 \\ \hline \hline \end{tabular}
\end{table}
Table I: D1 performance (in dB) comparison between max pooling and linear projection to generate the channel inputs (\(n=300\)).
Figure 4: The reconstruction performance (D1) of the proposed SEPT with respect to different number of channel uses \(n\). We fix the \(\mathrm{SNR}=10\) dB.
Figure 3: Reconstruction performances of SEPT: (a) \(\&\) (b) the D1 and D2 performances, where \(n=200\); (c) D1 versus the test SNR, where \(n=50\) and the SEPT model is trained with \(\mathrm{SNR}_{\text{train}}=5\) dB.
### _Ablation study_
This section performs ablation studies to evaluate different modules of SEPT.
**Max pooling versus linear projection.** As mentioned in Section III-A, to generate \(\tilde{\mathbf{z}}\) from the feature vectors \(\mathbf{F}^{*}\in\mathbb{R}^{(N/16)\times n}\), we can either use max pooling to produce an \(n\)-dimensional vector or apply MLP layers to \(\mathbf{F}^{*}\) to generate an \(N/16\times t\) matrix satisfying \(Nt/16\approx n\). For comparison, we set \(n=300,t=3\) for the two schemes, and report the reconstruction performance with \(\mathrm{SNR}=\{0,5,10\}\) dB in terms of D1 and D2 in Table I. It is confirmed that max pooling is superior to a linear projection.
**The refinement networks at the decoder.** The initial estimates of both the coordinates and the features, \(\{\mathbf{X}^{\prime},\mathbf{F}^{\prime}\}\), obtained from the 1D deconvolution layer are noisy, and SEPT uses an additional self-attention layer and a coordinate estimation layer to refine these coordinates and features. In this simulation, we show the effectiveness of the proposed refinement neural network by comparing the Chamfer distance (7) between \(\{\mathcal{P}^{\prime\prime},\mathcal{P}\}\) with that of \(\{\mathcal{P}^{\prime},\mathcal{P}\}\), where \(\mathcal{P}\) is the original point cloud. Simulations are performed with \(n=300\) and \(\mathrm{SNR}=0\) dB, and we have \(\mathcal{L}_{\text{CD}}(\mathbf{X}^{\prime},\mathbf{X})=0.76\) and \(\mathcal{L}_{\text{CD}}(\mathbf{X}^{\prime\prime},\mathbf{X})=0.018\) illustrating that the refinement layer provides a much more accurate reconstruction.
**Transmitting downsampled coordinates.** As stated in Section III-B, SEPT only transmits the features \(\mathbf{F}^{*}\) of the downsized point cloud, but not the coordinates \(\mathbf{X}^{*}\). For a comprehensive understanding of SEPT, we further explore a hybrid transmission scheme, where \(\mathbf{X}^{*}\) is transmitted in the digital fashion while \(\mathbf{F}^{*}\) is transmitted with DTAT. Note that this hybrid transmission strategy will still suffer from the cliff and leveling effects.
In this simulation, we consider a \(\mathrm{SNR}=10\) dB and \(n=200\). For the hybrid scheme, we first downsample the original point cloud to \(\mathcal{P}^{*}\) with \(64\) points and then perform max pooling to generate a latent vector \(\tilde{\mathbf{z}}\) with \(n=200\), as SEPT. Then, \(\mathbf{X}^{*}\) is quantized, and the quantized bits are transmitted using \(1/2\)-rate Polar code, and 16QAM modulation. At \(\mathrm{SNR}=10\) dB, this coded modulation scheme achieves zero error probability for coordinate transmission. At the receiver, the coordinate reconstruction layers are no longer needed, since accurate coordinates are already available. The features are refined and the up-sampling blocks are used to reconstruct the final point clouds with \(N\) points. We observe a slight improvement in D1 by sending extra coordinate information from \(35.4\) dB to \(36.2\) dB. However, even if we consider a \(16\)-bit precision of the coordinates as in [14], the extra cost for transmitting the coordinates is \(3072\) bits, occupying \(\approx 900\) excessive complex channel uses (assuming capacity achieving codes), which is unacceptable, given that we only have \(n=200\).
## V Conclusion
Wireless transmission plays a pivotal role in enhancing the mobility and accessibility of 3D point clouds. With limited bandwidth, the SEPT framework proposed in this paper achieves efficient and robust wireless transmission of 3D point clouds, paving the way for realizing immersive user experiences in the metaverse, or collaborative sensing in vehicular networks. Our study highlights two key challenges that merit further investigation:
* The potential for a hybrid scheme that transmits both point cloud coordinates and features for improved performance, albeit at the expense of increased bandwidth utilization. A direction worthy of exploration involves designing a cost-effective hybrid scheme that strikes a balance between performance enhancement and bandwidth efficiency.
* Our findings indicate that the reconstruction performance reaches a saturation point as the CPP increases. This implies that certain intricate details of the point cloud are not fully preserved during feature extraction. To address this issue, it is crucial to develop new encoding and decoding architectures that effectively capture these fine details, enabling progressive performance improvements with higher CPP values.
|
2304.09531 | Hidden AR Process and Adaptive Kalman Filter | The model of partially observed linear system depending on some unknown
parameters is considered. An approximation of the unobserved component is
proposed. This approximation is realized in three steps. First an estimator of
the method of moments of unknown parameter is constructed. Then this estimator
is used for defining the One-step MLE-process and finally the last estimator is
substituted to the equations of Kalman filter. The solution of obtained
equations provide us the approximation (adaptive Kalman filter). The asymptotic
properties of all mentioned estimators and MLE and Bayesian estimators of the
unknown parameters are described. The asymptotic efficiency of adaptive
filtering is discussed. | Yury A. Kutoyants | 2023-04-19T09:39:37Z | http://arxiv.org/abs/2304.09531v1 | # Hidden AR Process and Adaptive Kalman Filter
###### Abstract
The model of a partially observed linear system depending on some unknown parameters is considered. An approximation of the unobserved component is proposed. This approximation is realized in three steps. First, a method of moments estimator of the unknown parameter is constructed. Then this estimator is used to define the One-step MLE-process, and finally the last estimator is substituted into the equations of the Kalman filter. The solution of the obtained equations provides the approximation (adaptive Kalman filter). The asymptotic properties of all the mentioned estimators and of the MLE and Bayesian estimators of the unknown parameters are described. The asymptotic efficiency of the adaptive filtering is discussed.
MSC 2000 Classification: 62M02, 62G10, 62G20.
Key words: Partially observed linear system, hidden Markov process, Kalman filter, parameter estimation, method of moments estimators, MLE and Bayesian estimators, One-step MLE-process, on-line approximation, adaptive Kalman filter.
## 1 Introduction
We are given a linear partially observed system
\[X_{t} =f\,Y_{t-1}+\sigma\,w_{t},\qquad X_{0},\qquad t=1,2,\ldots, \tag{1}\] \[Y_{t} =a\,Y_{t-1}+b\,v_{t},\qquad\ \ Y_{0}, \tag{2}\]
where \(X^{T}=\left(X_{0},X_{1},\ldots,X_{T}\right)\) are the observations and the autoregressive (AR) process \(Y_{t},t\geq 0\) is hidden. Here \(w_{t},t\geq 1\) and \(v_{t},t\geq 1\) are independent standard Gaussian random variables, i.e., \(w_{t}\sim\mathcal{N}\left(0,1\right)\), \(v_{t}\sim\mathcal{N}\left(0,1\right)\). The initial values are \(X_{0}\sim\mathcal{N}\left(0,d_{x}^{2}\right)\) and \(Y_{0}\sim\mathcal{N}\left(0,d_{y}^{2}\right)\) and can be correlated with correlation \(\rho_{xy}=\mathbf{E}X_{0}Y_{0}\). The system is defined by the parameters \(a,b,f,\sigma^{2},d_{x}^{2},d_{y}^{2}\). For the moment it will be convenient to denote \(\vartheta=\left(a,b,f,\sigma^{2}\right)\).
Denote \(\mathfrak{F}_{t}^{X}\) the \(\sigma\)-algebra generated by the first \(t+1\) observations \(X_{0},X_{1},\ldots,X_{t}\). The conditional expectation \(m\left(\vartheta,t\right)=\mathbf{E}_{\vartheta}\left(Y_{t}|\mathfrak{F}_{t}^{X}\right)\) according to the equations of Kalman filter (see, e.g., Theorem 13.4 in [19]) satisfies the equation
\[m\left(\vartheta,t\right)=a\,m\left(\vartheta,t-1\right)+\frac{af\gamma\left( \vartheta,t-1\right)}{\sigma^{2}+f^{2}\gamma\left(\vartheta,t-1\right)}\left[ X_{t}-fm\left(\vartheta,t-1\right)\right],\quad t\geq 1. \tag{3}\]
The initial value is \(m\left(\vartheta,0\right)=\mathbf{E}_{\vartheta}\left(Y_{0}|X_{0}\right)\).
The mean square error \(\gamma\left(\vartheta,t\right)=\mathbf{E}_{\vartheta}\left(Y_{t}-m\left( \vartheta,t\right)\right)^{2}\) is described by the equation
\[\gamma\left(\vartheta,t\right)=a^{2}\gamma\left(\vartheta,t-1\right)+b^{2}- \frac{a^{2}f^{2}\gamma\left(\vartheta,t-1\right)^{2}}{\sigma^{2}+f^{2}\gamma \left(\vartheta,t-1\right)},\qquad\quad t\geq 1 \tag{4}\]
with the initial value \(\gamma\left(\vartheta,0\right)=\mathbf{E}_{\vartheta}\left(Y_{0}-m\left( \vartheta,0\right)\right)^{2}\).
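As an illustration, the recursions (3)-(4) are straightforward to implement; the following sketch simulates the system (1)-(2) and runs the filter for one set of illustrative parameter values (the values and initial conditions are assumptions, not taken from the paper).

```python
# A sketch of the recursions (3)-(4); parameter values, initial conditions,
# and the simulated data below are illustrative assumptions.
import numpy as np

def kalman_filter(X, a, b, f, sigma2, m0=0.0, gamma0=1.0):
    """Return m(theta,t) and gamma(theta,t), t = 0..T, for observations X[0..T]."""
    T = len(X) - 1
    m = np.empty(T + 1)
    gamma = np.empty(T + 1)
    m[0], gamma[0] = m0, gamma0
    for t in range(1, T + 1):
        denom = sigma2 + f**2 * gamma[t - 1]
        m[t] = a * m[t - 1] + a * f * gamma[t - 1] / denom * (X[t] - f * m[t - 1])
        gamma[t] = a**2 * gamma[t - 1] + b**2 - a**2 * f**2 * gamma[t - 1]**2 / denom
    return m, gamma

# simulate the system (1)-(2) and filter it
rng = np.random.default_rng(0)
a, b, f, sigma = 0.7, 1.0, 1.0, 0.5
T = 10_000
Y = np.zeros(T + 1)
X = np.zeros(T + 1)
for t in range(1, T + 1):
    X[t] = f * Y[t - 1] + sigma * rng.standard_normal()
    Y[t] = a * Y[t - 1] + b * rng.standard_normal()

m, gamma = kalman_filter(X, a, b, f, sigma**2)
# the empirical error E(Y_t - m(theta,t))^2 is close to the limit of gamma(theta,t)
print(np.mean((Y[1:] - m[1:])**2), gamma[-1])
```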
We suppose that the observations \(X^{T}\) are given and that some of the parameters are unknown, but their values always satisfy the condition
\[\mathscr{A}_{0}\;:\qquad a^{2}\in[0,1),\qquad b^{2}>0,\qquad\qquad f^{2}>0, \qquad\sigma^{2}>0. \tag{5}\]
This condition is uniform in the following sense. If, for example, the unknown parameter is \(f\in\left(\alpha_{f},\beta_{f}\right)\), then \(\alpha_{f}>0\) or \(\beta_{f}<0\). Our goal is to propose an approximation of \(m\left(\vartheta,t\right),t\geq 1\) in such situations and to describe the error of approximation in the asymptotic of _large samples_, i.e., as \(T\rightarrow\infty\).
The consistent estimation of the parameters \(\left(d_{x}^{2},d_{y}^{2}\right)\) is impossible, and we suppose that these parameters are known. If these values are unknown, then the system (3)-(4) will be solved with some wrong initial values, but, due to the robustness of the solutions, the difference between the solutions with true and wrong initial values is asymptotically negligible under condition \(\mathscr{A}_{0}\). Note as well that the consistent estimation of the parameters \(\vartheta=\left(f,b\right)\) or \(\vartheta=\left(f,a,b\right)\) is impossible because the model (1)-(2) depends on these parameters only through the product \(fb\).
This work is devoted to the problem of estimation of \(m\left(\vartheta,t\right),t=1,\ldots,T\) in situations where some of the parameters of the model (1)-(2) are unknown. As usual in such problems, we first estimate the unknown parameter and then substitute this estimator into the equations (3)-(4). The equations obtained in this way describe an _adaptive Kalman filter_. There exists a wide literature on adaptive filtering for these and similar partially observed systems. The differences between the approaches lie in the construction of the parameter estimators and in the description of the corresponding errors of approximation of \(m\left(\vartheta,\cdot\right)\); see, e.g., [1],[2],[4],[6],[8],[20],[21],[22],[23] and the references therein. There is a large diversity of models (linear and nonlinear), of limits (small noise or large samples) and of methods of adaptive filtering. For the query "adaptive Kalman filter" Google Scholar returns half a million references. Of course, not all of them are exactly what we need, but it nevertheless gives an idea of how important this subject is. We propose one more algorithm realizing such a procedure. Note that since the work of Kalman [11]
the equations (1)-(2) have been considered in more general forms, where \(X_{t}\), \(Y_{t}\) and \(w_{t}\), \(v_{t}\) are vectors and \(f,\sigma,a,b\) are matrices. Our choice of this simplest model was motivated by the simplicity of the calculations, while at the same time the obtained results nevertheless seem to be non-trivial. We expect that the proposed algorithms can be extended to more complicated models and that the results for these models will be similar to those presented in this work.
Note that the problem of parameter estimation for such observation systems is part of a more general class of problems of parameter estimation for hidden Markov processes; see, e.g., [3], [5] and the references therein.
We are interested in the problem of on-line estimation of the conditional expectation (random function) \(m\left(\vartheta,t\right),0<t\leq T\) in various situations where \(\vartheta\) is unknown. For example, \(f=\vartheta\) while \(a\), \(b\) are known. The usual approach in such situations is first to estimate the unknown parameters and then to substitute these estimators into the equations (3)-(4). The most interesting algorithms are, of course, on-line recurrent adaptive filters. The studied algorithms are mainly verified with the help of numerical simulations, which show the reasonable behavior of the adaptive filters.
Our goal is to obtain a good recurrent approximation \(m_{t}^{\star},0<t\leq T\) of the process \(m\left(\vartheta,t\right),0<t\leq T\) in the case of the homogeneous partially observed system (1)-(2) and to discuss the question of asymptotic efficiency of adaptive filters.
The estimation of \(m\left(\vartheta,t\right),t\in\left(0,T\right]\) in this work is realized following the program:
1. _Calculate a preliminary estimator_ \(\bar{\vartheta}_{\tau}\) _on relatively small interval of observations_ \(\left[0,\tau\right]\)_._
2. _Using_ \(\bar{\vartheta}_{\tau}\) _construct the One-step MLE-process_ \(\vartheta_{t,T}^{\star},\tau<t\leq T\)_._
3. _As approximation of_ \(m\left(\vartheta,t\right)\) _we propose_ \(m_{t}^{\star}\) _obtained with the help of K-B equations, where_ \(\vartheta\) _is replaced by_ \(\vartheta_{t,T}^{\star},\tau<t\leq T\)_._
4. _Estimate the error_ \(m_{t}^{\star}-m\left(\vartheta,t\right),\tau<t\leq T\)_._
5. _Discuss the asymptotic efficiency of the adaptive filter._
This means that we have no on-line approximation on the time interval \(\left[0,\tau\right]\), but \(\tau/T\to 0\). Note that the One-step MLE-process used here is Le Cam's well-known One-step MLE, in which the upper limit of the observation window (time \(t\)) is considered as a variable.
Introduce the continuous-time model
\[\mathrm{d}X_{t} =f\left(\vartheta\right)\,Y_{t}\,\mathrm{d}t+\sigma\,\mathrm{d}W _{t}, X_{0}, 0\leq t\leq T, \tag{6}\] \[\mathrm{d}Y_{t} =-a\left(\vartheta\right)\,Y_{t}\,\mathrm{d}t+b\left(\vartheta \right)\,\mathrm{d}V_{t}, Y_{0}, t\geq 0, \tag{7}\]
where \(W_{t},V_{t},t\geq 0\) are independent Wiener processes, \(f\left(\vartheta\right),a\left(\vartheta\right),b\left(\vartheta\right)\) are known smooth functions and \(\vartheta\in\Theta\subset\mathcal{R}^{d}\) is the unknown parameter. Suppose that the observations are \(X^{T}=\left(X_{t},0\leq t\leq T\right)\) and the Markov process \(Y^{T}=\left(Y_{t},0\leq t\leq T\right)\) is hidden.
We have already applied this construction (steps 1-2, steps 1-4, or steps 1-5) to four different models of observations: to the model of continuous-time observations like (6)-(7) with small noise in both equations [13], [18] (steps 1-4); to the model (6)-(7) with small noise in the equation (6) only [15], [16] (steps 1-5); to the model of a hidden telegraph process [12] (steps 1-2); and to the model of observations (6)-(7) in the asymptotics \(T\to\infty\) [14] (steps 1-2), [17] (steps 1-5).
Note that in the Kalman filtering theory the model of observations is slightly different and can be written in our case as follows
\[X_{t} =f\,Y_{t}+\sigma\,w_{t},\qquad X_{0},\qquad t=1,2,\ldots, \tag{8}\] \[Y_{t+1} =a\,Y_{t}+b\,v_{t+1},\qquad\;\;\;Y_{0}. \tag{9}\]
The link between these two models and the modified equations (3)-(4) can be found in [19], Corollary 3 of Theorem 13.4. Remark that if we consider the discrete-time model as a discrete-time approximation of the model (6)-(7), then the equations (1)-(2) seem to fit better than (8)-(9).
In this work we propose the adaptive filter for the model (1)-(2) (steps 1-5). The construction of the preliminary method of moments estimators follows [12], and the exposition is in some sense similar to that of the continuous-time case in [17], where the model of observations is (6)-(7).
In the next section we study the method of moments estimators (preliminary estimators) of the parameters of the system (23)-(24). Then the Fisher informations are calculated for the different parameters (section 3). Having a preliminary estimator and the Fisher information, we introduce the One-step MLE-processes and study their asymptotic properties (section 4). The properties of the MLE and the Bayesian estimator are described in section 5. The One-step MLE-process is substituted into the equations (3)-(4), which provides the adaptive filter (section 6). The last, seventh, section is devoted to the question of asymptotic efficiency of the proposed adaptive filters.
## 2 Method of moments estimators
Introduce three statistics
\[S_{1,T}\left(X^{T}\right) =\frac{1}{T}\sum_{t=1}^{T}\left(X_{t}-X_{t-1}\right)^{2},\quad S_ {2,T}\left(X^{T}\right)=\frac{1}{T}\sum_{t=2}^{T}\left(X_{t}-X_{t-1}\right) \left(X_{t-1}-X_{t-2}\right),\] \[S_{3,T}\left(X^{T}\right) =\frac{1}{T}\sum_{t=3}^{T}\left(X_{t}-X_{t-1}\right)\left(X_{t-2 }-X_{t-3}\right)\]
and study their asymptotic (\(T\to\infty\)) behavior. We always suppose that the condition \(\mathscr{A}_{0}\) is fulfilled, and the true value is denoted by \(\vartheta_{0}\).
Denote
\[\Phi_{1}\left(\vartheta\right)=\frac{2f^{2}b^{2}}{1+a}+2\sigma^{2},\quad\Phi_{2} \left(\vartheta\right)=\frac{f^{2}b^{2}\left(a-1\right)}{1+a}-\sigma^{2},\quad \Phi_{3}\left(\vartheta\right)=\frac{f^{2}b^{2}a\left(a-1\right)}{\left(1+a \right)}.\]
**Lemma 1**.: _We have the limits_
\[S_{1,T}\left(X^{T}\right) \longrightarrow \Phi_{1}\left(\vartheta_{0}\right), \tag{10}\] \[S_{2,T}\left(X^{T}\right) \longrightarrow \Phi_{2}\left(\vartheta_{0}\right),\] (11) \[S_{3,T}\left(X^{T}\right) \longrightarrow \Phi_{3}\left(\vartheta_{0}\right), \tag{12}\]
_and there exist constants \(C_{1}>0\), \(C_{2}>0\), \(C_{3}>0\) such that_
\[\mathbf{E}_{\vartheta_{0}}\left|S_{1,T}\left(X^{T}\right)-\Phi_{1 }\left(\vartheta_{0}\right)\right|^{2}\leq\frac{C_{1}}{T},\quad\mathbf{E}_{ \vartheta_{0}}\left|S_{2,T}\left(X^{T}\right)-\Phi_{2}\left(\vartheta_{0} \right)\right|^{2}\leq\frac{C_{2}}{T}, \tag{13}\] \[\mathbf{E}_{\vartheta_{0}}\left|S_{3,T}\left(X^{T}\right)-\Phi_{3 }\left(\vartheta_{0}\right)\right|^{2}\leq\frac{C_{3}}{T}. \tag{14}\]
Proof.: According to (23)
\[\frac{1}{T}\sum_{t=1}^{T}\left(X_{t}-X_{t-1}\right)^{2} =\frac{f_{0}^{2}}{T}\sum_{t=1}^{T}\left(Y_{t-1}-Y_{t-2}\right)^{2} +\frac{2f_{0}\sigma_{0}}{T}\sum_{t=1}^{T}\left(Y_{t-1}-Y_{t-2}\right)\left(w_ {t}-w_{t-1}\right)\] \[\qquad+\frac{\sigma_{0}^{2}}{T}\sum_{t=1}^{T}\left(w_{t}-w_{t-1} \right)^{2}.\]
The Gaussian time series \(Y_{t},t\geq 1\) is exponentially mixing with the stationary (invariant) Gaussian distribution \(\mathcal{N}\left(0,\frac{b_{0}^{2}}{1-a_{0}^{2}}\right)\) and it is independent of \(w_{t},t\geq 1\). Therefore by the law of large numbers we have the convergences
\[\frac{f_{0}^{2}}{T}\sum_{t=1}^{T}\left(Y_{t-1}-Y_{t-2}\right)^{2} =\frac{f_{0}^{2}}{T}\sum_{t=1}^{T}\left(\left(a_{0}-1\right)Y_{t-2}+b_{0}v_{t -1}\right)^{2}\] \[=\frac{f_{0}^{2}\left(1-a_{0}\right)^{2}}{T}\sum_{t=1}^{T}Y_{t-2} ^{2}-\frac{2b_{0}\left(1-a_{0}\right)f_{0}^{2}}{T}\sum_{t=1}^{T}Y_{t-2}v_{t-1} +\frac{f_{0}^{2}b_{0}^{2}}{T}\sum_{t=1}^{T}v_{t-1}^{2}\] \[\longrightarrow\frac{f_{0}^{2}b_{0}^{2}\left(1-a_{0}\right)^{2}} {1-a_{0}^{2}}+f_{0}^{2}b_{0}^{2}=\Phi_{1}\left(\vartheta_{0}\right)-2\sigma_ {0}^{2},\] \[\frac{2f_{0}\sigma_{0}}{T}\sum_{t=1}^{T}\left(Y_{t-1}-Y_{t-2} \right)\left(w_{t}-w_{t-1}\right)\longrightarrow 0,\] \[\frac{\sigma_{0}^{2}}{T}\sum_{t=1}^{T}\left(w_{t}-w_{t-1}\right)^{ 2}\longrightarrow 2\sigma_{0}^{2},\]
which proves (10).
To prove (11) we write
\[\left(X_{t}-X_{t-1}\right)\left(X_{t-1}-X_{t-2}\right)\] \[\qquad=\left[f_{0}\left(Y_{t-1}-Y_{t-2}\right)+\sigma_{0}\left(w_{ t}-w_{t-1}\right)\right]\left[f_{0}\left(Y_{t-2}-Y_{t-3}\right)+\sigma_{0}\left(w_{ t-1}-w_{t-2}\right)\right]\] \[\qquad=\left[f_{0}\left(a_{0}-1\right)Y_{t-2}+f_{0}b_{0}v_{t-1}+ \sigma_{0}\left(w_{t}-w_{t-1}\right)\right]\] \[\qquad\qquad\qquad\times\frac{1}{a_{0}}\left[f_{0}\left(a_{0}-1 \right)Y_{t-2}+f_{0}b_{0}a_{0}v_{t-2}+\sigma_{0}a_{0}\left(w_{t-1}-w_{t-2} \right)\right].\]
Therefore
\[\frac{1}{T}\sum_{t=2}^{T}\left(X_{t}-X_{t-1}\right)\left(X_{t-1} -X_{t-2}\right) =\frac{f_{0}^{2}\left(a_{0}-1\right)^{2}}{a_{0}T}\sum_{t=2}^{T}Y_ {t-2}^{2}+\frac{f_{0}^{2}b_{0}\left(a_{0}-1\right)}{T}\sum_{t=2}^{T}Y_{t-2}v_ {t-2}\] \[\qquad+\frac{\sigma_{0}^{2}}{T}\sum_{t=2}^{T}\left(w_{t}-w_{t-1} \right)\left(w_{t-1}-w_{t-2}\right)+o\left(1\right).\]
We have
\[\frac{1}{T}\sum_{t=2}^{T}Y_{t-2}v_{t-2}=\frac{a_{0}}{T}\sum_{t=2} ^{T}Y_{t-3}v_{t-2}+\frac{a_{0}b_{0}}{T}\sum_{t=2}^{T}v_{t-2}^{2}\longrightarrow a _{0}b_{0},\] \[\frac{1}{T}\sum_{t=2}^{T}\left(w_{t}-w_{t-1}\right)\left(w_{t-1}- w_{t-2}\right)\longrightarrow-1.\]
Finally
\[\frac{1}{T}\sum_{t=2}^{T}\left(X_{t}-X_{t-1}\right)\left(X_{t-1} -X_{t-2}\right) \longrightarrow\frac{f_{0}^{2}b_{0}^{2}\left(a_{0}-1\right)^{2}}{a _{0}\left(1-a_{0}^{2}\right)}+f_{0}^{2}b_{0}^{2}a_{0}\left(a_{0}-1\right)- \sigma^{2}\] \[=\Phi_{2}\left(\vartheta_{0}\right).\]
The last convergence (12) we obtain using the similar arguments as follows:
\[\left(X_{t}-X_{t-1}\right)\left(X_{t-2}-X_{t-3}\right)\] \[\qquad=\left[f_{0}\left(Y_{t-1}-Y_{t-2}\right)+\sigma_{0}\left(w_ {t}-w_{t-1}\right)\right]\left[f_{0}\left(Y_{t-3}-Y_{t-4}\right)+\sigma_{0} \left(w_{t-2}-w_{t-3}\right)\right]\] \[\qquad=\left[f_{0}\left(a_{0}Y_{t-2}-Y_{t-2}\right)+f_{0}b_{0}v_ {t-1}+\sigma_{0}\left(w_{t}-w_{t-1}\right)\right]\] \[\qquad\qquad\times\frac{1}{a_{0}}\left[f_{0}\left(a_{0}Y_{t-3}-Y_ {t-3}\right)+f_{0}b_{0}v_{t-3}+a_{0}\sigma_{0}\left(w_{t-2}-w_{t-3}\right) \right].\]
Here we used the equation (24) and the equality \(a_{0}Y_{t-4}=Y_{t-3}-b_{0}v_{t-3}\). Further, as \(w_{t},t\geq 1\) are independent of \(Y_{t},t\geq 1\) and \(v_{t},t\geq 1\) we can write
\[S_{3,T}\left(X^{T}\right)=\frac{1}{a_{0}T}\sum_{t=3}^{T}\left[f_{0}\left(a_{0} -1\right)Y_{t-2}+f_{0}b_{0}v_{t-1}\right]\left[f_{0}\left(a_{0}-1\right)Y_{t-3} +f_{0}b_{0}v_{t-3}\right]+o\left(1\right)\]
\[=\frac{1}{a_{0}T}\sum_{t=3}^{T}\left[f_{0}\left(a_{0}-1\right)a_{0}Y_{t-3}+f \left(a_{0}-1\right)v_{t-2}+f_{0}b_{0}v_{t-1}\right]\] \[\qquad\qquad\qquad\qquad\qquad\qquad\times\left[f_{0}\left(a_{0} -1\right)Y_{t-3}+f_{0}b_{0}v_{t-3}\right]+o\left(1\right)\] \[=\frac{f_{0}^{2}\left(a_{0}-1\right)^{2}}{T}\sum_{t=3}^{T}Y_{t-3 }^{2}+\frac{f_{0}^{2}\left(a_{0}-1\right)b_{0}}{T}\sum_{t=3}^{T}Y_{t-3}v_{t-3} +o\left(1\right)\] \[=\frac{f_{0}^{2}\left(a_{0}-1\right)^{2}}{T}\sum_{t=3}^{T}Y_{t-3 }^{2}+\frac{f_{0}^{2}b_{0}^{2}\left(a_{0}-1\right)}{T}\sum_{t=3}^{T}v_{t-3}^{ 2}+o\left(1\right)\] \[\longrightarrow\frac{f_{0}^{2}b_{0}^{2}\left(1-a_{0}\right)}{1+a_ {0}}+f_{0}^{2}b_{0}^{2}\left(a_{0}-1\right)=\Phi_{3}\left(\vartheta_{0}\right).\]
As the Gaussian AR process \(Y_{t},t\geq 1\) has an exponentially decreasing correlation function, the convergences (13), (14) follow by standard arguments. For the higher moments see Rosenthal-type inequalities [7].
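As a quick numerical illustration of Lemma 1, the following sketch simulates the system for one illustrative (assumed) set of parameter values and compares the three statistics with their limits.

```python
# A simulation sketch of Lemma 1 for one illustrative set of parameter values:
# the statistics S_{1,T}, S_{2,T}, S_{3,T} are close to Phi_1, Phi_2, Phi_3.
import numpy as np

rng = np.random.default_rng(0)
a, b, f, sigma, T = 0.5, 1.0, 1.0, 0.7, 200_000

Y = np.zeros(T + 1)
X = np.zeros(T + 1)
for t in range(1, T + 1):
    X[t] = f * Y[t - 1] + sigma * rng.standard_normal()
    Y[t] = a * Y[t - 1] + b * rng.standard_normal()

dX = np.diff(X)                      # X_t - X_{t-1}, t = 1..T
S1 = np.mean(dX**2)
S2 = np.mean(dX[1:] * dX[:-1])
S3 = np.mean(dX[2:] * dX[:-2])

Phi1 = 2 * f**2 * b**2 / (1 + a) + 2 * sigma**2
Phi2 = f**2 * b**2 * (a - 1) / (1 + a) - sigma**2
Phi3 = f**2 * b**2 * a * (a - 1) / (1 + a)
print(S1, Phi1)
print(S2, Phi2)
print(S3, Phi3)
```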
**Remark 1**.: From the proofs it follows that the estimates (13), (14) are valid uniformly on compacts \(\mathbb{K}\subset\Theta\) too, i.e.,
\[\sup_{\vartheta_{0}\in\mathbb{K}}\mathbf{E}_{\vartheta_{0}}\left|S _{1,T}\left(X^{T}\right)-\Phi_{1}\left(\vartheta_{0}\right)\right|^{2} \leq\frac{C}{T},\quad\sup_{\vartheta_{0}\in\mathbb{K}}\mathbf{E}_ {\vartheta_{0}}\left|S_{2,T}\left(X^{T}\right)-\Phi_{2}\left(\vartheta_{0} \right)\right|^{2}\leq\frac{C}{T} \tag{15}\] \[\sup_{\vartheta_{0}\in\mathbb{K}}\mathbf{E}_{\vartheta_{0}}\left|S _{3,T}\left(X^{T}\right)-\Phi_{3}\left(\vartheta_{0}\right)\right|^{2} \leq\frac{C}{T}. \tag{16}\]
**Remark 2**.: More detailed analysis allows to verify the asymptotic normality
\[\sqrt{T}\left(S_{1,T}\left(X^{T}\right)-\Phi_{1}\left(\vartheta_ {0}\right)\right) \Longrightarrow\mathcal{N}\left(0,D_{1}\left(\vartheta_{0}\right)^{ 2}\right),\] \[\sqrt{T}\left(S_{2,T}\left(X^{T}\right)-\Phi_{2}\left(\vartheta_ {0}\right)\right) \Longrightarrow\mathcal{N}\left(0,D_{2}\left(\vartheta_{0}\right)^{ 2}\right),\] \[\sqrt{T}\left(S_{3,T}\left(X^{T}\right)-\Phi_{3}\left(\vartheta_ {0}\right)\right) \Longrightarrow\mathcal{N}\left(0,D_{3}\left(\vartheta_{0}\right)^{ 2}\right)\]
but we do not prove these convergences because we need these MMEs just for construction of One-step MLE-processes and the estimates (15),(16) are sufficient for these problems.
All parameters of the model (23)-(24) can be estimated with the help of the introduced statistics \(S_{1,T}\left(X^{T}\right),S_{2,T}\left(X^{T}\right),S_{3,T}\left(X^{T}\right)\). Below the method of moments estimators (MME) of the parameters \(f,a,b,\sigma^{2}\) are proposed and their asymptotic behavior is described.
### Estimation of the parameter \(f\).
Suppose that the parameters \(a,b,\sigma^{2}\) are known and we have to estimate \(\vartheta=f\in\left(\alpha_{f},\beta_{f}\right)\), \(\alpha_{f}>0\). Then the MME can be defined as follows
\[f_{T}^{*}=\alpha_{f}1\hskip-2.845276pt{\rm I}_{\left\{\mathbb{B}_{1,T}\right\}} +\bar{f}_{T}1\hskip-2.845276pt{\rm I}_{\left\{\mathbb{B}_{2,T}\right\}}+\beta _{f}1\hskip-2.845276pt{\rm I}_{\left\{\mathbb{B}_{3,T}\right\}}. \tag{17}\]
Here
\[\bar{f}_{T} =\left(\frac{\left(S_{1,T}\left(X^{T}\right)-2\sigma^{2}\right) \left(1+a\right)}{2b^{2}}\right)^{1/2},\] \[\mathbb{B}_{1,T} =\left\{\mbox{The event}:\ \ S_{1,T}\left(X^{T}\right)\leq\frac{2 \alpha_{f}^{2}b^{2}}{1+a}+2\sigma^{2}\right\},\] \[\mathbb{B}_{2,T} =\left\{\mbox{The event}:\ \ \frac{2\alpha_{f}^{2}b^{2}}{1+a}+2 \sigma^{2}<S_{1,T}\left(X^{T}\right)\leq\frac{2\beta_{f}^{2}b^{2}}{1+a}+2 \sigma^{2}\right\},\] \[\mathbb{B}_{3,T} =\left\{\mbox{The event}:\ \ S_{1,T}\left(X^{T}\right)\geq\frac{2 \beta_{f}^{2}b^{2}}{1+a}+2\sigma^{2}\right\}.\]
Therefore \(f_{T}^{*}\in\left[\alpha_{f},\beta_{f}\right]\). As
\[S_{1,T}\left(X^{T}\right)\longrightarrow\Phi_{1}\left(\vartheta_{0}\right)= \frac{2f_{0}^{2}b^{2}}{1+a}+2\sigma^{2}\]
the probabilities
\[{\bf P}_{f_{0}}\left(\mathbb{B}_{1,T}\right)\longrightarrow 0,\qquad{\bf P}_{f_{0 }}\left(\mathbb{B}_{2,T}\right)\longrightarrow 1,\quad{\bf P}_{f_{0}}\left( \mathbb{B}_{3,T}\right)\longrightarrow 0,\]
and below we omit the representations like (17) for the other MMEs.
It is easy to see that by Lemma 1 and Remark 1 the MME \(f_{T}^{*}\) is consistent, i.e., \(f_{T}^{*}\to f_{0}\).
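A minimal sketch of the truncated estimator (17) is given below; the simulation parameters and the bounds \(\alpha_f,\beta_f\) are illustrative assumptions.

```python
# A sketch of the truncated MME (17) of f with a, b, sigma^2 known; the
# parameter values and the bounds alpha_f, beta_f below are illustrative.
import numpy as np

def mme_f(X, a, b, sigma2, alpha_f, beta_f):
    dX = np.diff(X)
    S1 = np.mean(dX**2)
    inner = (S1 - 2 * sigma2) * (1 + a) / (2 * b**2)
    if inner <= alpha_f**2:      # event B_{1,T}: clip to the lower bound
        return alpha_f
    if inner >= beta_f**2:       # event B_{3,T}: clip to the upper bound
        return beta_f
    return np.sqrt(inner)        # event B_{2,T}: the plain moment estimator

rng = np.random.default_rng(1)
a, b, f0, sigma, T = 0.5, 1.0, 1.2, 0.7, 100_000
Y = np.zeros(T + 1)
X = np.zeros(T + 1)
for t in range(1, T + 1):
    X[t] = f0 * Y[t - 1] + sigma * rng.standard_normal()
    Y[t] = a * Y[t - 1] + b * rng.standard_normal()
print(mme_f(X, a, b, sigma**2, alpha_f=0.1, beta_f=5.0))   # close to f0 = 1.2
```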
Let us verify the upper bound
\[\sup_{f_{0}\in\mathbb{K}}{\bf E}_{f_{0}}\left|f_{T}^{*}-f_{0}\right|^{2}\leq \frac{C}{T}.\]
Put \(\eta_{T}=\sqrt{T}\left(S_{1,T}\left(X^{T}\right)-\Phi_{1}\left(\vartheta_{0} \right)\right)\). Note that thanks to the definition (17) of \(f_{T}^{*}\) it is sufficient to study the statistic \(S_{1,T}\left(X_{T}\right)\) on the set \(\mathbb{B}_{2}\) only and therefore we have the estimates
\[\frac{2\alpha_{f}^{2}b^{2}}{1+a}+2\sigma^{2}\leq S_{1,T}\left(X_{T}\right)\leq \frac{2\beta_{f}^{2}b^{2}}{1+a}+2\sigma^{2}. \tag{18}\]
Then
\[f_{T}^{*}=\sqrt{\frac{1+a}{2b^{2}}}\left[\Phi_{1}\left(\vartheta_{0}\right)-2 \sigma^{2}+T^{-1/2}\eta_{T}\right]^{1/2}\]
\[=\sqrt{\frac{1+a}{2b^{2}}}\left[\Phi_{1}\left(\vartheta_{0}\right)-2 \sigma^{2}\right]^{1/2}+\sqrt{\frac{1+a}{2b^{2}}}\left[\Phi_{1}\left(\vartheta_{0 }\right)-2\sigma^{2}+sT^{-1/2}\eta_{T}\right]^{-1/2}T^{-1/2}\eta_{T}\] \[=f_{0}+\sqrt{\frac{1+a}{2b^{2}}}\left[\Phi_{1}\left(\vartheta_{0} \right)-2\sigma^{2}+sT^{-1/2}\eta_{T}\right]^{-1/2}T^{-1/2}\eta_{T},\]
where \(s\in\left(0,1\right)\) and therefore
\[\left|f_{T}^{*}-f_{0}\right|\leq\frac{\left(1+a\right)}{2b^{2}\alpha_{f}}\;T^ {-1/2}\eta_{T}.\]
Here we used (18). Therefore, by Lemma 1
\[\sup_{f_{0}\in\mathbb{K}}\mathbf{E}_{f_{0}}\left|f_{T}^{*}-f_{0}\right|^{2} \leq C\,T^{-1}\mathbf{E}_{f_{0}}\left|\eta_{T}\right|^{2}.\]
**Remark 3**.: Of course, the statistics \(S_{2,T}\left(X^{T}\right)\) and \(S_{3,T}\left(X^{T}\right)\) can also be used for the construction of the MME of \(f\). For example, we can solve the equation
\[S_{2,T}\left(X^{T}\right)=\frac{f^{2}b^{2}\left(a-1\right)}{1+a}-\sigma^{2}\]
with respect to \(f\) and to put
\[f_{T}^{**}=\left(\frac{\left(S_{2,T}\left(X^{T}\right)+\sigma^{2}\right)\left( 1+a\right)}{b^{2}\left(a-1\right)}\right)^{1/2}.\]
Note that the statistic \(S_{2,T}\left(X^{T}\right)\) takes negative values, and the expression under the square root is positive with probability tending to \(1\). This MME has asymptotic properties similar to those of \(f_{T}^{*}\).
### Estimation of the parameter \(b\).
The estimation of \(b\) is almost the same as the estimation of \(f\) because these parameters enter \(\Phi_{1}\left(\vartheta\right)\), \(\Phi_{2}\left(\vartheta\right)\) and \(\Phi_{3}\left(\vartheta\right)\) only through the product \(bf\). Therefore the MME of \(b\) has the properties
\[b_{T}^{*}=\left(\frac{\left(S_{1,T}\left(X^{T}\right)-2\sigma^{2}\right)\left( 1+a\right)}{2f^{2}}\right)^{1/2}\longrightarrow b_{0},\qquad\sup_{b_{0}\in \mathbb{K}}\mathbf{E}_{b_{0}}\left|b_{T}^{*}-b_{0}\right|^{2}\leq\frac{C}{T} \tag{19}\]
_etc._
### Estimation of the parameter \(a\).
The solution of the equation
\[S_{1,T}\left(X^{T}\right)=\frac{2f^{2}b^{2}}{1+a}+2\sigma^{2}\]
leads to the MME
\[a_{T}^{*}=\frac{2f^{2}b^{2}}{S_{1,T}\left(X^{T}\right)-2\sigma^{2}}-1.\]
By Lemma 1 we have
\[a_{T}^{*}\longrightarrow a_{0},\qquad\sup_{a_{0}\in\mathbb{K}} \mathbf{E}_{a_{0}}\left|a_{T}^{*}-a_{0}\right|^{2}\leq\frac{C}{T}.\]
### Estimation of the parameter \(\sigma^{2}\).
The MME
\[\sigma_{T}^{2*}=\frac{1}{2}S_{1,T}\left(X^{T}\right)-\frac{f^{2}b^{2}}{1+a}\]
is consistent and
\[\sup_{\sigma_{0}^{2}\in\mathbb{K}}\mathbf{E}_{\sigma_{0}^{2}} \left|\sigma_{T}^{2*}-\sigma_{0}^{2}\right|^{2}\leq\frac{C}{T}.\]
### Estimation of the parameter \(\vartheta=(a,f)\).
The MME \(\vartheta_{T}^{*}=(a_{T}^{*},f_{T}^{*})\) is the solution of the system of equations
\[S_{1,T}\left(X^{T}\right)=\Phi_{1}\left(\vartheta_{T}^{*}\right), \qquad\quad S_{2,T}\left(X^{T}\right)=\Phi_{2}\left(\vartheta_{T}^{*}\right)\]
and has the following form
\[a_{T}^{*}=\frac{S_{1,T}\left(X^{T}\right)+2S_{2,T}\left(X^{T}\right)}{S_{1,T}\left(X^{T}\right)-2\sigma^{2}}, \tag{20}\] \[f_{T}^{*}=\left(\frac{\left(S_{1,T}\left(X^{T}\right)-2\sigma^{2}\right)\left(1+a_{T}^{*}\right)}{2b^{2}}\right)^{1/2}. \tag{21}\]
With the help of Lemma 1 it can be shown that
\[(a_{T}^{*},f_{T}^{*})\longrightarrow(a_{0},f_{0})\]
and
\[\mathbf{E}_{\vartheta_{0}}\left\|\vartheta_{T}^{*}-\vartheta_{0} \right\|^{2}\leq\frac{C}{T}. \tag{22}\]
### Estimation of the parameter \(\vartheta=\left(a,f,\sigma^{2}\right)\).
In this case we have three equations
\[S_{1,T}\left(X^{T}\right) =\frac{2f^{2}b^{2}}{1+a}+2\sigma^{2},\quad S_{2,T}\left(X^{T}\right) =\frac{f^{2}b^{2}\left(a-1\right)}{1+a}-\sigma^{2},\] \[S_{3,T}\left(X^{T}\right) =\frac{f^{2}b^{2}a\left(a-1\right)}{(1+a)}.\]
The MME \(\vartheta_{T}^{*}=(a_{T}^{*},f_{T}^{*},\sigma_{T}^{2*})\) is the following solution of this system:
\[a_{T}^{*}=\frac{2S_{3,T}\left(X^{T}\right)}{S_{1,T}\left(X^{T}\right)+2S_{2,T}\left(X^{T}\right)}+1,\] \[f_{T}^{*}=\left(\frac{S_{3,T}\left(X^{T}\right)\left(1+a_{T}^{*}\right)}{b^{2}a_{T}^{*}\left(a_{T}^{*}-1\right)}\right)^{1/2},\] \[\sigma_{T}^{2*}=\frac{1}{2}S_{1,T}\left(X^{T}\right)-\frac{\left(f_{T}^{*}\right)^{2}b^{2}}{1+a_{T}^{*}}.\]
Once more we have the consistency of \(\vartheta_{T}^{*}\) and the bound like (22).
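A short simulation sketch of these estimators (with \(b\) known, illustrative assumed parameter values, and \(f\) recovered as the positive square root consistent with \(\Phi_{3}\)) is the following:

```python
# A sketch of the MME of (a, f, sigma^2) with b known; parameter values are
# illustrative, and f is recovered as the positive square root.
import numpy as np

def mme_afs(X, b):
    dX = np.diff(X)
    S1 = np.mean(dX**2)
    S2 = np.mean(dX[1:] * dX[:-1])
    S3 = np.mean(dX[2:] * dX[:-2])
    a = 2 * S3 / (S1 + 2 * S2) + 1
    f = np.sqrt(S3 * (1 + a) / (b**2 * a * (a - 1)))
    sigma2 = S1 / 2 - f**2 * b**2 / (1 + a)
    return a, f, sigma2

rng = np.random.default_rng(2)
a0, b0, f0, sigma0, T = 0.6, 1.0, 1.5, 0.8, 200_000
Y = np.zeros(T + 1)
X = np.zeros(T + 1)
for t in range(1, T + 1):
    X[t] = f0 * Y[t - 1] + sigma0 * rng.standard_normal()
    Y[t] = a0 * Y[t - 1] + b0 * rng.standard_normal()
print(mme_afs(X, b0))   # close to (0.6, 1.5, 0.64)
```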
A similar result holds in the case of estimation of \(\vartheta=(a,b,\sigma^{2})\).
**Remark 4**.: Of course, it is possible to define and study the MMEs of the parameters \(\vartheta=(a,b)\), \(\vartheta=(f,\sigma^{2})\) and \(\vartheta=(b,\sigma^{2})\). The only forbidden couple of parameters is \(\vartheta=(f,b)\). In this case the consistent estimation of \(\vartheta\) is impossible. Indeed, the stationary AR process \(Y_{t},t=\ldots,-1,0,1,\ldots\) admits the representation
\[Y_{t}=b\sum_{k=0}^{\infty}a^{k}v_{t-k}\]
and therefore the observed process is
\[X_{t}=fb\sum_{k=0}^{\infty}a^{k}v_{t-k}+\sigma w_{t},\qquad t=\ldots,-1,0,1,\ldots.\]
Here we introduced the sequence \(v_{t},t=\ldots,-1,0,1,\ldots\) of i.i.d. standard Gaussian r.v.'s. The model depends on the product \(fb\) only, and therefore these parameters cannot be estimated separately.
## 3 Fisher informations
We have the same model of observations
\[X_{t}=fY_{t-1}+\sigma\,w_{t},\qquad X_{0},\qquad t\geq 1,\]
\[Y_{t}=aY_{t-1}+b\,v_{t},\qquad Y_{0},\qquad t\geq 1,\]
where \(w_{t},v_{t},t\geq 1\) are independent standard Gaussian r.v.'s and \(f,\sigma^{2},a,b\) are parameters of the model. As before, we suppose that some of these parameters are unknown and we have to estimate the unknown parameters \(\vartheta\in\Theta\) by the observations \(X^{T}=\left(X_{0},X_{1},\ldots,X_{T}\right)\).
The MMEs studied above are consistent, but not asymptotically efficient. That is why we propose below the construction of the One-step MLE-process, which allows us to solve two problems: first, we obtain asymptotically efficient estimators of these parameters, and second, we describe the approximation of the conditional expectation \(m\left(\vartheta,t\right),t\geq 1\).
The construction of One-step MLE-processes requires knowledge of the Fisher information. That is why we calculate below the Fisher informations related to the different parameters. To do this we first recall some known properties of \(\gamma\left(\vartheta,t\right)\) and describe the likelihood ratio function for this model.
Consider the model of partially observed time series
\[X_{t} =f\,Y_{t-1}+\sigma\,w_{t},\qquad X_{0},\qquad t=1,2,\ldots, \tag{23}\] \[Y_{t} =a\,Y_{t-1}+b\,v_{t},\qquad\quad Y_{0}, \tag{24}\]
where \(X^{T}=\left(X_{0},X_{1},\ldots,X_{T}\right)\) are the observations and the autoregressive (AR) process \(Y_{t},t\geq 0\) is hidden. Here \(w_{t},t\geq 1\) and \(v_{t},t\geq 1\) are independent standard Gaussian random variables, i.e., \(w_{t}\sim\mathcal{N}\left(0,1\right)\), \(v_{t}\sim\mathcal{N}\left(0,1\right)\). The initial values are \(X_{0}\sim\mathcal{N}\left(0,d_{x}^{2}\right)\) and \(Y_{0}\sim\mathcal{N}\left(0,d_{y}^{2}\right)\). The system is defined by the parameters \(a,b,f,\sigma^{2},d_{x}^{2},d_{y}^{2}\). We suppose that some of these parameters are unknown and have to be estimated from the observations \(X^{T}\).
For the moment it will be convenient to denote \(\vartheta=\left(a,b,f,\sigma^{2}\right)\).
Denote \(\mathfrak{F}_{t}^{X}\) the \(\sigma\)-algebra generated by the first \(t+1\) observations \(X_{0},X_{1},\ldots,X_{t}\). The conditional expectation \(m\left(\vartheta,t\right)=\mathbf{E}_{\vartheta}\left(Y_{t}|\mathfrak{F}_{t}^ {X}\right)\) according to the equations of Kalman filter (see, e.g., [11], [19]) satisfies the equation
\[m\left(\vartheta,t\right)=a\,m\left(\vartheta,t-1\right)+\frac{af\gamma\left( \vartheta,t-1\right)}{\sigma^{2}+f^{2}\gamma\left(\vartheta,t-1\right)}\left[ X_{t}-fm\left(\vartheta,t-1\right)\right],\quad t\geq 1. \tag{25}\]
The initial value is \(m\left(\vartheta,0\right)=\mathbf{E}_{\vartheta}\left(Y_{0}|X_{0}\right)\).
The mean square error \(\gamma\left(\vartheta,t\right)=\mathbf{E}_{\vartheta}\left(Y_{t}-m\left( \vartheta,t\right)\right)^{2}\) is described by the equation
\[\gamma\left(\vartheta,t\right)=a^{2}\gamma\left(\vartheta,t-1\right)+b^{2}- \frac{a^{2}f^{2}\gamma\left(\vartheta,t-1\right)^{2}}{\sigma^{2}+f^{2}\gamma \left(\vartheta,t-1\right)},\qquad\quad t\geq 1 \tag{26}\]
with the initial value \(\gamma\left(\vartheta,0\right)=\mathbf{E}_{\vartheta}\left(Y_{0}-m\left( \vartheta,0\right)\right)^{2}\).
If some of the mentioned parameters are unknown, then, of course, we cannot use (25)-(26) for the calculation of \(m\left(\vartheta,t\right),t\geq 1\).
We suppose that the observations \(X^{T}=\left(X_{0},X_{1},\ldots,X_{T}\right)\) are given and that some of the parameters are unknown, but their values always satisfy the condition \(\mathscr{A}_{0}\). Our goal is to propose an approximation of \(m\left(\vartheta,t\right),t\geq 1\) in such situations and to describe the error of approximation in the asymptotics of _large samples_, i.e., as \(T\rightarrow\infty\).
The consistent estimation of the parameters \(\left(d_{x}^{2},d_{y}^{2}\right)\) is impossible and we suppose that these parameters are known. If these values are unknown, then the system (25)-(26) will be solved with some wrong initial values, but due to robustness of the solutions the difference between solutions with true and wrong initial values in our problems is asymptotically negligible.
As before, the proposed program consists of several steps. First, on some learning interval \(\left[0,\tau_{T}\right]\) of negligible length \(\left(\tau_{T}/T\to 0\right)\), we construct a consistent preliminary estimator \(\vartheta_{\tau_{T}}^{\ast}\). Then this estimator is used to define the One-step MLE-process \(\vartheta_{T}^{\ast}=\left(\vartheta_{t,T}^{\star},t=\tau_{T}+1,\ldots,T\right)\), and finally the approximation \(m_{T}^{\star}=\left(m_{t,T}^{\star},t=\tau_{T}+1,\ldots,T\right)\) is obtained by substituting \(\vartheta_{T}^{\ast}\) into the equations (25)-(26). The last step is to evaluate the error \(m_{t,T}^{\star}-m\left(\vartheta,t\right)\).
Remark that the function \(\gamma\left(\vartheta,t\right)\) converges to the value
\[\gamma_{\ast}\left(\vartheta\right)=\frac{f^{2}b^{2}-\sigma^{2}\left(1-a^{2} \right)}{2f^{2}}+\frac{1}{2}\left[\left(\frac{\sigma^{2}\left(1-a^{2}\right)} {f^{2}}-b^{2}\right)^{2}+\frac{4b^{2}\sigma^{2}}{f^{2}}\right]^{1/2} \tag{27}\]
as \(t\rightarrow\infty\) (see Example 3 in section 14.4, [19]). The value \(\gamma_{\ast}\left(\vartheta\right)\) is obtained as a positive solution of the equation (26), where we put \(\gamma\left(\vartheta,t\right)=\gamma\left(\vartheta,t-1\right)=\gamma_{\ast }\left(\vartheta\right)\), which becomes
\[\gamma_{\ast}\left(\vartheta\right)^{2}+\left[\frac{\sigma^{2}\left(1-a^{2} \right)}{f^{2}}-b^{2}\right]\gamma_{\ast}\left(\vartheta\right)-\frac{b^{2} \sigma^{2}}{f^{2}}=0.\]
Below we study the asymptotic (\(T\rightarrow\infty\)) properties of estimators. That is why, to simplify the exposition, we suppose that the initial value is \(\gamma\left(\vartheta,0\right)=\gamma_{\ast}\left(\vartheta\right)\). Then for any \(t\geq 1\) we have \(\gamma\left(\vartheta,t\right)=\gamma_{\ast}\left(\vartheta\right)\). Of course, this is a condition on the correlation between \(X_{0}\) and \(Y_{0}\) and on the values \(d_{x}^{2},d_{y}^{2}\).
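Numerically, the closed form (27) can be checked against the fixed point of the recursion (26); a small sketch with illustrative (assumed) parameter values:

```python
# A numerical check (with illustrative parameter values) that gamma_* in (27)
# is the fixed point of the recursion (26).
import numpy as np

a, b, f, sigma2 = 0.7, 1.0, 1.2, 0.5

# closed-form stationary value (27)
c = sigma2 * (1 - a**2) / f**2
gamma_star = 0.5 * (b**2 - c) + 0.5 * np.sqrt((c - b**2)**2 + 4 * b**2 * sigma2 / f**2)

# iterate (26) from an arbitrary initial value
gamma = 10.0
for _ in range(200):
    gamma = a**2 * gamma + b**2 - a**2 * f**2 * gamma**2 / (sigma2 + f**2 * gamma)

print(gamma_star, gamma)   # the two values coincide
```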
Therefore the equation (25) is replaced by the equation
\[m_{t}\left(\vartheta\right)=a\,m_{t-1}\left(\vartheta\right)+\frac{af\gamma_{ \ast}\left(\vartheta\right)}{\sigma^{2}+f^{2}\gamma_{\ast}\left(\vartheta \right)}\left[X_{t}-fm_{t-1}\left(\vartheta\right)\right],\quad t\geq 1 \tag{28}\]
with the corresponding initial value, provided \(\gamma\left(\vartheta,0\right)=\gamma_{\ast}\left(\vartheta\right)\). Recall that equation (25) is stable w.r.t. the initial value, i.e., for a wrong initial condition the difference \(m\left(\vartheta,t\right)-m_{t}\left(\vartheta\right)\to 0\).
Note as well that if we denote \(\vartheta_{0}\) the true value, then
\[\zeta_{t}\left(\vartheta_{0}\right)=\frac{X_{t}-f_{0}m_{t-1}\left(\vartheta_ {0}\right)}{\sqrt{\sigma_{0}^{2}+f_{0}^{2}\gamma_{\ast}\left(\vartheta_{0} \right)}},\qquad t\geq 1\]
are i.i.d. standard Gaussian random variables (see Theorem 13.5 in [19]). This means that the equation of observations (23) can be written as follows
\[X_{t}=f_{0}m_{t-1}\left(\vartheta_{0}\right)+\sqrt{\sigma_{0}^{2}+f_{0}^{2} \gamma_{*}\left(\vartheta_{0}\right)}\,\zeta_{t}\left(\vartheta_{0}\right), \qquad t\geq 1.\]
Using this representation we can also rewrite the equation (28) as follows:
\[m_{t}\left(\vartheta\right)=a\,m_{t-1}\left(\vartheta\right)+ \frac{af\gamma_{*}\left(\vartheta\right)}{\sigma^{2}+f^{2}\gamma_{*}\left( \vartheta\right)}\left[f_{0}m_{t-1}\left(\vartheta_{0}\right)-fm_{t-1}\left( \vartheta\right)\right]\] \[\qquad+\frac{af\gamma_{*}\left(\vartheta\right)\sqrt{\sigma_{0}^ {2}+f_{0}^{2}\gamma_{*}\left(\vartheta_{0}\right)}}{\sigma^{2}+f^{2}\gamma_{* }\left(\vartheta\right)}\zeta_{t}\left(\vartheta_{0}\right),\quad t\geq 1. \tag{29}\]
The likelihood function is
\[L\left(\vartheta,X^{T}\right) =\left(2\pi\left(\sigma^{2}+f^{2}\gamma_{*}\left(\vartheta\right) \right)\right)^{-T/2}\exp\left(-\frac{1}{2}\sum_{t=1}^{T}\frac{\left(X_{t}-fm _{t-1}\left(\vartheta\right)\right)^{2}}{\sigma^{2}+f^{2}\gamma_{*}\left( \vartheta\right)}\right)\] \[=\left(\frac{1}{2\pi P\left(\vartheta\right)}\right)^{T/2}\exp \left(-\frac{1}{2}\sum_{t=1}^{T}\frac{\left(X_{t}-fm_{t-1}\left(\vartheta \right)\right)^{2}}{P\left(\vartheta\right)}\right),\qquad\vartheta\in\Theta. \tag{30}\]
Here
\[P\left(\vartheta\right)=\sigma^{2}+f^{2}\gamma_{*}\left(\vartheta\right)\]
and \(\Theta\) is an open, bounded, convex set of the possible values of the parameter \(\vartheta\).
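A short sketch of evaluating the log of the likelihood (30) for a given \(\vartheta\), with \(m_{t}\left(\vartheta\right)\) computed from the stationary recursion (28) (the initial value \(m_{0}\) is an illustrative assumption):

```python
# A sketch of the log of the likelihood (30); m_t(theta) follows the
# stationary recursion (28) and the initial value m0 is an assumption.
import numpy as np

def log_likelihood(theta, X, m0=0.0):
    a, b, f, sigma2 = theta
    c = sigma2 * (1 - a**2) / f**2
    gamma_star = 0.5 * (b**2 - c) + 0.5 * np.sqrt((c - b**2)**2 + 4 * b**2 * sigma2 / f**2)
    P = sigma2 + f**2 * gamma_star          # P(theta)
    K = a * f * gamma_star / P              # gain in (28)
    ll, m = 0.0, m0
    for t in range(1, len(X)):
        innov = X[t] - f * m                # X_t - f m_{t-1}(theta)
        ll += -0.5 * np.log(2 * np.pi * P) - innov**2 / (2 * P)
        m = a * m + K * innov               # recursion (28)
    return ll
```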
### Unknown parameter \(b\)
We start with the one-dimensional case, say, \(\vartheta=b\in\Theta=\left(\alpha_{b},\beta_{b}\right),\alpha_{b}>0\). As above we suppose that \(f\neq 0,a^{2}\in\left[0,1\right)\). Therefore the system is
\[X_{t} =f\,Y_{t-1}+\sigma\,w_{t},\qquad X_{0},\qquad t=1,2,\ldots,\] \[Y_{t} =a\,Y_{t-1}+\vartheta\,v_{t},\qquad\,Y_{0},\]
Introduce the notation:
\[\Gamma\left(\vartheta\right)=f^{2}\gamma_{*}\left(\vartheta\right),\qquad P\left(\vartheta\right)=\sigma^{2}+\Gamma\left(\vartheta\right),\qquad A \left(\vartheta\right)=\frac{a\sigma^{2}}{P\left(\vartheta\right)},\] \[B\left(\vartheta,\vartheta_{0}\right)=\frac{a\Gamma\left( \vartheta\right)\sqrt{P\left(\vartheta_{0}\right)}}{P\left(\vartheta\right)},\qquad\dot{B}_{b}\left(\vartheta_{0},\vartheta_{0}\right)=\left.\frac{\partial B \left(\vartheta,\vartheta_{0}\right)}{\partial\vartheta}\right|_{\vartheta= \vartheta_{0}}=\frac{a\sigma^{2}\dot{\Gamma}\left(\vartheta_{0}\right)}{P \left(\vartheta_{0}\right)^{3/2}}.\]
Note that \(\inf_{\vartheta\in\Theta}\dot{\Gamma}\left(\vartheta\right)>0\) (see (44) below). Another estimate which we will use is
\[\sup_{\vartheta\in\Theta}\left|A\left(\vartheta\right)\right|<1. \tag{31}\]
Let us show the well-known fact that it is always fulfilled when \(a^{2}\in[0,1)\). We have
\[P\left(\vartheta\right)=\frac{f^{2}\vartheta^{2}+\sigma^{2}+\sigma^{2}a^{2}}{2}+ \frac{1}{2}\left[\left(\sigma^{2}\left(1-a^{2}\right)-f^{2}\vartheta^{2} \right)^{2}+4f^{2}\vartheta^{2}\sigma^{2}\right]^{1/2}\]
and
\[\left|A\left(\vartheta\right)\right| =2\left|a\right|\sigma^{2}\left(f^{2}\vartheta^{2}+\sigma^{2}+ \sigma^{2}a^{2}+\left[\left(\sigma^{2}\left(1-a^{2}\right)-f^{2}\vartheta^{2} \right)^{2}+4f^{2}\vartheta^{2}\sigma^{2}\right]^{1/2}\right)^{-1}\] \[<\frac{2\left|a\right|\sigma^{2}}{\sigma^{2}+\sigma^{2}a^{2}}= \frac{2\left|a\right|}{1+a^{2}}<1.\]
**Proposition 1**.: _The Fisher information is_
\[\mathrm{I}_{b}\left(\vartheta_{0}\right)=\frac{\dot{P}_{b}\left(\vartheta_{0 }\right)^{2}\left[P\left(\vartheta_{0}\right)^{2}+a^{2}\sigma^{4}\right]}{2P \left(\vartheta_{0}\right)^{2}\left[P\left(\vartheta_{0}\right)^{2}-a^{2} \sigma^{4}\right]}. \tag{32}\]
Proof.: We multiply the equation (29) by \(f\), denote \(M_{t}\left(\vartheta\right)=fm_{t}\left(\vartheta\right)\), and rewrite it as follows
\[M_{t}\left(\vartheta\right) =\frac{a\sigma^{2}}{P\left(\vartheta\right)}\,M_{t-1}\left( \vartheta\right)+\frac{a\Gamma\left(\vartheta\right)}{P\left(\vartheta\right) }M_{t-1}\left(\vartheta_{0}\right)+\frac{a\Gamma\left(\vartheta\right)\sqrt{P \left(\vartheta_{0}\right)}}{P\left(\vartheta\right)}\,\zeta_{t}\left( \vartheta_{0}\right)\] \[=A\left(\vartheta\right)M_{t-1}\left(\vartheta\right)+\left[a-A \left(\vartheta\right)\right]M_{t-1}\left(\vartheta_{0}\right)+B\left( \vartheta,\vartheta_{0}\right)\,\,\zeta_{t}\left(\vartheta_{0}\right),\qquad t \geq 1.\]
The Fisher score is (see (30))
\[\frac{\partial\ln L\left(\vartheta,X^{T}\right)}{\partial\vartheta }=-\frac{\partial}{\partial\vartheta}\sum_{t=1}^{T}\left[\frac{\left(X_{t}-M_ {t-1}\left(\vartheta\right)\right)^{2}}{2P\left(\vartheta\right)}+\frac{1}{2 }\ln P\left(\vartheta\right)\right]\] \[=\sum_{t=1}^{T}\left[\frac{\left[X_{t}-M_{t-1}\left(\vartheta \right)\right]}{P\left(\vartheta\right)}\dot{M}_{t-1}\left(\vartheta\right)+ \frac{\left[X_{t}-M_{t-1}\left(\vartheta\right)\right]^{2}\,\,\dot{P}\left( \vartheta\right)}{2P\left(\vartheta\right)}-\frac{\dot{P}\left(\vartheta \right)}{2P\left(\vartheta\right)}\right]\] \[=\sum_{t=1}^{T}\left[\frac{\left[X_{t}-M_{t-1}\left(\vartheta \right)\right]}{\sqrt{P\left(\vartheta\right)}}\frac{\dot{M}_{t-1}\left( \vartheta\right)}{\sqrt{P\left(\vartheta\right)}}+\frac{\left[X_{t}-M_{t-1} \left(\vartheta\right)\right]^{2}}{P\left(\vartheta\right)}\frac{\dot{P} \left(\vartheta\right)}{2P\left(\vartheta\right)}-\frac{\dot{P}\left(\vartheta \right)}{2P\left(\vartheta\right)}\right]\] \[=\frac{1}{\sqrt{P\left(\vartheta\right)}}\sum_{t=1}^{T}\left[ \zeta_{t}\left(\vartheta\right)\dot{M}_{t-1}\left(\vartheta\right)+\left[ \zeta_{t}\left(\vartheta\right)^{2}-1\right]\frac{\dot{P}\left(\vartheta \right)}{2\sqrt{P\left(\vartheta\right)}}\right].\]
Recall that \(\dot{M}_{t-1}\left(\vartheta\right)=\partial M_{t-1}\left(\vartheta\right)/ \partial\vartheta,\dot{P}\left(\vartheta\right)=\partial P\left(\vartheta \right)/\partial\vartheta\) and
\[\zeta_{t}\left(\vartheta_{0}\right)=\frac{X_{t}-M_{t-1}\left(\vartheta_{0} \right)}{\sqrt{\sigma^{2}+\Gamma\left(\vartheta_{0}\right)}}=\frac{X_{t}-M_{t -1}\left(\vartheta_{0}\right)}{\sqrt{P\left(\vartheta_{0}\right)}},\qquad t \geq 1\]
are independent standard Gaussian random variables ( \(\zeta_{t}\left(\vartheta_{0}\right)\sim\mathcal{N}\left(0,1\right)\)).
The equation for derivative \(\dot{M}_{t}\left(\vartheta\right),t\geq 1\) is
\[\dot{M}_{t}\left(\vartheta\right)=A\left(\vartheta\right)\dot{M}_{t-1}\left( \vartheta\right)+\dot{A}\left(\vartheta\right)\left[M_{t-1}\left(\vartheta \right)-M_{t-1}\left(\vartheta_{0}\right)\right]+\dot{B}_{b}\left(\vartheta, \vartheta_{0}\right)\zeta_{t}\left(\vartheta_{0}\right)\]
with the initial value \(\dot{M}_{0}\left(\vartheta\right)\). If \(\vartheta=\vartheta_{0}\), then
\[\dot{M}_{t}\left(\vartheta_{0}\right)=A\left(\vartheta_{0}\right)\dot{M}_{t-1} \left(\vartheta_{0}\right)+\dot{B}_{b}\left(\vartheta_{0},\vartheta_{0}\right) \;\zeta_{t}\left(\vartheta_{0}\right),\quad\dot{M}_{0}\left(\vartheta_{0} \right),\qquad t\geq 1.\]
The stationary version of the process \(\dot{M}_{t}\left(\vartheta_{0}\right),t\geq 1\) can be written as a weighted sum of i.i.d. variables
\[\dot{M}_{t}\left(\vartheta_{0}\right) =\dot{B}_{b}\left(\vartheta_{0},\vartheta_{0}\right)\sum_{k=0}^{ \infty}A\left(\vartheta_{0}\right)^{k}\zeta_{t-k}\left(\vartheta_{0}\right),\] \[\dot{M}_{0}\left(\vartheta_{0}\right) =\dot{B}_{b}\left(\vartheta_{0},\vartheta_{0}\right)\sum_{k=0}^{ \infty}A\left(\vartheta_{0}\right)^{k}\zeta_{-k}\left(\vartheta_{0}\right),\]
where we introduced i.i.d. r.v.'s \(\zeta_{k}\left(\vartheta_{0}\right)\sim\mathcal{N}\left(0,1\right),k=0,-1,-2,\ldots\). The real process \(\dot{M}\left(\vartheta_{0},t\right),0\leq t\leq T\) has a similar representation with a finite sum, but since we are interested in the asymptotic (\(T\rightarrow\infty\)) properties of the estimators we write this infinite sum immediately; the difference between these two representations, for these processes and for several other similar processes below, is asymptotically negligible. We have
\[\mathbf{E}_{\vartheta_{0}}\dot{M}_{t}\left(\vartheta_{0}\right)^{2}=\dot{B}_{ b}\left(\vartheta_{0},\vartheta_{0}\right)^{2}\sum_{k=0}^{\infty}A\left( \vartheta_{0}\right)^{2k}=\frac{\dot{B}_{b}\left(\vartheta_{0},\vartheta_{0} \right)^{2}}{1-A\left(\vartheta_{0}\right)^{2}}=\frac{a^{2}\sigma^{4}\dot{P} \left(\vartheta_{0}\right)^{2}}{P\left(\vartheta_{0}\right)^{3}\left(1-A \left(\vartheta_{0}\right)^{2}\right)}.\]
For the second moment of the score-function we have the following expression
\[\mathbf{E}_{\vartheta_{0}}\left[\frac{\partial\ln L\left(\vartheta,X^{T} \right)}{\partial\vartheta}\right]_{\vartheta=\vartheta_{0}}^{2}\] \[=\frac{1}{P\left(\vartheta_{0}\right)}\mathbf{E}_{\vartheta_{0}} \left(\sum_{t=1}^{T}\left[\zeta_{t}\left(\vartheta_{0}\right)\dot{M}_{t-1} \left(\vartheta_{0}\right)+\frac{1}{2}\left[\zeta_{t}\left(\vartheta_{0} \right)^{2}-1\right]\dot{P}\left(\vartheta_{0}\right)P\left(\vartheta_{0} \right)^{-1/2}\right]\right)^{2}\] \[=\frac{1}{P\left(\vartheta_{0}\right)}\mathbf{E}_{\vartheta_{0}} \sum_{t=1}^{T}\left(\zeta_{t}\left(\vartheta_{0}\right)\dot{M}_{t-1}\left( \vartheta_{0}\right)+\frac{1}{2}\left[\zeta_{t}\left(\vartheta_{0}\right)^{2 }-1\right]\dot{P}\left(\vartheta_{0}\right)P\left(\vartheta_{0}\right)^{-1/2} \right)^{2}\]
because (below \(t>s\) and for simplicity we omit \(\frac{1}{2}P\left(\vartheta_{0}\right)^{-1/2}\))
\[\mathbf{E}_{\vartheta_{0}}\left(\zeta_{t}\left(\vartheta_{0} \right)\dot{M}_{t-1}\left(\vartheta_{0}\right)+\left[\zeta_{t}\left(\vartheta_ {0}\right)^{2}-1\right]\dot{P}\left(\vartheta_{0}\right)\right)\] \[\times\left(\zeta_{s}\left(\vartheta_{0}\right)\dot{M}_{s-1} \left(\vartheta_{0}\right)+\left[\zeta_{s}\left(\vartheta_{0}\right)^{2}-1 \right]\dot{P}\left(\vartheta_{0}\right)\right)\] \[=\mathbf{E}_{\vartheta_{0}}\zeta_{t}\left(\vartheta_{0}\right) \dot{M}_{t-1}\left(\vartheta_{0}\right)\zeta_{s-1}\left(\vartheta_{0}\right) \dot{M}_{s-1}\left(\vartheta_{0}\right)\]
\[+\dot{P}\left(\vartheta_{0}\right)\mathbf{E}_{\vartheta_{0}}\zeta_{t} \left(\vartheta_{0}\right)\dot{M}_{t-1}\left(\vartheta_{0}\right)\left[\zeta_{s} \left(\vartheta_{0}\right)^{2}-1\right]\] \[+\dot{P}\left(\vartheta_{0}\right)\mathbf{E}_{\vartheta_{0}} \zeta_{s}\left(\vartheta_{0}\right)\dot{M}_{s-1}\left(\vartheta_{0}\right) \left[\zeta_{t}\left(\vartheta_{0}\right)^{2}-1\right]\] \[+\dot{P}\left(\vartheta_{0}\right)^{2}\mathbf{E}_{\vartheta_{0}} \left[\zeta_{t}\left(\vartheta_{0}\right)^{2}-1\right]\left[\zeta_{s}\left( \vartheta_{0}\right)^{2}-1\right]=0.\]
Here we used the equalities like
\[\mathbf{E}_{\vartheta_{0}}\left[\zeta_{t}\left(\vartheta_{0} \right)\dot{M}_{t-1}\left(\vartheta_{0}\right)\zeta_{s}\left(\vartheta_{0} \right)\dot{M}_{s-1}\left(\vartheta_{0}\right)\right]\] \[=\mathbf{E}_{\vartheta_{0}}\left[\dot{M}_{t-1}\left(\vartheta_{0} \right)\zeta_{s}\left(\vartheta_{0}\right)\dot{M}_{s-1}\left(\vartheta_{0} \right)\mathbf{E}_{\vartheta_{0}}\left(\zeta_{t}\left(\vartheta_{0}\right) \left|\mathfrak{F}_{t-1}^{X}\right)\right]=0.\]
Further
\[\mathbf{E}_{\vartheta_{0}}\left(\zeta_{t}\left(\vartheta_{0} \right)\dot{M}_{t-1}\left(\vartheta_{0}\right)+\left[\zeta_{t}\left(\vartheta_ {0}\right)^{2}-1\right]\frac{\dot{P}\left(\vartheta_{0}\right)}{2\sqrt{P\left( \vartheta_{0}\right)}}\right)^{2}\] \[=\mathbf{E}_{\vartheta_{0}}\zeta_{t}\left(\vartheta_{0}\right)^{ 2}\dot{M}_{t-1}\left(\vartheta_{0}\right)^{2}+\frac{\dot{P}\left(\vartheta_{0 }\right)^{2}}{4P\left(\vartheta_{0}\right)}\mathbf{E}_{\vartheta_{0}}\left[ \zeta_{t}\left(\vartheta_{0}\right)^{2}-1\right]^{2}\] \[=\mathbf{E}_{\vartheta_{0}}\dot{M}_{t-1}\left(\vartheta_{0} \right)^{2}+\frac{\dot{P}\left(\vartheta_{0}\right)^{2}}{2P\left(\vartheta_{0 }\right)}\] \[=\frac{\dot{P}\left(\vartheta_{0}\right)^{2}}{2P\left(\vartheta_{ 0}\right)}\left[\frac{2a^{2}\sigma^{4}}{P\left(\vartheta_{0}\right)^{2}-a^{2} \sigma^{4}}+1\right].\]
Therefore, even if the processes are non-stationary at the beginning, the limit will be
\[\lim_{T\rightarrow\infty}\frac{1}{T}\mathbf{E}_{\vartheta_{0}}\left[\frac{ \partial\ln L\left(\vartheta,X^{T}\right)}{\partial\vartheta}\right]_{ \vartheta=\vartheta_{0}}^{2}=\mathrm{I}_{b}\left(\vartheta_{0}\right).\]
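As an illustration, the value of \(\mathrm{I}_{b}\left(\vartheta_{0}\right)\) in (32) can be evaluated numerically; in the sketch below \(\dot{P}_{b}\) is obtained by a finite difference of \(P\left(\vartheta\right)=\sigma^{2}+\Gamma\left(\vartheta\right)\), and the parameter values are illustrative assumptions.

```python
# A numerical evaluation of I_b(theta_0) in (32); dP/db is approximated by a
# central finite difference and the parameter values are illustrative.
import numpy as np

def P_of_b(b, a, f, sigma2):
    c = sigma2 * (1 - a**2) / f**2
    gamma_star = 0.5 * (b**2 - c) + 0.5 * np.sqrt((c - b**2)**2 + 4 * b**2 * sigma2 / f**2)
    return sigma2 + f**2 * gamma_star

a, f, sigma2, b0 = 0.7, 1.0, 0.5, 1.0
h = 1e-6
P0 = P_of_b(b0, a, f, sigma2)
dP_db = (P_of_b(b0 + h, a, f, sigma2) - P_of_b(b0 - h, a, f, sigma2)) / (2 * h)

I_b = dP_db**2 * (P0**2 + a**2 * sigma2**2) / (2 * P0**2 * (P0**2 - a**2 * sigma2**2))
print(I_b)
```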
**Remark 5**.: If the unknown parameter is \(\vartheta=f\) and all other parameters are known, then the filtering equations are almost the same
\[M_{t}\left(\vartheta\right) =a\,M_{t-1}\left(\vartheta\right)+\frac{a\Gamma\left(\vartheta \right)}{\sigma^{2}+\Gamma\left(\vartheta\right)}\left[X_{t}-M_{t-1}\left( \vartheta\right)\right],\quad M_{t}\left(\vartheta\right)=\vartheta m_{t} \left(\vartheta\right),\quad t\geq 1,\] \[\Gamma\left(\vartheta\right) =\frac{1}{2}\left[\vartheta^{2}b^{2}-\sigma^{2}\left(1-a^{2} \right)\right]+\frac{1}{2}\left[\left(\sigma^{2}\left(1-a^{2}\right)- \vartheta^{2}b^{2}\right)^{2}+4\vartheta^{2}b^{2}\sigma^{2}\right]^{1/2}.\]
It is easy to see that the score-function has the same form, but of course the differentiation is now w.r.t. \(\vartheta=f\)
\[\frac{\partial L(\vartheta,X^{T})}{\partial\vartheta}=\frac{1}{\sqrt{P\left( \vartheta\right)}}\sum_{t=1}^{T}\left[\zeta_{t}\left(\vartheta\right)\dot{M}_ {t-1}\left(\vartheta\right)+\left[\zeta_{t}\left(\vartheta\right)^{2}-1 \right]\frac{\dot{P}_{f}\left(\vartheta\right)}{2\sqrt{P\left(\vartheta\right) }}\right].\]
Here \(P\left(\vartheta\right)=\sigma^{2}+\Gamma\left(\vartheta\right)\).
To calculate the Fisher information we have to use the following equation for the derivative \(\dot{M}_{t}\left(\vartheta\right)\)
\[\dot{M}_{t}\left(\vartheta_{0}\right)=A\left(\vartheta_{0}\right)\dot{M}_{t-1 }\left(\vartheta_{0}\right)+\dot{B}_{f}\left(\vartheta_{0},\vartheta_{0} \right)\zeta_{t}\left(\vartheta_{0}\right),\qquad t\geq 1\]
with the function
\[A\left(\vartheta\right)=\frac{a\sigma^{2}}{P\left(\vartheta\right)},\qquad B _{f}\left(\vartheta,\vartheta_{0}\right)=\frac{a\Gamma\left(\vartheta\right) \sqrt{P\left(\vartheta_{0}\right)}}{P\left(\vartheta\right)}.\]
This gives us the Fisher information
\[\mathrm{I}_{f}\left(\vartheta_{0}\right)=\frac{\dot{P}_{f}\left(\vartheta_{0} \right)^{2}\left[P\left(\vartheta_{0}\right)^{2}+a^{2}\sigma^{4}\right]}{2P \left(\vartheta_{0}\right)^{2}\left[P\left(\vartheta_{0}\right)^{2}-a^{2} \sigma^{4}\right]}. \tag{33}\]
### Unknown parameter \(a\)
Suppose that the values of \(f>0,\sigma^{2}>0,b>0\) are known and the parameter \(\vartheta=a\in\Theta=\left(\alpha_{a},\beta_{a}\right),-1<\alpha_{a}<\beta_{a }<1\) is unknown. Therefore the partially observed system is
\[X_{t} =f\,Y_{t-1}+\sigma\,w_{t},\qquad X_{0},\qquad t=1,2,\ldots,\] \[Y_{t} =\vartheta\,Y_{t-1}+b\,v_{t},\qquad\;\;Y_{0},\]
and the Kalman filter for \(M_{t}\left(\vartheta\right)=\vartheta m_{t}\left(\vartheta\right)\) is given by the relations
\[M_{t}\left(\vartheta\right) =\left[\vartheta-\frac{\vartheta\Gamma\left(\vartheta\right)}{ \sigma^{2}+\Gamma\left(\vartheta\right)}\right]\,M_{t-1}\left(\vartheta\right) +\frac{\vartheta\Gamma\left(\vartheta\right)}{\sigma^{2}+\Gamma\left( \vartheta\right)}X_{t},\] \[=A\left(\vartheta\right)\,M_{t-1}\left(\vartheta\right)+\frac{ \vartheta\Gamma\left(\vartheta\right)}{P\left(\vartheta\right)}M_{t-1}\left( \vartheta_{0}\right)+B\left(\vartheta,\vartheta_{0}\right)\zeta_{t}\left( \vartheta_{0}\right),\quad t\geq 1,\] \[\Gamma\left(\vartheta\right) =\frac{1}{2}\left[f^{2}b^{2}-\sigma^{2}\left(1-\vartheta^{2} \right)\right]+\frac{1}{2}\left[\left(\sigma^{2}\left(1-\vartheta^{2}\right)- f^{2}b^{2}\right)^{2}+4b^{2}\sigma^{2}\right]^{1/2}.\]
Here
\[A\left(\vartheta\right)=\frac{\vartheta\sigma^{2}}{P\left(\vartheta\right)}, \qquad B\left(\vartheta,\vartheta_{0}\right)=\frac{\vartheta\Gamma\left( \vartheta\right)\sqrt{P\left(\vartheta_{0}\right)}}{P\left(\vartheta\right)},\]
\[\zeta_{t}\left(\vartheta_{0}\right)=\frac{X_{t}-M_{t-1}\left(\vartheta_{0}\right)}{ \sqrt{\sigma^{2}+\Gamma\left(\vartheta_{0}\right)}},\qquad t\geq 1\]
are independent standard Gaussian random variables.
We present the corresponding score-function and Fisher information without detailed proofs.
We have
\[\frac{\partial\ln L\left(\vartheta,X^{T}\right)}{\partial\vartheta}=\frac{1}{ \sqrt{P\left(\vartheta\right)}}\sum_{t=1}^{T}\left[\zeta_{t}\left(\vartheta \right)\dot{M}_{t-1}\left(\vartheta\right)+\left[\zeta_{t}\left(\vartheta \right)^{2}-1\right]\frac{\dot{P}\left(\vartheta\right)}{2\sqrt{P\left( \vartheta\right)}}\right].\]
The equation for \(\dot{M}_{t}\left(\vartheta_{0}\right)=f\partial m_{t}\left(\vartheta_{0} \right)/\partial a\) is
\[\dot{M}_{t}\left(\vartheta_{0}\right)=A\left(\vartheta_{0}\right)\dot{M}_{t-1 }\left(\vartheta_{0}\right)+M_{t-1}\left(\vartheta_{0}\right)+\dot{B}_{a} \left(\vartheta_{0},\vartheta_{0}\right)\zeta_{t}\left(\vartheta_{0}\right), \qquad t\geq 1,\]
where
\[\dot{B}_{a}\left(\vartheta_{0},\vartheta_{0}\right)=\frac{\Gamma\left( \vartheta_{0}\right)P\left(\vartheta_{0}\right)+\vartheta_{0}\sigma^{2}\dot{P }\left(\vartheta_{0}\right)}{P\left(\vartheta_{0}\right)^{3/2}}.\]
Therefore, using stationarity of all processes we write
\[\mathbf{E}_{\vartheta_{0}}\dot{M}_{t}\left(\vartheta_{0}\right)^ {2} =A\left(\vartheta_{0}\right)^{2}\mathbf{E}_{\vartheta_{0}}\dot{M}_ {t-1}\left(\vartheta_{0}\right)^{2}+\mathbf{E}_{\vartheta_{0}}M_{t-1}\left( \vartheta_{0}\right)^{2}+\dot{B}_{a}\left(\vartheta_{0},\vartheta_{0}\right)^ {2}\] \[\qquad+2A\left(\vartheta_{0}\right)\mathbf{E}_{\vartheta_{0}}\dot {M}_{t-1}\left(\vartheta_{0}\right)M_{t-1}\left(\vartheta_{0}\right)\] \[=\left(1-A\left(\vartheta_{0}\right)^{2}\right)^{-1}\left[ \mathbf{E}_{\vartheta_{0}}M_{t-1}\left(\vartheta_{0}\right)^{2}+\dot{B}_{a} \left(\vartheta_{0},\vartheta_{0}\right)^{2}\right.\] \[\qquad\left.+2A\left(\vartheta_{0}\right)\mathbf{E}_{\vartheta_{ 0}}\dot{M}_{t-1}\left(\vartheta_{0}\right)M_{t-1}\left(\vartheta_{0}\right)\right]\]
The equations for \(M_{t}\left(\vartheta_{0}\right)\) and \(\dot{M}_{t}\left(\vartheta_{0}\right)\) allow us to calculate the following moments
\[\mathbf{E}_{\vartheta_{0}}M_{t-1}\left(\vartheta_{0}\right)^{2} =\frac{\vartheta_{0}^{2}\Gamma\left(\vartheta_{0}\right)^{2}}{P \left(\vartheta_{0}\right)^{2}\left(1-\vartheta_{0}^{2}\right)},\] \[\mathbf{E}_{\vartheta_{0}}\dot{M}_{t-1}\left(\vartheta_{0}\right) M_{t-1}\left(\vartheta_{0}\right) =\frac{\vartheta_{0}\Gamma\left(\vartheta_{0}\right)^{2}}{P \left(\vartheta_{0}\right)^{2}\left(1-\vartheta_{0}A\left(\vartheta_{0} \right)\right)}\left[\frac{\vartheta_{0}}{\left(1-A\left(\vartheta_{0}\right) ^{2}\right)}+P\left(\vartheta_{0}\right)\right.\] \[\qquad\left.+\frac{\vartheta_{0}\sigma^{2}\dot{P}\left(\vartheta_ {0}\right)}{\Gamma\left(\vartheta_{0}\right)}\right].\]
Hence
\[\mathbf{E}_{\vartheta_{0}}\dot{M}_{t}\left(\vartheta_{0}\right)^{2}=\left(1-A \left(\vartheta_{0}\right)^{2}\right)^{-1}\left[\frac{\vartheta_{0}^{2}\Gamma \left(\vartheta_{0}\right)^{2}}{P\left(\vartheta_{0}\right)^{2}\left(1- \vartheta_{0}^{2}\right)}+\frac{\left[\Gamma\left(\vartheta_{0}\right)P\left( \vartheta_{0}\right)+\sigma^{2}\dot{P}\left(\vartheta_{0}\right)\right]^{2}}{P \left(\vartheta_{0}\right)^{3}}\right.\]
\[+\frac{2A\left(\vartheta_{0}\right)\vartheta_{0}\Gamma\left(\vartheta_{0} \right)^{2}}{P\left(\vartheta_{0}\right)^{2}\left(1-\vartheta_{0}A\left( \vartheta_{0}\right)\right)}\left(\frac{\vartheta_{0}}{\left(1-A\left(\vartheta _{0}\right)^{2}\right)}+P\left(\vartheta_{0}\right)+\frac{\vartheta_{0} \sigma^{2}\dot{P}\left(\vartheta_{0}\right)}{\Gamma\left(\vartheta_{0}\right) }\right)\] \[\equiv Q\left(\vartheta_{0}\right).\]
Recall that the Fisher information is
\[\mathrm{I}_{a}\left(\vartheta_{0}\right) =\frac{1}{T}\mathbf{E}_{\vartheta_{0}}\left(\left.\frac{\partial \ln L\left(\vartheta,X^{T}\right)}{\partial\vartheta}\right|_{\vartheta= \vartheta_{0}}\right)^{2}\] \[=\frac{1}{TP\left(\vartheta\right)}\sum_{t=1}^{T}\mathbf{E}_{ \vartheta_{0}}\left[\zeta_{t}\left(\vartheta\right)\dot{M}_{t-1}\left( \vartheta\right)+\left[\zeta_{t}\left(\vartheta\right)^{2}-1\right]\frac{ \dot{P}\left(\vartheta\right)}{2\sqrt{P\left(\vartheta\right)}}\right]^{2}\] \[=\frac{1}{P\left(\vartheta\right)}\left[\mathbf{E}_{\vartheta_{0 }}\dot{M}_{t-1}\left(\vartheta_{0}\right)^{2}+\frac{\dot{P}\left(\vartheta_{0 }\right)^{2}}{2P\left(\vartheta_{0}\right)}\right]\] \[=\frac{2Q\left(\vartheta_{0}\right)+\dot{P}\left(\vartheta_{0} \right)^{2}}{2P\left(\vartheta_{0}\right)}. \tag{34}\]
**Remark 6**.: Similar calculations allow us to write the score-function and the Fisher information in the case \(\vartheta=\sigma^{2}\), but we do not present them here.
### Unknown parameter \(\vartheta=\left(f,a\right)\)
Consider the system
\[X_{t} =\theta_{1}\,Y_{t-1}+\sigma\,w_{t},\qquad X_{0},\qquad t=1,2,\dots,\] \[Y_{t} =\theta_{2}\,Y_{t-1}+b\,v_{t},\qquad\;\;Y_{0}.\]
Here the unknown parameter is \(\vartheta=\left(\theta_{1},\theta_{2}\right)\). Let us denote \(M\left(\vartheta,t\right)=\theta_{1}m_{t}\left(\vartheta\right)\), \(\Gamma\left(\vartheta\right)=\theta_{1}^{2}\gamma_{*}\left(\vartheta\right)\). We write the Kalman filter as follows
\[M\left(\vartheta,t\right) =\frac{\theta_{2}\sigma^{2}}{P\left(\vartheta\right)}M\left( \vartheta,t-1\right)+\frac{\theta_{2}\Gamma\left(\vartheta\right)}{P\left( \vartheta\right)}X_{t}\] \[\Gamma\left(\vartheta\right) =\frac{1}{2}\left[\theta_{1}^{2}b^{2}-\sigma^{2}\left(1-\theta_{ 2}^{2}\right)\right]+\frac{1}{2}\left[\left(\sigma^{2}\left(1-\theta_{2}^{2} \right)-\theta_{1}^{2}b^{2}\right)^{2}+4b^{2}\sigma^{2}\right]^{1/2},\]
where \(P\left(\vartheta\right)=\sigma^{2}+\Gamma\left(\vartheta\right),\)
\[\mathrm{A}\left(\vartheta\right)=\frac{\theta_{2}\sigma^{2}}{P\left(\vartheta \right)},\qquad E\left(\vartheta\right)=\frac{\theta_{2}\Gamma\left(\vartheta \right)}{P\left(\vartheta\right)},\qquad\mathrm{B}\left(\vartheta,\vartheta_{0 }\right)=\frac{\theta_{2}\Gamma\left(\vartheta\right)\sqrt{P\left(\vartheta_{0 }\right)}}{P\left(\vartheta\right)}. \tag{35}\]
The equations for derivatives \(\dot{M}_{f}\left(\vartheta_{0},t\right),\dot{M}_{a}\left(\vartheta_{0},t\right)\) are
\[\dot{M}_{f}\left(\vartheta_{0},t\right)=A\left(\vartheta_{0}\right)\dot{M}_{f} \left(\vartheta_{0},t-1\right)+\dot{B}_{f}\left(\vartheta_{0},\vartheta_{0} \right)\zeta_{t}\left(\vartheta_{0}\right),\]
\[\dot{M}_{a}\left(\vartheta_{0},t\right)=A\left(\vartheta_{0}\right)\dot{M}_{a} \left(\vartheta_{0},t-1\right)+M\left(\vartheta_{0},t-1\right)+\dot{B}_{a} \left(\vartheta_{0},\vartheta_{0}\right)\zeta_{t}\left(\vartheta_{0}\right),\]
where recall that
\[M\left(\vartheta_{0},t-1\right) =\theta_{0,2}M\left(\vartheta_{0},t-2\right)+\mathrm{B}\left( \vartheta_{0},\vartheta_{0}\right)\zeta_{t-1}\left(\vartheta_{0}\right),\] \[\dot{B}_{f}\left(\vartheta_{0},\vartheta_{0}\right) =\frac{\theta_{0,2}\sigma^{2}\dot{P}_{f}\left(\vartheta_{0}\right) }{P\left(\vartheta_{0}\right)^{3/2}},\qquad\dot{B}_{a}\left(\vartheta_{0}, \vartheta_{0}\right)=\frac{\Gamma\left(\vartheta_{0}\right)P\left(\vartheta_ {0}\right)+\theta_{0,2}\dot{P}_{a}\left(\vartheta_{0}\right)}{P\left( \vartheta_{0}\right)^{3/2}}.\]
Here we used the relation \(A\left(\vartheta\right)+\mathrm{E}\left(\vartheta\right)=\theta_{2}\). The vector of the score function is
\[\frac{\partial L(\vartheta,X^{T})}{\partial\theta_{1}}\bigg{|}_{ \vartheta=\vartheta_{0}} =\frac{1}{\sqrt{P\left(\vartheta_{0}\right)}}\sum_{t=1}^{T}\left[ \zeta_{t}\left(\vartheta_{0}\right)\dot{M}_{f}\left(\vartheta_{0},t-1\right) +\left[\zeta_{t}\left(\vartheta_{0}\right)^{2}-1\right]\frac{\dot{P}_{f}\left( \vartheta_{0}\right)}{2P\left(\vartheta_{0}\right)}\right],\] \[\frac{\partial L(\vartheta,X^{T})}{\partial\theta_{2}}\bigg{|}_{ \vartheta=\vartheta_{0}} =\frac{1}{\sqrt{P\left(\vartheta_{0}\right)}}\sum_{t=1}^{T}\left[ \zeta_{t}\left(\vartheta_{0}\right)\dot{M}_{a}\left(\vartheta_{0},t-1\right) +\left[\zeta_{t}\left(\vartheta_{0}\right)^{2}-1\right]\frac{\dot{P}_{a}\left( \vartheta_{0}\right)}{2P\left(\vartheta_{0}\right)}\right].\]
Using the stationarity of the underlying processes we write
\[\mathbf{E}_{\vartheta_{0}}\left(\left.\frac{\partial L(\vartheta, X^{T})}{\partial\theta_{1}}\right|_{\vartheta=\vartheta_{0}}\frac{\partial L( \vartheta,X^{T})}{\partial\theta_{2}}\right|_{\vartheta=\vartheta_{0}}\right)\] \[=\frac{T}{P\left(\vartheta_{0}\right)}\left[\mathbf{E}_{\vartheta_ {0}}\dot{M}_{f}\left(\vartheta_{0},t-1\right)\dot{M}_{a}\left(\vartheta_{0},t -1\right)+\frac{\dot{\mathrm{P}}_{f}\left(\vartheta_{0}\right)\dot{P}_{a} \left(\vartheta_{0}\right)}{2P\left(\vartheta_{0}\right)^{2}}\right].\]
Further
\[\mathbf{E}_{\vartheta_{0}}\left[\dot{M}_{a}\left(\vartheta_{0},t -1\right)\dot{M}_{f}\left(\vartheta_{0},t-1\right)\right]=A\left(\vartheta_{0} \right)^{2}\mathbf{E}_{\vartheta_{0}}\left[\dot{M}_{a}\left(\vartheta_{0},t-2 \right)\dot{M}_{f}\left(\vartheta_{0},t-2\right)\right]\] \[\qquad\qquad+A\left(\vartheta_{0}\right)\mathbf{E}_{\vartheta_{0} }\left[M\left(\vartheta_{0},t-2\right)\dot{M}_{f}\left(\vartheta_{0},t-2 \right)\right]+\dot{\mathrm{B}}_{a}\left(\vartheta_{0},\vartheta_{0}\right) \dot{\mathrm{B}}_{f}\left(\vartheta_{0},\vartheta_{0}\right)\] \[=\frac{1}{\left(1-A\left(\vartheta_{0}\right)^{2}\right)}\left[ \frac{A\left(\vartheta_{0}\right)\dot{\mathrm{B}}_{f}\left(\vartheta_{0}, \vartheta_{0}\right)\mathrm{B}\left(\vartheta_{0},\vartheta_{0}\right)\mathrm{ B}\left(\vartheta_{0},\vartheta_{0}\right)}{\left(1-\theta_{0,2}\mathrm{A}\left( \vartheta_{0}\right)\right)}+\dot{\mathrm{B}}_{a}\left(\vartheta_{0},\vartheta_{0} \right)\dot{\mathrm{B}}_{f}\left(\vartheta_{0},\vartheta_{0}\right)\right]\] \[=\frac{A\left(\vartheta_{0}\right)\mathrm{B}\left(\vartheta_{0}, \vartheta_{0}\right)\dot{\mathrm{B}}_{f}\left(\vartheta_{0},\vartheta_{0} \right)+\left(1-\theta_{0,2}\mathrm{A}\left(\vartheta_{0}\right)\right)\dot{ \mathrm{B}}_{a}\left(\vartheta_{0},\vartheta_{0}\right)\dot{\mathrm{B}}_{f} \left(\vartheta_{0},\vartheta_{0}\right)}{\left(1-\theta_{0,2}\mathrm{A}\left( \vartheta_{0}\right)\right)\left(1-A\left(\vartheta_{0}\right)^{2}\right)}\equiv K \left(\vartheta_{0}\right)\]
because
\[\mathbf{E}_{\vartheta_{0}}\left[M\left(\vartheta_{0},t-2\right)\dot{M}_{f} \left(\vartheta_{0},t-2\right)\right]=\frac{\mathrm{B}\left(\vartheta_{0}, \vartheta_{0}\right)\dot{\mathrm{B}}_{f}\left(\vartheta_{0},\vartheta_{0}\right) }{\left(1-\theta_{0,2}\mathrm{A}\left(\vartheta_{0}\right)\right)}.\]
Hence
\[I_{12}\left(\vartheta_{0}\right)=\frac{2P\left(\vartheta_{0}\right)^{2}K\left( \vartheta_{0}\right)+\dot{P}_{a}\left(\vartheta_{0}\right)\dot{P}_{f}\left( \vartheta_{0}\right)}{2P\left(\vartheta_{0}\right)^{3}}. \tag{36}\]
The Fisher information matrix is
\[\mathbf{I}\left(\vartheta_{0}\right)=\begin{pmatrix}I_{11}\left(\vartheta_{0}\right)&I_{12}\left(\vartheta_{0}\right)\\ I_{12}\left(\vartheta_{0}\right)&I_{22}\left(\vartheta_{0}\right)\end{pmatrix},\]
where the values \(I_{11}\left(\vartheta_{0}\right)\) and \(\mathrm{I}_{22}\left(\vartheta_{0}\right)\) are given in (33) and (34) respectively.
## 4 One-step MLE-process
We consider the construction of the One-step MLE-process in the case of the unknown parameter \(\vartheta=b\in\Theta=\left(\alpha_{b},\beta_{b}\right)\), \(\alpha_{b}>0\). Let us fix a learning interval of observations \(X^{\tau_{T}}=\left(X_{0},X_{1},\ldots,X_{\tau_{T}}\right)\), where \(\tau_{T}=\left[T^{\delta}\right],\delta\in\left(\frac{1}{2},1\right)\) and \(\left[A\right]\) here is the integer part of \(A\). As a preliminary estimator we take the MME \(\vartheta_{\tau_{T}}^{*}=b_{\tau_{T}}^{*}\) defined in (19)
\[\vartheta_{\tau_{T}}^{*}=2^{-1/2}f^{-1}\left(\left[S_{1,\tau_{T}}\left(X^{\tau _{T}}\right)-2\sigma^{2}\right]\left(1+a\right)\right)^{1/2}.\]
The One-step MLE-process is
\[\vartheta_{t,T}^{\star}=\vartheta_{\tau_{T}}^{*}+\frac{1}{\mathrm{ I}_{b}(\vartheta_{\tau_{T}}^{*})\left(t-\tau_{T}\right)}\sum_{s=\tau_{T}+1}^{t} \left[\frac{\left[X_{s}-fm_{s-1}(\vartheta_{\tau_{T}}^{*})\right]}{P(\vartheta _{\tau_{T}}^{*})}f\dot{m}_{s-1}(\vartheta_{\tau_{T}}^{*})\right.\] \[\left.+\left(\left[X_{s}-fm_{s-1}(\vartheta_{\tau_{T}}^{*}) \right]^{2}-P(\vartheta_{\tau_{T}}^{*})\right)\frac{\dot{P}(\vartheta_{\tau_{ T}}^{*})}{2P(\vartheta_{\tau_{T}}^{*})^{2}}\right],\qquad t\in\left[\tau_{T}+2,T \right]. \tag{37}\]
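For illustration, a minimal Python sketch of the correction (37) for a scalar parameter is given below. The function name and its arguments are ours; the filter \(m_{s}(\vartheta)\) and its derivative \(\dot{m}_{s}(\vartheta)\) are assumed to be supplied by the user (for the unknown parameter \(b\) they are written out explicitly in (50)-(51) below).

```python
import numpy as np

def one_step_mle_process(X, tau, theta_pre, f, m_path, mdot_path, P, Pdot, I_b):
    """One-step MLE-process (37) for a scalar parameter (sketch).

    X         -- observations X_0,...,X_T
    tau       -- length of the learning interval tau_T
    theta_pre -- preliminary estimator built from X_0,...,X_tau
    m_path, mdot_path -- callables returning arrays m_s(theta), mdot_s(theta), s = 0..T
    P, Pdot, I_b      -- callables for P(theta), dP/dtheta and the Fisher information
    Returns theta_star_{t,T} for t = tau+1,...,T (the paper takes t >= tau+2)."""
    T = len(X) - 1
    m = m_path(theta_pre, X)
    mdot = mdot_path(theta_pre, X)
    Pv, Pd, Ib = P(theta_pre), Pdot(theta_pre), I_b(theta_pre)
    s = np.arange(tau + 1, T + 1)
    resid = X[s] - f * m[s - 1]
    terms = resid / Pv * f * mdot[s - 1] + (resid ** 2 - Pv) * Pd / (2 * Pv ** 2)
    return theta_pre + np.cumsum(terms) / (Ib * (s - tau))
```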
**Theorem 1**.: _If \(t=\left[vT\right],v\in\left(0,1\right]\) and \(T\rightarrow\infty\), then the following convergences_
\[\sqrt{t}\left(\vartheta_{t,T}^{\star}-\vartheta_{0}\right)\Longrightarrow \mathcal{N}\left(0,\mathrm{I}_{b}(\vartheta_{0})^{-1}\right),\qquad t\mathbf{ E}_{\vartheta_{0}}\left(\vartheta_{t,T}^{\star}-\vartheta_{0}\right)^{2} \longrightarrow\mathrm{I}_{b}(\vartheta_{0})^{-1}.\]
_hold uniformly on compacts \(\mathbb{K}\subset\Theta\)._
Proof.: To study this estimator we need bounds on the first and second derivatives of \(m_{t}\left(\vartheta\right)\), presented in the next lemma.
**Lemma 2**.: _For any \(p>1\) there exist constants \(C_{1}>0,C_{2}>0\), not depending on \(\vartheta_{0}\in\Theta\) and \(t\geq 1\), such that_
\[\sup_{\vartheta\in\Theta}\mathbf{E}_{\vartheta_{0}}\left|\dot{m}_{t}\left( \vartheta\right)\right|^{p}<C_{1},\qquad\qquad\sup_{\vartheta\in\Theta} \mathbf{E}_{\vartheta_{0}}\left|\ddot{m}_{t}\left(\vartheta\right)\right|^{p}< C_{2}. \tag{38}\]
Proof.: The equation for the second derivative \(\ddot{m}_{t}\left(\vartheta\right)\) is
\[\ddot{m}_{t}\left(\vartheta\right)=A\left(\vartheta\right)\ddot{ m}_{t-1}\left(\vartheta\right)+2\dot{A}\left(\vartheta\right)\dot{m}_{t-1} \left(\vartheta\right)+\ddot{A}\left(\vartheta\right)m_{t-1}\left(\vartheta\right)\] \[-\ddot{A}\left(\vartheta\right)m_{t-1}\left(\vartheta_{0}\right)+ \ddot{B}^{*}\left(\vartheta,\vartheta_{0}\right)\zeta_{t}\left(\vartheta_{0} \right).\]
Note that here \(B^{*}\left(\vartheta,\vartheta_{0}\right)=af\gamma_{*}\left(\vartheta\right) \sqrt{P\left(\vartheta_{0}\right)}P\left(\vartheta\right)^{-1}\).
Recall that the stationary versions of the random functions \(m\left(\vartheta_{0},t-1\right),\)
\(m\left(\vartheta,t-1\right)\), \(\dot{m}\left(\vartheta,t-1\right)\) and \(\ddot{m}\left(\vartheta,t-1\right)\) are
\[m_{t-1}\left(\vartheta_{0}\right)=B^{*}\left(\vartheta_{0},\vartheta_{0}\right) \sum_{k=0}^{\infty}a^{k}\zeta_{t-k-1},\]
\[m_{t-1}\left(\vartheta\right)=A\left(\vartheta\right)m_{t-2}\left( \vartheta\right)+\left[a-A\left(\vartheta\right)\right]B^{\ast}\left(\vartheta_{ 0},\vartheta_{0}\right)\sum_{k=0}^{\infty}a^{k}\zeta_{t-k-2}+B^{\ast}\left( \vartheta,\vartheta_{0}\right)\zeta_{t-k-1}\] \[\qquad\qquad=\left[a-A\left(\vartheta\right)\right]B^{\ast}\left( \vartheta_{0},\vartheta_{0}\right)\sum_{j=0}^{\infty}A\left(\vartheta\right)^ {j}\sum_{k=0}^{\infty}a^{k}\zeta_{t-j-k-2}+B^{\ast}\left(\vartheta,\vartheta_{ 0}\right)\sum_{j=0}^{\infty}A\left(\vartheta\right)^{j}\zeta_{t-j-1},\] \[\dot{m}_{t-1}\left(\vartheta\right)=B^{\ast}\left(\vartheta_{0}, \vartheta_{0}\right)\hat{A}\left(\vartheta\right)\,A\left(\vartheta\right)^{ -1}\sum_{j=0}^{\infty}\left[aj-A\left(\vartheta\right)\left(j+1\right)\right] A\left(\vartheta\right)^{j}\sum_{k=0}^{\infty}a^{k}\zeta_{t-j-k-2}\] \[\qquad\qquad\qquad+\sum_{j=0}^{\infty}\left[\dot{B}^{\ast}\left( \vartheta,\vartheta_{0}\right)+jB^{\ast}\left(\vartheta,\vartheta_{0}\right) \hat{A}\left(\vartheta\right)\,A\left(\vartheta\right)^{-1}\right]A\left( \vartheta\right)^{j}\zeta_{t-j-1},\] \[\ddot{m}_{t}\left(\vartheta\right)=\sum_{j=0}^{\infty}\left[F_{0 }\left(\vartheta,\vartheta_{0}\right)+jF_{1}\left(\vartheta,\vartheta_{0} \right)+j^{2}F_{2}\left(\vartheta,\vartheta_{0}\right)\right]A\left(\vartheta \right)^{j}\sum_{k=0}^{\infty}a^{k}\zeta_{t-j-k-2}\] \[\qquad\qquad\qquad+\sum_{j=0}^{\infty}\left[H_{0}\left(\vartheta, \vartheta_{0}\right)+jH_{1}\left(\vartheta,\vartheta_{0}\right)+j^{2}H_{2} \left(\vartheta,\vartheta_{0}\right)\right]A\left(\vartheta\right)^{j}\zeta_{t- j-1}.\]
The bounded functions \(F_{i}\left(\vartheta,\vartheta_{0}\right),H_{i}\left(\vartheta,\vartheta_{0}\right),i=0,1,2,\) can be easily calculated by formal differentiation of \(\dot{m}_{t-1}\left(\vartheta\right)\).
From these representations, (31), the condition \(a^{2}\in\left(0,1\right)\) and the boundedness of the functions \(F_{i}\left(\cdot,\cdot\right),H_{i}\left(\cdot,\cdot\right)\) it follows that the Gaussian processes \(\dot{m}_{t-1}\left(\vartheta\right)\) and \(\ddot{m}_{t-1}\left(\vartheta\right)\) have bounded variances and rapidly decreasing covariance functions.
For example,
\[\mathbf{E}_{\vartheta_{0}}\left(\sum_{j=0}^{\infty}j^{2}A\left( \vartheta\right)^{j}\sum_{k=0}^{\infty}a^{k}\zeta_{t-j-k-2}\right)^{2}\] \[\qquad\qquad=\sum_{j=0}^{\infty}j^{2}A\left(\vartheta\right)^{j} \sum_{l=0}^{\infty}l^{2}A\left(\vartheta\right)^{l}\sum_{k=0}^{\infty}\sum_{m= 0}^{\infty}a^{k}a^{m}\mathbf{E}_{\vartheta_{0}}\Big{(}\zeta_{t-j-k-2}\zeta_{t- m-l-2}\Big{)}\] \[\qquad\qquad=\sum_{j=0}^{\infty}j^{2}A\left(\vartheta\right)^{j} \sum_{l=0}^{\infty}l^{2}A\left(\vartheta\right)^{l}a^{j-l}\sum_{k\geq l-j\geq 0 }^{\infty}a^{2k}\] \[\qquad\qquad=\frac{1}{1-a^{2}}\sum_{j=0}^{\infty}j^{2}A\left( \vartheta\right)^{j}\sum_{l\geq j}^{\infty}l^{2}A\left(\vartheta\right)^{l}a^ {j-l}a^{2\left(l-j\right)}\] \[\qquad\qquad=\frac{1}{1-a^{2}}\sum_{j=0}^{\infty}j^{2}A\left( \vartheta\right)^{j}a^{-j}\sum_{l\geq j}^{\infty}l^{2}\left[aA\left(\vartheta \right)\right]^{l}.\]
Recall that
\[\sum_{l\geq j}^{\infty}l^{2}\left[aA\left(\vartheta\right)\right]^ {l} =\left[\ln aA\left(\vartheta\right)\right]^{-2}\left.\frac{\partial^{2}}{ \partial x^{2}}\sum_{l\geq j}^{\infty}\left.\left[aA\left(\vartheta\right) \right]^{xl}\right|_{x=1}\] \[=\left[\ln aA\left(\vartheta\right)\right]^{-2}\left.\frac{ \partial^{2}}{\partial x^{2}}\left.\left(\frac{\left[aA\left(\vartheta\right) \right]^{xj}}{1-\left[aA\left(\vartheta\right)\right]^{x}}\right)\right|_{x=1}.\]
Therefore the moments \(\mathbf{E}_{\vartheta_{0}}\dot{m}_{t}\left(\vartheta\right)^{2}\) and \(\mathbf{E}_{\vartheta_{0}}\ddot{m}_{t}\left(\vartheta\right)^{2}\) can be calculated exactly and the constants \(C_{1},C_{2}\) in (38) can be chosen not depending on \(\vartheta_{0}\in\Theta\) and \(t\geq 0\).
Consider the normalized difference
\[\sqrt{T}\left(\vartheta_{t,T}^{\star}-\vartheta_{0}\right)=\sqrt{T} \left(\vartheta_{\tau_{{}_{T}}}^{\ast}-\vartheta_{0}\right)\] \[\qquad+\frac{\sqrt{T}}{\mathrm{I}_{b}(\vartheta_{\tau_{{}_{T}}}^{ \ast})\left(t-\tau_{T}\right)}\sum_{s=\tau_{T}+1}^{t}\left[\frac{\left[X_{s}- fm_{s-1}(\vartheta_{\tau_{{}_{T}}}^{\ast})\right]}{P(\vartheta_{\tau_{{}_{T}}}^{ \ast})}f\dot{m}_{s-1}(\vartheta_{\tau_{{}_{T}}}^{\ast})\right.\] \[\qquad\left.+\left(\left[X_{s}-fm_{s-1}(\vartheta_{\tau_{{}_{T}}} ^{\ast})\right]^{2}-P(\vartheta_{\tau_{{}_{T}}}^{\ast})\right)\frac{\dot{P}( \vartheta_{\tau_{{}_{T}}}^{\ast})}{2P(\vartheta_{\tau_{{}_{T}}}^{\ast})^{2}} \right],\qquad t\in\left[\tau_{{}_{T}}+2,T\right].\]
We can write
\[X_{s}-fm_{s-1}(\vartheta_{\tau_{{}_{T}}}^{\ast})=X_{s}-fm_{s-1} (\vartheta_{0})+f\left[m_{s-1}(\vartheta_{0})-m_{s-1}(\vartheta_{\tau_{{}_{T} }}^{\ast})\right]\] \[\qquad\qquad=\sqrt{P\left(\vartheta_{0}\right)}\,\zeta_{s}\left( \vartheta_{0}\right)-(\vartheta_{\tau_{{}_{T}}}^{\ast}-\vartheta_{0})f\dot{m} _{s-1}\left(\vartheta_{0}\right)-\frac{f}{2}(\vartheta_{\tau_{{}_{T}}}^{\ast} -\vartheta_{0})^{2}\ddot{m}_{s-1}(\ddot{\vartheta}),\] \[P(\vartheta_{\tau_{{}_{T}}}^{\ast})=P(\vartheta_{0})+(\vartheta_ {\tau_{{}_{T}}}^{\ast}-\vartheta_{0})\dot{P}(\vartheta_{0})+\frac{1}{2}( \vartheta_{\tau_{{}_{T}}}^{\ast}-\vartheta_{0})^{2}\ddot{P}(\ddot{\vartheta}),\] \[\mathrm{I}_{b}(\vartheta_{\tau_{{}_{T}}}^{\ast})=\mathrm{I}_{b}( \vartheta_{0})+O\left(\vartheta_{\tau_{{}_{T}}}^{\ast}-\vartheta_{0}\right)\] \[\left[X_{s}-fm_{s-1}(\vartheta_{\tau_{{}_{T}}}^{\ast})\right]^{2 }-P(\vartheta_{\tau_{{}_{T}}}^{\ast})=P(\vartheta_{0})\left[\zeta_{s}\left( \vartheta_{0}\right)^{2}-1\right]-(\vartheta_{\tau_{{}_{T}}}^{\ast}-\vartheta_ {0})\dot{P}\left(\vartheta_{0}\right)\] \[\qquad\qquad\qquad-2(\vartheta_{\tau_{{}_{T}}}^{\ast}-\vartheta_ {0})\sqrt{P(\vartheta_{0})}\zeta_{s}\left(\vartheta_{0}\right)\dot{m}_{s-1} \left(\vartheta_{0}\right)+O\left((\vartheta_{\tau_{{}_{T}}}^{\ast}-\vartheta _{0})^{2}\right).\]
Substituting these relations into (37) yields
\[\vartheta_{t,T}^{\star} =\vartheta_{\tau_{{}_{T}}}^{\ast}\] \[+\sum_{s=\tau_{T}+1}^{t}\frac{\left[2\zeta_{s}(\vartheta_{0})f \dot{m}_{s-1}(\vartheta_{0})\sqrt{P(\vartheta_{0})}+[\zeta_{s}(\vartheta_{0})^{ 2}-1]\,\dot{P}(\vartheta_{0})\right]}{2\left(t-\tau_{{}_{T}}\right)\mathrm{I}_ {b}(\vartheta_{0})P(\vartheta_{0})}\left(1+O\left(\vartheta_{\tau_{{}_{T}}}^{ \ast}-\vartheta_{0}\right)\right)\]
\[-\frac{(\vartheta_{\tau_{T}}^{*}-\vartheta_{0})}{\left(t-\tau_{T} \right)\mathrm{I}_{b}(\vartheta_{0})}\sum_{s=\tau_{T}+1}^{t}\left(\frac{f^{2} \dot{m}_{s-1}\left(\vartheta_{0}\right)^{2}}{P(\vartheta_{0})}+\frac{\dot{P}( \vartheta_{0})^{2}}{2P(\vartheta_{0})^{2}}\right)+O\left((\vartheta_{\tau_{T }}^{*}-\vartheta_{0})^{2}\right).\]
Note that if \(t\to\infty\), then by the LLN and the CLT, uniformly on compacts \(\mathbb{K}\subset\Theta\), we have
\[R_{1,T} =\frac{1}{t}\sum_{s=\tau_{T}+1}^{t}\left(\frac{f^{2}\dot{m}_{s-1} \left(\vartheta_{0}\right)^{2}}{P(\vartheta_{0})}+\frac{\dot{P}(\vartheta_{0} )^{2}}{2P(\vartheta_{0})^{2}}\right)\longrightarrow\mathrm{I}_{b}(\vartheta_{ 0}),\] \[R_{2,T} =\frac{1}{\sqrt{t}}\sum_{s=\tau_{T}+1}^{t}\left(\mathrm{I}_{b}( \vartheta_{0})-\frac{f^{2}\dot{m}_{s-1}\left(\vartheta_{0}\right)^{2}}{P( \vartheta_{0})}-\frac{\dot{P}(\vartheta_{0})^{2}}{2P(\vartheta_{0})^{2}} \right)\Longrightarrow\mathcal{N}\left(0,\mathrm{D}(\vartheta_{0})^{2}\right),\] \[R_{3,T} =\frac{1}{t}\mathbf{E}_{\vartheta_{0}}\left[\sum_{s=\tau_{T}+1}^ {t}\left(\frac{f^{2}\dot{m}_{s-1}\left(\vartheta_{0}\right)^{2}}{P(\vartheta_ {0})}+\frac{\dot{P}(\vartheta_{0})^{2}}{2P(\vartheta_{0})^{2}}-\mathrm{I}_{b} (\vartheta_{0})\right)\right]^{2}\] \[=\frac{1}{tP(\vartheta_{0})^{2}}\mathbf{E}_{\vartheta_{0}}\left[ \sum_{s=\tau_{T}+1}^{t}\left(f^{2}\left[\dot{m}_{s-1}\left(\vartheta_{0} \right)^{2}-\mathbf{E}_{\vartheta_{0}}\dot{m}_{s-1}\left(\vartheta_{0},t \right)^{2}\right]\right)\right]^{2}\leq C,\]
where the constant \(C>0\) does not depend on \(t\) and \(\vartheta_{0}\in\mathbb{K}\). We also have the uniform convergence
\[\frac{1}{\sqrt{t}}\sum_{s=\tau_{T}+1}^{t}\frac{\left[2\zeta_{s}(\vartheta_{0} )f\dot{m}_{s-1}(\vartheta_{0})\sqrt{P(\vartheta_{0})}+\left[\zeta_{s}(\vartheta _{0})^{2}-1\right]\dot{P}(\vartheta_{0})\right]}{2\left(t-\tau_{T}\right) \mathrm{I}_{b}(\vartheta_{0})P(\vartheta_{0})}\Longrightarrow\mathcal{N}\left( 0,\mathrm{I}_{b}(\vartheta_{0})^{-1}\right).\]
The value \(\sup_{\vartheta_{0}\in\Theta}\mathrm{D}(\vartheta_{0})^{2}<\infty\) can be calculated too, but we do not need it. Therefore, if we put \(t=vT\), \(v\in(T^{\delta-1},1]\), then
\[\sqrt{t}\left(\vartheta_{t,T}^{*}-\vartheta_{0}\right)=\sqrt{t} \left(\vartheta_{\tau_{T}}^{*}-\vartheta_{0}\right)\] \[\quad+\sum_{s=\tau_{T}+1}^{t}\frac{\left[2\zeta_{s}(\vartheta_{0} )f\dot{m}_{s-1}(\vartheta_{0})\sqrt{P(\vartheta_{0})}+\left[\zeta_{s}(\vartheta _{0})^{2}-1\right]\dot{P}(\vartheta_{0})\right]}{2\left(\sqrt{t}-\tau_{T} \right)\mathrm{I}_{b}(\vartheta_{0})P(\vartheta_{0})}\left(1+O\left(\vartheta_ {\tau_{T}}^{*}-\vartheta_{0}\right)\right)\] \[\quad-\frac{\sqrt{t}(\vartheta_{\tau_{T}}^{*}-\vartheta_{0})}{ \left(t-\tau_{T}\right)\mathrm{I}_{b}(\vartheta_{0})}\sum_{s=\tau_{T}+1}^{t} \left(\frac{f^{2}\dot{m}_{s-1}\left(\vartheta_{0}\right)^{2}}{P(\vartheta_{0}) }+\frac{\dot{P}(\vartheta_{0})^{2}}{2P(\vartheta_{0})^{2}}\right)+O\left(\sqrt{ t}(\vartheta_{\tau_{T}}^{*}-\vartheta_{0})^{2}\right).\]
Recall that \(\delta\in\left(\frac{1}{2},1\right)\) and the MME \(\vartheta_{\tau_{T}}^{*}\) satisfies (19). Hence, for any fixed \(v\in(0,1]\)
\[\sqrt{t}\mathbf{E}_{\vartheta_{0}}\left|\vartheta_{\tau_{T}}^{*}-\vartheta_{0} \right|^{2}\leq\frac{C\sqrt{t}}{\tau_{T}^{2}}\leq\frac{C\sqrt{v}T^{1/2}}{T^{2 \delta}}\longrightarrow 0.\]
Further,
\[\frac{\sqrt{t}\left(\vartheta_{\tau_{T}}^{*}-\vartheta_{0}\right)}{ \mathrm{I}_{b}(\vartheta_{0})}\left[\mathrm{I}_{b}(\vartheta_{0})-\frac{1}{t} \sum_{s=\tau_{T}+1}^{t}\left(\frac{f^{2}\dot{m}_{s-1}\left(\vartheta_{0}\right) ^{2}}{P(\vartheta_{0})}+\frac{\dot{P}(\vartheta_{0})^{2}}{2P(\vartheta_{0})^{2 }}\right)\right]\] \[=\frac{\sqrt{t}\left(\vartheta_{\tau_{T}}^{*}-\vartheta_{0} \right)}{\mathrm{I}_{b}(\vartheta_{0})}\frac{1}{t}\sum_{s=\tau_{T}+1}^{t} \left[\frac{f^{2}\dot{B}^{*}\left(\vartheta_{0},\vartheta_{0}\right)^{2}}{ \left[1-A\left(\vartheta_{0}\right)^{2}\right]P(\vartheta_{0})}-\frac{f^{2} \dot{m}_{s-1}\left(\vartheta_{0}\right)^{2}}{P(\vartheta_{0})}\right]\] \[=\frac{\sqrt{t}\left(\vartheta_{\tau_{T}}^{*}-\vartheta_{0} \right)f^{2}}{\mathrm{I}_{b}(\vartheta_{0})P(\vartheta_{0})}\frac{1}{t}\sum_{ s=\tau_{T}+1}^{t}\left[\frac{\dot{B}^{*}\left(\vartheta_{0},\vartheta_{0} \right)^{2}}{\left[1-A\left(\vartheta_{0}\right)^{2}\right]}-\dot{m}_{s-1} \left(\vartheta_{0}\right)^{2}\right]\] \[\longrightarrow 0.\]
Therefore for any fixed \(v\in(0,1)\) and \(t=vT\) uniformly on \(\vartheta_{0}\in\mathbb{K}\)
\[\sqrt{t}\left(\vartheta_{t,T}^{*}-\vartheta_{0}\right)\Longrightarrow\mathcal{ N}\left(0,\mathrm{I}_{b}(\vartheta_{0})^{-1}\right),\]
and
\[t\mathbf{E}_{\vartheta_{0}}\left(\vartheta_{t,T}^{*}-\vartheta_{0}\right)^{2} \longrightarrow\mathrm{I}_{b}(\vartheta_{0})^{-1}.\]
**Remark 7**.: The One-step MLE-processes in the cases \(\vartheta=f\), \(\vartheta=a\), \(\vartheta=\sigma^{2}\) can be constructed along the same lines. We do not present the corresponding calculations because they would mainly repeat the proof given above.
If the unknown parameter is two-dimensional, say, \(\vartheta=\left(f,a\right)^{\top}=\left(\theta_{1},\theta_{2}\right)^{\top}\), then the two-dimensional One-step MLE-process \(\vartheta_{t,T}^{\star}=\left(\theta_{1,t,T}^{\star},\theta_{2,t,T}^{\star}\right)^{\top}\) can be constructed as follows. Introduce the MMEs \(\theta_{\tau_{T}}^{*}=\left(\theta_{1,\tau_{T}}^{*},\theta_{2,\tau_{T}}^{*}\right)^{\top}\), \(\theta_{1,\tau_{T}}^{*}=f_{\tau_{T}}^{*}\) and \(\theta_{2,\tau_{T}}^{*}=a_{\tau_{T}}^{*}\), where \(\tau_{T}=\left[T^{\delta}\right],\delta\in\left(\frac{1}{2},1\right)\) and
\[\theta_{1,\tau_{T}}^{*} =2^{-1/2}b^{-1}\left[S_{1,\tau_{T}}\left(X^{\tau_{T}}\right)(1+ \theta_{2,\tau_{T}}^{*})-2\sigma^{2}\right]^{1/2},\] \[\theta_{2,\tau_{T}}^{*} =\frac{S_{1,\tau_{T}}\left(X^{\tau_{T}}\right)+S_{2,\tau_{T}} \left(X^{\tau_{T}}\right)-\sigma^{2}}{S_{1,\tau_{T}}\left(X^{\tau_{T}}\right) -2\sigma^{2}}.\]
Denote
\[M\left(\vartheta,t\right)=\theta_{1}m_{t}\left(\vartheta\right),\ \dot{\mathrm{M}}\left(\vartheta,t\right)=\left(\dot{M}_{f}\left(\vartheta,t \right),\dot{M}_{a}\left(\vartheta,t\right)\right)^{\top},\ \dot{\mathrm{P}}\left(\vartheta\right)=\left(\dot{\mathrm{P}}_{f}\left( \vartheta\right),\dot{\mathrm{P}}_{a}\left(\vartheta\right)\right)^{\top}.\]
The equations for \(\dot{M}_{f}\left(\vartheta,t\right)\) and \(\dot{M}_{a}\left(\vartheta,t\right)\) we obtain by differentiation of the equation
\[M\left(\vartheta,t\right)=\mathrm{A}\left(\vartheta\right)M\left(\vartheta,t-1 \right)+\mathrm{E}\left(\vartheta\right)M\left(\vartheta_{0},t-1\right)+ \mathrm{B}\left(\vartheta,\vartheta_{0}\right)\zeta_{t}\left(\vartheta_{0} \right),\qquad t\geq 1,\]
by \(\theta_{1}\) and \(\theta_{2}\), respectively. For the definition of the functions \(\mathrm{A}\left(\cdot\right),\mathrm{E}\left(\cdot\right)\) and \(\mathrm{B}\left(\cdot,\cdot\right)\) see (35). Recall that the Fisher information matrix \(\mathbf{I}\left(\vartheta\right)\) is defined by the relations (33), (34) and (36). Suppose that this matrix is non-degenerate uniformly in \(\vartheta\in\Theta\). Then the One-step MLE-process is given by the equality
\[\vartheta_{t,T}^{\star} =\theta_{{}_{\tau_{T}}}^{\ast}+\frac{\mathbf{I}(\theta_{{}_{ \tau_{T}}}^{\ast})^{-1}}{t-\tau_{T}}\sum_{s=\tau_{T}+1}^{t}\left[\frac{\left[X_ {s}-M(\vartheta_{{}_{\tau_{T}}}^{\ast},s-1)\right]}{P(\vartheta_{{}_{\tau_{T} }}^{\ast})}\dot{\mathrm{M}}(\vartheta_{{}_{\tau_{T}}}^{\ast},s-1)\right.\] \[\left.+\left(\left[X_{s}-M(\vartheta_{{}_{\tau_{T}}}^{\ast},s-1) \right]^{2}-P(\vartheta_{{}_{\tau_{T}}}^{\ast})\right)\frac{\dot{\mathrm{P}}( \vartheta_{{}_{\tau_{T}}}^{\ast})}{2P(\vartheta_{{}_{\tau_{T}}}^{\ast})^{2}} \right],\qquad t\in\left[\tau_{T}+2,T\right]. \tag{39}\]
It can be shown that for any \(v\in\left(0,1\right]\), if we put \(t=vT\), then the normalized difference is asymptotically normal,

\[\sqrt{t}\left(\vartheta_{t,T}^{\star}-\vartheta_{0}\right)\Longrightarrow\xi\left(\vartheta_{0}\right)\sim\mathcal{N}\left(0,\mathbf{I}\left(\vartheta_{0}\right)^{-1}\right),\qquad t\mathbf{E}_{\vartheta_{0}}\left\|\vartheta_{t,T}^{\star}-\vartheta_{0}\right\|^{2}\longrightarrow\mathbf{E}_{\vartheta_{0}}\left\|\xi\left(\vartheta_{0}\right)\right\|^{2}.\]
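A sketch of the vector update (39) at a single time \(t\), assuming that callables for \(M\), \(\dot{\mathrm{M}}\), \(P\), \(\dot{\mathrm{P}}\) and the Fisher matrix are available, may look as follows; all names here are ours.

```python
import numpy as np

def one_step_mle_2d(X, tau, theta_pre, t, M_path, Mdot_path, P, Pdot, I_matrix):
    """Two-dimensional One-step MLE-process (39) at time t (sketch).

    theta_pre -- preliminary MME (2-vector) built from X_0,...,X_tau
    M_path, Mdot_path -- callables: M(theta, X) -> array of length T+1 and
                         Mdot(theta, X) -> array of shape (T+1, 2)
    P, Pdot, I_matrix -- callables for P(theta), the 2-vector dP/dtheta and
                         the 2x2 Fisher information matrix."""
    M = M_path(theta_pre, X)
    Mdot = Mdot_path(theta_pre, X)
    Pv, Pd = P(theta_pre), np.asarray(Pdot(theta_pre))
    s = np.arange(tau + 1, t + 1)
    resid = X[s] - M[s - 1]
    score = (resid[:, None] / Pv) * Mdot[s - 1] \
            + (resid ** 2 - Pv)[:, None] * Pd / (2 * Pv ** 2)
    correction = np.linalg.solve(I_matrix(theta_pre), score.sum(axis=0))
    return np.asarray(theta_pre) + correction / (t - tau)
```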
## 5 MLE and BE
Consider the model of observations (23)-(24), where the unknown parameter is \(\vartheta=b\in\Theta=\left(\alpha_{b},\beta_{b}\right),\alpha_{b}>0\). Below we study the MLE \(\hat{\vartheta}_{{}_{T}}\) and BE \(\tilde{\vartheta}_{{}_{T}}\) defined by the usual relations
\[L(\hat{\vartheta}_{T},X^{T})=\sup_{\vartheta\in\Theta}L(\vartheta,X^{T}),\qquad\qquad\tilde{\vartheta}_{T}=\frac{\int_{\Theta}\vartheta p\left(\vartheta\right)L(\vartheta,X^{T})\mathrm{d}\vartheta}{\int_{\Theta}p\left(\vartheta\right)L(\vartheta,X^{T})\mathrm{d}\vartheta}.\]
Here \(p\left(\vartheta\right),\vartheta\in\Theta,\) is a prior density, continuous and positive on \(\Theta\).
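To make these definitions concrete, the following Python sketch approximates the MLE and the BE for the unknown parameter \(b\) on a grid, using the Gaussian log-likelihood (up to an additive constant not depending on \(\vartheta\)) and, for simplicity, a uniform prior; the numerical values and the grid are illustrative choices of ours.

```python
import numpy as np

def gamma_star(b, f, a, sigma2):
    """Stationary error variance gamma_*(b), cf. (47) below."""
    d = sigma2 * (1 - a ** 2) - f ** 2 * b ** 2
    return (-d + np.sqrt(d ** 2 + 4 * f ** 2 * b ** 2 * sigma2)) / (2 * f ** 2)

def log_lik(b, X, f, a, sigma2):
    """log L(b, X^T) up to an additive constant not depending on b."""
    g = gamma_star(b, f, a, sigma2)
    P = sigma2 + f ** 2 * g
    m, ll = 0.0, 0.0
    for t in range(1, len(X)):
        resid = X[t] - f * m
        ll -= 0.5 * (resid ** 2 / P + np.log(P))
        m = a * m + a * f * g / P * resid   # stationary filter recursion
    return ll

def mle_and_bayes(X, grid, f, a, sigma2):
    """Grid approximations of the MLE and of the BE with a uniform prior."""
    ll = np.array([log_lik(b, X, f, a, sigma2) for b in grid])
    w = np.exp(ll - ll.max())               # unnormalized posterior on the grid
    return grid[np.argmax(ll)], np.sum(grid * w) / np.sum(w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f, a, b0, sigma2, T = 1.0, 0.5, 0.7, 1.0, 2000
    Y, X = 0.0, np.zeros(T + 1)
    for t in range(1, T + 1):
        X[t] = f * Y + np.sqrt(sigma2) * rng.standard_normal()
        Y = a * Y + b0 * rng.standard_normal()
    print(mle_and_bayes(X, np.linspace(0.2, 1.5, 200), f, a, sigma2))
```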
Recall the notation
\[P\left(\vartheta\right) =\sigma^{2}+f^{2}\gamma_{\ast}\left(\vartheta\right),\qquad\qquad A\left(\vartheta\right)=\frac{a\sigma^{2}}{P\left(\vartheta\right)},\] \[B^{\ast}\left(\vartheta,\vartheta_{0}\right) =\frac{af\gamma_{\ast}\left(\vartheta\right)\sqrt{P\left(\vartheta_{0}\right)}}{P\left(\vartheta\right)},\qquad\dot{B}^{\ast}\left(\vartheta_{0},\vartheta_{0}\right)=\left.\frac{\partial B^{\ast}\left(\vartheta,\vartheta_{0}\right)}{\partial\vartheta}\right|_{\vartheta=\vartheta_{0}}=\frac{af\sigma^{2}\dot{\gamma}_{\ast}\left(\vartheta_{0}\right)}{P\left(\vartheta_{0}\right)^{3/2}}.\] \[\mathrm{I}_{b}\left(\vartheta_{0}\right) =\frac{f^{2}\dot{B}^{\ast}\left(\vartheta_{0},\vartheta_{0}\right)^{2}}{\left[1-A\left(\vartheta_{0}\right)^{2}\right]P\left(\vartheta_{0}\right)}+\frac{\dot{P}\left(\vartheta_{0}\right)^{2}}{2P\left(\vartheta_{0}\right)^{2}}\] \[=\frac{f^{4}\dot{\gamma}_{\ast}\left(\vartheta_{0}\right)^{2}}{P\left(\vartheta_{0}\right)^{2}}\left[\frac{a^{2}\sigma^{4}}{P\left(\vartheta_{0}\right)^{2}-a^{2}\sigma^{4}}+\frac{1}{2}\right].\]
Below it is shown (see (40)-(41)) that the family of measures is LAN. Therefore we have Hajek-Le Cam's lower bound: for any estimator \(\bar{\vartheta}_{T}\) and any \(\vartheta_{0}\in\Theta\)
\[\lim_{\nu\to 0}\lim_{T\rightarrow\infty}\sup_{\left|\vartheta-\vartheta_{0} \right|\leq\nu}T\mathbf{E}_{\vartheta}\left(\bar{\vartheta}_{T}-\vartheta\right) ^{2}\geq\mathrm{I}_{b}\left(\vartheta_{0}\right)^{-1}.\]
This bound allows us to give the following definition.

We call an estimator \(\vartheta_{T}^{\circ}\) asymptotically efficient if the equality
\[\lim_{\nu\to 0}\lim_{T\to\infty}\sup_{|\vartheta-\vartheta_{0}|\leq\nu}T \mathbf{E}_{\vartheta}\left(\vartheta_{T}^{\circ}-\vartheta\right)^{2}=\mathrm{ I}_{b}\left(\vartheta_{0}\right)^{-1}.\]
holds for all \(\vartheta_{0}\in\Theta\).
**Theorem 2**.: _The MLE and BE are consistent, asymptotically normal,_
\[\sqrt{T}\left(\hat{\vartheta}_{T}-\vartheta_{0}\right)\Longrightarrow\mathcal{ N}\left(0,\mathrm{I}_{b}\left(\vartheta_{0}\right)^{-1}\right),\qquad\qquad \sqrt{T}\left(\tilde{\vartheta}_{T}-\vartheta_{0}\right)\Longrightarrow \mathcal{N}\left(0,\mathrm{I}_{b}\left(\vartheta_{0}\right)^{-1}\right),\]
_the polynomial moments converge, and both estimators are asymptotically efficient._
Proof.: We prove the stated properties of the estimators with the help of the general Theorems 3.1.1 and 3.2.1 in [9], i.e., we verify the conditions of these theorems for our model of observations. Introduce the normalized \(\left(\varphi_{T}=T^{-1/2}\right)\) likelihood ratio function
\[Z_{T}\left(u\right)=\frac{L(\vartheta_{0}+\varphi_{T}u,X^{T})}{L(\vartheta_{0},X^{T})},\qquad u\in\mathbb{U}_{T}=\left(\sqrt{T}\left(\alpha_{b}-\vartheta_{0}\right),\sqrt{T}\left(\beta_{b}-\vartheta_{0}\right)\right).\]
Therefore we have to prove the following properties of the process \(Z_{T}\left(u\right),u\in\mathbb{U}_{T}\).
1. _The process_ \(Z_{T}\left(\cdot\right)\) _admits the representation_ \[Z_{T}\left(u\right)=\exp\left(u\Delta_{T}(\vartheta_{0},X^{T})-\frac{u^{2}}{2} \mathrm{I}_{b}\left(\vartheta_{0}\right)+r_{T}(\vartheta_{0},u,X^{T})\right),\qquad u\in\mathbb{U}_{T},\] (40) _where uniformly on compacts_ \(\mathbb{K}\subset\Theta\) _the convergence_ \(r_{T}(\vartheta_{0},u,X^{T})\to 0\) _holds,_ \[\Delta_{T}(\vartheta_{0},X^{T})=\frac{1}{\sqrt{TP\left(\vartheta_{0}\right)}} \sum_{t=1}^{T}\left[\zeta_{t}\left(\vartheta_{0}\right)f\dot{m}_{t-1}\left( \vartheta_{0}\right)+\left[\zeta_{t}\left(\vartheta_{0}\right)^{2}-1\right] \frac{\dot{P}\left(\vartheta_{0}\right)}{2\sqrt{P\left(\vartheta_{0}\right)}}\right]\] _and uniformly on_ \(\mathbb{K}\)__ \[\Delta_{T}(\vartheta_{0},X^{T})\Longrightarrow\mathcal{N}\left(0,\mathrm{I}_{b }\left(\vartheta_{0}\right)\right).\] (41)
2. _There exists constant_ \(C>0\) _such that_ \[\sup_{\vartheta_{0}\in\mathbb{K}}\mathbf{E}_{\vartheta_{0}}\left|Z_{T}\left(u_ {2}\right)^{1/2}-Z_{T}\left(u_{1}\right)^{1/2}\right|^{2}\leq C\,\left|u_{1}-u _{2}\right|^{2}.\] (42)
3. _There exists a constant_ \(\kappa>0\) _and for any_ \(N>0\) _there is a constant_ \(C>0\) _such that_ \[\sup_{\vartheta_{0}\in\mathbb{K}}\mathbf{P}_{\vartheta_{0}}\left(Z_{T}\left(u \right)>e^{-\kappa u^{2}}\right)\leq\frac{C}{\left|u\right|^{N}}.\] (43)
Let us verify (40)-(41). Using Taylor expansions we can write (below \(\vartheta_{u}=\vartheta_{0}+\varphi_{T}u\))
\[\ln Z_{T}\left(u\right) =-\frac{1}{2}\sum_{t=1}^{T}\left[\frac{\left(X_{t}-fm_{t-1}\left(\vartheta_{u}\right)\right)^{2}}{P\left(\vartheta_{u}\right)}+\ln\frac{P\left(\vartheta_{u}\right)}{P\left(\vartheta_{0}\right)}-\frac{\left(X_{t}-fm_{t-1}\left(\vartheta_{0}\right)\right)^{2}}{P\left(\vartheta_{0}\right)}\right]\] \[=\frac{u}{\sqrt{T}}\sum_{t=1}^{T}\left[\frac{\zeta_{t}\left(\vartheta_{0}\right)f\dot{m}_{t-1}\left(\vartheta_{0}\right)}{\sqrt{P\left(\vartheta_{0}\right)}}+\frac{\left(\zeta_{t}\left(\vartheta_{0}\right)^{2}-1\right)\dot{P}\left(\vartheta_{0}\right)}{2P\left(\vartheta_{0}\right)}\right]\] \[\qquad-\frac{u^{2}}{2T}\sum_{t=1}^{T}\left[\frac{f^{2}\dot{m}_{t-1}\left(\vartheta_{0}\right)^{2}}{P\left(\vartheta_{0}\right)}+\frac{\zeta_{t}\left(\vartheta_{0}\right)^{2}\dot{P}\left(\vartheta_{0}\right)^{2}}{P\left(\vartheta_{0}\right)^{2}}-\frac{\dot{P}\left(\vartheta_{0}\right)^{2}}{2P\left(\vartheta_{0}\right)^{2}}\right]+O\left(T^{-1/2}\right).\]
Now the representation (40)-(41) follows from the CLT and the LLN:
\[\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\left[\frac{\zeta_{t}\left( \vartheta_{0}\right)f\dot{m}_{t-1}\left(\vartheta_{0}\right)}{\sqrt{P\left( \vartheta_{0}\right)}}+\frac{\left(\zeta_{t}\left(\vartheta_{0}\right)^{2}-1 \right)\dot{P}\left(\vartheta_{0}\right)}{2P\left(\vartheta_{0}\right)}\right] \Longrightarrow\mathcal{N}\left(0,\mathrm{I}_{b}\left(\vartheta_{0}\right) \right),\] \[\frac{1}{T}\sum_{t=1}^{T}\left[\frac{f^{2}\dot{m}_{t-1}\left( \vartheta_{0}\right)^{2}}{P\left(\vartheta_{0}\right)}+\frac{\zeta_{t}\left( \vartheta_{0}\right)^{2}\dot{P}\left(\vartheta_{0}\right)^{2}}{P\left( \vartheta_{0}\right)^{2}}-\frac{\dot{P}\left(\vartheta_{0}\right)^{2}}{2P \left(\vartheta_{0}\right)^{2}}\right]\longrightarrow\mathrm{I}_{b}\left( \vartheta_{0}\right).\]
Let us denote
\[\Pi\left(u\right)=-\frac{1}{2}\sum_{t=1}^{T}\left[\frac{\left(X_{t}-fm_{t-1} \left(\vartheta_{u}\right)\right)^{2}}{P\left(\vartheta_{u}\right)}+\ln\frac{P \left(\vartheta_{u}\right)}{P\left(\vartheta_{0}\right)}-\frac{\left(X_{t}-fm _{t-1}\left(\vartheta_{0}\right)\right)^{2}}{P\left(\vartheta_{0}\right)}\right]\]
and calculate its derivative
\[\dot{\Pi}\left(u\right)=\frac{\partial}{\partial u}\Pi\left(u\right) =\frac{\varphi_{T}}{2}\sum_{t=1}^{T}\left[\frac{2\left(X_{t}-fm_{t-1} \left(\vartheta_{u}\right)\right)f\dot{m}_{t-1}\left(\vartheta_{u}\right)}{P \left(\vartheta_{u}\right)}-\frac{\dot{P}\left(\vartheta_{u}\right)}{P\left( \vartheta_{u}\right)}\right.\] \[\qquad\qquad\left.+\frac{\left(X_{t}-fm_{t-1}\left(\vartheta_{u }\right)\right)^{2}\dot{P}\left(\vartheta_{u}\right)}{P\left(\vartheta_{u} \right)^{2}}\right].\]
Note that \(X_{t}-fm_{t-1}\left(\vartheta_{u}\right)=\zeta_{t}\left(\vartheta_{u}\right)\sqrt{P\left(\vartheta_{u}\right)}\), \(t\geq 1\), where \(\zeta_{t}\left(\vartheta_{u}\right),t\geq 1,\) under the measure \(\mathbf{P}_{\vartheta_{u}}\) are i.i.d. random variables, \(\zeta_{t}\left(\vartheta_{u}\right)\sim\mathcal{N}\left(0,1\right)\). Therefore, with \(\mathbf{P}_{\vartheta_{u}}\)-probability \(1\) we have
\[\dot{\Pi}\left(u\right)=\varphi_{T}\sum_{t=1}^{T}\left[\frac{\zeta_{t}\left( \vartheta_{u}\right)f\dot{m}_{t-1}\left(\vartheta_{u}\right)}{\sqrt{P\left( \vartheta_{u}\right)}}+\frac{\left[\zeta_{t}\left(\vartheta_{u}\right)^{2}-1 \right]\dot{P}\left(\vartheta_{u}\right)}{2P\left(\vartheta_{u}\right)}\right]\]
and
\[\mathbf{E}_{\vartheta_{u}}\dot{\Pi}\left(u\right)^{2} =\varphi_{T}^{2}\sum_{t=1}^{T}\mathbf{E}_{\vartheta_{u}}\left[ \frac{\zeta_{t}\left(\vartheta_{u}\right)f\dot{m}_{t-1}\left(\vartheta_{u} \right)}{\sqrt{P\left(\vartheta_{u}\right)}}+\frac{\left[\zeta_{t}\left( \vartheta_{u}\right)^{2}-1\right]\dot{P}\left(\vartheta_{u}\right)}{2P\left( \vartheta_{u}\right)}\right]^{2}\] \[=\varphi_{T}^{2}\sum_{t=1}^{T}\left[\frac{f^{2}\mathbf{E}_{ \vartheta_{u}}\left[\zeta_{t}\left(\vartheta_{u}\right)^{2}\dot{m}_{t-1}\left( \vartheta_{u}\right)^{2}\right]}{P\left(\vartheta_{u}\right)}+\frac{\dot{P} \left(\vartheta_{u}\right)^{2}\mathbf{E}_{\vartheta_{u}}\left[\zeta_{t}\left( \vartheta_{u}\right)^{2}-1\right]^{2}}{4P\left(\vartheta_{u}\right)^{2}}\right]\]
\[=\frac{f^{2}\dot{B}^{\ast}\left(\vartheta_{u},\vartheta_{u}\right)^{2}}{P\left(\vartheta_{u}\right)\left(1-A\left(\vartheta_{u}\right)^{2}\right)}+\frac{\dot{P}\left(\vartheta_{u}\right)^{2}}{2P\left(\vartheta_{u}\right)^{2}}\leq C.\]
Hence we can write
\[\mathbf{E}_{\vartheta_{0}}\left|Z_{T}\left(u_{2}\right)^{1/2}-Z_{T} \left(u_{1}\right)^{1/2}\right|^{2}=\mathbf{E}_{\vartheta_{0}}\left|\int_{u_{ 1}}^{u_{2}}\frac{\partial}{\partial u}Z_{T}\left(u\right)^{1/2}\mathrm{d}u \right|^{2}\] \[\qquad\leq\frac{\left(u_{2}-u_{1}\right)}{4}\int_{u_{1}}^{u_{2}} \mathbf{E}_{\vartheta_{0}}Z_{T}\left(u\right)\dot{\Pi}\left(u\right)^{2} \mathrm{d}u=\frac{\left(u_{2}-u_{1}\right)}{4}\int_{u_{1}}^{u_{2}}\mathbf{E}_ {\vartheta_{u}}\dot{\Pi}\left(u\right)^{2}\mathrm{d}u\] \[\qquad\leq C\left(u_{2}-u_{1}\right)^{2},\]
where the constant \(C>0\) can be chosen not depending on \(\vartheta_{0}\). The estimate (42) is proved too.
To verify the last estimate (43) we write
\[\mathbf{P}_{\vartheta_{0}}\left(Z_{T}\left(u\right)>e^{-\kappa u ^{2}}\right) =\mathbf{P}_{\vartheta_{0}}\left(\Pi\left(u\right)>-\kappa u^{2}\right)\] \[=\mathbf{P}_{\vartheta_{0}}\Big{(}\Pi\left(u\right)-\mathbf{E}_{ \vartheta_{0}}\Pi\left(u\right)>-\kappa u^{2}-\mathbf{E}_{\vartheta_{0}}\Pi \left(u\right)\Big{)}.\]
Note that
\[-2\mathbf{E}_{\vartheta_{0}}\Pi\left(u\right) =\sum_{t=1}^{T}\frac{\mathbf{E}_{\vartheta_{0}}\Big{[}\zeta_{t} \left(\vartheta_{0}\right)\sqrt{P\left(\vartheta_{0}\right)}-f\left[m_{t-1} \left(\vartheta_{u}\right)-m_{t-1}\left(\vartheta_{0}\right)\right]\Big{]}^{2 }}{P\left(\vartheta_{u}\right)}\] \[\qquad\qquad\qquad+T\ln\frac{P\left(\vartheta_{u}\right)}{P\left( \vartheta_{0}\right)}-\sum_{t=1}^{T}\mathbf{E}_{\vartheta_{0}}\zeta_{t}\left( \vartheta_{0}\right)^{2}\] \[=\frac{f^{2}}{P\left(\vartheta_{u}\right)}\sum_{t=1}^{T}\mathbf{ E}_{\vartheta_{0}}\left[m_{t-1}\left(\vartheta_{u}\right)-m_{t-1}\left( \vartheta_{0}\right)\right]^{2}\] \[\qquad\qquad\qquad+T\left(1+\ln\frac{P\left(\vartheta_{0} \right)}{P\left(\vartheta_{u}\right)}-\frac{P\left(\vartheta_{0}\right)}{P \left(\vartheta_{u}\right)}\right).\]
Consider two regions of \(u\). First suppose that \(\left|u\varphi_{T}\right|\leq\nu\) with some small \(\nu>0\). Then, using expansions in the vicinity of \(\vartheta_{0}\), we can write
\[-\mathbf{E}_{\vartheta_{0}}\Pi\left(u\right) =\left(\frac{u^{2}f^{2}\dot{B}^{*}\left(\vartheta_{0},\vartheta_{0} \right)^{2}}{2P\left(\vartheta_{u}\right)\left(1-A\left(\vartheta_{0}\right)^{2 }\right)}+\frac{u^{2}\dot{P}\left(\vartheta_{0}\right)^{2}}{2P\left( \vartheta_{0}\right)^{2}}\right)\left(1+o\left(\nu\right)\right)\] \[=\frac{u^{2}}{2}\mathrm{I}_{b}\left(\vartheta_{0}\right)\left(1+o \left(\nu\right)\right)\geq\frac{u^{2}}{4}\mathrm{I}_{b}\left(\vartheta_{0} \right)\geq\kappa_{1}u^{2}\]
for sufficiently small \(\nu\). Remark that
\[\inf_{\vartheta\in\Theta}\mathrm{I}_{b}\left(\vartheta\right)>0\]
and the constant \(\kappa_{1}\) can be chosen not depending on \(\vartheta_{0}\).
Let \(\left|u\varphi_{T}\right|\geq\nu\). Consider the difference of two equations
\[m_{t-1}\left(\vartheta_{0}\right) =am_{t-2}\left(\vartheta_{0}\right)\,+B^{\ast}\left(\vartheta_{0},\vartheta_{0}\right)\zeta_{t-1}\left(\vartheta_{0}\right),\] \[m_{t-1}\left(\vartheta_{u}\right) =A\left(\vartheta_{u}\right)m_{t-2}\left(\vartheta_{u}\right)+ \left[a-A\left(\vartheta_{u}\right)\right]m_{t-2}\left(\vartheta_{0}\right)+B^ {\ast}\left(\vartheta_{u},\vartheta_{0}\right)\zeta_{t-1}\left(\vartheta_{0} \right),\]
and write
\[m_{t-1}\left(\vartheta_{u}\right)-m_{t-1}\left(\vartheta_{0} \right)=A\left(\vartheta_{u}\right)\left[m_{t-2}\left(\vartheta_{u}\right)-m_ {t-2}\left(\vartheta_{0}\right)\right]\] \[\qquad\qquad\qquad\qquad\qquad+\left[B^{\ast}\left(\vartheta_{u },\vartheta_{0}\right)-B^{\ast}\left(\vartheta_{0},\vartheta_{0}\right)\right] \zeta_{t-1}\left(\vartheta_{0}\right).\]
Therefore
\[\mathbf{E}_{\vartheta_{0}}\left[m_{t-1}\left(\vartheta_{u}\right) -m_{t-1}\left(\vartheta_{0}\right)\right]^{2}=A\left(\vartheta_{u}\right)^{2} \mathbf{E}_{\vartheta_{0}}\left[m_{t-2}\left(\vartheta_{u}\right)-m_{t-2} \left(\vartheta_{0}\right)\right]^{2}\] \[\qquad\qquad\qquad\qquad\qquad+\left[B^{\ast}\left(\vartheta_{u },\vartheta_{0}\right)-B^{\ast}\left(\vartheta_{0},\vartheta_{0}\right)\right] ^{2}.\]
We suppose that \(m_{t}\left(\vartheta_{u}\right)\) and \(m_{t}\left(\vartheta_{0}\right)\) are stationary processes. Hence
\[\mathbf{E}_{\vartheta_{0}}\left[m_{t-1}\left(\vartheta_{u}\right)-m_{t-1} \left(\vartheta_{0}\right)\right]^{2}=\frac{\left[B^{\ast}\left(\vartheta_{u},\vartheta_{0}\right)-B^{\ast}\left(\vartheta_{0},\vartheta_{0}\right)\right] ^{2}}{1-A\left(\vartheta_{u}\right)^{2}}\]
and
\[-2\mathbf{E}_{\vartheta_{0}}\Pi\left(u\right)=\frac{Tf^{2}\left[B^{\ast}\left( \vartheta_{u},\vartheta_{0}\right)-B^{\ast}\left(\vartheta_{0},\vartheta_{0} \right)\right]^{2}}{1-A\left(\vartheta_{u}\right)^{2}}+T\left(1+\ln\frac{P\left( \vartheta_{0}\right)}{P\left(\vartheta_{u}\right)}-\frac{P\left(\vartheta_{0} \right)}{P\left(\vartheta_{u}\right)}\right).\]
Denote
\[G\left(\vartheta_{u},\vartheta_{0}\right)=\frac{f^{2}\left[B^{\ast}\left( \vartheta_{u},\vartheta_{0}\right)-B^{\ast}\left(\vartheta_{0},\vartheta_{0} \right)\right]^{2}}{2\left(1-A\left(\vartheta_{u}\right)^{2}\right)}+\frac{1}{ 2}\left(1+\ln\frac{P\left(\vartheta_{0}\right)}{P\left(\vartheta_{u}\right)}- \frac{P\left(\vartheta_{0}\right)}{P\left(\vartheta_{u}\right)}\right)\]
and show that
\[g\left(\vartheta_{0},\nu\right)=\inf_{\left|\vartheta-\vartheta_{0}\right| \geq\nu}G\left(\vartheta,\vartheta_{0}\right)>0.\]
To do this it is sufficient to verify that
\[\inf_{\left|\vartheta-\vartheta_{0}\right|\geq\nu}\left|P\left(\vartheta\right) -P\left(\vartheta_{0}\right)\right|>0.\]
As \(\left|P\left(\vartheta\right)-P\left(\vartheta_{0}\right)\right|=f^{2}\left|\gamma_{ \ast}\left(\vartheta\right)-\gamma_{\ast}\left(\vartheta_{0}\right)\right|\) we check the condition \(\dot{\gamma}_{\ast}\left(\vartheta\right)>0\). We have (see (27))
\[\dot{\gamma}_{\ast}\left(\vartheta\right) =\vartheta+\frac{\left(4\left[f^{2}\vartheta^{2}-\sigma^{2}\left( 1-a^{2}\right)\right]f^{2}\vartheta+8\vartheta\sigma^{2}f^{2}\right)}{f^{2} \left[\left(\sigma^{2}\left(1-a^{2}\right)-f^{2}\vartheta^{2}\right)^{2}+4 \vartheta^{2}\sigma^{2}f^{2}\right]^{1/2}}\] \[=\vartheta+\frac{\left[4f^{2}\vartheta^{2}+4\sigma^{2}\left(1+a^{ 2}\right)\right]f^{2}\vartheta}{f^{2}\left[\left(\sigma^{2}\left(1-a^{2} \right)-f^{2}\vartheta^{2}\right)^{2}+4\vartheta^{2}\sigma^{2}f^{2}\right]^{1 /2}}>0 \tag{44}\]
and
\[\inf_{\vartheta_{0}\in\Theta}g\left(\vartheta_{0},\nu\right)>0.\]
Therefore, using the relation \(T\geq u^{2}/\left(\beta_{b}-\alpha_{b}\right)^{2}\) we can write
\[\inf_{\left|u\right|\geq\nu\sqrt{T}}\Bigl{(}-\mathbf{E}_{\vartheta_{0}}\Pi \left(u\right)\Bigr{)}\geq g\left(\vartheta_{0},\nu\right)T\geq\frac{g\left( \vartheta_{0},\nu\right)u^{2}}{\left(\beta_{b}-\alpha_{b}\right)^{2}}=\kappa_{ 2}u^{2}.\]
We have
\[\Pi\left(u\right) -\mathbf{E}_{\vartheta_{0}}\Pi\left(u\right)=\sum_{t=1}^{T} \left[\zeta_{t}\left(\vartheta_{0}\right)^{2}-1\right]\left(\frac{P\left( \vartheta_{u}\right)-P\left(\vartheta_{0}\right)}{2P\left(\vartheta_{u}\right)}\right)\] \[\quad\quad+\sum_{t=1}^{T}\frac{f\sqrt{P\left(\vartheta_{0}\right) }}{P\left(\vartheta_{u}\right)}\zeta_{t}\left(\vartheta_{0}\right)\left[m_{t- 1}\left(\vartheta_{u}\right)-m_{t-1}\left(\vartheta_{0}\right)\right]\] \[\quad\quad-\sum_{t=1}^{T}\frac{f^{2}\left(\left[m_{t-1}\left( \vartheta_{u}\right)-m_{t-1}\left(\vartheta_{0}\right)\right]^{2}-\mathbf{E} _{\vartheta_{0}}\left[m_{t-1}\left(\vartheta_{u}\right)-m_{t-1}\left(\vartheta _{0}\right)\right]^{2}\right)}{2P\left(\vartheta_{u}\right)}\] \[=\Pi_{1}\left(u\right)+\Pi_{2}\left(u\right)-\Pi_{3}\left(u\right),\]
where for any integer \(N\geq 1\)
\[\mathbf{E}_{\vartheta_{0}}\left|\Pi_{1}\left(u\right)\right|^{2N} =\mathbf{E}_{\vartheta_{0}}\left|\sum_{t=1}^{T}\left[\zeta_{t} \left(\vartheta_{0}\right)^{2}-1\right]\left(\frac{P\left(\vartheta_{u}\right) -P\left(\vartheta_{0}\right)}{2P\left(\vartheta_{u}\right)}\right)\right|^{2N}\] \[=\left|u\varphi_{T}\dot{P}\left(\tilde{\vartheta}_{u}\right) \right|^{2N}\mathbf{E}_{\vartheta_{0}}\left|\sum_{t=1}^{T}\left[\zeta_{t} \left(\vartheta_{0}\right)^{2}-1\right]\right|^{2N}\leq C\left|u\right|^{2N}, \tag{45}\] \[\mathbf{E}_{\vartheta_{0}}\left|\Pi_{2}\left(u\right)\right|^{2N} =\frac{f^{2N}P\left(\vartheta_{0}\right)^{N}}{P\left(\vartheta_{ u}\right)^{2N}}\mathbf{E}_{\vartheta_{0}}\left|\sum_{t=1}^{T}\zeta_{t} \left(\vartheta_{0}\right)\left[m_{t-1}\left(\vartheta_{u}\right)-m_{t-1} \left(\vartheta_{0}\right)\right]\right|^{2N}\] \[\leq C\left|u\right|^{2N}T^{-N}\mathbf{E}_{\vartheta_{0}}\left| \sum_{t=1}^{T}\zeta_{t}\left(\vartheta_{0}\right)\dot{m}_{t-1}\left(\tilde{ \vartheta}_{u}\right)\right|^{2N}\leq C\left|u\right|^{2N},\]
and
\[\mathbf{E}_{\vartheta_{0}}\left|\Pi_{3}\left(u\right)\right|^{2N} =\frac{f^{4N}\left|u\right|^{4N}\varphi_{T}^{4N}}{2^{2N}P\left( \vartheta_{u}\right)^{2N}}\mathbf{E}_{\vartheta_{0}}\left|\sum_{t=1}^{T} \Bigl{[}\overset{\cdot}{m}_{t-1}(\tilde{\vartheta}_{u})^{2}-\mathbf{E}_{ \vartheta_{0}}\overset{\cdot}{m}_{t-1}(\tilde{\vartheta}_{u})^{2}\Bigr{]} \right|^{2N}\] \[\leq C\left|u\right|^{4N}\varphi_{T}^{4N}T^{N}=C\left|u\right|^{2N }\ \frac{\left|u\right|^{2N}}{T}\leq C\left|u\right|^{2N}. \tag{46}\]
Here we twice used the estimate
\[\mathbf{E}_{\vartheta_{0}}\left|\sum_{t=1}^{T}K_{t}\left(\vartheta\right) \right|^{2N}\leq CT^{N},\]
which is valid for centered Gaussian time series \(K_{t}\left(\vartheta\right),t\geq 1,\) with bounded variance and exponential mixing. We also used the relation \(\left|u\varphi_{T}\right|<\beta_{b}-\alpha_{b}\). Note that all constants \(C>0\) in (45)-(46) can be chosen not depending on \(\vartheta_{0}\).
Denote \(\kappa=2^{-1}\left(\kappa_{1}\wedge\kappa_{2}\right)\). Now by the Chebyshev inequality and (45)-(46)
\[\mathbf{P}_{\vartheta_{0}}\left(Z_{T}\left(u\right)>e^{-\kappa u^{2}}\right)\leq\mathbf{P}_{\vartheta_{0}}\left(\left|\Pi\left(u\right)-\mathbf{E}_{\vartheta_{0}}\Pi\left(u\right)\right|\geq\frac{\kappa u^{2}}{2}\right)\] \[\qquad\leq\frac{6^{2N}}{\kappa^{2N}u^{4N}}\mathbf{E}_{\vartheta_{0}}\left|\Pi_{1}\left(u\right)\right|^{2N}+\frac{6^{2N}}{\kappa^{2N}u^{4N}}\mathbf{E}_{\vartheta_{0}}\left|\Pi_{2}\left(u\right)\right|^{2N}+\frac{6^{2N}}{\kappa^{2N}u^{4N}}\mathbf{E}_{\vartheta_{0}}\left|\Pi_{3}\left(u\right)\right|^{2N}\leq\frac{C}{u^{2N}}.\]
Therefore the conditions (40)-(43) are verified and, by Theorems 3.1.1 and 3.2.1 in [9], the estimators \(\hat{\vartheta}_{T}\) and \(\tilde{\vartheta}_{T}\) have all the properties mentioned in Theorem 2.
## 6 Adaptive filter
We are given the partially observed system
\[X_{t} =f\,Y_{t-1}+\sigma\,w_{t},\qquad X_{0},\qquad t=1,2,\ldots,\] \[Y_{t} =a\,Y_{t-1}+b\,v_{t},\qquad\quad Y_{0}.\]
Recall that if all parameters of the model \(\vartheta=\left(f,\sigma^{2},a,b\right)\) are known, then the stationary version \(m_{t}\left(\vartheta\right)\) of the conditional expectation \(m\left(\vartheta,t\right)=\mathbf{E}_{\vartheta}\left(Y_{t}|X_{s},s\leq t\right)\) satisfies the equation (see (28), (27))
\[m_{t}\left(\vartheta\right) =am_{t-1}\left(\vartheta\right)+\frac{af\gamma_{*}\left(\vartheta \right)}{\sigma^{2}+f^{2}\gamma_{*}\left(\vartheta\right)}\left[X_{t}-fm_{t-1 }\left(\vartheta\right)\right],\quad m_{0}\left(\vartheta\right),\quad t\geq 1, \tag{47}\] \[\gamma_{*}\left(\vartheta\right) =\frac{1}{2f^{2}}\left[f^{2}b^{2}-\sigma^{2}\left(1-a^{2}\right) \right]+\frac{1}{2f^{2}}\left[\left(\sigma^{2}\left(1-a^{2}\right)-b^{2}f^{2} \right)^{2}+4f^{2}b^{2}\sigma^{2}\right]^{1/2}.\]
Consider the problem of approximation of the random function \(m_{t}\left(\vartheta\right),t\geq 1\) when one of the parameters is unknown.
### Unknown parameter \(b\)
Suppose that the values \(f>0\), \(a^{2}\in[0,1)\) and \(\sigma^{2}>0\) are known and the only unknown parameter is \(\vartheta=b\in\Theta=(\alpha_{b},\beta_{b})\), \(\alpha_{b}>0\). The construction proposed below is a direct analogue of the solutions discussed in [16] and [17] for continuous-time models like (8)-(9). First, using the observations \(X_{0},X_{1},\ldots,X_{\tau_{T}}\), we calculate the MME \(\vartheta_{\tau_{T}}^{*}=b_{\tau_{T}}^{*}\), \(\tau_{T}=\left[T^{\delta}\right],\delta\in\left(\frac{1}{2},1\right)\) (see (19))
\[\vartheta_{\tau_{T}}^{*}=2^{-1/2}f^{-1}\left[\left(\frac{1}{\tau_{T}}\sum_{s=1}^{\tau_{T}}\left[X_{s}-X_{s-1}\right]^{2}-2\sigma^{2}\right)\left(1+a\right)\right]^{1/2}. \tag{48}\]
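In code, the moment estimator (48) is a one-line computation; in the sketch below the clipping at zero, which guards against a negative argument of the square root in short samples, is our addition.

```python
import numpy as np

def mme_b(X_learn, f, a, sigma2):
    """Method of moments estimator (48), computed from the learning sample X_0,...,X_tau."""
    S = np.mean(np.diff(X_learn) ** 2)      # (1/tau) * sum_s (X_s - X_{s-1})^2
    return np.sqrt(np.maximum((S - 2 * sigma2) * (1 + a), 0.0) / 2) / f
```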
Then we define the One-step MLE-process (see (37))
\[\vartheta_{t,T}^{*} =\vartheta_{\tau_{T}}^{*}+\frac{1}{\mathrm{I}_{b}(\vartheta_{ \tau_{T}}^{*})\left(t-\tau_{T}\right)}\sum_{s=\tau_{T}+1}^{t}\left[\frac{ \left[X_{s}-fm_{s-1}(\vartheta_{\tau_{T}}^{*})\right]}{P(\vartheta_{\tau_{T} }^{*})}f\dot{m}_{s-1}(\vartheta_{\tau_{T}}^{*})\right.\] \[\left.+\left(\left[X_{s}-fm_{s-1}(\vartheta_{\tau_{T}}^{*}) \right]^{2}-P(\vartheta_{\tau_{T}}^{*})\right)\frac{\dot{P}(\vartheta_{\tau_{ T}}^{*})}{2P(\vartheta_{\tau_{T}}^{*})^{2}}\right],\quad t\in\left[\tau_{T}+1,T \right]. \tag{49}\]
Here \(m_{s-1}(\vartheta_{\tau_{T}}^{*}),s\geq\tau_{T}+1,\) satisfies the equation (47), where \(b\) is replaced by \(\vartheta_{\tau_{T}}^{*}\), and \(\dot{m}_{s-1}(\vartheta_{\tau_{T}}^{*}),s\geq\tau_{T}+1,\) is obtained by differentiation of (47) with respect to \(b\) and once more replacing \(b\) by \(\vartheta_{\tau_{T}}^{*}\). Therefore for \(s>\tau_{T}+1\) we can write
\[m_{s-1}(\vartheta_{\tau_{T}}^{*}) =P(\vartheta_{\tau_{T}}^{*})^{-1}\left[a\sigma^{2}m_{s-2}(\vartheta_{\tau_{T}}^{*})+af\gamma_{*}(\vartheta_{\tau_{T}}^{*})X_{s-1}\right], \tag{50}\] \[\dot{m}_{s-1}(\vartheta_{\tau_{T}}^{*}) =P(\vartheta_{\tau_{T}}^{*})^{-2}a\sigma^{2}\left[P(\vartheta_{\tau_{T}}^{*})\dot{m}_{s-2}(\vartheta_{\tau_{T}}^{*})-f^{2}\dot{\gamma}_{*}(\vartheta_{\tau_{T}}^{*})\,m_{s-2}(\vartheta_{\tau_{T}}^{*})\right]\] \[\quad+\frac{af\sigma^{2}\dot{\gamma}_{*}(\vartheta_{\tau_{T}}^{*})}{P(\vartheta_{\tau_{T}}^{*})^{2}}\,X_{s-1}. \tag{51}\]
Here \(P\left(\vartheta\right)=\sigma^{2}+f^{2}\gamma_{*}\left(\vartheta\right)\) and the Fisher information is
\[\mathrm{I}_{b}(\vartheta)=\frac{\dot{P}_{b}\left(\vartheta\right)^{2}\left[P\left(\vartheta\right)^{2}+a^{2}\sigma^{4}\right]}{2P\left(\vartheta\right)^{2}\left[P\left(\vartheta\right)^{2}-a^{2}\sigma^{4}\right]},\]
where \(A\left(\vartheta\right)=P\left(\vartheta\right)^{-1}a\sigma^{2}\) (see (32)).
We introduce the adaptive Kalman filter with the help of the process \(m_{t,T}^{*}\) defined as follows

\[m_{t,T}^{*}=P(\vartheta_{t-1,T}^{*})^{-1}\left[a\sigma^{2}m_{t-1,T}^{*}+af\gamma_{*}(\vartheta_{t-1,T}^{*})X_{t}\right],\qquad t\in\left[\tau_{T}+1,T\right]. \tag{52}\]
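The whole construction (48)-(52) fits into a short simulation sketch. In the Python code below \(\dot{\gamma}_{*}\) is approximated by a central finite difference, the recursions are started from zero, at the first step of (52) the not-yet-defined value \(\vartheta_{\tau_{T},T}^{*}\) is replaced by the preliminary estimate, and all function names and numerical values are ours.

```python
import numpy as np

def gamma_star(b, f, a, sigma2):
    """Stationary error variance gamma_*(b) of (47)."""
    d = sigma2 * (1 - a ** 2) - f ** 2 * b ** 2
    return (-d + np.sqrt(d ** 2 + 4 * f ** 2 * b ** 2 * sigma2)) / (2 * f ** 2)

def gamma_star_dot(b, f, a, sigma2, h=1e-6):
    """Central finite-difference approximation of d(gamma_*)/db."""
    return (gamma_star(b + h, f, a, sigma2) - gamma_star(b - h, f, a, sigma2)) / (2 * h)

def adaptive_filter(X, f, a, sigma2, delta=0.75):
    """Sketch of the adaptive Kalman filter (48)-(52) for the unknown parameter b.
    Returns the One-step MLE-process theta*_{t,T} and the adaptive estimates
    m*_{t,T} for t = tau+1,...,T."""
    T = len(X) - 1
    tau = int(T ** delta)

    # (48): method of moments estimator from the learning interval
    S = np.mean(np.diff(X[:tau + 1]) ** 2)
    b_pre = np.sqrt(np.maximum((S - 2 * sigma2) * (1 + a), 1e-12) / 2) / f

    # quantities frozen at the preliminary estimate
    g, gd = gamma_star(b_pre, f, a, sigma2), gamma_star_dot(b_pre, f, a, sigma2)
    P, Pd = sigma2 + f ** 2 * g, f ** 2 * gd
    Ib = Pd ** 2 * (P ** 2 + a ** 2 * sigma2 ** 2) / (2 * P ** 2 * (P ** 2 - a ** 2 * sigma2 ** 2))

    # (50)-(51): filter and its derivative at b_pre, started from zero
    m, mdot = np.zeros(T + 1), np.zeros(T + 1)
    for s in range(1, T + 1):
        m[s] = (a * sigma2 * m[s - 1] + a * f * g * X[s]) / P
        mdot[s] = (a * sigma2 * (P * mdot[s - 1] - f ** 2 * gd * m[s - 1])
                   + a * f * sigma2 * gd * X[s]) / P ** 2

    # (49): One-step MLE-process for t = tau+1,...,T
    s = np.arange(tau + 1, T + 1)
    resid = X[s] - f * m[s - 1]
    terms = resid / P * f * mdot[s - 1] + (resid ** 2 - P) * Pd / (2 * P ** 2)
    theta_path = b_pre + np.cumsum(terms) / (Ib * (s - tau))

    # (52): adaptive state estimate driven by theta*_{t-1,T}
    m_ad, prev = np.zeros(len(s)), m[tau]
    for i, t in enumerate(s):
        bt = b_pre if i == 0 else theta_path[i - 1]   # theta*_{tau,T} is not defined
        gt = gamma_star(bt, f, a, sigma2)
        Pt = sigma2 + f ** 2 * gt
        prev = (a * sigma2 * prev + a * f * gt * X[t]) / Pt
        m_ad[i] = prev
    return theta_path, m_ad

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f, a, b0, sigma2, T = 1.0, 0.5, 0.7, 1.0, 5000
    Y, X = 0.0, np.zeros(T + 1)
    for t in range(1, T + 1):
        X[t] = f * Y + np.sqrt(sigma2) * rng.standard_normal()
        Y = a * Y + b0 * rng.standard_normal()
    theta_path, m_ad = adaptive_filter(X, f, a, sigma2)
    print("final estimate of b:", theta_path[-1])
```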
We have to compare \(m_{t,T}^{*}\) with \(m\left(\vartheta_{0},t\right)\) for large values of \(T\). Below \(\eta_{t,T}=\sqrt{t}\left[\vartheta_{t,T}^{*}-\vartheta_{0}\right]\) and \(\dot{B}^{*}\left(\vartheta_{0},\vartheta_{0}\right)=-f^{-1}\sqrt{P\left( \vartheta_{0}\right)}\dot{A}(\vartheta_{0})=P\left(\vartheta_{0}\right)^{-3/2} af\sigma^{2}\dot{\gamma}_{*}\left(\vartheta_{0}\right)\).
**Theorem 3**.: _Let \(t=\left[vT\right],v\in\left(0,1\right]\), \(k=t-\tau_{{}_{T}}+2\) and \(T\rightarrow\infty\). Then the following relations hold_
\[\sqrt{t}\left[m_{t,T}^{\star}-m_{t}\left(\vartheta_{0}\right) \right]=\dot{B}^{\ast}\left(\vartheta_{0},\vartheta_{0}\right)\sum_{m=0}^{k}A \left(\vartheta_{0}\right)^{m}\eta_{t-m-1,T}\;\zeta_{t-m}\left(\vartheta_{0} \right)+o\left(1\right),\] \[t\mathbf{E}_{\vartheta_{0}}\left[m_{t,T}^{\star}-m_{t}\left( \vartheta_{0}\right)\right]^{2}\longrightarrow S_{b}^{\star}\left(\vartheta_ {0}\right)^{2}\equiv\frac{\dot{B}^{\ast}\left(\vartheta_{0},\vartheta_{0} \right)^{2}}{\mathrm{I}_{b}\left(\vartheta_{0}\right)\left(1-A\left(\vartheta _{0}\right)^{2}\right)}. \tag{53}\]
_Here the \(o\left(1\right)\) term and the convergence (53) are uniform on compacts \(\mathbb{K}\subset\Theta\)._
Proof.: Recall that
\[m_{t}\left(\vartheta_{0}\right) =A(\vartheta_{0})m_{t-1}\left(\vartheta_{0}\right)+f^{-1}\left[a -A(\vartheta_{0})\right]X_{t},\] \[m_{t,T}^{\star} =A(\vartheta_{t-1,T}^{\star})m_{t-1,T}^{\star}+f^{-1}\left[a-A(\vartheta_{t-1,T}^{\star})\right]X_{t}.\]
Therefore for \(\delta_{t,T}=\sqrt{t}\left[m_{t,T}^{\star}-m_{t}\left(\vartheta_{0}\right)\right]\) we have the equation
\[\delta_{t,T} =A(\vartheta_{t-1,T}^{\star})\delta_{t-1,T}+\sqrt{t}\left[A( \vartheta_{t-1,T}^{\star})-A(\vartheta_{0})\right]m_{t-1}\left(\vartheta_{0}\right)\] \[\qquad\qquad+f^{-1}\sqrt{t}\left[A(\vartheta_{0})-A(\vartheta_{t -1,T}^{\star})\right]X_{t}\] \[=A(\vartheta_{t-1,T}^{\star})\delta_{t-1,T}+f^{-1}\sqrt{P\left( \vartheta_{0}\right)}\sqrt{T}\left[A(\vartheta_{0})-A(\vartheta_{t-1,T}^{ \star})\right]\zeta_{t}\left(\vartheta_{0}\right)\] \[=A(\vartheta_{0})\;\delta_{t-1,T}+\dot{B}^{\ast}\left(\vartheta_ {0},\vartheta_{0}\right)\;\eta_{t-1,T}\;\zeta_{t}\left(\vartheta_{0}\right)+ \varepsilon_{t},\qquad t\in\left[\tau_{T}+1,T\right],\]
where \(\varepsilon_{t}=O\left(\vartheta_{t-1,T}^{\star}-\vartheta_{0}\right)=O\left( T^{-1/2}\right)\).
Let us denote \(A\equiv A\left(\vartheta_{0}\right),\dot{B}^{\ast}\equiv\dot{B}^{\ast}\left( \vartheta_{0},\vartheta_{0}\right),\zeta_{t}=\zeta_{t}\left(\vartheta_{0}\right)\) and make \(k=t-\tau_{{}_{T}}+2\) iterations
\[\delta_{t,T} =A\,\delta_{t-1,T}+\dot{B}^{\ast}\,\eta_{t-1,T}\;\zeta_{t}+ \varepsilon_{t}\] \[=A^{2}\,\delta_{t-2,T}+A\dot{B}^{\ast}\,\eta_{t-2,T}\;\zeta_{t-1 }+\dot{B}^{\ast}\,\eta_{t-1,T}\;\zeta_{t}+A\varepsilon_{t-1}+\varepsilon_{t}\] \[=\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots \dots\dots\dots\dots\dots\dots\dots\] \[=A^{k+1}\delta_{t-k,T}+\dot{B}^{\ast}\sum_{m=0}^{k}A^{m}\eta_{t-m-1,T}\;\zeta_{t-m}+\sum_{m=0}^{k}A^{m}\varepsilon_{t-m}.\]
Therefore, for large values of \(t\) and \(k=t-\tau_{T}+2\), we have the representation
\[\delta_{t,T}=\dot{B}^{\ast}\sum_{m=0}^{k}A^{m}\eta_{t-m-1,T}\;\zeta_{t-m}+o \left(1\right).\]
Note that \(\eta_{t-m-1,T}\) and \(\zeta_{t-m}\) are independent. Hence, if \(t=\left[vT\right],T\rightarrow\infty\), then
\[\mathbf{E}_{\vartheta_{0}}\delta_{t,T}^{2}=\dot{B}^{\ast 2}\sum_{m=0}^{k} \sum_{l=0}^{k}A^{m+l}\mathbf{E}_{\vartheta_{0}}\left[\eta_{t-m-1,T}\;\zeta_{t- m}\eta_{t-l-1,T}\;\zeta_{t-l}\right]+o\left(1\right)\]
\[=\dot{B}^{*2}\sum_{m=0}^{k}A^{2m}\mathbf{E}_{\vartheta_{0}}\left[\eta_{t-m -1,T}^{2}\zeta_{t-m}^{2}\right]+o\left(1\right)=\dot{B}^{*2}\sum_{m=0}^{k}A^{2m} \mathbf{E}_{\vartheta_{0}}\left[\eta_{t-m-1,T}^{2}\right]+o\left(1\right)\] \[=\frac{\dot{B}^{*2}}{\mathrm{I}_{b}\left(\vartheta_{0}\right)} \sum_{m=0}^{k}A^{2m}\frac{t}{t-m-1}+o\left(1\right)\] \[\longrightarrow\frac{\dot{B}^{*}\left(\vartheta_{0},\vartheta_{0} \right)^{2}}{\mathrm{I}\left(\vartheta_{0}\right)}\sum_{m=0}^{\infty}A^{2m}= \frac{\dot{B}^{*}\left(\vartheta_{0},\vartheta_{0}\right)^{2}}{\mathrm{I}_{b} \left(\vartheta_{0}\right)\left(1-A\left(\vartheta_{0}\right)^{2}\right)}.\]
**Remark 8**.: The adaptive Kalman filter is given by the relations (48)-(52), where only the One-step MLE-process (49) is not in recurrent form. Let us write this estimator in recurrent form too. Denote
\[S_{t,T}(\vartheta_{\tau_{T}}^{*}) =\frac{1}{\mathrm{I}_{b}(\vartheta_{\tau_{T}}^{*})\left(t-\tau_{ T}\right)}\sum_{s=\tau_{T}+1}^{t}\left[\frac{\left[X_{s}-fm_{s-1}(\vartheta_{ \tau_{T}}^{*})\right]}{P(\vartheta_{\tau_{T}}^{*})}f\dot{m}_{s-1}(\vartheta_{ \tau_{T}}^{*})\right.\] \[\left.+\left(\left[X_{s}-fm_{s-1}(\vartheta_{\tau_{T}}^{*}) \right]^{2}-P(\vartheta_{\tau_{T}}^{*})\right)\frac{\dot{P}(\vartheta_{\tau_{ T}}^{*})}{2P(\vartheta_{\tau_{T}}^{*})^{2}}\right].\]
Then we can write
\[\vartheta_{t,T}^{*} =\vartheta_{\tau_{T}}^{*}+S_{t,T}(\vartheta_{\tau_{T}}^{*})=\frac {\vartheta_{\tau_{T}}^{*}}{t-\tau_{T}}+\frac{\left(t-1-\tau_{T}\right)}{\left( t-\tau_{T}\right)}\left[\vartheta_{\tau_{T}}^{*}+S_{t-1,T}(\vartheta_{ \tau_{T}}^{*})\right]\] \[\qquad\qquad+\frac{1}{\mathrm{I}_{b}(\vartheta_{\tau_{T}}^{*}) \left(t-\tau_{T}\right)}\left[\frac{\left[X_{t}-fm_{t-1}(\vartheta_{\tau_{T}} ^{*})\right]}{P(\vartheta_{\tau_{T}}^{*})}f\dot{m}_{t-1}(\vartheta_{\tau_{T}}^ {*})\right.\] \[\qquad\qquad+\left(\left[X_{t}-fm_{t-1}(\vartheta_{\tau_{T}}^{*}) \right]^{2}-P(\vartheta_{\tau_{T}}^{*})\right)\frac{\dot{P}(\vartheta_{\tau_{ T}}^{*})}{2P(\vartheta_{\tau_{T}}^{*})^{2}}\right]\] \[=\frac{\vartheta_{\tau_{T}}^{*}}{t-\tau_{T}}+\left(1-\frac{1}{t- \tau_{T}}\right)\ \vartheta_{t-1,T}^{*}+\frac{\left[X_{t}-fm_{t-1}(\vartheta_{\tau_{T}}^{*}) \right]f\dot{m}_{t-1}(\vartheta_{\tau_{T}}^{*})}{\mathrm{I}_{b}(\vartheta_{ \tau_{T}}^{*})\left(t-\tau_{T}\right)P(\vartheta_{\tau_{T}}^{*})}\] \[\qquad\qquad+\frac{\left(\left[X_{t}-fm_{t-1}(\vartheta_{\tau_{T} }^{*})\right]^{2}-P(\vartheta_{\tau_{T}}^{*})\right)\dot{P}(\vartheta_{\tau_{ T}}^{*})}{2\mathrm{I}_{b}(\vartheta_{\tau_{T}}^{*})\left(t-\tau_{T}\right)P( \vartheta_{\tau_{T}}^{*})^{2}},\quad t\in\left[\tau_{T}+1,T\right].\]
Therefore the recurrent equation for the One-step MLE-process is
\[\vartheta_{t,T}^{*}=\frac{\vartheta_{\tau_{T}}^{*}}{t-\tau_{T}}+\left(1-\frac{ 1}{t-\tau_{T}}\right)\ \vartheta_{t-1,T}^{*}+\frac{\left[X_{t}-fm_{t-1}(\vartheta_{\tau_{T}}^{*}) \right]f\dot{m}_{t-1}(\vartheta_{\tau_{T}}^{*})}{\mathrm{I}_{b}(\vartheta_{\tau _{T}}^{*})\left(t-\tau_{T}\right)P(\vartheta_{\tau_{T}}^{*})}\]
\[\qquad+\frac{\left(\left[X_{t}-fm_{t-1}(\vartheta_{{}_{{}_{T}}^{*}}) \right]^{2}-P(\vartheta_{{}_{{}_{T}}^{*}})\right)\dot{P}(\vartheta_{{}_{{}_{T}} ^{*}}^{*})}{2\mathbb{I}_{b}(\vartheta_{{}_{{}_{T}}^{*}}^{*})\left(t-\tau_{{}_{T }}\right)P(\vartheta_{{}_{{}_{T}}^{*}}^{*})^{2}},\quad t\in\left[\tau_{{}_{T}}+ 1,T\right]. \tag{54}\]
Now the adaptive Kalman filter (48), (50)-(52), (54) is in recurrent form.
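A single step of (54) is then an \(O(1)\) update. A sketch of this update is given below; the argument names are ours, the quantities \(m_{t-1}(\vartheta_{\tau_{T}}^{*})\) and \(\dot{m}_{t-1}(\vartheta_{\tau_{T}}^{*})\) are assumed to be propagated alongside via (50)-(51), and at the first step one can take the previous value equal to the preliminary estimate.

```python
def one_step_update(theta_prev, theta_pre, X_t, m_prev, mdot_prev, f, P, Pdot, Ib, t, tau):
    """Recurrent update (54) of the One-step MLE-process.
    theta_prev        -- theta*_{t-1,T}
    theta_pre         -- preliminary estimate theta*_{tau_T}
    m_prev, mdot_prev -- m_{t-1} and its derivative, evaluated at theta_pre
    P, Pdot, Ib       -- P, dP/db and the Fisher information, frozen at theta_pre."""
    resid = X_t - f * m_prev
    k = t - tau
    return (theta_pre / k + (1.0 - 1.0 / k) * theta_prev
            + resid * f * mdot_prev / (Ib * k * P)
            + (resid ** 2 - P) * Pdot / (2 * Ib * k * P ** 2))
```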
**Remark 9**.: The cases \(\vartheta=f\), \(\vartheta=a\) and \(\vartheta=\sigma^{2}\) can be studied similarly. If we consider the two-dimensional case, say, \(\vartheta=(f,a)\), then the corresponding adaptive Kalman filter can be written as well. The preliminary MME \(\vartheta_{\tau_{T}}^{*}=\left(f_{\tau_{T}}^{*},a_{\tau_{T}}^{*}\right)^{\top}\) was defined in (20)-(21), the Fisher information matrix is given by (33), (34) and (36), the One-step MLE-process is given in (39), and the equations for \(\dot{m}_{f}\left(\vartheta,t\right)\) and \(\dot{m}_{a}\left(\vartheta,t\right)\) can be easily written. The equation for \(m_{t,T}^{*}\) has exactly the same form as (52) given above.
**Remark 10**.: It is also possible to verify the asymptotic normality
\[\sqrt{t}\left(m_{t,T}^{*}-m\left(\vartheta_{0},t\right)\right) \Longrightarrow\mathcal{N}\left(0,S_{b}^{*}\left(\vartheta_{0}\right)^{2} \right).\]
## 7 Asymptotic efficiency
We have the same system
\[X_{t}=f\,Y_{t-1}+\sigma\,w_{t},\qquad X_{0},\qquad t=1,2,\ldots,\] \[Y_{t}=a\,Y_{t-1}+b\,v_{t},\qquad\,Y_{0}.\]
where the parameters \(f,a,b,\sigma^{2}\) satisfy the condition \(\mathscr{A}_{0}\).
The adaptive filter is given by (52), and we would like to know whether the approximation error \(\mathbf{E}_{\vartheta}\left|m_{t,T}^{*}-m\left(\vartheta,t\right)\right|^{2}\) is asymptotically minimal. As usual in such situations, we propose a lower minimax bound on the risks of all estimators \(\bar{m}_{t}\), supposing that \(\bar{m}_{t}\) is based on the observations up to time \(t\), i.e., these estimators depend on \(X^{t}=\left(X_{s},s=0,1,\ldots,t\right)\).
Recall some notations
\[A\left(\vartheta_{0}\right)=\frac{a\sigma^{2}}{\sigma^{2}+f^{2}\gamma_{*}\left(\vartheta_{0}\right)},\qquad\dot{B}^{*}\left(\vartheta_{0},\vartheta_{0}\right)=\frac{a\sigma^{2}f\dot{\gamma}_{*}\left(\vartheta_{0}\right)}{\left[\sigma^{2}+f^{2}\gamma_{*}\left(\vartheta_{0}\right)\right]^{3/2}}\]
and the equation for \(\dot{m}\left(\vartheta_{0},\cdot\right)\)

\[\dot{m}\left(\vartheta_{0},t\right)=A\left(\vartheta_{0}\right)\dot{m}\left(\vartheta_{0},t-1\right)+\dot{B}^{*}\left(\vartheta_{0},\vartheta_{0}\right)\zeta_{t}\left(\vartheta_{0}\right).\]
Recall as well the asymptotic representations for \(\eta_{t,T}^{*}\left(\vartheta_{0}\right)=\sqrt{vT}\left(\vartheta_{t,T}^{*}- \vartheta_{0}\right),t=vT,v\in\left(0,1\right]\)
\[\eta_{t,T}^{*}\left(\vartheta_{0}\right)=\frac{\operatorname{I}\left( \vartheta_{0}\right)^{-1}}{\sqrt{tP\left(\vartheta_{0}\right)}}\sum_{s=1}^{t} \left[\zeta_{s}\left(\vartheta_{0}\right)f\dot{m}_{s-1}\left(\vartheta_{0} \right)+\left[\zeta_{s}\left(\vartheta_{0}\right)^{2}-1\right]\frac{\dot{P} \left(\vartheta_{0}\right)}{2\sqrt{P\left(\vartheta_{0}\right)}}\right]+o\left( 1\right). \tag{55}\]
Similar representations for the MLE \(\hat{\eta}_{t}\left(\vartheta_{0}\right)=\sqrt{t}\left(\hat{\vartheta}_{t}-\vartheta_{0}\right)\) and the BE \(\tilde{\eta}_{t}\left(\vartheta_{0}\right)=\sqrt{t}\left(\tilde{\vartheta}_{t}-\vartheta_{0}\right)\), \(t=vT,v\in\left(0,1\right]\), are
\[\hat{\eta}_{t}\left(\vartheta_{0}\right) =\frac{\text{I}\left(\vartheta_{0}\right)^{-1}}{\sqrt{tP\left( \vartheta_{0}\right)}}\sum_{s=1}^{t}\left[\zeta_{s}\left(\vartheta_{0}\right) f\dot{m}_{s-1}\left(\vartheta_{0}\right)+\left[\zeta_{s}\left(\vartheta_{0} \right)^{2}-1\right]\frac{\dot{P}\left(\vartheta_{0}\right)}{2\sqrt{P\left( \vartheta_{0}\right)}}\right]+o\left(1\right),\] \[\tilde{\eta}_{t}\left(\vartheta_{0}\right) =\frac{\text{I}\left(\vartheta_{0}\right)^{-1}}{\sqrt{tP\left( \vartheta_{0}\right)}}\sum_{s=1}^{t}\left[\zeta_{s}\left(\vartheta_{0}\right) f\dot{m}_{s-1}\left(\vartheta_{0}\right)+\left[\zeta_{s}\left(\vartheta_{0} \right)^{2}-1\right]\frac{\dot{P}\left(\vartheta_{0}\right)}{2\sqrt{P\left( \vartheta_{0}\right)}}\right]+o\left(1\right). \tag{56}\]
The properties (40)-(43) of the normalized LR \(Z_{T}\left(\cdot\right)\) established in Theorem 2 correspond to the sufficient conditions of Theorems 3.1.2 and 3.2.2 in [9], where such representations were proved in the general case.
Introduce the limit (53)
\[S_{b}^{\ast}\left(\vartheta\right)^{2}=\lim_{T\to\infty}\mathbf{E}_{\vartheta }\left[\dot{m}\left(\vartheta,t\right)^{2}\tilde{\eta}_{t}\left(\vartheta \right)^{2}\right]=\frac{\dot{B}^{\ast}\left(\vartheta,\vartheta\right)^{2}}{ \text{I}\left(\vartheta\right)\left(1-A\left(\vartheta\right)^{2}\right)}. \tag{57}\]
As the asymptotic representations of \(\tilde{\eta}_{t,T}\left(\vartheta\right)\) and \(\eta_{t,T}^{\ast}\left(\vartheta\right)\) are similar (see (55) and (56)), the limits (53) and (57) coincide too.
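The factor \(1-A\left(\vartheta\right)^{2}\) in (57) reflects the stationary variance of the linear recursion for \(\dot{m}\left(\vartheta_{0},\cdot\right)\) written above. A quick numerical sanity check of this step (ours, for illustration only; the values of \(A\) and \(B^{*}\) below are arbitrary):

```python
import numpy as np

# Stationary variance of the AR(1)-type recursion mdot_t = A*mdot_{t-1} + B*zeta_t,
# with zeta_t iid N(0,1): it equals B^2 / (1 - A^2). A and B are illustrative values.
rng = np.random.default_rng(1)
A, B = 0.6, 0.9
mdot = 0.0
samples = []
for _ in range(200_000):
    mdot = A * mdot + B * rng.standard_normal()
    samples.append(mdot)
print("empirical variance :", np.var(samples[1000:]))
print("B^2 / (1 - A^2)    :", B ** 2 / (1 - A ** 2))
```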
**Theorem 4**.: _Let the conditions of Theorem 2 be fulfilled. Then we have the following lower minimax bound: for any estimator \(\bar{m}_{t,T}\) of \(m\left(\vartheta,t\right)\) (below \(t=vT\))_
\[\lim_{\nu\to 0}\lim_{T\to\infty}\sup_{\left|\vartheta-\vartheta_{0}\right|\leq \nu}t\mathbf{E}_{\vartheta}\left|\bar{m}_{t,T}-m\left(\vartheta,t\right) \right|^{2}\geq S_{b}^{\ast}\left(\vartheta_{0}\right)^{2}.\]
Proof.: The proof given below is based on the proof of Theorem 1.9.1 in [9] and was published in [17] in the case of continuous-time observations. It is quite short and we repeat it here for the reader's convenience. We have the elementary estimate
\[\sup_{\left|\vartheta-\vartheta_{0}\right|\leq\nu}\mathbf{E}_{\vartheta}\left| \bar{m}_{t,T}-m\left(\vartheta,t\right)\right|^{2}\geq\int_{\vartheta_{0}- \nu}^{\vartheta_{0}+\nu}\mathbf{E}_{\vartheta}\left|\bar{m}_{t,T}-m\left( \vartheta,t\right)\right|^{2}p_{\nu}\left(\vartheta\right)\mathrm{d}\vartheta.\]
Here the function \(p_{\nu}\left(\vartheta\right),\vartheta_{0}-\nu<\vartheta<\vartheta_{0}+\nu\) is a positive continuous density on the interval \(\left[\vartheta_{0}-\nu,\vartheta_{0}+\nu\right]\). If we denote by \(\tilde{m}_{t}\) the Bayesian estimator of \(m\left(\vartheta,t\right)\) which corresponds to this density \(p_{\nu}\left(\cdot\right)\), then
\[\tilde{m}_{t}=\int_{\vartheta_{0}-\nu}^{\vartheta_{0}+\nu}m\left(\theta,t \right)p_{\nu}\left(\theta|X^{t}\right)\mathrm{d}\theta,\qquad p_{\nu}\left( \theta|X^{t}\right)=\frac{p_{\nu}\left(\theta\right)L\left(\theta,X^{t} \right)}{\int_{\vartheta_{0}-\nu}^{\vartheta_{0}+\nu}p_{\nu}\left(\theta \right)L\left(\theta,X^{t}\right)\mathrm{d}\theta}\]
and
\[\int_{\vartheta_{0}-\nu}^{\vartheta_{0}+\nu}\mathbf{E}_{\vartheta}\left|\bar{m} _{t,T}-m\left(\vartheta,t\right)\right|^{2}p_{\nu}\left(\vartheta\right)\mathrm{d }\vartheta\geq\int_{\vartheta_{0}-\nu}^{\vartheta_{0}+\nu}\mathbf{E}_{\vartheta} \left|\tilde{m}_{t}-m\left(\vartheta,t\right)\right|^{2}p_{\nu}\left( \vartheta\right)\mathrm{d}\vartheta.\]
The asymptotic behavior of BE \(\tilde{m}_{t}\) can be described as follows (below \(\theta_{u}=\vartheta+\varphi_{t}u,\varphi_{t}=t^{-1/2},\mathbb{U}_{\nu}=\left( \sqrt{t}\left(\vartheta_{0}-\nu-\vartheta\right),\sqrt{t}\left(\vartheta_{0}+ \nu-\vartheta\right)\right)\) )
\[\tilde{m}_{t} =\frac{\int_{\vartheta_{0}-\nu}^{\vartheta_{0}+\nu}m\left( \theta,t\right)p_{\nu}\left(\theta\right)L\left(\theta,X^{t}\right)\mathrm{d} \theta}{\int_{\vartheta_{0}-\nu}^{\vartheta_{0}+\nu}p_{\nu}\left(\theta\right) L\left(\theta,X^{t}\right)\mathrm{d}\theta}=\frac{\int_{\mathbb{U}_{\nu}}m\left( \theta_{u},t\right)p_{\nu}\left(\theta_{u}\right)L\left(\theta_{u},X^{t}\right) \mathrm{d}u}{\int_{\mathbb{U}_{\nu}}p_{\nu}\left(\theta_{u}\right)L\left( \theta_{u},X^{t}\right)\mathrm{d}u}\] \[=m\left(\vartheta,t\right)+\varphi_{t}\dot{m}\left(\vartheta,t \right)\frac{\int_{\mathbb{U}_{\nu}}up_{\nu}\left(\theta_{u}\right)\frac{L \left(\theta_{u},X^{t}\right)}{L\left(\vartheta,X^{t}\right)}\mathrm{d}u}{ \int_{\mathbb{U}_{\nu}}p_{\nu}\left(\theta_{u}\right)L\left(\theta_{u},X^{t} \right)\mathrm{d}u}\left(1+o\left(1\right)\right)\] \[=m\left(\vartheta,t\right)+\varphi_{t}\dot{m}\left(\vartheta,t \right)\frac{\int_{\mathbb{U}_{\nu}}up_{\nu}\left(\vartheta\right)Z_{t}\left( u\right)\mathrm{d}u}{\int_{\mathbb{U}_{\nu}}p_{\nu}\left(\vartheta\right)Z_{t} \left(u\right)\mathrm{d}u}\left(1+o\left(1\right)\right).\]
Hence
\[\sqrt{t}\left(\tilde{m}_{t}-m\left(\vartheta,t\right)\right) =\dot{m}\left(\vartheta,t\right)\frac{\int_{\mathbb{U}_{\nu}} uZ_{t}\left(u\right)\mathrm{d}u}{\int_{\mathbb{U}_{\nu}}Z_{t}\left(u\right) \mathrm{d}u}\left(1+o\left(1\right)\right)\] \[=\dot{m}\left(\vartheta,t\right)\frac{\Delta_{t}\left(\vartheta, X^{t}\right)}{\mathrm{I}\left(\vartheta\right)}\left(1+o\left(1\right) \right),\]
where (see Lemma 1)
\[\Delta_{t}\left(\vartheta,X^{t}\right)=\frac{1}{\sqrt{tP\left(\vartheta_{0} \right)}}\sum_{s=1}^{t}\left[\zeta_{s}\left(\vartheta_{0}\right)f\dot{m}_{s-1 }\left(\vartheta_{0}\right)+\left[\zeta_{s}\left(\vartheta_{0}\right)^{2}-1 \right]\frac{\dot{P}\left(\vartheta_{0}\right)}{2\sqrt{P\left(\vartheta_{0} \right)}}\right].\]
Recall that \(Z_{t}\left(u\right)\Rightarrow Z\left(u\right)=\exp\left(u\Delta\left( \vartheta\right)-\frac{u^{2}}{2}\mathrm{I}\left(\vartheta\right)\right)\) and
\[\frac{\int_{\mathcal{R}}uZ\left(u\right)\mathrm{d}u}{\int_{\mathcal{R}}Z\left( u\right)\mathrm{d}u}=\frac{\Delta\left(\vartheta\right)}{\mathrm{I}\left( \vartheta\right)}.\]
Moreover, the convergence of the moments of the BE, uniform on compact sets \(\mathbb{K}\subset\left(\vartheta_{0}-\nu,\vartheta_{0}+\nu\right)\), ensures that the relation
\[t\mathbf{E}_{\vartheta}\left(\tilde{m}_{t}-m\left(\vartheta,t\right)\right)^{2 }\rightarrow\lim_{t\rightarrow\infty}t\mathbf{E}_{\vartheta}\left[\dot{m} \left(\vartheta,t\right)^{2}\left(\tilde{\vartheta}_{t}-\vartheta\right)^{2} \right]=S_{b}^{\star}\left(\vartheta\right)^{2}\]
holds too.
A detailed proof of the relations written above can be found in the proofs of Theorems 3.2.1 and 3.2.2 in [9].
Therefore
\[t\int_{\vartheta_{0}-\nu}^{\vartheta_{0}+\nu}\mathbf{E}_{\vartheta}\left| \tilde{m}_{t}-m\left(\vartheta,t\right)\right|^{2}p_{\nu}\left(\vartheta\right) \mathrm{d}\vartheta\longrightarrow\int_{\vartheta_{0}-\nu}^{\vartheta_{0}+\nu}S _{b}^{\star}\left(\vartheta\right)^{2}p_{\nu}\left(\vartheta\right)\mathrm{d}\vartheta\]
and as \(\nu\to 0\)
\[\int_{\vartheta_{0}-\nu}^{\vartheta_{0}+\nu}S_{b}^{\star}\left(\vartheta\right)^{ 2}p_{\nu}\left(\vartheta\right)\mathrm{d}\vartheta\longrightarrow S_{b}^{\star }\left(\vartheta_{0}\right)^{2}.\]
We call the estimator \(m_{t,T}^{\circ},\tau_{T}<t\leq T\)_asymptotically efficient_ if for all \(\vartheta_{0}\in\Theta\), \(t=vT\), any \(v\in[\varepsilon_{0},1]\)
\[\lim_{\nu\to 0}\lim_{T\rightarrow\infty}\sup_{\left|\vartheta-\vartheta_{0} \right|\leq\nu}t\mathbf{E}_{\vartheta}\left|m_{t,T}^{\circ}-m\left(\vartheta,t \right)\right|^{2}=S_{b}^{\star}\left(\vartheta_{0}\right)^{2}.\]
Here \(\varepsilon_{0}\in(0,1)\).
**Theorem 5**.: _The estimator \(m_{t,T}^{\star},\tau_{T}<t\leq T\) is asymptotically efficient._
Proof.: The proof follows from the uniform convergence (53) of Theorem 3.
|
2310.15087 | Atomism Axiomatised Using Mereological Composition as a Primitive Notion | Atomism is the view that everything is composed of atoms. The view within the
framework of the contemporary formal approach is expressed on the ground of
mereology with the use of the primitive notion of being a part as every object
has at least one atomic part [2, 48], [3, 145], [17, 42], or using mereological
fusion [16, 24] which is defined by being a part. We will briefly present a
discussion between A. Varzi and A. Shiver concerning the two approaches and
propose a new intuitive axiomatic characterization of atomism. We build a
system with a primitive notion of composition that holds between individuals
and pluralities. We assume only two specific axioms: each object is a unique
composition of unique atoms, and being a composition of some objects is
equivalent to being the composition of all atoms of these objects. In our
approach, notions of part, and atom, are secondary to composition: atom is
defined as an object that cannot be a composition of two or more objects, and
part is defined as inclusion between atoms. We will show that the theory, with
only these two specific axioms, is sufficient to adequately express atomism as
we prove that the theory is definitionally equivalent to atomistic extensional
mereology with plural quantification allowed and mereological fusion defined.
Our theory requires neither full comprehension schema nor the existence of any
specific compositions (there are models with only atoms). Therefore, it may
constitute the basis on which atomistic concepts of reality are further
strengthened, specifically those in which the notion of being a part is not
used. To the best of our knowledge, our proposal is the only formal theory of
atomism using the primitive notion of composition. | Marcin Łyczak | 2023-10-23T16:47:11Z | http://arxiv.org/abs/2310.15087v1 | # Atomism Axiomatised Using Mereological Composition as a Primitive Notion1
###### Abstract
Atomism is the view that _everything is composed of atoms_. The view within the framework of the contemporary formal approach is expressed on the ground of mereology with the use of the primitive notion of _being a part_ as _every object has at least one atomic part_[2, 48], [3, 145], [17, 42], or using _mereological fusion_[16, 24] which is defined by being a part. We will briefly present a discussion between A. Varzi and A. Shiver concerning the two approaches and propose a new intuitive axiomatic characterization of atomism. We build a system with a primitive notion of composition that holds between individuals and pluralities. We assume only two specific axioms: _each object is a unique composition of unique atoms_, and _being a composition of some objects is equivalent to being the composition of all atoms of these objects_. In our approach, notions of _part_, and _atom_, are secondary to composition: atom is defined as an object that cannot be a composition of two or more objects, and part is defined as inclusion between atoms. We will show that the theory, with only these two specific axioms, is sufficient to adequately express atomism as we prove that the theory is definitionally equivalent to _atomistic extensional mereology_ with plural quantification allowed and mereological fusion defined. Our theory requires neither full _comprehension schema_ nor the existence of any specific compositions (there are models with only atoms). Therefore, it may constitute the basis on which atomistic concepts of reality are further strengthened, specifically those in which the notion of being a part is not used. To the best of our knowledge, our proposal is the only formal theory of atomism using the primitive notion of composition.
## Introduction
In philosophy, there are many different kinds of atomism. Here, we understand atomism in a very general sense: as a view that claims that everything that _exists_ is either a simple object or is composed of simple objects, with the assumption that something _exists_. Atomism is justified by different theories, where we find various interpretations of the statement
_Everything is composed of atoms._ (AT1)
Atomism within the framework of the contemporary formal approach is usually expressed on the ground of mereology, which is a formal theory of parts and wholes, originally formulated by S. Lesniewski. The central notion of mereology is mereological _composition_ which is usually defined with the use of the primitive notion of _being a part_/_proper part_. These notions are appropriately specified in different versions of mereology. Loosely speaking, the mereological composition of given objects is a concrete whole composed of all of them, and only them. In contemporary literature [2, 48], [3, 145], [17, 42], however, atomism is formulated without using the notion of composition as
_Every object has at least one atomic part._ (AT2)
A. Shiver criticizes articulating atomism as (AT2) and argues that (AT1) should be expressed using mereological fusion [16]. However, as A. Varzi shows, assuming only the reflexivity and transitivity of being a part together with the definition of mereological composition (_sum_ or _fusion_), the formal counterparts of (AT1) and (AT2) are equivalent already in classical logic [24]. As Varzi pointed out, the main problem with the models considered by Shiver against (AT2) lies not in a wrong formulation of atomism by (AT2), but in the fact that being a part is not well founded.1 We believe that Varzi is formally right; at the same time, we agree with Shiver that (AT2) does not intuitively express the atomistic position. In defence of (AT2) at least two points can be indicated: (AT2) is expressible in first-order logic, and it does not require deciding how mereological composition is defined and what its properties are. Nevertheless, from an intuitive point of view, it is mereological composition that expresses atomism very clearly. Shiver expresses atomism using the defined notion of mereological fusion, which is secondary to being a part. We propose reformulating Shiver's approach and taking mereological composition as the only primitive notion.2 We aim to:
* find a formal system with a primitive notion \(F\) of composition that holds between individuals, and pluralities;
* show that with this notion the standard mereological notions of part and atom are definable, and primitive operator \(F\) can be understood as a standard composition;
* show that the system is definitionally equivalent to a standard system of extensional atomistic mereology in which plural quantification is allowed.
We begin with some remarks on mereological composition, atomistic mereology, and the discussion between Shiver and Varzi (1). Next, we provide new axiomatics for atomistic extensional mereology, where a formal counterpart of (AT1) is an axiom, and the only primitive notion is composition (2). Finally, we prove definitional equivalence of the presented system with the standard axiomatization of _atomistic extensional mereology_ in which plural quantification is allowed (3).
## 1 Mereological composition, extensional mereology and atomicity
The first version of mereology was given by Lesniewski in a natural language [13] (trans. in [14]). He later presented a mature version of mereology on the ground of ontology, which is a theory of objects expressed in a first-order identity-free language. The only primitive notion of his ontology was a two-place predicate \(\varepsilon\), read 'is', applied to two names of the same category. Lesniewski added to his system a primitive name-forming operator \(pt\) applied to names, read 'part of'. The atomic expression \(x\varepsilon pt(y)\) is read '\(x\) is a part of \(y\)'. Lesniewski's original ontological approach to mereology is described by R. Urbaniak in [22]. A. Tarski extracted mereology from its ontological context, treated it as a theory of mathematical relational structures with the primitive relation of being a part, and used this as a foundation for point-free geometric considerations [21]. Mereology today is commonly treated as just a first-order theory with identity, whose only primitive predicate is 'is a part/proper part'. This approach was described by P. Simons in [17], and more recently by Varzi and Cotnoir [3]. While the mentioned approaches differ, they share a common core [26]. We focus on mereology that allows plural quantification, as described in [3, 238-245]. Mereology with plural quantification has greater expressive power than first-order mereology, which is used in formal analyses of atomism [7, 10]. Quantification over pluralities allows us to say that for any object there exist atoms from which the object is composed, and this is crucial for our characterization of atomism. As we mentioned, the mereological composition of given objects is a concrete whole made up of all of them, and only them, put together. Mereological composition is a subject of current philosophical studies concerning, inter alia, its extensionality, the relationship between wholes and their parts, the existence of arbitrary compositions, and the nature of how objects are put together [1]. In the original mereology,
to say that something is a mereological composition of some objects Lesniewski used \(x\varepsilon Kl(y)\). What is important is that in the expression \(x\varepsilon Kl(y)\) both \(x\) and \(y\) are variables of the same category. In modern first-order approaches, counterparts of non-empty names used by Lesniewski are constructed with the use of formulas with at least one free variable. To say that \(x\) is a mereological composition of objects that are \(\varphi\), \(F_{\varphi_{y}}x\) is used, where \(y\) is a free variable in the formula \(\varphi\). In the case of mereology with plural quantification, there is no need to use formulas with free variables, because plural variables play their role. The mereological composition is expressed then as \(F_{zz}x\), where \(zz\) is a plural variable and \(x\) is individual variable. In the case of mereological relational structures, the notation \(xFX\) is used, where \(F\) is a relation between an element \(x\) belonging to the domain of a given structure and \(X\) is a distributive subset of the domain.
Shiver formulates atomism using plural quantification and a defined notion of mereological fusion. We follow the nomenclature from [3, 238-245] and use individual variables: \(x,y,z\), and plural ones: \(xx,yy,zz,...\). The logical symbols are the classical connectives, quantifiers, identity = applied to individual variables, and \(\prec\) applied to individual and plural variables on the left and right side, respectively. The expression \(x\prec yy\) is read '\(x\) is one of the \(yy\)'s'. The only non-logical symbol is \(P\), for 'is a part of', applied to individual variables. As a base we assume only the axioms of two-sorted logic and the axioms for first-order identity. Following Varzi [26], we call _extensional mereology_ the theory which has the following theses:
\[Pxx,\] (ref) \[Pxy\wedge Pyz\to Pxz,\] (trans) \[Pxy\wedge Pyx\to x=y,\] (ants) \[\neg Pxy\rightarrow\exists z(Pzx\wedge\neg Ozy),\] (ssp)
where \(O\) is a predicate for 'overlapping' defined as
\[Oxy:=\exists z(Pzx\wedge Pzy).\] ( \[O\] )
We denote the set of theses of extensional mereology with plural quantification as \(\mathsf{EM}_{\mathsf{pl}}\) and we use the following notation
\[\mathsf{EM}_{\mathsf{pl}}=(\mathtt{ref})+(\mathtt{ants})+(\mathtt{trans})+( \mathtt{ssp}).\]
Let us briefly comment on extensional mereology. Formulas (ref), (trans), and (ants) taken as axioms state that the semantic correlate of the predicate \(P\) is a _partial order relation_. This is the core of mereological theories which describe being a part in an inclusive sense. Let us focus for a moment on (ssp) and its properties. Formula (ssp) is called the _strong supplementation principle_ and was considered in the context of mereology by Simons [17, 28-29]; however, an even stronger version was used as a mereological axiom already by Lesniewski [19]. (ssp) is a reinforcement of the _weak supplementation principle_, widely discussed in the philosophical literature [4]:
\[PPxy\rightarrow\exists z(Pzy\wedge\neg Ozx),\] (wsp)
where \(PP\) is a predicate for 'proper part' defined as
\[PPxy:=Pxy\wedge\neg x=y.\]
The axiom (ssp) is stronger than (wsp) because assuming partial order axioms for \(P\) (ssp) implies (wsp); the converse does not hold. Moreover, what is more important for us, (ssp) added to partial order axioms for \(P\) allows one to prove (a): the equivalence of two formulations of mereological composition: sum and fusion, which we will introduce soon, as well as ensuring that (b): if there is a given sum, then it is unique, and (c): if there is a given fusion, then it is unique. Of course, (a) implies (b)\(\leftrightarrow\)(c). However, weak supplementation principle (wsp) added to partial order axioms for \(P\) allows to prove only (b) (see e.g. [15]). Thus, (ssp) is convenient for building weak mereological theories that do not determine which sums/fusions exist, while still guaranteeing their generally wanted properties.
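As an illustration of the difference (ours, not from the original text), the following brute-force check verifies a standard four-element countermodel in which two distinct objects \(c\) and \(d\) are built from the same two atoms \(a\) and \(b\): (ref), (trans), (ants) and (wsp) hold, while (ssp) fails.

```python
from itertools import product

# Standard countermodel: domain {a, b, c, d}; a and b are parts of both c and d.
D = ["a", "b", "c", "d"]
P = {(x, x) for x in D} | {("a", "c"), ("b", "c"), ("a", "d"), ("b", "d")}

def part(x, y): return (x, y) in P
def overlap(x, y): return any(part(z, x) and part(z, y) for z in D)
def proper_part(x, y): return part(x, y) and x != y

ref = all(part(x, x) for x in D)
trans = all(part(x, z) for x, y, z in product(D, repeat=3) if part(x, y) and part(y, z))
ants = all(x == y for x, y in product(D, repeat=2) if part(x, y) and part(y, x))
# (wsp): every proper part of y is supplemented by a part of y disjoint from it.
wsp = all(any(part(z, y) and not overlap(z, x) for z in D)
          for x, y in product(D, repeat=2) if proper_part(x, y))
# (ssp): if x is not a part of y, some part of x is disjoint from y.
ssp = all(any(part(z, x) and not overlap(z, y) for z in D)
          for x, y in product(D, repeat=2) if not part(x, y))

print(ref, trans, ants, wsp, ssp)   # expected: True True True True False
```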
We introduce a definition of Lesniewski's mereological sum \(F_{zz}x\), read '\(x\) is a mereological sum of \(zz\)'s'; and a definition of mereological fusion \(F^{\star}_{zz}x\), read '\(x\) is a mereological fusion of \(zz\)'s' in the following way:
\[F_{zz}x\leftrightarrow\forall_{z\prec zz}Pzx\wedge\forall y(Pyx \rightarrow\exists_{z\prec zz}Ozy), (\texttt{Df}.F)\] \[F^{\star}_{zz}x\leftrightarrow\forall y(Oyx\leftrightarrow\exists _{z\prec zz}Ozy). (\texttt{Df}.F^{\star})\]
In _general extensional mereology_, after Lesniewski, it is assumed that for any objects represented by a non-empty general name there exists a (unique) mereological composition of them [11]. Regardless of the definition of composition and its existential assumptions, an _atom_ is any object that has no proper parts, which we define as
\[Ax\leftrightarrow\forall y(Pyx\to x=y).\] (Df.\(A_{P}\))
Using the above we can express (AT2) as
\[\forall x\exists y(Pyx\wedge\ Ay).\] (AT2\({}_{P}\))
To express atomism, which we want to discuss here, Shiver uses the notion of fusion. Using plural quantification, he defines a plural constant \(aa\) for 'atoms', which we introduce as an instantiation of the comprehension schema
\[x\prec aa\leftrightarrow Ax.\] (Df.\(aa_{P}\))
Moreover, Shiver uses the predicate \(S\) for 'being a fusion of atoms'
\[Sx:=\exists_{yy\prec aa}F^{\star}_{yy}x,\]
where \(\prec\) is a many-many predicate, read as 'the \(xx\)'s are among the \(yy\)'s', which we define as
\[xx\prec yy:=\forall z(z\prec xx\to z\prec yy)\wedge\exists x(x\prec xx).\]
The atomistic standpoint (AT1) may be expressed as
\[\forall x(Sx).\]
Shiver expresses the atomistic standpoint in a slightly different way, which he calls _general atomicity_, by the formula
\[\forall x\exists y(Pxy\wedge Sy).\]
This approach is analyzed by Varzi in first-order mereology in [24] (and later also in [3, 145-147] and [10]). We express Varzi's analysis in mereology with plural quantification, because we want to be consistent with the earlier and later considerations. We take the notion of atoms as Shiver does, and we introduce a notion of _atomic parts of_
\[y\prec\mathfrak{at}_{x}\leftrightarrow Pyx\wedge y\prec aa.\] (Df. \[\mathfrak{at}_{P}^{\mathsf{ind}}\] )
Following Varzi's considerations, having only (ref) and (trans), axiom (AT2\({}_{P}\)) implies
\[\forall x(\forall y(Oyx\leftrightarrow\exists_{z\prec\mathfrak{at}_{x}}Oyz)),\]
and
\[\forall x(\forall_{y\prec\mathfrak{at}_{x}}Pyx\wedge\forall y(Pyx\to\exists_{z \prec\mathfrak{at}_{x}}Oyz)).\]
These two formulas, applying the definitions of mereological fusion (Df.\(F^{\star}\)) and sum (Df.\(F\)), yield
\[\forall x(\mathcal{F}_{\mathfrak{at}_{x}}x),\text{ for }\mathcal{F}\in\{F^{ \star},F\}.\] (AT1 \[{}_{\mathcal{F}}\] )
The above schema expresses atomism (AT1) in both senses of mereological composition: fusion and sum.
From (AT1\({}_{\mathcal{F}}\)) and the reflexivity of being a part (ref) we infer
\[\forall x\exists y\exists_{zz\prec aa}(Pxy\wedge\mathcal{F}_{zz}y),\text{ for }\mathcal{F}\in\{F,F^{\star}\}.\]
When \(\mathcal{F}\) is \(F^{\star}\), we use definition (\(S\)) and obtain Shiver's formula for general atomicity.
Following Varzi [26] we call _atomistic extensional mereology_ a theory which is extensional mereology and has as a thesis (AT2\({}_{P}\)). We denote atomistic extensional mereology with the definition of mereological sum (Df.\(F\)) allowing plural quantification by \(\mathsf{AEM}_{\mathsf{pl}}\)
\[\mathsf{AEM}_{\mathsf{pl}}=\mathsf{EM}_{\mathsf{pl}}+(\mathtt{AT2}_{P})+( \mathtt{Df.}A_{P})+(\mathtt{Df.}F).\]
As we mentioned, in \(\mathsf{EM}_{\mathsf{pl}}\) formulas characterizing mereological fusion (Df.\(F^{\star}\)) and mereological sum (Df.\(F\)) are equivalent. So, in \(\mathsf{AEM}_{\mathsf{pl}}+(\mathtt{Df.}aa_{P})+(\mathtt{Df.}\mathfrak{at}_{P} ^{\mathsf{ind}})\) it does not really matter which definition of composition we take: (AT2\({}_{P}\)) implies (AT1\({}_{\mathcal{F}}\)), and the converse implication follows just from the definition of mereological sum.
Before we move on to the main considerations, we would like to discuss the issue of the primitive notions that are used to axiomatize mereology. As we
have said, popular approaches to mereology use the primitive notion of being a part/proper part. Possible axiomatizations based on these notions are still being discussed [5, 11, 25]. A rich catalogue of axiomatizations with the primitive notions of _being external_ and _overlapping_, as well as of attempts to base mereology on other primitive concepts, is given in [18]. In the latter, mereology is expressed on the basis of Lesniewski's ontology. The only known axiomatics with the primitive notion of mereological composition is Lejewski's single axiom for general extensional mereology [18, 222]. In this case, quantification over function symbols is used, which is allowed in the full original version of Lesniewski's ontology. Thus, this axiom is not translatable into the language of modern mereologies, in particular into the language that we use in our approach. We also believe that our axiomatics has the advantage of being more intuitive than Lejewski's equivalence axiom, which in Polish notation has seventy-seven symbols.
## 2 Theory ATC. Axiomatization
Now we formulate an axiomatic theory of atomism in which the primitive notion is composition. We do not assume the existence of any specific compositions: our theory is open to axiomatic strengthening as needed. We express it on the basis of a very small fragment of the logic of plurals PFO [9, 15-19], without the comprehension schema and without the assumption of non-emptiness of plurals. In other words, in our proposal one is free to accept or reject such axioms.
The only non-logical symbol of our theory is \(F\) for composition, used in contexts with individual variables and plural terms. The plural terms are: plural variables, \(aa\) for 'atoms', and \(\mathfrak{at}_{\boldsymbol{xx}}\) for 'the atoms of which the \(\boldsymbol{xx}\)'s are composed'.
The following two are specific axioms of our theory:
\[\forall x\exists_{zz\prec aa}(F_{zz}x\wedge\forall_{yy\prec aa}(F_ {yy}x\leftrightarrow zz\approx yy)\wedge\forall y(F_{zz}y\to x=y)),\] (ATC1) \[F_{zz}x\leftrightarrow F_{\mathfrak{at}_{zz}}x,\] (ATC2)
where :
\[x\prec aa\leftrightarrow\forall yy\forall_{z\prec yy}(F_{yy}x\to z=x),\] (Df.\(aa_{F}\)) \[x\prec\mathfrak{at}_{zz}\leftrightarrow\exists_{y\prec zz}\exists_{yy\prec aa}(F_{yy}y\wedge x\prec yy),\] (Df.\(\mathfrak{at}_{F}^{\mathsf{pl}}\)) \[xx\prec yy:=\forall z(z\prec xx\to z\prec yy)\wedge\exists x(x\prec xx),\] (\(\prec\)) \[zz\approx yy:=\forall x(x\prec zz\leftrightarrow x\prec yy).\] (\(\approx\))
Let us briefly comment on our axiomatization. (ATC1) states that everything is a unique composition of unique atoms. _Atom_ is defined as an object that cannot be a composition of two or more objects, and _atoms of zz_'s are defined as the sum of all atoms of all individuals that are \(\boldsymbol{zz}\). The axiom (ATC1) is intended to capture (AT1), but (ATC1) itself does not guarantee that \(F\) is the mereological sum in Lesniewski's sense. This is why we need one more axiom. (ATC2) states that the composition of some objects is equivalent to the composition of the
atoms of these objects. Formulas \((\mathtt{Df}.aa_{F})\) and \((\mathtt{Df}.\mathfrak{at}_{F}^{\mathsf{pl}})\) are instantiations of the comprehension axiom schema.3 The pluralities \(aa\) and \(\mathfrak{at}_{\boldsymbol{xx}}\) are not primitive, as they are defined using the primitive mereological composition \(F\).
Footnote 3: In \(\mathtt{PFO}\), the comprehension schema is the following: \(\exists x(A(x))\to\exists zz\forall x(x\prec zz\leftrightarrow A(x))\), because in \(\mathtt{PFO}\) all plurals must be non-empty. We assume neither the full comprehension schema nor that every plural must be non-empty. Our instantiations of the comprehension schema, \((\mathtt{Df}.aa_{F})\) and \((\mathtt{Df}.\mathfrak{at}_{F}^{\mathsf{pl}})\), do not have a non-emptiness assumption in the antecedent. However, our axioms guarantee that \(aa\) is non-empty, and if \(F_{zz}x\), then \(zz\) and \(\mathfrak{at}_{zz}\) are non-empty as well.
We denote our axiomatic theory of atomistic compositions as \(\mathsf{ATC}\), so
\[\mathsf{ATC}=(\mathtt{ATC1})+(\mathtt{ATC2})+(\mathtt{Df}.aa_{F})+(\mathtt{ Df}.\mathfrak{at}_{F}^{\mathsf{pl}})\]
As we show, this axiomatic theory of atomism is definitionally equivalent to atomistic extensional mereology with plural quantification.
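As a sanity check (ours, not part of the paper), the two specific axioms can be verified by brute force in the expected finite model: individuals are the non-empty subsets of a three-element set of urelements, pluralities are non-empty collections of individuals, and \(F_{zz}x\) holds iff \(x\) is the union of the members of \(zz\). The script below confirms that the defined atoms are exactly the singletons and that (ATC1) and (ATC2) hold in this model.

```python
from itertools import combinations

URELEMENTS = ["a", "b", "c"]

def nonempty_subsets(xs):
    return [frozenset(c) for r in range(1, len(xs) + 1) for c in combinations(xs, r)]

INDIVIDUALS = nonempty_subsets(URELEMENTS)        # 7 individuals
PLURALITIES = nonempty_subsets(INDIVIDUALS)       # 127 non-empty pluralities

def F(zz, x):
    """Composition: x is the union of the members of the plurality zz."""
    return x == frozenset().union(*zz)

# (Df.aa_F): x is an atom iff every plurality composing x consists of x alone.
def is_atom(x):
    return all(all(z == x for z in zz) for zz in PLURALITIES if F(zz, x))

AA = frozenset(x for x in INDIVIDUALS if is_atom(x))
assert AA == frozenset(frozenset([u]) for u in URELEMENTS)   # atoms = singletons

# (Df.at_F^pl): the atoms of a plurality zz.
def at(zz):
    return frozenset(x for x in AA
                     if any(F(yy, z) and x in yy
                            for z in zz
                            for yy in PLURALITIES if yy <= AA))

# (ATC1): every individual is the unique composition of a unique plurality of atoms.
def atc1():
    for x in INDIVIDUALS:
        witnesses = [zz for zz in PLURALITIES if zz <= AA and F(zz, x)]
        if len(witnesses) != 1:
            return False
        if any(F(witnesses[0], y) and y != x for y in INDIVIDUALS):
            return False
    return True

# (ATC2): F_zz x  iff  F_{at(zz)} x.
def atc2():
    return all(F(zz, x) == F(at(zz), x) for zz in PLURALITIES for x in INDIVIDUALS)

print("ATC1:", atc1(), " ATC2:", atc2())   # expected: ATC1: True  ATC2: True
```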
## 3 Definitional equivalence of \(\mathsf{ATC}\) and \(\mathsf{AEM}_{\mathsf{pl}}\)
From axiom (\(\mathsf{ATC1}\)) we know that for each individual there is exactly one plurality of atoms of which it is the unique composition:

\[\forall x\exists^{1}zz(zz\prec aa\wedge(F_{zz}x\wedge\forall_{yy\prec aa}(F_{yy}x\leftrightarrow zz\approx yy)\wedge\forall y(F_{zz}y\to x=y))),\]
where \(\exists^{1}zzA(zz)\leftrightarrow\exists zzA(zz)\wedge\forall xx\forall yy(A( xx)\wedge A(yy)\to xx\approx yy)\).
We can thus introduce a one-place operation that assigns to every \(x\) the plurality of all atoms of \(x\). For convenience, in \(\mathsf{ATC}\) we use the same symbol \(\mathfrak{at}\) as is used for the atoms of pluralities. We have
\[\mathfrak{at}_{x}\prec aa\wedge F_{\mathfrak{at}_{x}}x\wedge\forall_{yy\prec aa }(F_{yy}x\leftrightarrow yy\approx\mathfrak{at}_{x})\wedge\forall y(F_{ \mathfrak{at}_{x}}y\to x=y).\ \ (\mathtt{Df}.\mathfrak{at}_{F}^{\mathsf{ind}})\]
Let us note that the theorem \(\forall x(\mathfrak{at}_{x}\prec aa)\) guarantees that the pluralities \(aa\) and \(\mathfrak{at}_{x}\) are always non-empty.
To prove definitional equivalence, we conservatively extend \(\mathsf{ATC}\) by a definition of being a part. We do this using many-many inclusion among atoms of two given objects as
\[Pxy\leftrightarrow\mathfrak{at}_{x}\preccurlyeq\mathfrak{at}_{y},\] ( \[\mathtt{Df}.P_{F}\] )
which may be read as: \(x\) is a part of \(y\) iff all atoms of \(x\) are atoms of \(y\).
Now we turn to the main goal. We formulate proofs using natural deduction. First, we note that \(\mathsf{ATC}\) characterizes being a part as a partial order:
**Lemma 1**.: \((\mathtt{ref})\)_, \((\mathtt{trans})\), and \((\mathtt{ants})\) are provable in \(\mathsf{ATC}+(\mathtt{Df}.P_{F})\)._
Proof.: To obtain \((\mathtt{ref})\) we note that from \((\mathtt{Df}.\mathfrak{at}_{F}^{\mathsf{ind}})\) we have \(\exists x(x\prec\mathfrak{at}_{x})\), thus \(\mathfrak{at}_{x}\preccurlyeq\mathfrak{at}_{x}\), from reflexivity of \(\preccurlyeq\) i.e. \(Pxx\). \((\mathtt{trans})\) we obtain directly from transitivity of \(\preccurlyeq\). To prove \((\mathtt{ants})\) we assume \(Pxy\wedge Pyx\) and by \((\mathtt{Df}.P_{F})\) we obtain \(\mathfrak{at}_{x}\preccurlyeq\mathfrak{at}_{y}\wedge\mathfrak{at}_{y}\preccurlyeq \mathfrak{at}_{x}\) i.e. \(\mathfrak{at}_{x}\approx\mathfrak{at}_{y}\), by \((\approx)\). We have \(F_{\mathfrak{at}_{y}}y\) from \((\mathtt{Df}.\mathfrak{at}_{F}^{\mathsf{ind}})\), so using it with \(\mathfrak{at}_{x}\approx\mathfrak{at}_{y}\) we obtain and \(F_{\mathfrak{at}_{x}}y\), by \((\mathtt{Df}.\mathfrak{at}_{F}^{\mathsf{ind}})\), and next again using \((\mathtt{Df}.\mathfrak{at}_{F}^{\mathsf{ind}})\) we get \(x=y\).
In the proofs of (ssp) and (Df.\(F\)) we use the following \(\mathsf{ATC}+(\texttt{Df}.P_{F})\) theses:
\[z\prec aa\wedge\ z\prec\mathfrak{at}_{x}\to\mathfrak{at}_{z} \preccurlyeq\mathfrak{at}_{x},\] (T1) \[z\prec aa\wedge Ozy\to z\prec\mathfrak{at}_{y}.\] (T2)
Proof.: For (T1) we assume \(z\prec aa\wedge z\prec\mathfrak{at}_{x}\), and we show \(\mathfrak{at}_{z}\prec\mathfrak{at}_{x}\). \(\mathfrak{at}_{z}\) is non-empty, so fix any \(y\prec\mathfrak{at}_{z}\). Then from \(y\prec\mathfrak{at}_{z}\wedge z\prec aa\wedge F_{\mathfrak{at}_{z}}z\) using the definition of atoms (Df.\(aa_{F}\)) we have \(y=z\). The latter with \(z\prec\mathfrak{at}_{x}\) yields \(y\prec\mathfrak{at}_{x}\), so \(\mathfrak{at}_{z}\preccurlyeq\mathfrak{at}_{x}\).
For (T2) we assume \(z\prec aa\) and \(Pcz\wedge Pcy\) for some \(c\), and we prove \(z\prec\mathfrak{at}_{y}\). From (Df.\(\mathfrak{at}_{F}^{\textsf{ind}}\)) we have \(F_{\mathfrak{at}_{z}}z\) and so using \(z\prec aa\) and identity we have \(z\prec\mathfrak{at}_{z}\wedge\forall y(y\prec\mathfrak{at}_{z}\to y=z)\), so the only atom of \(\mathfrak{at}_{z}\) is \(z\). Using this and \(\mathfrak{at}_{c}\preccurlyeq\mathfrak{at}_{z}\) taken from \(Pcz\) and (Df.\(P_{F}\)) we obtain \(z\prec\mathfrak{at}_{c}\). The latter with \(\mathfrak{at}_{c}\preccurlyeq\mathfrak{at}_{y}\) taken from \(Pcy\) and (Df.\(P_{F}\)) yields \(z\prec\mathfrak{at}_{y}\).
We take the definition \((O)\) of overlapping as in \(\mathsf{AEM}_{\textsf{pl}}\) and we show that the strong supplementation principle is a thesis of \(\mathsf{ATC}+(\texttt{Df}.P_{F})\).
**Lemma 2**.: (ssp) _is provable in \(\mathsf{ATC}+(\texttt{Df}.P_{F})\)._
Proof.: Assume that \(\forall z(Pzx\to Ozy)\). We aim to show that \(Pxy\), that is, \(\forall_{u}(u\prec\mathfrak{at}_{x}\to u\prec\mathfrak{at}_{y})\). Thus, fix an arbitrary \(u\prec\mathfrak{at}_{x}\). Thus, since \(u\prec aa\)\(u\prec\mathfrak{at}_{x}\) gives \(\mathfrak{at}_{u}\preccurlyeq\mathfrak{at}_{x}\) by (T1) i.e., \(Pux\), by (Df.\(P_{F}\)). So \(Ouy\) by the assumption. We have \(u\prec aa\) and \(Ouy\) thus \(u\prec\mathfrak{at}_{y}\) by (T2). In consequence we have \(\mathfrak{at}_{x}\preccurlyeq\mathfrak{at}_{y}\), i.e. \(Pxy\).
Now we have to prove that notion of composition axiomatized in \(\mathsf{ATC}\) is the mereological sum in the Lesniewski sense, i.e. we have to show that (Df.\(F\)) is a thesis of \(\mathsf{ATC}\).
**Lemma 3**.: \(\forall_{z\prec zz}Pzx\wedge\forall y(Pyx\to\exists_{z\prec zz}\ Ozy)\to F_{ zz}x\) _is a thesis of \(\mathsf{ATC}+(\texttt{Df}.P_{F})\)._
Proof.: We assume \(\forall z(z\prec zz\to Pzx)\wedge\forall y(Pyx\to\exists z(z\prec zz\wedge\ Ozy))\). From \(\forall z(z\prec zz\to Pzx)\) by (Df.\(P_{F}\)) we obtain \(\forall z(z\prec zz\to\mathfrak{at}_{z}\preccurlyeq\mathfrak{at}_{x})\) thus by (Df.\(\mathfrak{at}_{F}^{\textsf{pl}}\)) and (Df.\(\mathfrak{at}_{F}^{\textsf{ind}}\)) we have \(\mathfrak{at}_{zz}\preccurlyeq\mathfrak{at}_{x}\). We assume additionally that \(\mathfrak{at}_{zz}\neq\mathfrak{at}_{x}\), so \(c\prec\mathfrak{at}_{x}\wedge\neg c\prec\mathfrak{at}_{zz}\). From \(c\prec\mathfrak{at}_{x}\), (Df.\(aa_{F}\)), and (Df.\(\mathfrak{at}_{F}^{\textsf{ind}}\)) we obtain \(\mathfrak{at}_{c}\preccurlyeq\mathfrak{at}_{x}\), so \(Pcx\) by (Df.\(P_{F}\)). Thus, from assumption, we have \(\exists z(z\prec zz\wedge\ Ozc)\). This with \(c\prec aa\), symmetry of \(O\) and (t2) from earlier lemma yields \(\exists z(z\prec zz\wedge c\prec\mathfrak{at}_{z})\), \(\forall z(F_{\mathfrak{at}_{z}}z)\) we have from (Df.\(\mathfrak{at}_{F}^{\textsf{ind}}\)) thus \(c\prec\mathfrak{at}_{zz}\) by (Df.\(\mathfrak{at}_{F}^{\textsf{pl}}\)) which is false by assumption so \(\mathfrak{at}_{zz}=\mathfrak{at}_{x}\). As always, we have \(F_{\mathfrak{at}_{x}}x\) from (Df.\(\mathfrak{at}_{F}^{\textsf{ind}}\)), so using \(\mathfrak{at}_{zz}=\mathfrak{at}_{x}\) and (Df.\(\mathfrak{at}_{F}^{\textsf{ind}}\)) we get \(F_{\mathfrak{at}_{z}x}x\) and so using '\(\leftarrow\)' of (ATC2) we finally obtain \(F_{zz}x\) which we wanted to prove.
**Lemma 4**.: \(F_{zz}x\to\forall_{z\prec zz}Pzx\wedge\forall y(Pyx\to\exists_{z\prec zz}\ Ozy)\) _is a thesis of \(\mathsf{ATC}+(\texttt{Df}.P_{F})\)._
Proof.: We assume \(F_{zz}x\) and proceed indirectly. From \(`\rightarrow\)' of (4) we obtain \(F_{\mathfrak{at}_{zz}}x\). Using \((\mathtt{Df.at_{F}^{ind}})\) we get \(\forall_{yy}(F_{yy}x\to yy\approx\mathfrak{at}_{x})\), so we take \(\mathfrak{at}_{zz}/yy\) and we obtain \(\mathfrak{at}_{x}\approx\mathfrak{at}_{zz}\). If \(\neg\forall z(z\prec zz\to Pzx)\), then we take \(c/z\colon c\prec zz\wedge\neg Pcx\). From \(c\prec zz\) we obtain that \(\mathfrak{at}_{c}\preccurlyeq\mathfrak{at}_{zz}\) by \((\mathtt{Df.at_{F}^{pl}})\) and \((\mathtt{Df.at_{F}^{ind}})\). The latter with \(\mathfrak{at}_{x}\approx\mathfrak{at}_{zz}\) yields \(\mathfrak{at}_{c}\preccurlyeq\mathfrak{at}_{x}\), i.e. \(Pcx\), which is false by assumption. Now, if \(\neg\forall y(Pyx\rightarrow\exists z(z\prec zz\wedge\ Ozy))\), then we take \(c/y\colon Pcx\wedge\ \forall z(z\prec zz\rightarrow\neg Ozc)\). From \(Pcx\) we have \(\mathfrak{at}_{c}\preccurlyeq\mathfrak{at}_{x}\) and from (4) we have that some object is in atoms of both \(c\) and \(x\): \(d\prec\mathfrak{at}_{c}\wedge d\prec\mathfrak{at}_{x}\). From \(d\prec\mathfrak{at}_{x}\) and \(\mathfrak{at}_{x}=\mathfrak{at}_{zz}\) we have \(d\prec\mathfrak{at}_{zz}\). From the latter and \((\mathtt{Df.at_{F}^{pl}})\) there is \(e\prec zz\) such that \(d\prec\mathfrak{at}_{e}\). From \(\forall z(z\prec zz\rightarrow\neg Ozc)\) with \(z/e\) we get \(e\prec zz\rightarrow\neg Oec\), so \(\neg Oec\), i.e. \(\forall y(Pye\rightarrow\neg Pyc)\) so using \((\mathtt{Df.}P_{F})\) and \(d/y\) we have \(\mathfrak{at}_{d}\preccurlyeq\mathfrak{at}_{e}\rightarrow\neg\mathfrak{at}_{d} \preccurlyeq\mathfrak{at}_{c}\). We have \(d\prec\mathfrak{at}_{e}\) and \(d\prec aa\), so by \((\mathtt{Df.at_{F}^{ind}})\) we have \(\mathfrak{at}_{d}\preccurlyeq\mathfrak{at}_{e}\) and thus \(\neg\mathfrak{at}_{d}\preccurlyeq\mathfrak{at}_{c}\) but we have \(d\prec\mathfrak{at}_{c}\) so with \(d\prec aa\) and \((\mathtt{Df.at_{F}^{ind}})\) yields \(\mathfrak{at}_{d}\preccurlyeq\mathfrak{at}_{c}\).
Now we conservatively extend \(\mathsf{ATC}\) by adding the predicate for being an atom:
\[Ax\leftrightarrow x\prec aa.\] (Df.\(A_{F}\))
It is clear that in \(\mathsf{ATC}\), formula \((\mathtt{AT2}_{P})\) is a thesis because of \((\mathtt{ATC1})\) and \((\mathtt{Df.}aa_{F})\). Moreover, having \((\mathtt{Df.}F)\) in \(\mathsf{ATC}\) we can easily prove that the equivalence \((\mathtt{Df.}aa_{P})\) is a thesis, and \((\mathtt{Df.}\mathfrak{at}_{P}^{\mathsf{ind}})\) may be proved with the use of \((\mathtt{Df.}\mathfrak{at}_{F}^{\mathsf{ind}})\). Thus, using Lemmas 1-4 we obtain that \(\mathtt{AEM_{pl}}\) is a subtheory of \(\mathsf{ATC}\) conservatively extended by the definitions \((\mathtt{Df.}P_{F})\) of being a part and \((\mathtt{Df.}A_{F})\) of the predicate \(A\):
**Theorem 1**.: \(\mathtt{AEM_{pl}}+(\mathtt{Df.}aa_{P})+(\mathtt{Df.}\mathfrak{at_{F}^{ind}}) \subseteq\mathtt{ATC}+(\mathtt{Df.}P_{F})+(\mathtt{Df.}A_{F})\)_._
Now we are going to prove converse dependency. First, we show that the mereological sum in \(\mathtt{AEM_{pl}}\) is extensional
**Lemma 5**.: \(\forall xx\forall yy\forall z(xx\approx yy\wedge F_{xx}z\to F_{yy}z)\) _is provable in \(\mathtt{AEM_{pl}}\)._
Proof.: We assume \(F_{zz}x\wedge zz\approx yy\) and \(\neg F_{yy}x\) We obtain (a): \(\forall z(z\prec zz\to Pzx)\) (b): \(\forall y(Pyx\rightarrow\exists z(z\prec zz\wedge Ozy))\), (c): \(\neg\forall z(z\prec yy\to Pzx)\vee\neg\forall y(Pyx\rightarrow\exists z(z \prec yy\wedge\ Ozy))\). If \(\neg\forall z(z\prec yy\to Pzx)\) then we take \(c/z\colon c\prec yy\wedge\neg Pcx\), so \(c\prec zz\) by \(zz\approx yy\) but then using (a) we have \(Pcx\) which is false. Therefore, we have \(\forall z(z\prec yy\to Pzx)\) so using (c) we obtain \(\neg\forall y(Pyx\rightarrow\exists z(z\prec yy\wedge\ Ozy))\) we take \(d/y\colon Pdx\wedge\forall z(z\prec yy\rightarrow\neg Ozd)\). From \(Pdx\) and (b) we have that \(\exists z(z\prec zz\wedge\ Ozd)\) so we take \(e/z\colon e\prec zz\wedge Oed\). From \(e\prec zz\) and \(zz\approx yy\) we have \(e\prec yy\) thus using \(\forall z(z\prec yy\rightarrow\neg Ozd)\) we obtain \(\neg Oed\) which yields a contradiction.
As we see, all we needed was \(\approx\) and logical axioms. Now we show that in \(\mathtt{AEM_{pl}}\) with appropriate definitions everything is a unique composition of unique atoms:
**Lemma 6**.: \((\mathtt{ATC1})\) _is provable in \(\mathtt{AEM_{pl}}+(\mathtt{Df.}aa_{P})+(\mathtt{Df.}\mathfrak{at_{F}^{ind}})\)._
Proof.: From Varzi's analysis we have \((\mathtt{AT1_{\mathcal{F}}})\) i.e. \(F_{\mathfrak{at}_{x}}x\) and \(\mathfrak{at}_{x}\preccurlyeq aa\). The uniqueness of a mereological sum holds in extensional mereology, as we note
in the section 1, thus \(\forall y(F_{\mathtt{at}_{x}}y\to x=y)\). From \(F_{\mathtt{at}_{x}}x\) and lemma 5 we obtain \(\forall_{yy\prec aa}(yy\approx\mathtt{at}_{x}\to F_{yy}x)\), thus all we need to show is \(\forall_{yy\prec aa}(F_{yy}x\to yy\approx\mathtt{at}_{x})\). We assume indirectly that \(\neg\forall_{yy\prec aa}(F_{yy}x\to yy\approx\mathtt{at}_{x})\) take \(cc/yy\) and obtain \(cc\prec aa\wedge F_{cc}x\wedge cc\neq\mathtt{at}_{x}\). From \(cc\neq\mathtt{at}_{x}\) and \((\approx)\) we have \(\exists z(z\prec cc\wedge\neg z\prec\mathtt{at}_{x}\vee\neg z\prec c\wedge z \prec\mathtt{at}_{x})\). We take \(c/z\) and we have two possibilities (a): \(c\prec cc\wedge\neg c\prec\mathtt{at}_{x}\) or (b): \(\neg c\prec cc\wedge c\prec\mathtt{at}_{x}\). We start with (a). From \(F_{cc}x\) and the definition of mereological sum (\(\mathtt{Df.}F\)) we obtain \(\forall z(z\prec cc\to Pzx)\) thus using \(c\prec cc\) we have \(Pcx\). From \(F_{\mathtt{at}_{x}}x\) and (\(\mathtt{Df.}F\)) we have \(\forall y(Pyx\rightarrow\exists z(z\prec\mathtt{at}_{x}\wedge\ Ozy)\) so using \(Pcx\) we have \(\exists z(z\prec\mathtt{at}_{x}\wedge\ Ozc)\) we take \(d/z:d\prec\mathtt{at}_{x}\wedge Odc\). Both \(c\) and \(d\) are atoms, so from \(Odc\) with the use of (\(\mathtt{Df.}aa_{F}\)) we have \(c=d\). Thus from \(d\prec\mathtt{at}_{x}\) we obtain \(c\prec\mathtt{at}_{x}\) which yields a contradiction. In the case of (b) from \(c\prec\mathtt{at}_{x}\) and (\(\mathtt{Df.}\mathtt{at}_{P}^{\mathtt{pl}}\)) we obtain \(Pcx\). Next from \(F_{cc}x\) and definition of sum (\(\mathtt{Df.}F\)) we obtain \(\forall y(Pyx\rightarrow\exists z(z\prec cc\wedge Ozy))\). We take \(c/z\) and using \(Pcx\) we obtain \(\exists z(z\prec cc\wedge Ozc)\) We take \(d/z\) and so \(d\prec cc\wedge Odc\). We know that \(c\) is an atom because of \(c\prec\mathtt{at}_{x}\), and \(d\) is an atom because of \(d\prec cc\) and \(cc\prec aa\), so \(Ocd\) yields \(c=d\) and this with \(d\prec cc\) yields \(c\prec cc\) which yields a contradiction with assumption.
To prove (\(\mathtt{ATC2}\)) in \(\mathtt{AEM}_{\mathsf{pl}}\) we need to introduce the notion of atoms of \(\boldsymbol{xx}\)'s and we take the following instantiation of comprehension schema
\[y\prec\mathtt{at}_{xx}\leftrightarrow\exists_{z\prec\prec xx}(Pyz\wedge y \prec aa).\] ( \[\mathtt{Df.}\mathtt{at}_{P}^{\mathtt{pl}}\] )
Now we prove that, in atomistic extensional mereology with plural quantification, being the mereological sum of some objects is equivalent with mereological sum of atoms of these objects. Implication '\(\leftarrow\)' requires the use of (ssp).
**Lemma 7**.: \(F_{\mathtt{at}_{z_{x}}}x\to F_{zz}x\) _is provable in \(\mathtt{AEM}_{\mathsf{pl}}+(\mathtt{Df.}aa_{P})+(\mathtt{Df.}\mathtt{at}_{P}^{ \mathtt{ind}})+(\mathtt{Df.}\mathtt{at}_{P}^{\mathtt{pl}})\)._
Proof.: We assume \(F_{\mathtt{at}_{z_{x}}}x\) and indirectly we assume \(\neg F_{zz}x\). We obtain the following (a): \(\forall z(z\prec\mathtt{at}_{zz}\to Pzx)\), (b): \(\forall y(Pyx\rightarrow\exists z(z\prec\mathtt{at}_{zz}\wedge\ Ozy))\) and (c): \(\neg\forall z(z\prec zz\to Pzx)\vee\neg\forall y(Pyx\rightarrow\exists z(z \prec zz\wedge Ozy))\). If it were the case that \(\neg\forall z(z\prec zz\to Pzx)\) then we take \(c/z\) and we have \(c\prec zz\wedge\neg Pcx\). From \(\neg Pcx\) and (ssp) we obtain that there is \(d\) such that \(Pdc\wedge\neg Odx\). From (\(\mathtt{AT}_{P}\)) there is an atom of \(d\): \(Ae\wedge Ped\). From the latter and \(Pdc\) using (trans) we have \(Pec\). So, we have \(c\prec zz\wedge Pec\wedge Ae\) and thus using (\(\mathtt{Df.}\mathtt{at}_{P}^{\mathtt{pl}}\)) we obtain \(e\prec\mathtt{at}_{zz}\). The latter with (a) yields \(Pex\), which is false because we have \(Ped\wedge\neg Odx\). So \(\forall z(z\prec zz\to Pzx)\) and next by (c): we obtain \(\neg\forall y(Pyx\rightarrow\exists z(z\prec zz\wedge Ozy))\). We take \(f/y\) and so \(Pfx\wedge\forall z(z\prec zz\rightarrow\neg Ozf)\). From \(Pfx\) and (b) we have \(\exists z(z\prec\mathtt{at}_{zz}\wedge Ozf)\) and we obtain \(g/z:g\prec\mathtt{at}_{zz}\wedge Offg\). From \(g\prec\mathtt{at}_{zz}\) and (\(\mathtt{Df.}\mathtt{at}_{P}^{\mathtt{ind}}\)) we obtain that there is \(h\prec zz\) such that \(Pgh\wedge Ag\). We have \(Ofg\) so with \(Ag\) using definitions (\(\mathtt{Df.}A_{P}\)) and (\(O\)) we obtain \(Pgf\). The latter, combined with \(Pgh\) which was obtained earlier yields \(Ohf\). We have \(h\prec zz\) so using \(\forall z(z\prec zz\rightarrow\neg Ozf)\) we obtain \(\neg Ohf\) which yields a contradiction.
**Lemma 8**.: \(F_{zz}x\to F_{\mathtt{at}_{zz}}x\) _is provable in \(\mathtt{AEM}_{\mathsf{pl}}+(\mathtt{Df.}aa_{P})+(\mathtt{Df.}\mathtt{at}_{P}^{ \mathtt{ind}})+(\mathtt{Df.}\mathtt{at}_{P}^{\mathtt{pl}})\)._
Proof.: We proceed indirectly. We assume \(F_{zz}x\) and \(\neg F_{\mathfrak{at}_{zz}}x\) and we obtain (a): \(\forall z(z\prec zz\to Pzx)\), (b): \(\forall y(Pyx\to\exists z(z\prec zz\wedge Ozy))\) and also we have (c): \(\neg\forall z(z\prec\mathfrak{at}_{zz}\to Pzx)\vee\neg\forall y(Pyx\to\exists z(z \prec\mathfrak{at}_{zz}Ozy))\). If it were the case that \(\neg\forall z(z\prec\mathfrak{at}_{zz}\to Pzx)\) then we take \(c/x\) and so \(c\prec\mathfrak{at}_{zz}\wedge\neg Pcx\). From \(c\prec\mathfrak{at}_{zz}\) and definition of \((\mathtt{Df.at}_{P}^{\mathfrak{pl}})\) we have that there is \(d\prec zz\) with \(\mathit{Pcd}\wedge Ac\). From (a) and \(d\prec zz\) we obtain \(\mathit{Pdx}\), so using \(\mathit{Pcd}\) and (trans) we obtain \(\mathit{Pcx}\), which is false, so \(\forall z(z\prec\mathfrak{at}_{zz}\to Pzx)\) and next by (c) \(\neg\forall y(Pyx\to\exists z(z\prec\mathfrak{at}_{zz}\wedge Ozx))\). We take \(e/y\): \(Pex\wedge\forall z(z\prec\mathfrak{at}_{zz}\to\neg Oze)\). From \(Pex\) and (b) we obtain \(\exists z(z\prec zz\wedge Oze)\) We take \(f/z\) and so \(f\prec zz\wedge Ofe\). From \(Ofe\) we obtain \(Pge\wedge Pgf\). From \((\mathtt{AT2}_{P})\) we obtain that there is an atom of \(g\): \(Ah\wedge Phg\) from (trans) we obtain \(\mathit{Phe}\wedge Phf\). We have \(\mathit{Phf}\wedge f\prec zz\wedge Ah\) and by using \((\mathtt{Df.at}_{P}^{\mathfrak{pl}})\) we get \(h\prec\mathfrak{at}_{zz}\). The latter with \(\forall z(z\prec\mathfrak{at}_{zz}\to\neg Oze)\) yields \(\neg Ohe\), which yields a contradiction with \(Phe\).
To end the proof we need to note a few more things. The notions of being a part \((\mathtt{Df.}P_{F})\), atoms \((\mathtt{Df.}aa_{F})\), and atoms of: \((\mathtt{Df.}\mathfrak{at}_{F}^{\mathfrak{ind}})\) and \((\mathtt{Df.}\mathfrak{at}_{F}^{\mathfrak{pl}})\) in \(\mathtt{ATC}\) are theses of \(\mathtt{AEM}_{pl}+(\mathtt{Df.}\mathfrak{at}_{P}^{\mathfrak{pl}})\). \((\mathtt{Df.}P_{F})\) in one direction follows just from (trans) and conversely we have to use (ssp) and \((\mathtt{AT2}_{P})\). \((\mathtt{Df.}aa_{F})\) follows from the defined mereological sum \((\mathtt{Df.}F)\), being an atom \((\mathtt{Df.}aa_{P})\) and the fact \(\forall x(F_{\mathfrak{at}_{z}}x)\) proved by Varzi. We have proved \((\mathtt{Df.}\mathfrak{at}_{F}^{\mathfrak{ind}})\) in lemma 6, and \((\mathtt{Df.}\mathfrak{at}_{F}^{\mathfrak{pl}})\) can be easily proved with the use of the definition of atom \((\mathtt{Df.}A_{P})\), atoms of \((\mathtt{Df.}\mathfrak{at}_{P}^{\mathfrak{pl}})\), mereological sum \((\mathtt{Df.}F)\) and the fact that in \(\mathtt{AEM}_{pl}+(\mathtt{Df.}aa_{P})+(\mathtt{Df.}\mathfrak{at}_{P}^{ \mathfrak{ind}})\) we have \(\forall x(F_{\mathfrak{at}_{F}}x)\). Lastly, we note \((\mathtt{Df.}\mathfrak{at}_{P}^{\mathfrak{pl}})\) is a thesis of \(\mathtt{ATC}\), and it can be proved with the use of \((\mathtt{Df.}\mathfrak{at}_{F}^{\mathfrak{ind}})\), \((\mathtt{Df.}F)\), \((\mathtt{Df.}\mathfrak{at}_{F}^{\mathfrak{pl}})\), and \((\mathtt{Df.}P_{F})\). Thus, using lemmas 5-8 we obtain that axiomatic theory of atom \(\mathtt{ATC}\) presented herein, when extended by appropriate definitions is a subtheory of atomistic extensional mereology:
**Theorem 2**.: \(\mathtt{ATC}+(\mathtt{Df.}P_{F})+(\mathtt{Df.}A_{F})\subseteq\mathtt{AEM}_{ \mathfrak{pl}}+(\mathtt{Df.}aa_{P})+(\mathtt{Df.}\mathfrak{at}_{P}^{\mathfrak{ ind}})+(\mathtt{Df.}\mathfrak{at}_{P}^{\mathfrak{pl}})\)_._
Using Theorems 1 and 2 we obtain the final result of this work: our theory and atomistic extensional mereology, extended with the appropriate instantiations of the comprehension schema, are definitionally equivalent:
\[\mathtt{ATC}+(\mathtt{Df.}P_{F})+(\mathtt{Df.}A_{F})=\mathtt{AEM}_{\mathfrak{ pl}}+(\mathtt{Df.}aa_{P})+(\mathtt{Df.}\mathfrak{at}_{P}^{\mathfrak{ind}})+( \mathtt{Df.}\mathfrak{at}_{P}^{\mathfrak{pl}}).\]
As we mentioned, in extensional mereology fusion \((\mathtt{Df.}F^{\star})\) and sum \((\mathtt{Df.}F)\) are equivalent. So \(\mathtt{ATC}+(\mathtt{Df.}P_{F})+(\mathtt{Df.}A_{F})\) is also equivalent to \(\mathtt{AEM}_{\mathfrak{pl}}+(\mathtt{Df.}aa_{P})+(\mathtt{Df.}\mathfrak{at}_{P }^{\mathfrak{ind}})+(\mathtt{Df.}\mathfrak{at}_{P}^{\mathfrak{pl}})\) with fusion instead of sum.
Perhaps some atomists would prefer to stay in first-order logic without plural quantification. However, plural quantification in the case of atomism has its advantages. First, it is open to the axiomatic characterization of superatomism [7]. Second, as we have shown, atomism does not need to be based on the primitive notion of being a part. Finally, we believe that it captures both formally and intuitively what \((\mathtt{AT1})\) is meant to express.
In conclusion, we also wish to emphasize that although our axiomatic system \(\mathsf{ATC}\) for atomism is equivalent to atomistic extensional mereology with plural quantification, \(\mathsf{ATC}\) is based on two axioms that cannot be proven in any formulation of _pure_ extensional mereology. However, general extensional mereology can also be axiomatized, in plural logic, using fusion as the primitive concept [27].
|
2305.11057 | From Assistive Technologies to Metaverse: Technologies in Inclusive
Higher Education for Students with Specific Learning Difficulties | The development of new technologies and their expanding use in a wide range
of educational environments are driving the transformation of higher education.
Assistive technologies are a subset of cutting-edge technology that can help
students learn more effectively and make education accessible to everyone.
Assistive technology can enhance, maintain, or improve the capacities of
students with learning difficulties. Students with learning difficulties will
be greatly benefited from the use of assistive technologies. If these
technologies are used effectively, students with learning difficulties can
compete with their peers and complete their academic tasks. We aim to conduct
this review to better understand the role of assistive technologies in
providing inclusive higher education for students with learning difficulties.
The review begins with the introduction of learning difficulties and their
causes; inclusive education and the need for assistive technologies; the
reasoning for conducting this review; and a summary of related reviews on
assistive technologies for students with learning difficulties in inclusive
higher education. Then, we discuss the preliminaries for the learning
difficulties type and assistive technology. Later, we discuss the effects of
assistive technology on inclusive higher education for students with learning
difficulties. Additionally, we discuss related projects and support tools
available in inclusive higher education for students with learning
difficulties. We also explore the challenges and possible solutions related to
using assistive technology in higher education to provide inclusive education
for students with learning difficulties. We conclude the review with a
discussion of potential promising future directions. | Gokul Yenduri, Rajesh Kaluri, Dharmendra Singh Rajput, Kuruva Lakshmanna, Thippa Reddy Gadekallu, Mufti Mahmud, David J. Brown | 2023-05-05T05:20:21Z | http://arxiv.org/abs/2305.11057v1 | From Assistive Technologies to Metaverse - Technologies in Inclusive Higher Education for Students with Specific Learning Difficulties: A Review
###### Abstract
The development of new technologies and their expanding use in a wide range of educational environments are driving the transformation of higher education. Assistive technologies are a subset of cutting-edge technology that can help students learn more effectively and make education accessible to everyone. Assistive technology can enhance, maintain, or improve the capacities of students with learning difficulties. Students with learning difficulties will be greatly benefited from the use of assistive technologies. If these technologies are used effectively, students with learning difficulties can compete with their peers and complete their academic tasks. We aim to conduct this review to better understand the role of assistive technologies in providing inclusive higher education for students with learning difficulties. The review begins with the introduction of learning difficulties and their causes; inclusive education and the need for assistive technologies; the reasoning for conducting this review; and a summary of related reviews on assistive technologies for students with learning difficulties in inclusive higher education. Then, we discuss the preliminaries for the learning difficulties type and assistive technology. Later, we discuss the effects of assistive technology on inclusive higher education for students with learning difficulties. Additionally, we discuss related projects and support tools available in inclusive higher education for students with learning difficulties. We also explore the challenges and possible solutions related to using assistive technology in higher education to provide inclusive education for students with learning difficulties. We conclude the review with a discussion of potential promising future directions.
INDEX TERMS: Learning Difficulties, Inclusive Higher Education, Assistive Technologies
## I Introduction
A learning difficulty is any abnormality of the body or mind that hinders a person's ability to do certain tasks and interact with the outside world. According to the World Health Organization (WHO), 15% of the global population is disabled, of whom 2% to 4% have significant learning difficulties. According to the WHO, this global estimate of learning difficulties is rising due to an ageing population, the rapid spread of chronic diseases, and advances in the methodologies used to diagnose learning difficulties. [1]. According to a UNICEF study, around 240 million children worldwide are facing issues with learning difficulties. Most assessments of child well-being indicate that children with learning difficulties have a worse quality of life than children without learning difficulties [2]. Students with learning difficulties may suffer from emotional, mental, physical, or developmental problems. Education will provide students with learning difficulties with a feeling of self-worth. Learning
difficulties are an underrated but major element in educational discrimination. Students with learning difficulties are among the most underserved categories regarding adequate higher education. All students with learning difficulties must receive a proper inclusive education for their growth and social development.
### Causes of Learning Difficulties
With the assistance of modern technologies, researchers could pinpoint all likely causes of learning difficulties. Some learning difficulties are caused by prenatal and neonatal hazards, psychological and physical stress, and environmental exposure. Several risk factors are present at birth and run in families, according to recent studies [3]. Therefore, there is an increased risk of learning difficulties in children of those with learning difficulties. To better understand learning difficulties, it is essential to investigate how students' brains adapt to reading, writing, and solving mathematical problems [4]. More individuals with learning difficulties suffer from physical and psychological illnesses. There are curable conditions associated with a person's learning difficulties. Autism, Attention Deficit Hyperactivity Disorder, Schizophrenia, Mania, and Pica are a few of the conditions that may lead to learning difficulties [5]. Problematic behaviour may also indicate underlying mental or physical health issues. Additionally, family members, educators, and caretakers must assist the student in overcoming these obstacles [6]. Students with learning difficulties have fewer opportunities to get an inclusive technical education, but this barrier may be overcome with the use of assistive technology. Numerous studies have identified methods for assisting individuals with learning difficulties.
### Inclusive Education for Students with Learning Difficulties
Traditionally, students with learning difficulties were seen as inferior and were barred from regular classes due to their cognitive disorders. Instead, they were educated at specialized institutions of higher learning. Inclusive education allows students with learning difficulties to be integrated into regular classrooms with their typically developing classmates and helps them to face real-world problems [7]. The 1994 Salamanca Statement on Special Needs Education outlined the principles of inclusive education. The declaration emphasizes a commitment to education for all, acknowledging the need and urgency of providing education to all students, adolescents, and adults within the regular education system [8]. Regular education with an inclusive focus is the most effective strategy for eradicating discriminatory attitudes, developing welcoming communities, building an inclusive
TABLE I: List of key acronyms.

| Acronym | Description |
|---|---|
| ADHD | Attention Deficit Hyperactivity Disorder |
| AI | Artificial Intelligence |
| AR | Augmented Reality |
| ARC | Augmented Classroom |
| ATIA | Assistive Technology Industry Association |
| BCI | Brain-Computer Interface |
| CAPD | Central Auditory Processing Disorder |
| DT | Digital Twin |
| DCD | Developmental Co-ordination Disorder |
| HMD | Head-Mounted Display |
| HCI | Human-Computer Interaction |
| IoT | Internet of Things |
| IEEE | Institute of Electrical and Electronics Engineers |
| MHD | Mental Health Disorders |
| NDD | Neurodevelopmental Disorder |
| NVLD | Nonverbal Learning Disorder |
| NGSS | Next Generation Science Standards |
| PD | Psychological Disorders |
| PSO | Particle Swarm Optimization |
| SVM | Support Vector Machine |
| UNESCO | United Nations Educational, Scientific and Cultural Organization |
| VR | Virtual Reality |
| WHO | World Health Organization |
| WIPO | World Intellectual Property Organization |
| XR | Extended Reality |
| 3D | Three-Dimensional |
society, and achieving education for all [9]. In addition, inclusive education will provide an effective education for most students, hence boosting the efficiency and cost-effectiveness of the education system as a whole [10].
### _Need of Assistive Technologies for Students with Learning Difficulties in Higher Education_
Students with different types of learning difficulties confront various obstacles in higher education. Students with learning difficulties are excluded from school, society, and mainstream development programs due to a lack of crucial support and fair participation opportunities [11]. If technology is implemented properly, students with learning difficulties will be able to participate in the general education curriculum because they will have access to simpler and more adaptable methods of completing their tasks. It is vital to select assistive technology based on a student's requirements, not their category of difficulty. As assistive technology is intended to accomplish a specific objective, it is crucial to choose the appropriate tool for the task [12]. Assistive technology may include hardware, software, and peripherals that aid students with learning difficulties in completing their assignments [13]. Utilizing assistive technology in higher education will aid students with learning difficulties in remaining competitive with their peers, fostering social engagement, boosting self-confidence, and enhancing academic success.
### _Motivation_
In this section, we discuss the motivation for conducting this review, which is depicted in Fig. 1. Our study is limited to assistive technologies that enable students with learning difficulties to overcome the challenges they face while competing with their peers. Our work is restricted in the case of students with physical impairments, as the assistive technologies reviewed in our study may not address all physical disabilities, which require a separate focus. Assistive technologies for physical impairments not only need to focus on alternative or customized devices but also need to address the student's learning difficulties along with their physical impairments as a whole.
#### To Ensure that Students with Learning Difficulties Have the Same Access to Higher Education
Students with learning difficulties, such as dyslexia or autism, struggle to adapt to the conventional learning environment. They have difficulty following instructions, course content, and even their textbooks. They need individualized instruction and cannot withstand the pressure of competition. Moreover, these students are often bullied by their classmates, which may further demoralize them [14]. Innovative assistive technology and a personalized learning environment can help students overcome these problems and prepare them to compete with their classmates and succeed academically [15].
#### Deconstructing Challenges Faced by Students with Learning Difficulties in Higher Education
Academically struggling students with learning difficulties may experience stress [16]. Feelings of low self-worth may arise as a result. Many students may underperform in the classroom because they fear making mistakes in their work [17]. With self-determination and the help of assistive technology, students with learning difficulties can overcome the stigma of having a learning difficulty and deal with the problems of higher education effectively [18].
#### To Enable Students with Learning Difficulties to Participate in Academic Activities and Interact with Peers
The use of assistive technology will provide students with opportunities for experiential learning. Students can engage in independent academic activities or work collaboratively with other students instead of idly waiting for assistance [19]. The use of assistive technology allows students with learning difficulties to learn at their own pace. This self-paced learning results in less pressure and improved communication skills, attention, and behaviour [20]. Assistive technology can help students with learning difficulties engage more readily in cooperative learning activities, as these students may not possess the academic or collaborative skills needed to participate fully [21]. With the help of assistive technologies, students gain independence, as they are able to complete written assignments with minimal or no assistance [22].
Figure 1: Motivation for Review
### _Related Works and Contributions_
Several researchers have worked on assistive technology for students with learning difficulties in inclusive higher education. The overview of these studies is provided in Table II.
To assist reading and writing, tablets featuring text-to-speech and speech-to-text capabilities have been launched in recent years [27]. Since the 1980s, the challenges that students with learning difficulties face in written and spoken language, arithmetic, reasoning, and memory have been mitigated with the use of various assistive technologies. However, few scientific studies have examined the advantages of this approach. Idor Svensson et al. evaluated the influence of assistive technology on students with learning difficulties. In their research, 149 students participated. The intervention group was provided with 24 training sessions in assistive technology, whereas the control group received routine care. Within a single year, both the intervention and control groups attained the same degree of improvement as the normative population; however, neither immediately after the intervention nor one year later did gains differ between groups. They found that the use of assistive technology improved reading skills, especially for students with learning difficulties. It was also observed that the intervention boosted motivation, leading to task completion. Furthermore, their research demonstrated the challenges of assessing individuals with learning difficulties in terms of their capacity to comprehend and interpret information [23].
The landscape of higher education is constantly changing as a result of the quick adoption and spread of new technologies in teaching and learning methods [28]. Assistive technologies, which include a variety of specialised tools, are used to help students access education and participate freely and actively in the learning process, which improves learning and strengthens the educational system [29]. Pritika Reddy et al. examined students' perceptions of various assistive technologies, including mobile learning, tablet learning, lecture capture, gamification, and online intelligent systems for learning and student assistance, in university-level mathematics teaching. They also assessed the opinions of mathematics students on assistive technology in the online mode. Their findings concluded that assistive technologies helped students with learning disabilities to understand mathematics better, and the students showed a positive outlook on the use of these technologies in mathematical education [24].
Neurodevelopmental disorders (NDDs), which include developmental disabilities and specific learning difficulties such as attention deficit hyperactivity disorder (ADHD), dyslexia, and autism spectrum disorders (ASD), as well as a wide range of mental health disorders (MHDs), including stress, anxiety, psychotic disorders, and severe depression, are frequently associated with psychological disorders (PDs) that first emerge in adolescence or early childhood. Over the last 20 years, there have been notable increases in the diagnosis of mental disorders on a global scale as well as rapid growth in the rates of a number of mental health problems. Depending on the kind of PD, students may struggle with socialisation, communication, and adapting to changes in their environment, which can make it difficult for them to concentrate effectively. To improve outcomes for students, treatment has to be carried out quickly and effectively [30]. In order to address learning difficulties in students with a
TABLE II: Summary of Related Reviews on Assistive Technologies for Students with Learning Difficulties in Inclusive Higher Education.

| Ref | Contributions | Limitations |
|---|---|---|
| [23] | This study investigated the effectiveness of assistive technology for students with learning difficulties. | This study does not cover the most recent technical developments in assistive technology, which can help in inclusive higher education. |
| [24] | This study explored the impact of assistive technology on mathematics in higher education. | This study did not provide insights into how this assistive technology can help students with learning difficulties in inclusive higher education. |
| [25] | This study explored the role of artificial intelligence in personalized assistive technology for students with neurodevelopmental disorders in their education. | This study did not address the problems associated with incorporating AI as an assistive technology for students with learning difficulties in inclusive higher education. |
| [26] | This study compared an assisted e-learning interface amongst students with and without visual and auditory impairments. | This study did not address the difficulties in adapting the e-learning interface for students with visual and hearing difficulties. |
| Our work | Our work explored the role of assistive technologies in providing inclusive education for students with learning difficulties in higher education. | – |
variety of NDDs, Prabal Datta Barua et al. examined the complexity and effectiveness of AI-assisted solutions created using machine learning models. Their study provided a summary of the evidence showing how AI technology may be utilised to improve social interaction and support training. They concluded that AI solutions are not yet completely effective at resolving the problems associated with learning difficulties. They recommended that, in the future, AI technologies be improved with an emphasis on assisting students with NDDs [25].
In summary, researchers have used a variety of technologies to assist students with a variety of learning difficulties. To our knowledge, no research has been undertaken on the use of different assistive technologies to aid higher education students with learning difficulties, which provided the motivation for this review. This review examines the role that assistive technology plays in providing inclusive education to students with learning difficulties in higher education.
### _Systematic Literature Review_
The following phases constitute the literature review that was used in this study to investigate the role of assistive technology for students with learning difficulties in higher education. First, we discuss the shortcomings of previous review articles and the reasons for conducting this study. Investigating relevant scientific and research publications on the use of assistive technology for students with impairments in higher education is the next step. We place a strong emphasis on peer-reviewed, high-quality papers published in reputable books, conferences, seminars, symposiums, and journals. The references utilized in this study were found on well-known archive services, including Google Scholar and arXiv, as well as well-regarded publishers such as Springer Nature, Wiley, Elsevier, Taylor and Francis, MDPI, and IEEE. Additionally, the keywords AI, XR, computer vision, the metaverse, HCI, and digital twins are used to identify relevant references and publications about assistive technology for learning difficulties in inclusive higher education, such as Dyslexia, Dyspraxia, Dyscalculia, Dysgraphia, Auditory Processing Disorder, Visual Processing Disorder, Nonverbal Learning Disorder, and Apraxia of Speech. The retrieved articles are all screened based on their titles in the next phase. We did not include any papers with poor-quality material from predatory journals. After that, we reviewed the abstracts of the papers to determine their contributions. The data needed for our analysis of the use of assistive technology for students with learning difficulties in higher education is extracted in the final step [31].
### _Paper Organization_
Section II presents the preliminaries of assistive technologies and types of learning difficulties. In Section III, we discuss the impact of assistive technologies in providing inclusive education for students with learning difficulties in higher education, which includes AI, XR, the metaverse, HCI, and digital twins. Then, we discuss the projects working toward inclusive education for students with learning difficulties in higher education in Section IV. Next, Section V is an overview of assistive technology tools. To drive further studies on assistive technologies for students with learning difficulties in higher education, in Section VI, we discuss the challenges of integrating assistive technologies in higher education and future directions. Section VII is the roadmap. Finally, we conclude the paper in Section VIII. For clarity, the organisation of this review is presented in Fig. 2, and a list of frequently used acronyms is provided in Table I.
## II Preliminaries
This section provides an overview of learning difficulties, which are identified based on a systematic literature review, followed by a discussion of assistive technology for learning difficulties in higher education.
### _Types of Learning Difficulties_
The types of learning difficulties and their effects are depicted in Fig. 3.
#### Dyslexia
Dyslexia is a learning disorder. Identifying speech sounds, reading, and decoding letters and words can be challenging for students with dyslexia. Students with dyslexia may also have trouble speaking and expressing themselves and their ideas [32]. They may also find it hard to organise their thoughts during conversations. Despite the impairment in the language-processing regions of the brain caused by dyslexia, students with dyslexia can compete with their peers with the aid of assistive technologies and appropriate intervention.
#### Dyspraxia
Dyspraxia is also known as developmental coordination disorder or DCD. Dyspraxia is a motor disorder based on the brain. It influences large and fine motor skills, motor planning, and coordination [33]. Despite the fact that it can influence cognitive skills, it is unrelated to intelligence. A student with dyspraxia has difficulty with movement and coordination. Students may have difficulty handling objects and may also tend to bump into things. The student may also have trouble speaking, be sensitive to light, touch, taste, or smell, and have trouble moving his or her eyes.
#### Dyscalculia
Dyscalculia makes it challenging to understand and process arithmetic [34]. In addition to counting and simple mental math difficulties, students have trouble telling time and following directions. Dyscalculia affects the following abilities: comprehending how numbers operate and relate to one another, solving mathematical problems, learning basic calculations, and compiling and documenting data.
#### Dysgraphia
The neurological condition known as dysgraphia impairs writing and fine motor skills. It is a learning difficulty that affects almost every element of writing, including spelling, legibility, word size, and expression, and is often accompanied by a tense grip that can result in a sore hand. Poor spatial planning, inconsistent writing, poor spelling, and missing or incomplete words are all symptoms of dysgraphia [35].
#### Auditory Processing Disorder
It is also known as Central Auditory Processing Disorder (CAPD) because it affects a person's ability to detect, understand, and identify sounds while having normal hearing. Significant difficulty understanding speech, particularly in noisy environments; difficulty following multi-step spoken instructions delivered without visual aids; distraction by loud or unexpected sounds; difficulty paying attention to lengthy lectures or other extended listening sessions; difficulty remembering and/or efficiently summarising verbally delivered information; and difficulty reading, spelling, and/or writing are some of the symptoms of CAPD [36].
#### Visual Processing Disorder
A student with Visual Processing Disorder (VPD) has difficulty comprehending visual information. The student may struggle with reading or with distinguishing between similar-looking objects. Those with visual processing problems may experience difficulties with hand-eye coordination. Students with visual processing disorder may confuse words that look similar, reverse letters or numbers, lack adequate reading comprehension, make copying errors, frequently forget letters, numbers, and words, spell poorly, have uneven or poorly spaced handwriting, struggle to understand multi-step instructions, and have trouble telling time and comprehending the concept of time [37].
#### Nonverbal Learning Disorder
The most underdiagnosed, misunderstood, and ignored learning impairment is nonverbal learning disorder (NVLD). Impaired visual, spatial, and organisational skills, difficulty recognising and interpreting nonverbal signals, and poor motor function are all symptoms of the neurological condition. The symptoms of NVLD also include problems related to social interactions, reading nonverbal signs, understanding facial expressions, using appropriate language in social situations, coordination of the body, and fine motor abilities. Students affected by NVLD may face difficulties with organisation, planning, and concentration, as well as reading comprehension and writing expression at a higher educational level [38].
#### Apraxia of Speech
A student with apraxia of speech struggles to talk clearly and make appropriate gestures. In apraxia of speech, the speech muscles are not weak. Rather, the brain has problems
Figure 3: Types of Learning Difficulties
Figure 2: The Schematic Organisation of the Role of Assistive Technologies in Providing Inclusive Education for Students with Learning Difficulties in Higher Education
directing and/or coordinating the motions, so the muscles do not function appropriately. Apraxia in students may cause difficulty in imitating and producing sounds on their own, may add new sounds, omit sounds, or pronounce sounds incorrectly, and may pronounce something correctly one time and incorrectly the next [39].
### _Assistive Technologies_
Any software, hardware, or peripheral that aids students with learning difficulties in overcoming their educational obstacles and developing new skills falls under the umbrella of assistive technology. Students with learning difficulties need assistive technology to improve their abilities. With assistive technology, they will be able to receive a high-quality education on par with their peers.
**Definition 1:** According to the WHO, the systems and services involved in providing assistive products and services are collectively referred to as assistive technology [40].
**Definition 2:** According to the Assistive Technology Industry Association (ATIA), any tool, piece of equipment, piece of software, or product used to enhance, maintain, or strengthen the functional capacities of individuals with disabilities is known as assistive technology [41].
**Definition 3:** Under the United States federal definition, assistive technology is any tool, apparatus, or system, whether purchased commercially off-the-shelf, adapted, or customized, that can be used to enhance, maintain, or improve the functional capacity of people with impairments [42].
**Definition 4:** According to the United Nations Educational, Scientific and Cultural Organization (UNESCO), anything that is utilized to enhance, maintain, or improve the functional capacities of people with impairments is considered assistive technology [43].
**Definition 5:** According to the Institute of Electrical and Electronics Engineers (IEEE), anything that aids a person in achieving increased performance, function, or quicker access to information is considered assistive technology [44].
**Definition 6:** According to the European Accessibility Act (EU, 2019), assistive technology refers to any device, appliance, service, or combination of processes, including computer programs, that is used to maximize, maintain, replace, or enhance the functional skills of people with disabilities [45].
**Definition 7:** According to the International Organisation for Standardisation's standard on assistive products (ISO 9999:2022), assistive technology is any item that was produced with a specific focus on serving the needs of people with disabilities or that is generally available and used by or for people with disabilities [46].
**Definition 8:** According to the World Intellectual Property Organization (WIPO), the term "assistive technology" refers to a broad range of technologies and goods, from relatively simple gadgets like a walking stick or reading glasses to sophisticated, high-tech systems like assistive robots or software that recognizes gestures or emotions [45].
## III The Significance of Assistive Technology in Higher Education in Ensuring Inclusive Education for Individuals with Learning Difficulties
Based on the systematic literature review, it is understood that the development of assistive tools in the future will rely heavily on contemporary technologies such as AI, XR, IoT, HCI, digital twins, and the metaverse. There are numerous studies that concentrate on lower-end assistive technology for learning difficulties [47]. To the best of our knowledge, there is no review that addresses all of these cutting-edge technologies for assisting students with learning difficulties. In this section, we address the role of AI, XR, IoT, HCI, digital twins, and the metaverse as assistive technologies in inclusive education that assist students in overcoming their learning difficulties, and Fig. 4 depicts the assistive technologies covered in this study.
### _Artificial Intelligence_
Artificial intelligence (AI) represents significant advances in computer science and data processing that are rapidly revolutionizing numerous sectors [48]. AI refers to the simulation of human intelligence in machines that are programmed to replicate human thought and behavior [49]. The use of AI makes inclusive education a reality. Students with learning difficulties can be integrated into regular classrooms and educated alongside their peers [50]. AI is advancing rapidly in the education sector and reducing the gap between students and teachers. Learning-difficulty-specific cognitive systems are also being developed by researchers using AI [51]. Google and Microsoft are developing AI-based tools, such as Immersive Reader and Google Docs, that can assist students with learning difficulties. The early identification of students with learning difficulties can be facilitated by AI [52]. AI can assist students in enhancing their reading comprehension by reading text aloud and can also provide comprehensive feedback on written work. AI enhances reading fluency for students with learning difficulties. Students with autism struggle with both verbal and nonverbal communication, and social skill development can be difficult for them. To address this problem, QTrobot was created: a humanoid robot developed to teach social skills to autistic children. The robot NAO and the virtual assistant Siri are two more examples that help students with autism spectrum disorder (ASD) learn social skills. Tools like ActiveMath employ AI techniques to allow students greater flexibility in finding a convenient learning environment. Widex's AI-enabled EVOKE hearing aid helps students improve their hearing capabilities. Grammarly is an AI-based writing assistant that improves the writing skills of students with learning difficulties.
AI can help in the early detection of learning difficulties. This can help students understand their condition and prepare for future challenges. A. Jothi Prabha et al. created an eye movement analysis model for the detection of dyslexia. They used an eye tracker to observe eye movement. The eye movement data includes fixations, saccades, transients, and
distortions of the participants. Principal component analysis was used to identify high-level properties from the raw eye-tracker data. For the diagnosis of dyslexia in students, a PSO-based hybrid-kernel SVM (SVM-PSO) was created. Their results showed that the proposed model achieved a prediction accuracy of 95%, which was higher than that of the linear SVM model. The model was validated on 187 individuals. They concluded that, with the use of eye movement data and machine learning, the development of very precise prediction models was possible. Their method was also offered as a screening tool for dyslexia diagnosis [53].
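To make this kind of pipeline concrete, the following minimal sketch (our own illustration, not the code of [53]) reduces a hypothetical gaze-feature matrix with PCA and classifies it with an SVM; the PSO-based kernel tuning of the original work is approximated here with a plain grid search, and the feature matrix and labels are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Hypothetical data: one row per participant, columns are gaze summary
# statistics (fixation counts, saccade amplitudes, durations, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(187, 12))          # 187 participants, 12 gaze features
y = rng.integers(0, 2, size=187)        # 1 = dyslexia, 0 = control (placeholders)

pipeline = Pipeline([
    ("pca", PCA(n_components=5)),        # compress the raw gaze features
    ("svm", SVC(kernel="rbf")),          # stand-in for the PSO-tuned hybrid kernel
])
search = GridSearchCV(
    pipeline,
    {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.1, 0.01]},
    cv=5,
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
search.fit(X_tr, y_tr)
print("held-out accuracy:", search.score(X_te, y_te))
```

With real gaze features rather than random placeholders, the held-out accuracy would indicate how well such a screening model separates the two groups.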
Students with learning difficulties have more emotional and behavioral challenges in the classroom than their peers without learning problems. To understand this issue, Nihal Ouherrou et al. conducted research on the benefits of using information and communication technology to understand the emotional factors of students with learning difficulties in online classrooms. In order to analyse the effects of the virtual learning environment, 42 students were divided into two groups. They considered seven basic facial expressions (fear, anger, disgust, surprise, happiness, neutral, and sorrow) and assessed students' emotions using AI while they played a learning game. The results indicated that students with learning difficulties experience the same range of emotions as children without learning difficulties. Furthermore, they concluded that students with learning difficulties report fewer negative emotions compared to their peers in virtual learning environments [54].
Tools and intelligent learning environments based on AI can be used to create successful individualised educational techniques for children with learning difficulties.
Students with special needs are not as likely to use e-learning as students without special needs. Not much is known about the barriers and facilitators that cause this difference. The opinions of 21 teachers who took part in preliminary trials of an adaptive learning system based on multimodal affect recognition for students with learning disabilities and autism were gathered by Penny J. Standen et al. through focus groups and interviews. The adaptive selection of learning materials is driven by the system's multimodal detection of emotional state and scoring of performance. The teachers' thoughts on the possible effects of the system were summed up in five themes. These themes focused on learning, engagement, and factors that might affect adoption. These were how the system could change the way they taught, how it could affect how well students learned, how it could affect the relationships between teachers and students and between students, how easy it was to use, and how it could be set up. Even though the teachers who volunteered as testers were very interested, they pointed out barriers to adoption that needed to be addressed. Their findings showed how important it is for teachers and students to be involved in the process of design and development [55].
Phaedra S. analysed the historical development of intelligent learning environments. In addition, they reported the significant challenges that arise with the inclusion of intelligent learning environments. Furthermore, they examined a variety of novel strategies for addressing these issues, such as teacher training, the employment of instructional robots, and responsive systems. They concluded that AI in education is fast altering traditional views on teaching and learning. They also stated that traditional school and classroom models will change a lot over the next few years and decades as technology improves and spreads throughout educational institutions [56].
Using multimodal sensor data and machine learning, Penelope J. Standen et al. found that learning is linked to three emotional states: engagement, frustration, and boredom. Then, they figured out how to present the learning content so that the learner stays in the best emotional state and learns as quickly as possible. 67 people between the ages of 6 and 18, who were their own controls, took part in a
Figure 4: Assistive Technologies for Students with Learning Difficulties in Higher Education
series of sessions using the adaptive learning system so that it could be evaluated. Sessions alternated between using the system to choose the learning content based on how the learner felt and how well they learned (intervention) and just on how well they learned (control). Lack of boredom was the state with the strongest link to achievement, with both frustration and engagement positively related to achievement. They concluded that the intervention sessions were much more interesting and less boring than the control sessions, but the amount of work done did not change much. Their results suggest that activities that match the needs and emotions of the learner do increase engagement and that the system promotes emotional states that help learning. They suggested that longer exposure is also needed to figure out what effect it has on learning [57].
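The adaptation loop described above can be illustrated with a deliberately simple rule-based sketch. This is a toy policy of our own, not the system Standen et al. evaluated, and the thresholds and difficulty scale are assumptions; it only shows how estimated engagement, frustration, and boredom could drive the choice of the next activity.

```python
def next_difficulty(current: int, engagement: float, frustration: float, boredom: float) -> int:
    """Pick the next difficulty level (1-5) from normalised affect scores in [0, 1]."""
    if frustration > 0.6:          # learner is struggling: ease off
        return max(1, current - 1)
    if boredom > 0.6:              # learner is bored: raise the challenge
        return min(5, current + 1)
    return current                 # engaged and comfortable: keep the level

level = 3
# Hypothetical per-session affect estimates (engagement, frustration, boredom).
for affect in [(0.8, 0.1, 0.1), (0.3, 0.7, 0.2), (0.4, 0.1, 0.8)]:
    level = next_difficulty(level, *affect)
    print("next difficulty level:", level)
```

A deployed system would replace these hand-set thresholds with values learned from the multimodal sensor data, but the control structure is the same: detect the emotional state, then adjust the content to keep the learner engaged.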
In summary, AI has the ability to transform education by creating cutting-edge teaching tools for students with special needs. In order to provide more intelligent solutions, the industry is expanding, demanding more study and cooperation between educators, app developers, and engineers. When AI is used as an assistant in decision-making, it is challenging to explain the decisions it makes because of its black-box nature. Since students' personal data may be required to train these AI models, protecting the privacy of student data is also a concern.
### _Extended Reality_
The combination of virtual environments, human-machine interactions, and wearable technologies is collectively referred to as extended reality (XR). The term "XR" encompasses virtual reality (VR), augmented reality (AR), and mixed reality (MR). Through the use of VR, users may interact with objects and other people in a manner that seems authentic. To add digital elements to a live, real-world situation, AR uses a smartphone, tablet, or headset. MR brings together the real and virtual worlds by using powerful computer technology, images, and input methods [58].
VR and AR can assist students with learning difficulties in many ways in their academic endeavors [59]. The students with learning difficulties are immersed in a 3D world like CoSpaces Edu containing auditory, touch, smell, and gustatory inputs. Students can interact by wearing a head-mounted display (HMD) and a haptic glove or by using a regular desktop PC and VR software [60]. By adding 3D effects to actual information, AR applications such as Google Lens allow users to remain unbiased observers and recognise the augmented effects. A VR environment such as Google Expeditions will allow students with learning difficulties to participate in activities that are unrestricted by their impairment and allows them to study in the most effective way possible. VR can also help promote positive views toward people with learning difficulties among their peers [61]. Individualised VR settings provide autistic children with the opportunity to learn social interaction and nonverbal cues [62]. Virtual environments or input stimulation are adaptable to student preferences [63]. VR and AR can promote motivation, facilitate engagement, strengthen cognitive skills, and improve memory in students with learning difficulties. They can also improve communication skills, especially among students with hearing problems [64]. VR can help autistic students with social interaction [65]. AR can improve language through the use of games provided by applications like Assemblr [66]. Narrator AR applications can inspire students with learning difficulties to improve their handwriting.
Students with learning difficulties have physical, mental, and communicative limitations [67]. Teaching individuals with learning difficulties requires a specific blend of methods and tools. VR is one of these technologies and can serve as an effective learning aid for students with learning difficulties. Arik Kurniawati et al. offered these students a VR game. The game motivates students to acquire items in a VR-based educational environment and helps them practise identifying, selecting, and pointing at things in response to visual and audio stimulation. Students with learning difficulties and autism participated in their study. The results indicated that the game is simple to use and accessible for students with learning difficulties. They determined that all participants understood the instructions with little assistance [68].
ADHD, sometimes known as hyperactivity among students, is a prevalent neurodevelopmental disorder. ADHD is characterised largely by hyperactivity, inattention, and behavioural impulsivity. Traditional treatments often rely on clinicians and parents who may observe and analyse a patient's behaviour using behavioural scales; however, these treatments are time-consuming and ineffective at measuring behaviour [69]. In their experiment, Yunchuan Tan et al. combined several sensor technologies, such as eye movement sensors and electroencephalography (EEG) sensors, and used virtual reality technology to develop an assessment and diagnostic approach for ADHD. This system provided a virtual classroom environment and included a number of activities, including an audio exam. Continuous performance tasks and the Wisconsin card sorting test were used to evaluate the students' attention, ability to think abstractly, and cognitive capability. They introduced distracting elements into their experiment and analysed the test participants' attention. In order to assess the subject's sustained attention and attention shift, they combined their test results with physiological data such as head and eye movements and EEG [70]. They concluded that their approach can improve the levels of concentration in students with learning difficulties.
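The attention measures such a virtual classroom computes from a continuous performance task are straightforward to derive. The sketch below is a hypothetical illustration of our own, not the cited system: it scores a miniature task from per-trial records of whether the stimulus was a target, whether the student responded, and the reaction time.

```python
from statistics import mean

# Hypothetical trial log: (is_target, responded, reaction_time_seconds or None).
trials = [
    (True, True, 0.42), (True, True, 0.51), (False, False, None),
    (True, False, None), (False, True, 0.33), (True, True, 0.47),
]

hits = sum(1 for target, resp, _ in trials if target and resp)
misses = sum(1 for target, resp, _ in trials if target and not resp)
false_alarms = sum(1 for target, resp, _ in trials if not target and resp)
reaction_times = [rt for target, resp, rt in trials if target and resp and rt is not None]

print(f"hit rate: {hits / (hits + misses):.2f}")            # sustained attention
print(f"false alarms: {false_alarms}")                       # impulsivity
print(f"mean reaction time: {mean(reaction_times):.2f} s")   # processing speed
```

In a full assessment these behavioural scores would then be interpreted alongside the physiological signals (head and eye movements, EEG) mentioned above.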
Jorge Fernandez Herrero et al. proposed a concept and application for an immersive VR system with a head-mounted display to develop and teach the emotional and social abilities of students with autism spectrum disorders. They chose two groups of seven high-functioning ASD students with comparable intellectual aptitudes. With the first group, they used immersive virtual reality as a pedagogical tool to recreate virtual socialising contexts over the course of 10 sessions, using their own intervention design to address social and emotional competencies. The second group served as the control group and was not subjected to any intervention throughout the intervention period. The degrees of adaptation and the observed improvements suggested that immersive virtual reality in the format described is consistent with the sensory preferences and visuospatial abilities of the ASD children who participated in this research. They concluded that immersive VR can be utilised effectively as a teaching tool for children with ASD [71].
In summary, XR can be used to successfully improve the skills and abilities of people with dyslexia, social anxiety, ADHD, linguistic impairments, physical or motor disabilities, Down syndrome, and cognitive deficits. The compatibility of these XR devices with other assistive technologies and devices remains a challenge due to their major design flaws and lack of standards.
### _Human-computer interaction_
Human-computer interaction (HCI) is a field of study that looks at the design of computer technology and, in particular, the way users interact with computers. In HCI, cognitive science principles and methods allow software engineering and the human aspects of computing systems to work well together. HCI encompasses interaction via gesture, touch, and even brain signals [72].
Universal design for HCI promotes inclusive education by producing accessible products for all students, including those with learning difficulties. The products accommodate individual preferences and abilities and efficiently transmit essential information independent of environmental conditions or the user's sensory capabilities. They are manipulable, reachable, approachable, and usable regardless of body size, posture, or mobility [73]. These design principles result in products that are compatible with assistive technology and are more usable for everyone [74]. Students with learning difficulties struggle with conventional forms of expression, such as writing, but can demonstrate their comprehension in a number of innovative HCI ways by using video or screen-casting tools, including Clips, iMovie, Audacity, and others. HCI provides a readily accessible collaborative environment in which students may produce and share project-based and other related materials using online storage tools such as Dropbox. HCI also helps in the creation of student-response tools, which improve student-teacher interaction. OneNote Web Clipper is a highly adjustable immersive reader that has options for changing the text size and line spacing, showing the parts of speech, and more. A student with a visual difficulty may use VoiceOver to have what is on the screen described using a synthetic voice or braille (with a linked braille display). Students may sketch their thoughts with the aid of HCI visual tools like Draw.io and Popplet. These HCI tools with universal design will support inclusive education and assist students in overcoming their learning challenges.
The intelligent math e-tutoring system developed by Zikai Alex Wen et al. attempts to eliminate students' negative emotional responses. The technology identifies potential negative emotional actions by evaluating gaze, touchscreen inputs, and reaction time. The program then employs intervention strategies to prevent students from becoming irritated. Formative research carried out with five instructors of students with learning difficulties helped to develop this design. Teachers believed that the establishment of these intervention strategies would benefit students with learning difficulties and stated that among the intervention strategies available to them, giving students 'brain breaks' is the newest and most beneficial. Additionally, the instructors proposed that the system could be customised to identify negative emotional responses, in order to assist students with learning difficulties [75].
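As a rough illustration of how the three signals mentioned above could be combined, the toy heuristic below flags a likely negative emotional response. The thresholds and field names are our own assumptions, not the rules used by the cited e-tutor.

```python
from dataclasses import dataclass

@dataclass
class InteractionSample:
    off_task_gaze_ratio: float   # fraction of the last minute spent looking away
    taps_per_minute: float       # very rapid, repeated tapping can signal frustration
    reaction_time_s: float       # seconds taken to answer the current step

def looks_frustrated(sample: InteractionSample) -> bool:
    """Rule-of-thumb thresholds; a deployed system would learn or calibrate these."""
    return (sample.off_task_gaze_ratio > 0.5
            or sample.taps_per_minute > 60
            or sample.reaction_time_s > 30)

if looks_frustrated(InteractionSample(off_task_gaze_ratio=0.7, taps_per_minute=20, reaction_time_s=12)):
    print("Suggest a short 'brain break' before the next problem.")
```

The point of the sketch is the pattern, detect early signs of irritation from low-cost behavioural signals and intervene before the student disengages, rather than the particular cut-off values.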
Brain-Computer Interface (BCI) technology is an important aspect of HCI. Bio-signals acquired by wearable sensors are attracting significant interest beyond the traditional medical arena, in new paradigms such as education [76]. Attention is a bio-signal that may be detected and analysed using BCI technology by measuring activity in the alpha (8-13 Hz) and beta (14-30 Hz) frequency bands. Attention and learning are highly interdependent. Typically, students with attentional difficulties also have learning difficulties. According to several instructors and professional experts, students' attention spans are decreasing [77]. To address this issue, Mohammed Serhiri et al. evaluated students' attention in online education. During the learning process, students' attention is monitored by an EEG-based attention evaluation system. Attention data is maintained in a database and used by signal-processing algorithms to understand student knowledge growth. They concluded that BCI can be used to enhance learning levels in computer-based education [78].
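The band-power computation behind such an attention monitor can be sketched as follows. This is our own minimal example on a synthetic signal, and the beta/alpha ratio used here is just one common attention proxy rather than the index of the cited system.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                  # assumed EEG sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic single-channel EEG: a 10 Hz (alpha) plus a weaker 20 Hz (beta) component.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density over a frequency band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

alpha = band_power(freqs, psd, 8, 13)    # alpha band, 8-13 Hz
beta = band_power(freqs, psd, 14, 30)    # beta band, 14-30 Hz
print("attention proxy (beta/alpha):", beta / alpha)
```

Logging such a proxy over time is what allows an attention database to be built and later analysed against learning outcomes.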
The use of manual sign systems is a means of communication between students with learning difficulties and their teachers. Due to a lack of learning support resources, instructors experience several practical obstacles while instructing children in manual sign language. To address these concerns, Youjin Choi et al. teamed up with instructors to design the Sondam Rhythm Game, a gesture-based rhythm game that aids the instruction of manual sign language. They conducted a four-week study with five teachers and eight students with learning difficulties. Based on video annotation and interviews, their game-based method to teach manual sign language has shown significant results. Their method increased children's attention span and motivation, as well as the number of spontaneous motions performed without prompting. In order to enhance teaching paradigms for eight students with learning difficulties, additional practical concerns and learning obstacles were identified. Based on their outcomes, they concluded that their proposed model method could be used to help learners improve their sign language abilities [79].
In summary, by enabling students to engage with technology using gestures, touch, and even brain signals, HCI can assist students in overcoming their learning challenges [80]. Personalised user interfaces and design issues remain open challenges in the HCI domain.
### _Internet of Things_
The Internet of Things (IoT) is a system of interconnected computing devices, mechanical and digital machines, objects, or people that have unique identities and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. Both computers and humans can use IoT devices, and IoT devices can transfer data over a network [81]. The IoT is considered by some to be a transformative force in education. The application of digital technologies is not just making education omnipresent; it is also making conventional systems of education more efficient and inclusive. IoT plays an important role in making education more interactive, collaborative, and accessible to all. IoT devices afford students reliable access to all learning resources and communication channels, and help teachers keep track of student learning and progress in real time [82]. Smart boards may be used in a similar manner to a blackboard for writing with a marker and can also show topic-related visuals and images to students. School buses equipped with Global Positioning System (GPS) trackers, smart security cameras, and cellphones and tablets with instructional applications will alter how schools and educational institutions have traditionally functioned. Teachers are concerned about student attendance, which is a daily requirement in schools; IoT can provide a solution for tracking and analysing student attendance for several purposes. Students are more engaged in virtual classrooms facilitated by smartphone applications. When they are able to comprehend more and more clearly, as described above, they are also able to think outside the confines of the classroom and to communicate and express their learning and questions [83]. The integration of IoT tools and smart devices enables the educational curriculum to be adapted and classroom surroundings to be made sound- and light-sensitive to accommodate children with sensory disorders. In academia, IoT sensors collect data and automatically propose academic topics of interest to students for future learning processes [84]. These qualities make education more practical and accessible for students, instructors, and parents. It is well known that teaching methods and techniques cannot be transformed overnight, but these devices are being customised and updated with the necessary software gradually over time.
According to statistics, ASD has resulted in serious learning disorders. To address this issue, Raja et al. used a Raspberry Pi to construct a system for evaluating the effectiveness of a smart monitor in assisting students with autism to learn and enhancing their quality of life. They suggested a framework to support students with autism by assisting them in making choices, informing parents about what the students are interested in, and identifying their needs. They implemented and evaluated a novel IoT-based system for supporting learning and enhancing the quality of life of students with autism. They claimed that their system could assist students with autism in understanding any subject [85].
Anna Lekova et al. designed, produced, and experimentally validated a Speech and Language Therapy (SLT) system for students with communication impairments. Their approach is able to interact with the IoT in order to assist SLT services in many educational and social contexts. It can link various assistive devices, APIs, online services, and agents to meet the particular requirements of each student using the intervention. Node-RED is used to link a humanoid NAO-type robot, an Emotiv EPOC+ brain headset, an emotionally expressive EmoSan robot, and a Kinect depth sensor. It is a flow-based tool for visual programming without the need to write code, and it can operate locally or on the IoT. The proposed system is sufficiently broad to be adapted to various kinds of therapy and to enable additional assistive devices and cloud services [86].
Permanent or temporary vision impairments present a number of obstacles in the daily life of a student with learning difficulties. A student with a vision impairment may be unable to distinguish between colours, which is a crucial aspect of work in various sectors. Humayun Rashid et al. developed a colour-detecting system for the visually impaired. They addressed two extremely crucial difficulties for visually impaired individuals: overcoming obstacles and falling. The proposed system combines recent hardware components, including an improved central processing unit and sensors, with an IoT and cloud-based architecture that effectively detects colour and obstacles. Moreover, it also alerts visually impaired individuals about colours and obstacles in multiple languages. In the event of fall detection, the proposed system also transmits a fall notice to the caretaker of the visually impaired individual, which is a major component of this work.
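A core building block of such a colour-announcing aid is naming the dominant colour in a camera frame. The sketch below is our own OpenCV illustration with rough hue bins, not the cited system, and the bin boundaries are assumptions chosen for readability.

```python
import cv2
import numpy as np

# Rough, illustrative hue bins (OpenCV hue range is 0-179).
HUE_BINS = [("red", 0, 10), ("yellow", 20, 35), ("green", 36, 85),
            ("blue", 86, 125), ("red", 160, 179)]

def dominant_colour(bgr_frame: np.ndarray) -> str:
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0][hsv[:, :, 1] > 60]     # ignore unsaturated (grey/white) pixels
    if hue.size == 0:
        return "grey/white"
    counts: dict = {}
    for name, lo, hi in HUE_BINS:
        counts[name] = counts.get(name, 0) + int(np.count_nonzero((hue >= lo) & (hue <= hi)))
    return max(counts, key=counts.get)

frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[:, :, 0] = 255                               # a pure-blue test image (BGR order)
print("detected colour:", dominant_colour(frame))  # expected output: "blue"
```

A complete aid would layer text-to-speech announcements and obstacle or fall sensors on top of this kind of routine, as the cited system does.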
In summary, IoT can transform conventional learning methods and help students overcome their learning difficulties [87]. Security, standards, and dependence on AI judgements are challenges associated with the use of IoT in providing support for students with specific learning difficulties.
### _Digital Twins_
The idea of using "digital twins" comes from NASA's Apollo program, in which at least two identical space vehicles were built. This allowed engineers to replicate the conditions of the spacecraft during the trip, and the vehicle that stayed on Earth was called the twin [88]. In 2002, Michael Grieves was the first person to describe a Digital Twin (DT) [89]. Previous research on DT definitions has shown that each system is made up of two parts: the physical system and a virtual system that includes all of the physical system's knowledge. Siemens describes it as follows: "A digital twin is a digital copy of a real product or process that is used to study and predict how it will work in the real world." Throughout the life-cycle of a product, digital twins are used to predict, simulate, and improve the product and manufacturing system before investing in real prototypes and assets [90].
A DT can help teachers to understand their students better and reduce the problems of integrating students with learning difficulties into inclusive higher education. Students with learning difficulties may find it difficult if they are repeatedly challenged to modify their classroom behaviour. The teachers may not have a good understanding of these students' behaviour. If a DT is built for a student, it enables teachers to carry out behavioural analysis on the DT to gain student-related insights and helps teachers to best support the academic and behavioural outcomes of their students. DTs also enable students with learning difficulties to work on digitally depicted scenarios and prepare them for the challenges of the real world. The combination of DT and VR will assist students in overcoming the challenges they face as a result of their learning difficulties.
In conclusion, DT offers significant potential to address educational and behavioural issues relevant to students with learning difficulties. The application of DTs as an assistive technology remains largely unexplored territory. A major challenge for the widespread application of DTs is the inaccurate representation of the twin, since the construction of the twin relies on many different technologies, including AI, IoT, among others, and any errors in those interconnected technologies will result in the definition of an incorrect twin.
### _The Metaverse_
The word "meta" is a Greek word that signifies "more complete" or "transcending," and "verse" is an abbreviated form of "universe." The idea of the metaverse was first introduced by Neal Stephenson in his famous 1992 science fiction novel 'Snow Crash', in which people use digital avatars to control and compete with one another in order to advance their position. The use of VR and AR equipment helps the metaverse become more widespread [91]. The metaverse is often described as a collection of socially conscious 3D virtual worlds [92].
The metaverse is a near-term technology offering significant opportunities for affording inclusive higher education. In the metaverse, students and teachers can meet in the digital world using their virtual reality headsets regardless of their actual location and learning difficulties, promoting inclusive higher education. This capability can improve the teaching of individuals with learning difficulties. Inclusive design of the metaverse is, however, crucial to long-term adoption in a number of domains, including education [93]. The metaverse with XR has endless possibilities, with a potential influence on higher education that is especially significant for students with learning difficulties [94]. An inclusive metaverse-based school would allow teachers to not only speak about their discoveries but also demonstrate them in a 3D environment. Together with their classmates, students with learning difficulties can engage in serious questioning and use firsthand experiences to assist in their academic development. No longer will students with learning difficulties be forced to sit in a traditional classroom with nothing to do. They can instead learn alongside their peers due to the educational and social opportunities afforded by the metaverse [95].
The use of the metaverse in inclusive education offers several potential advantages over traditional models, allowing students with learning difficulties to experience historical sites or conduct dangerous experiments in a secure, virtual setting alongside their classmates [96]. Moreover, metaverse learning environments can encourage safety in a manner that traditional classrooms cannot. Educators will have total control over student interactions in the metaverse and will be able to prohibit bullying. Thus, students with learning difficulties may concentrate on their education without worrying about bullies or other disruptions [97]. Roblox, Minecraft, Decentraland, Sandbox, Axie Infinity, and Fortnite are some of the available metaverse projects [98].
In conclusion, the metaverse offers significant potential for supporting students with learning challenges since it is a hybrid of cutting-edge and established technologies. The standardisation and interoperability of multiple technologies enable the metaverse to present a significant opportunity as an assistive technology for students with learning difficulties.
## IV Tools
In this section, we discuss a variety of tools for assisting students with various learning difficulties. An overview of these tools is shown in Table III.
### _Kurzweil 3000_
It is educational software designed to assist children with reading difficulties at home, at school, or in the workplace. The software includes the OpenDyslexic typeface and text magnification to improve the readability of text for dyslexic students. Its 31 Natural Text-to-Speech voices are accessible in 18 dialects and languages, allowing students to access the same materials as their classmates. For a more meaningful learning experience, a test-preparation toolbar is accessible to students who wish to build their own evaluation [99].
### _QTrobot_
QTrobot is a small, expressive AI-enabled humanoid developed for use by therapists and teachers. Children with autism spectrum disorder are taught communication, emotions, and social skills through the use of facial expressions, gestures, and games [100].
### _ActiveMath_
ActiveMath is a web-based AI-learning system that develops interactive (mathematics) courses depending on the student's goals, preferences, skills, and prior knowledge. The material is delivered in an XML-based, semantic manner. The required information is acquired from a knowledge base for each student, and the course is constructed based on educational guidelines. The student is then provided with the course using a conventional web browser. ActiveMath is distinguished by its incorporation of independent mathematical service systems. This facilitates exploratory learning, realistically complicated activities, and the acquisition of proof methods [101].
### _Widex's EVOKE_
Widex's EVOKE is the first hearing aid in the world to incorporate machine learning. Every day, it improves the audio experience of every user. In addition, it learns from the listening preferences of users throughout the world, bringing the development toward improved hearing out of the laboratory and into the real world [102].
### _Microsoft Immersive Reader_
Microsoft's free AI tool Immersive Reader enhances reading and writing for people of all ages and abilities. It is integrated into Word, OneNote, Outlook, Office Lens, Microsoft Teams, Reading Progress, Forms, Flipgrid, Minecraft Education Edition, and the Edge web browser [103].
### _Grammarly_
Grammarly is a typing assistant that uses AI and the cloud to check for problems in spelling, grammar, punctuation, clarity, engagement, and delivery. AI is used to recognise faults and look for a suitable replacement. Additionally, it gives users the option to customise their language, tone, and context [104].
### _Google Glass_
Google Glass will provide basic voice or vision commands for online and Internet engagement. Google Glass can easily power mobile devices like smartphones or tablets and may be thought of as a wearable computer in this fundamental sense. The Google Glass camera can serve as a basic identification tool for blind and visually impaired people. This viewpoint makes it possible to describe Google Glass as an assistive technology [105].
### _Google Expeditions_
Google Expeditions is essentially a collection of Augmented and Virtual Reality experiences and "field excursions" offered by Google. Some of the 'expeditions' are more supported than others with lesson plans, supporting links, and background information [106].
### _Merge Cube_
The Merge Cube makes it possible to study and interact with the digital world in a whole new manner by letting you manipulate 3D digital things. In addition to many other things, students may inspect a DNA molecule, research the Earth's core, dissect a virtual frog, hold and share their own 3D creations, and explore a galaxy in the palm of their hand [107].
### _Cospaces Edu_
Users of CoSpaces, a web-based XR tool, may create and engage with interactive media content. CoSpaces allows students to demonstrate their knowledge in fresh ways by building interactive virtual settings that might be simple or complex but are still user-friendly for beginners [108].
### _Assemblr_
Assemblr will enable educators to construct 3D objects and scenarios for classroom usage. Students will have a better learning experience while utilising the Assemblr application to engage in AR, and VR [109].
### _Narrator Ar_
An augmented reality software, Narrator AR helps students improve their handwriting. When a child's handwritten name is scanned, the application superimposes an animation showing the name blasting off the paper in the form of a rocket or a rainbow unicorn trail. Using the Narrator AR Mobile application, students can make a connection with their writing as virtual letters are lifted from the paper [110].
TABLE III: An Overview of Tools for Assisting Students with Learning Difficulties
### Augmented classroom
The purpose of the CleverBooks Augmented Classroom (ARC) is to let teachers provide their students with access to engaging, interactive courses by delivering content in fully immersive, 3D augmented settings. When used in the classroom, ARC's 3D environment has been shown to boost student engagement and motivation, leading to better academic outcomes [111].
### Roblox
Roblox Studio is a free, family-friendly resource for teaching students about computer programming, animation, 3D design, and application development. Using Roblox Studio in the classroom boosts students' self-esteem by giving them a real-world platform on which to practice the scientific inquiry skills outlined in the Next Generation Science Standards (NGSS). Students can travel across time and space in Roblox adventures that are set up for exploration, investigation, and experimentation. Students may experience and analyse scientific events, go back in time to ancient Rome, and even construct virtual robots to compete against one another in friendly or hostile team challenges [112].
### Minecraft
The educational version of Minecraft is a digital world designed to foster innovation, teamwork, and problem-solving through gaming. Teachers worldwide can use Minecraft education edition to capture their students' interest in a wide range of disciplines and make abstract concepts more concrete [113].
### VoiceOver
The iOS operating system has a built-in screen reader called VoiceOver. It can describe what's on the screen in synthetic voice or braille through a linked braille display for the visually impaired [114].
### Voice Dream Reader
Voice Dream Reader is a fully functional document manager that enhances the built-in text-to-speech functionality with additional personalisation options. These options include masking to display only a small portion of the text, support for dyslexia-friendly fonts, fully customisable colours for word and sentence highlighting, and more. Documents may be imported from a number of sources, such as Google Drive, Dropbox, and Bookshare, a programme that provides students with qualifying reading difficulties with free access to books in accessible formats [115].
### TouchCast Studio
TouchCast Studio is a free iPad application that provides students with all the tools they need to create interactive films with hyperlinked hotspots. Students can utilise a variety of video applications to link to supporting research on the Internet, ask questions through polls, link to an accessible script of their film, and more. The application has many advanced features, like a built-in teleprompter and green-screen features that let students work from different places. It also works with multiple iPhone cameras [116].
### Book Creator
The Book Creator provides a blank canvas on which students may generate an ebook to demonstrate their comprehension and incorporate all of their media. Each book may have text, photographs with descriptive captions, audio, and video. The free, fully-functional version of Book Creator for iPad may be used to produce one book. Upgrade to the premium version to have access to limitless publishing and comic book templates. There is also a Chromebook-compatible web-based version of Book Creator; with this edition, users may generate up to 40 free booklets [117].
## V Projects
In this section, we discuss some related projects that work towards students with learning difficulties and inclusive education. An overview of these projects is shown in Table 4.
### Inced
This project is managed by the private organisation View tool MTU and funded by Erasmus+, the European Union's education, training, youth, and sports programme. This project focuses on inclusion, fostering equality and non-discrimination, and using modern teaching-learning techniques. The project's objectives are to better understand concepts of mixed-ability and inclusive collaboration and to highlight it as a possibility rather than a challenge, to adapt current systems and build innovative educational games and methodologies for collaborating in mixed-ability groups that can be used in non-formal and formal learning contexts for students aged 13 to 18 years old, to promote inclusive education among various types of stakeholders, and to improve the youth workforce [118].
### Fairness Inclusive
This project is handled by the Kreisau Initiative association and funded by Erasmus+, the European Union's programme for education, training, youth, and sport. This project focuses on disability and special needs, equality and access for the underprivileged. One of the primary objectives of the initiative is to engage disadvantaged youth in international educational activities. This engagement helps the emancipation of young people so that they may lead more independent lives and treat others with tolerance, solidarity, and respect in the future. The underlying belief of this objective is that it increases their social inclusion and minimises their marginalisation [119].
### _Inclusion Team_
This project is managed by Fyllingsdale High School in Norway. The partners include universities, secondary schools, a public teacher training centre, and educationally focused non-governmental organisations. Erasmus+, the European Union's programme for education, training, youth, and sport, is funding this project. This project focuses on new technologies, digital skills, access for the disadvantaged, and particular requirements for those with impairments. The project aims to establish a learning community in which universities, schools, teacher training centres, and non-governmental organisations share best practices for teaching ICT to students and individuals with special needs. The collaborating parties intend to equip educators with training and expand access to high-quality learning tools, and materials [120].
### _Ellen_
This project is managed by Goethe University Frankfurt and funded by Erasmus+, the European Union's education, training, youth, and sport programme. In this project, a teacher will teach students how to research the needs of a certain group of learners by interviewing those learners. In other words, neurodivergent learners will be treated as experts in their own learning process. By helping pre-service teachers build up their inquiry-based learning skills and competencies, this initiative also shows how, through partnership with neurodiverse learners, the strengths and needs of the target population can be identified, evaluated, and met in learning environments [121].
### _Accessible Peer Interaction with Disabled Youth_
The National Association of Professionals Working with People with Disabilities manages this project (NARHU, Bulgaria). This initiative is supported by Erasmus+, the education, training, youth, and sports programme of the European Union. This project targets youth workers and leaders, student leaders, student bodies, youth organisation leaders, and representatives of disabled youth organisations. The work aims to assist youth workers in developing and disseminating effective strategies for reaching out to marginalised youth, refugees, asylum seekers, and migrants, as well as combating racism and intolerance among young people [122].
## VI Challenges and Future Directions
This section discusses challenges and future directions of assistive technologies for students with learning difficulties in higher education. An overview of these challenges and future directions is depicted in Fig 5.
### _Personalization of Assistive Technologies_
**Challenge:** It is not possible for a single assistive technology to represent a universal solution to the learning challenges faced by students with all types of learning difficulties. The need for these assistive technologies varies with the needs and preferences of each student with learning difficulties. Some students may or may not need the full functionality of a particular assistive technology, while others may require a combination of the functionality of many assistive technologies. Personalisation of these assistive technologies is very important for inclusive education, but it still remains a challenge that must be addressed.
**Potential Solution:** Students with learning difficulties may customise their assistive technology to meet their needs in a virtual world by using automated 3D modelling and printing. The organisations must also permit mass customisation of educational aids that help students overcome their learning difficulties [123].
### _Design Issues_
**Challenge:** A variety of computer interfaces and technologies will be used for academic purposes by students without learning difficulties. While individuals with learning difficulties need a personalised environment, this may or may not be available in their classroom environments. If the instructor does not provide such assistive technology to the students, the inclusive education concept would be compromised. The challenge with HCI still exists, and it is not feasible for students with learning difficulties to use the same computer interfaces and technology as effectively as their classmates.
**Potential Solution:** The usage of personas in designing an HCI takes into account a variety of characters with and without learning difficulties, which aids in the better design of services and products that can be utilised by everyone regardless of any difficulties [124].
### _Interoperability Problem Among Assistive Technologies_
**Challenge:** The devices and technologies used in the creation of various assistive technologies may not be developed by a single organisation. There are currently no interoperability standards or regulations in place for assistive technology like the metaverse. As a result, these assistive technologies will raise integration and interoperability challenges. This could
result in the student losing focus in an inclusive educational environment.

TABLE IV: An Overview of Related Projects that Work Towards Students with Learning Difficulties and Inclusive Education
**Potential Solution:** Instead of only providing best practices, there is a need for standards that address these issues relating to the creation of assistive technologies. These guidelines need to specify the industry standards for developing assistive technologies. Furthermore, verification and certifications from the scientific community are required before an assistive technology is delivered to the market. There should be specific guidelines for utilising them while adapting to the real world. This will allow enterprises to build assistive devices more quickly and at a lower cost. Additionally, this will also solve the interoperability problems with different assistive technologies [125].
### _Challenge with Traditional Classroom Environments_
**Challenge:** Traditional classroom environments may have a negative impact on all students to some extent, at some time, and in some way, but students with learning difficulties are more vulnerable. This difficulty stems from their difficulties with speaking, writing, or thinking. Incorporating assistive technology into a conventional classroom environment exacerbates the challenge by requiring students to bring gadgets to class and use them to overcome their difficulties. These students' gadgets are unusual among their classmates, and if any technical difficulties arise, they further exacerbate their problems.
**Potential Solution:** A Virtual Interactive Inclusive Classroom (VIC) can help students with learning difficulties overcome the challenges they face in traditional classroom settings. Students can interact with one another in this class while taking classes from a convenient location. The virtual interactive, inclusive classrooms will use cutting-edge technology such as Explainable Artificial Intelligence (XAI) to help students make decisions, sophisticated Mixed Reality (MR) for in-person interactions, and advanced robotic assistance that can be linked with the brain of a student with a learning difficulties to help them do academic tasks. Thus, VIC can help students overcome their disabilities and compete with their peers [126].
### _Privacy and Justification Issues_
**Challenge:** AI can be combined with other technologies, such as digital twins, IoT, HCI, and the metaverse, to produce better outcomes. Students with learning difficulties must share personal data to receive decision-making assistance from AI models that support other technologies. These AI models cannot provide privacy for data provided by students with learning difficulties, which may create distrust of assistive technologies. The recommendation from AI-based assistive technologies cannot be completely depended on as they are black box in nature [127].
**Potential Solution:** Federated learning can help with the problem of data privacy in assistive technologies. Federated learning addresses data ownership and privacy by guaranteeing that data never leaves dispersed node devices. Simultaneously, the global model is updated and distributed to all network nodes. This ensures the privacy of the students' data [128]. XAI can help make AI-based decisions more justifiable and accountable. XAI is a set of procedures and techniques that allow users to understand and rely on the outcomes. XAI will increase the accountability of recommendations. XAI can explain the anticipated outcomes and any potential biases. It
aids in describing a suggestion's accuracy, fairness, and transparency and improves decision-making. XAI increases trust and confidence in assistive technology suggestions [129].

Figure 5: Challenges and Future Directions of Assistive Technologies for Students with Learning Difficulties in Higher Education
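The federated learning workflow outlined above can be illustrated with a minimal federated-averaging sketch in Python. The linear model, the synthetic client data, and the sample-count weighting below are simplifying assumptions made for illustration; they are not drawn from the cited works.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1, epochs=5):
    """Train a simple linear model on one device; the raw data never leaves the node."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w, len(y)

def federated_average(global_weights, clients):
    """Aggregate only the model weights, weighted by local sample counts."""
    updates = [local_update(global_weights, data) for data in clients]
    sizes = np.array([n for _, n in updates], dtype=float)
    stacked = np.stack([w for w, _ in updates])
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three devices hold private (X, y) pairs; only model weights are exchanged.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w_global = np.zeros(3)
for _ in range(10):
    w_global = federated_average(w_global, clients)
```

Only the aggregated weights are shared in each round, which is what keeps the students' raw data on their own devices.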
## VII Roadmap
Based on the findings of the systematic literature review that we carried out, AI, XR, IoT, HCI, digital twins, and the metaverse are the potential technologies that can assist students with learning difficulties. In the past few decades, AI and HCI have played a significant role in supporting students who struggle with a variety of learning difficulties. As research progresses, AI and HCI as forms of assistive technology will continue to develop. IoT as an assistive technology enhances the voice and vision and also provides real-time data on various challenges faced by students with learning difficulties using sensors. AR and VR, as assistive technologies, will effectively assist students with learning difficulties in engaging in activities and social integration. Even though the IoT, AR, and VR are already being used as assistive technologies in developed countries, it will take a significant amount of time for these technologies to catch on in developing or underdeveloped countries. Assistive technologies like digital twins require a huge number of IoT sensors to work effectively, and these digital twins, which are replicas of real-world objects, can help experts mitigate the challenges of learning difficulties virtually before being applied to the real world. The metaverse as assistive technology, which is enabled by various technologies like blockchain, edge computing, quantum computing, 3D modelling, XR, IoT, 6G, AI, and others, will make students with learning difficulties equivalent to their peers. The digital twins and the metaverse will be the future of assistive technologies that will help students overcome the challenges raised by their learning difficulties.
## VIII Conclusion
This survey aims to understand the potential role of a range of recent developments in technology in providing inclusive education for students with learning disorders in higher education. Throughout this process, we have analysed the learning and support needs that arise as a result of a student's learning difficulties and how these recently developed technologies can support inclusive education for students with learning difficulties. We have searched a range of online digital libraries to locate journal articles and conference papers relevant to our study. According to our examination of the relevant literature, there has been little research on assistive technologies related to XR, IoT, digital twins, and the metaverse. The use of AI and HCI as assistive technologies is more prevalent than other technologies. It is also understood that research on aiding students with learning difficulties in primary education is more widespread than in higher education. The selection of these technologies within this review is also supported by recent reports identifying them as important developments in educational technology for higher education in the near, mid and longer term future. Our review has also highlighted projects related to assistive technology for the inclusive education of students with learning difficulties in higher education. We also proposed strategies that may aid individuals with learning difficulties in higher education. Moreover, we highlighted the challenges of using assistive technology in providing inclusive education for students with learning difficulties in higher education and provided potential solutions. We aim to provide a road map to describe an accessible and inclusive higher education system using important and highly promising technologies to support students with learning difficulties.
# Flow characterisation and power consumption in an inline high shear rotor-stator mixer using CFD

Vipin Michael, Umair Ahmed, Mahmoud Assad, Robert Prosser, Adam Kowalski
###### Abstract
The aim of this paper is two-fold: (1) to provide a detailed investigation of the turbulent flow in an inline high-shear rotor stator mixer; (2) to provide a comparison of two different classes of turbulence models and solution methods currently available. The widely used multiple reference frame (MRF) method is contrasted against a more recently developed sliding mesh method. The sliding mesh algorithm accounts for rotation of the blades and is able to capture the transient effects arising from the rotor-stator interaction. The choice of turbulence model is shown to have a significant impact, with second moment closures able to capture best the hydrodynamics. With an appropriate choice of turbulence model and solution algorithm, we thus demonstrate the capacity of CFD to provide accurate and computationally cost effective characteristic power curve predictions.
keywords: In-line mixers, rotor-stator mixers, Silverson, Power number, Turbulence modelling, CFD
## 1 Introduction
The mixing process plays a significant role in improving the homogeneity and quality of a wide range of products in the fast moving consumer goods industries (i.e. pharmaceutical, biomedical, agricultural, cosmetic, health care and food processing). Inline rotor-stator mixers are widely used in processing due to their high efficiency and their capacity to accelerate the mixing process by providing a focussed delivery of energy [1]. However, the high energy dissipation rates and short residence times within the mixer limits current understanding of the fluid dynamics within these devices and consequently their relationship to overall mixer performance [2].
Rotor-stator mixers consist of high speed rotors surrounded by close fitting stator screens. The typical tip speeds during operation range from \(10-50\)m/s, and the gaps between the rotor and stator range between \(100-3000\mu\)m [3], generating high shear rates in the rotor-stator gap ranging from \(20,000\)s\({}^{-1}-100,000\)s\({}^{-1}\)[2]. The high kinetic energy imparted to the fluid by the rotating blades is mainly dissipated local to the stator screen; the high rate of energy dissipation makes such devices advantageous for physical processes such as mixing, dispersion, dissolution, emulsification and de-agglomeration [4].
The power curve is one of the main tools used to characterise high shear mixers, since power consumption is intimately linked to the overall energy dissipation and thus provides a comparative basis for the mixer performance. The power curve is also useful for scale up calculations [5]. Recently, efforts based on experimental methods have been made to characterise and predict the power consumption of inline Silverson mixers [4; 5; 6; 7; 8]. However, investigations of the detailed flow structures and mixing within these devices are still limited. Baldyga et al. [9; 10] and Jasinska et al. [1; 11] have carried out CFD simulations of an inline Silverson 150/250 MS in-line mixer focussing on estimating the product yield during chemical reaction, distribution of particle aggregates and droplet size distributions. The details of the fluid dynamics within the mixer were limited; simulations were carried out using the standard \(k-\epsilon\) turbulence closure via a multiple reference frame (MRF) model. Qualitative agreement was found between the experimental and simulation results although details of the transient flow (due to periodic passing of the blades in front of the stator cavities) were lost due to the inability of MRF to simulate the rotor rotation. In addition, standard eddy viscosity closures are not sensitive to fluid rotation and streamline curvature, and hence their use limits the predictive capability of CFD simulations in these mixers [12]. In Michael et al [13], Unsteady Reynolds Averaged Navier-Stokes (URANS) simulations on a sliding mesh were performed for the fluid dynamics, linking the \(k-\omega\) SST turbulence model to the population balance equations. Drop dispersion and non-Newtonian rheology of dense emulsions in the mixer were investigated using a combined CFD-PBM approach.
This paper builds upon these earlier CFD investigations by presenting a detailed investigation into the turbulent flow dynamics arising in the inline Silverson 150/250MS mixer. A sliding mesh algorithm is used to capture the interaction between the rotating and stationary volumes within the mixer. Turbulence is modelled using both rotation-curvature compensated eddy viscosity models (EVMs), and second moment closures (Reynolds stress models or RSMs). The latter class of models are able to account for rotation and curvature effects in a systematic manner, due to the presence of exact production terms containing mean flow gradients and system rotation, but they come at a higher computational cost. The ability to predict power consumption, strongly swirling turbulent flow, and mixing, using both EVMs and RSM closures forms the major output of this work.
The paper is organised as follows: in the next section we briefly describe the test configuration investigated. Section 3 outlines the numerical procedure and in section 4 the different turbulence models are described. Results are presented and discussed in section 5, with the conclusions summarised in the last section.
## 2 Test configuration
The Silverson double screen 150/250MS in-line mixer has been experimentally studied in several works [4; 5; 6; 7; 8], measuring the power consumption and mixer performance at different operating speeds. The mixer has two rotors which rotate together within closely fitted stator screens. The rotors and stator screens of the mixer are shown in Figure 1. The inner screen consists of 6 rows of \(50\times 1.59\,\mathrm{mm}\) diameter circular holes on a triangular 2.54 mm pitch. The outer screen consists of 7 rows of \(80\times 1.59\,\mathrm{mm}\) diameter circular holes on a triangular 2.54 mm pitch [4]. The inner rotor has four blades with an inner diameter of 26.2 mm and an outer diameter of 38.1 mm, while the outer rotor has eight blades with an inner diameter of 49.9 mm and an outer diameter of 63.5 mm. The gap between the rotors and stator screens is 0.24 mm. The mixer usually operates over a range of speeds varying from 3000 to 12000 rpm with the fluid flowing through the device at different flow rates.
Figure 1: Silverson 150/250MS mixer, (a) Rotor (b) Stator.

## 3 Numerical method and computational configuration
The simulations were performed using \(Code\_Saturne\), an open-source CFD code developed by EDF [14] (see [http://www.code-saturne.org](http://www.code-saturne.org)). \(Code\_Saturne\) is an incompressible solver based on a collocated discretisation of the domain,
and is able to treat structured and unstructured meshes with different cell shapes. It solves the Navier-Stokes equations with a fractional step method based on a prediction-correction algorithm for pressure/velocity coupling (SIM-PLEC), and Rhie and Chow interpolation to avoid pressure oscillations. The code uses an implicit Euler scheme for time discretisation, and a second order centred difference scheme is used for the spatial gradients. Rotating meshes are handled via a turbo-machinery module, which solves the transport equations for the initial geometry, updates the geometry and then corrects the pressure as shown in Figure 2. The code has previously been validated to many industrial and academic studies, ranging from simulations of incompressible flows (with and without rotating meshes) [15; 16; 17] to low Mach number variable density reacting flows [18; 19]. A number of RANS turbulence models are available in \(Code\_Saturne\); the standard \(k-\epsilon\) model of Jones and Launder [20] with standard Log-Law wall function, the \(k-\omega\) Shear Stress Transport (SST) model of Menter [21] and the quasi-linear second moment closure model (SSG) of Speziale et al [22].
Figure 2: Schematic of mesh handling in the turbomachinery module of \(Code\_Saturne\).

A 2-D computational domain has been used in the current investigation as shown in Figure 3a; this provides a comparable basis to the 2-D MRF configuration adopted in earlier studies of Jasinska et al [1; 11]. The computational domain is meshed with 180000 cells and shown in Figure 3b. The mesh is refined in the regions near to the sliding interface located in the rotor-stator gaps (as shown in Figure 3c and Figure 3d). Grid sensitivity studies have been carried out, and the grid size for mesh independent results is comparable to that used in the earlier studies of Jasinska et al [1; 11]. Standard inflow conditions on the inlet faces and pressure outlet conditions on the outlet faces are specified. A no-slip condition is applied to the velocity at the walls along with the appropriate wall treatment through standard wall functions for turbulence and zero normal gradients for scalars. Symmetry conditions are used in the transverse direction. Similar boundary conditions have been used in the earlier study of Jasinska et al [1; 11] for the same Silverson mixer.
Figure 3: 2D computational domain and mesh for the Silverson 150/250MS mixer
## 4 Turbulence models
### Eddy viscosity models
Eddy viscosity models rely on the turbulence viscosity hypothesis introduced by Boussinesq [23]:
\[\overline{u^{\prime}_{i}u^{\prime}_{j}}\propto\mu_{t}\frac{\partial\overline{u}_{i}}{\partial x_{j}}. \tag{1}\]
The turbulent Reynolds stresses \(\overline{u^{\prime}_{i}u^{\prime}_{j}}\) are assumed to be proportional to the mean rate of strain and the eddy viscosity \(\mu_{t}\) is a product of length and velocity scales. The velocity scale is obtained from solving a transport equation for the turbulent kinetic energy, \(k\). Depending on the choice of length scale, two of the most commonly used eddy viscosity models are the \(k-\epsilon\) model where \(\mu_{t}=\mu_{t}(k,\epsilon)\)[20] and the \(k-\omega\) SST model where \(\mu_{t}=\mu_{t}(k,\omega)\)[21]. Here \(\epsilon\) is the turbulence energy dissipation rate and \(\omega\) is the specific turbulence energy dissipation rate. The transport equations for both models are given in the appendix.
These models are simple and effective in terms of computational cost, but have some predictive failings, including where flows exhibit strong turbulent stress anisotropy. Flows with strong rotation and curvature effects and flows with complex strain fields (such as those found in the Silverson mixer) historically have been challenging to capture via eddy viscosity models [12]. The problem arises from trying to characterise the complex stress state embodied in \(\overline{u^{\prime}_{i}u^{\prime}_{j}}\) via Eq. (1), even though the turbulent kinetic energy \(k=\overline{u^{\prime}_{i}u^{\prime}_{i}}\) is computed to a reasonable accuracy [12]. To rectify this shortcoming second moment closure models are needed.
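As a concrete illustration of Eq. (1), the sketch below evaluates the eddy viscosity with the standard \(k-\epsilon\) relation \(\nu_{t}=C_{\mu}k^{2}/\epsilon\) (Eq. (17) in the appendix) and the resulting Boussinesq stress estimate for a prescribed mean velocity gradient. The flow state used is an arbitrary assumption, and the full deviatoric form of the hypothesis (including the isotropic \(2k/3\) part) is assumed.

```python
import numpy as np

C_MU = 0.09

def eddy_viscosity(k, eps):
    """nu_t = C_mu * k^2 / eps."""
    return C_MU * k**2 / eps

def boussinesq_stress(k, eps, grad_u):
    """Reynolds stresses from the Boussinesq hypothesis.

    grad_u[i, j] = d(mean u_i)/dx_j. The deviatoric part is aligned with the mean
    strain rate, so a misalignment between stress and strain (as occurs in strongly
    rotating or curved flows) cannot be represented by this closure.
    """
    nu_t = eddy_viscosity(k, eps)
    S = 0.5 * (grad_u + grad_u.T)                     # mean strain-rate tensor
    return (2.0 / 3.0) * k * np.eye(3) - 2.0 * nu_t * S

# Arbitrary state: k [m^2/s^2], eps [m^2/s^3], simple shear du/dy = 100 1/s.
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 100.0
uu = boussinesq_stress(k=0.5, eps=20.0, grad_u=grad_u)
print(np.diag(uu))   # the three normal stresses are identical for pure shear
```

The equal normal stresses returned for a pure shear state make the limitation explicit: this closure cannot reproduce the normal-stress anisotropy that second moment closures capture.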
### Second moment closure models
Several major drawbacks of the eddy viscosity models are overcome by second moment closures or Reynolds stress transport models (RSM). In these models, transport equations for the six independent components of the Reynolds stress tensor and an additional equation for turbulent dissipation \(\epsilon\) are solved. These models are able to account for anisotropies in the Reynolds stress field
without further modelling. One of the most widely used second moment closure models is the quasi-linear closure model of Speziale et al [22], commonly known as the SSG model. The details of the equations solved in these calculations are given in the appendix.
Second moment closures generally lead to significant improvements in the prediction of mean flow properties and of the Reynolds stresses for simple and complex flows (i.e. wall jets, asymmetric channels and curved flows) [12; 24]. One of the major drawbacks of second moment closure models is their associated computational cost; these models have consequently not been widely used in industrial flows. Additionally, the models' sophistication can lead to numerical convergence problems due to the coupling of the mean velocity and turbulent stress fields through source terms [12]; their use typically requires users with greater degree of technical CFD awareness.
## 5 Results and discussion
The results from using different turbulence models and different algorithms for handling the rotation of the mixer are reported in this section. The simulations are compared with the experimental results of Kowalski et al [4] for power consumption and Cooke et al [7] for power number at different Reynolds numbers.
### Comparison between different solution methods
Two different methods to account for the rotation of mixer are first compared. Figure 4 shows the relative velocity predictions produced using the MRF and sliding mesh methods. It can be seen that the MRF method leads to the formation of an anomalous jet between the inlet and the outer screen (Figure 4a). This leads to the formation of large recirculation zones between the outer screen and the mixer wall as shown in Figure 4a. These jets and recirculation regions have been reported in the earlier simulations of Jasinska et al [1]. These jets are anomalous because they oppose the direction of rotation and lead to the
formation of a recirculation zone on the pressure side of the mixer blade. They form because the fixed rotor-stator configuration, in the presence of imposed Coriolis body forces (via MRF), provides a spurious curvilinear leak path for the fluid. The observed recirculation is an equally spurious response to this driver.
The rotating mesh algorithm (Figure 4(b)) overcomes this problem by physically accounting for the periodic passing of the rotor blades over the holes of the stator; the size of the recirculation zones between the outer screen and the mixer wall is hence substantially reduced. Figure 4(b) shows recirculation zones forming at the back of the mixer blades. This leads to a pressure drop across the blade, which in turn increases the force on the blade, resulting in higher power consumption.
The influence of the solution method on the prediction of mixing within the mixer is shown in Figure 5. These results have been obtained using the SSG model. The MRF method (Figure 5(b)) is shown to overestimate mixing by predicting a more homogeneous distribution of the scalar within the rotor swept volume compared to the sliding mesh method (Figure 5(c)). The influence of the unphysical preferential leak paths predicted by the MRF can also be observed beyond the outer screen.
Figure 4: Relative velocity vector (m/s) prediction by using the multiple reference frame and the sliding mesh algorithm. Mixer is rotating at 6000 rpm.

Figure 6 shows the power prediction from the MRF method and the rotating
mesh algorithm each using the SSG turbulence model. The predicted power from the simulations is calculated as [7]:
\[P=2\pi NM, \tag{2}\]
where \(N\) is the rotation speed and \(M\) is the torque calculated from the simulations. Both methods predict an increase in power as the flow rate increases which is consistent with the experimental results of Kowalski et al [4]. There is a small discrepancy in the predicted trend at low flow rates (\(Q<500\) kg/hr) where the power in the experiment decreases due to a rapid drop in the mixer pumping efficiency [4]. Both methods in the simulation fail to predict this trend accurately due to the use of a steady flow rate assumption used at the inlet boundary. It can be seen in Figure 6 that there is a very small difference between the two methods at lower flow rates and that this difference increases as the flow rate is increased. The results show that the rotating mesh algorithm improves the power prediction by 5% at lower flow rates and by 12.5% at the
higher flow rates when compared to the MRF method. Overall it may be seen that the rotating mesh algorithm leads to power predictions that are closer to the experimental results. Both CFD methods, however, underpredict power compared to experiments, and this could be related to using a 2-D representation of a 3-D flow field.

Figure 5: Distribution of scalar concentration after 10 revolutions using different solution methods with SSG model. Mixer is rotating at 6000 rpm with zero inflow.
### Turbulence model effects
#### 5.2.1 Flow and mixing characterisation
In order to investigate the differences in flow characterisation between the EVMs and RSM, the predicted velocity magnitude and vorticity magnitude are compared in Figure 7 and Figure 8 respectively. These results have been obtained using the sliding mesh method. The results from \(k-\epsilon\) model are not shown here as they are similar to the results obtained from the \(k-\omega\) SST model.
Figure 6: Power curve for Silverson 150/250 mixer using MRF and sliding mesh solution methods.
There are subtle differences in the predicted velocity magnitude at different flow rates by the \(k-\omega\) SST and SSG models as shown in Figure 7. Figure 8 shows the vorticity field predicted by using different turbulence models at different flow rates. High regions of vorticity imply high velocity gradients promoting higher rate of mixing within the mixer. It can be seen in Figure 8b,d & f that the SSG model predicts higher levels of vorticity at all flow rates when compared with the \(k-\omega\) SST model predictions in Figure 8a,c & e.
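A possible way of evaluating the normalised vorticity shown in Figure 8 from a 2-D velocity field is sketched below; the uniform grid, the illustrative solid-body velocity field, and the normalisation by the rotor angular velocity are assumptions made for this example rather than the post-processing actually used in the study.

```python
import numpy as np

def normalised_vorticity(u, v, dx, dy, rpm):
    """Out-of-plane vorticity omega_z = dv/dx - du/dy, normalised by the rotor speed.

    u and v are 2-D arrays of velocity components on a uniform grid with spacings dx, dy.
    """
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    omega_rotor = 2.0 * np.pi * rpm / 60.0            # rad/s
    return np.abs(dvdx - dudy) / omega_rotor

# Illustrative solid-body rotation at 6000 rpm on a small Cartesian grid.
x = np.linspace(-0.04, 0.04, 81)
y = np.linspace(-0.04, 0.04, 81)
X, Y = np.meshgrid(x, y)
Omega = 2.0 * np.pi * 6000.0 / 60.0
u, v = -Omega * Y, Omega * X
print(normalised_vorticity(u, v, x[1] - x[0], y[1] - y[0], 6000.0).mean())  # ~2 for solid-body rotation
```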
The influence of turbulence models on the predictions of scalar mixing is investigated in more detail. The closure problem arising from Reynolds averaging the transport equations for the evolution of a passive scalar such as concentration \(Y\) requires modelling of the turbulent flux \(\overline{u_{i}^{\prime}Y}\). In EVMs this is approximated as
\[\overline{u_{i}^{\prime}Y}\propto-\Gamma_{t}\frac{\partial Y}{\partial x_{i}}, \tag{3}\]
where the turbulent diffusivity \(\Gamma_{t}\) is approximated using the eddy viscosity \(\mu_{t}\) and is thus a scalar quantity (scalar gradient diffusion hypothesis, SGDH). With this model the turbulent flux is aligned with the mean scalar gradient. In RSMs, this model may be generalised to obtain
\[\overline{u_{i}^{\prime}Y}\propto-\frac{k}{\epsilon}\overline{u_{i}^{\prime}u _{j}^{\prime}}\frac{\partial Y}{\partial x_{j}}, \tag{4}\]
which is known as the generalised gradient diffusion hypothesis (GGDH). The turbulent diffusivity in this model is a tensor which is an improvement over the SGDH as it allows anisotropy into the scalar flux model and coupling of the scalar flux with the Reynolds stresses. The distribution of the concentration field from its initial condition (Figure 5a) predicted using the SSG model (Figure 9a) is compared to the predictions using SST model (Figure 9b). The results demonstrate that mixing occurring through turbulent diffusion mechanism is better predicted by the SSG model.
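The practical difference between Eqs. (3) and (4) can be made concrete with the short sketch below; the proportionality constants, the anisotropic stress tensor, and the scalar gradient are illustrative assumptions rather than values taken from the simulations.

```python
import numpy as np

def sgdh_flux(nu_t, grad_Y, sigma_t=0.7):
    """SGDH, Eq. (3): the flux is aligned with the mean scalar gradient."""
    return -(nu_t / sigma_t) * grad_Y

def ggdh_flux(k, eps, uiuj, grad_Y, c_theta=0.3):
    """GGDH, Eq. (4): a tensor diffusivity built from the Reynolds stresses."""
    return -c_theta * (k / eps) * uiuj @ grad_Y

# Illustrative anisotropic stress state with a scalar gradient in the x-direction only.
uiuj = np.array([[0.6, 0.2, 0.0],
                 [0.2, 0.3, 0.0],
                 [0.0, 0.0, 0.1]])
k = 0.5 * np.trace(uiuj)
eps = 10.0
nu_t = 0.09 * k**2 / eps
grad_Y = np.array([1.0, 0.0, 0.0])

print(sgdh_flux(nu_t, grad_Y))            # flux only in x: no cross-gradient transport
print(ggdh_flux(k, eps, uiuj, grad_Y))    # the off-diagonal stress produces a y-component as well
```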
The prediction of turbulent kinetic energy \(k\) and turbulent energy dissipation \(\epsilon\) by the different turbulence models are shown in Figure 10. The salient features of the turbulent kinetic energy field, peak regions on the pressure side
of the blade and low regions on the suction side, are predicted by the SSG (Figure 10(a)) and SST models (Figure 10(c)). The \(k-\epsilon\) model (Figure 10(e)) performs comparatively poorly in this regard. Turbulent energy dissipation is highest in the high shear regions around the stator screens and on the pressure side of the rotor blades. The \(k-\epsilon\) model (Figure 10(f)) is seen to predict higher dissipation compared to the other models, but predicts dissipation occurring in flows emanating from the outer screen, which is also predicted by the SSG model (Figure 10(b)) but not the SST model (Figure 10(d)).

Figure 7: Velocity predictions from \(k-\omega\) SST model and SSG model.

Figure 8: Vorticity predictions from \(k-\omega\) SST model and SSG model. The vorticity is normalised by the rotation speed of the mixer.
The overall distribution of turbulent stresses in the mixer and hence the suitability of an EVM or RSM model can be obtained by examining the normalised Reynolds stress anisotropy tensor \(b_{ij}\) defined as:
\[b_{ij}=\frac{\overline{u_{i}^{{}^{\prime}}u_{j}^{{}^{\prime}}}}{\overline{u_{k }^{{}^{\prime}}u_{k}^{{}^{\prime}}}}-\frac{1}{3}\delta_{ij}. \tag{5}\]
Using Eq. (5) it can be seen that the anisotropy tensor has zero trace. Hence its first principal invariant
\[I_{b}=b_{ii}=0. \tag{6}\]
The state of anisotropy of the turbulent stresses can thus be investigated using its two remaining independent principal invariants. These invariants are defined as:
\[II_{b}=-\frac{1}{2}b_{ii}^{2}, \tag{7}\]
\[III_{b}=\frac{1}{3}b_{ii}^{3}. \tag{8}\]
Figure 9: Distribution of scalar concentration after 10 revolutions using different turbulence models. Mixer is rotating at 6000 rpm with zero inflow. Sliding mesh method is used.
Figure 10: Prediction of turbulence quantities using different turbulence models and sliding mesh method. Mixer is rotating at 6000 rpm with zero flow rate
On evaluating the principal invariants in principal axes, the second invariant defines the normal distance of the deviatoric stress plane from the isotropic vector and together with the third invariant fixes precisely the stress state on this plane. Pope [25] proposes a simpler graphical representation of the anisotropic state of the Reynolds stresses in a turbulent flow using a \(\xi-\eta\) plane, where
\[6\eta^{2}=-2II_{b}=b_{ii}^{2}, \tag{9}\]
and
\[6\xi^{3}=3III_{b}=b_{ii}^{3}. \tag{10}\]
Analysing these invariants allows the turbulent state to be characterised via the Lumley triangle [25] and to identify strongly anisotropic behaviour where EVMs would provide particularly poor predictions. Special states of anisotropy of the Reynolds stress tensor are indicated through lines on the Lumley triangle (Figure 11). The turbulent stresses are fully isotropic wherever \(\eta\) and \(\xi\) are equal
to zero, with non-zero values implying anisotropic behaviour. Figure 12 shows the distribution of \(\eta\) and \(\xi\) determined from the Reynolds stresses calculated using the SSG model and the sliding mesh method. It can be seen that \(\eta\) takes positive values throughout the mixer, especially in the narrow passages and in regions close to the stator screens. \(\xi\) is positive throughout the domain and is close to \(\eta\) in magnitude. From the Lumley triangle (Figure 11) this indicates that the turbulent stresses are axisymmetric in these regions. Regions of isotropic turbulence can be observed on the suction side of the blades of the rotors. This return to isotropy as the turbulence decays in these wake regions is captured by the SSG model. These features of the turbulence will not be predicted by the EVMs and a RSM model is needed to accurately predict the hydrodynamics of the mixer.

Figure 11: Invariants \(\xi\) and \(\eta\) of the Reynolds stress anisotropy tensor \(b_{ij}\). Special states of the tensor are indicated through the Lumley triangle [25]. Symbols \(\circ\) indicate values of the invariants along the x and y axes on the plane of the mixer.
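The invariants of Eqs. (5)-(10) can be computed directly from a Reynolds stress tensor, as in the sketch below; the sample stress states are illustrative only, and \(b_{ii}^{2}\) and \(b_{ii}^{3}\) are interpreted as the traces of \(b^{2}\) and \(b^{3}\).

```python
import numpy as np

def anisotropy_invariants(uiuj):
    """Return (eta, xi) for the normalised anisotropy tensor b_ij of Eq. (5)."""
    b = uiuj / np.trace(uiuj) - np.eye(3) / 3.0
    eta = np.sqrt(np.trace(b @ b) / 6.0)        # Eq. (9)
    xi = np.cbrt(np.trace(b @ b @ b) / 6.0)     # Eq. (10)
    return eta, xi

# Limiting states of the Lumley triangle.
isotropic = np.eye(3)                            # eta = xi = 0
one_component = np.diag([1.0, 0.0, 0.0])         # eta = xi = 1/3, the one-component corner
print(anisotropy_invariants(isotropic))
print(anisotropy_invariants(one_component))
```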
#### 5.2.2 Power predictions
Power predictions from the eddy viscosity models (\(k-\epsilon\) and \(k-\omega\) SST) and the second moment closure model (SSG model) are presented in Figure 13. The solutions have been obtained using the sliding mesh method. The biggest discrepancy in the power prediction from the EVMs is at low flow rates (\(Q<1000\) kg/hr). Note that across all flow rates the EVMs tend to predict a consistently lower power; the SSG model is much closer to the experimental data. The main reason for the better performance of the SSG model is due to the fact that the second moment closure models are able to capture the local anisotropy of the
Reynolds stresses, thus leading to a better prediction of primary and secondary flows in the mixer.

Figure 12: Second and third invariants of the Reynolds anisotropy tensor in mixer at 6000 rpm and zero flow rate obtained using sliding mesh method.

Figure 13: Power curve for Silverson 150/250 mixer using RSM and EVM turbulence models along with sliding mesh algorithm
Cooke et al [26] proposes an expression to calculate power as :
\[P=P_{O_{Z}}\rho N^{3}D^{5}+k_{1}QN^{2}D^{2}, \tag{11}\]
where \(P_{O_{Z}}\) is the power number at zero mass flow rate, \(\rho\) is the density of the fluid, \(D\) is the rotor diameter, \(Q\) is the mass flow rate and \(k_{1}\) is a proportionality constant. To evaluate the power using Eq. (11), the values for \(P_{O_{Z}}\) and \(k_{1}\) are required, and the simulations can be used to calculate these constants for a given Silverson mixer thereby reducing the requirement for physical plant trials. The calculated power vs flow rate data presented in Figure 13 is used to perform a linear fit, with the values for \(P_{O_{Z}}\) and \(k_{1}\) obtained from the y-axis intercept and slope respectively. The resulting values obtained using the different turbulence models are presented in table Table 1. The RSM model (SSG) is able to predict \(P_{O_{Z}}\) to within 12.5% and \(k_{1}\) to within 7.2% of the experimental values. The \(k-\omega\)SST model is able to predict the slope (\(k_{1}\)) with the same accuracy but underpredicts the power at zero flow-rate (\(P_{O_{Z}}\)). The \(k-\epsilon\) model is the poorest performer among the three models in predicting the power constants.
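The fitting procedure described above can be sketched as a short post-processing step; the rotor speed, diameter, fluid density, and the synthetic power data below are illustrative stand-ins for the simulation output (SI units are assumed for the mass flow rate).

```python
import numpy as np

RHO, N, D = 1000.0, 100.0, 0.0635        # water, 6000 rpm = 100 rev/s, 63.5 mm rotor

def fit_power_constants(Q, P):
    """Fit P = P_oz*rho*N^3*D^5 + k1*Q*N^2*D^2 (Eq. 11) at fixed N and D.

    The intercept of a linear fit in Q gives P_oz and the slope gives k1.
    """
    slope, intercept = np.polyfit(Q, P, 1)
    P_oz = intercept / (RHO * N**3 * D**5)
    k1 = slope / (N**2 * D**2)
    return P_oz, k1

# Synthetic power curve mimicking simulation output (Q in kg/s, P in W).
Q = np.linspace(0.05, 0.8, 8)
P = 0.42 * RHO * N**3 * D**5 + 7.0 * Q * N**2 * D**2
print(fit_power_constants(Q, P))          # recovers (0.42, 7.0)
```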
\begin{table}
\begin{tabular}{|c|c|c|} \hline & \(P_{O_{Z}}\) & \(k_{1}\) \\ \hline Experiment & 0.475 & 7.611 \\ \hline \(k-\epsilon\) model & 0.232 & 6.676 \\ \hline \(k-\omega\) SST model & 0.254 & 7.069 \\ \hline SSG model & 0.416 & 7.061 \\ \hline \end{tabular}
\end{table}
Table 1: Constants for power prediction obtained by using different turbulence models with the sliding mesh method.

### Prediction of power number at different Reynolds numbers

The power consumption of a mixer using a Newtonian fluid is usually expressed in the form of a dimensionless power number (\(P_{0}\)), obtained by setting \(k_{1}=0\) in Eq. (11) [7]:
\[P_{0}=\frac{P}{\rho N^{3}D^{5}}. \tag{12}\]
This expression provides a characteristic power curve that depends only on the swept diameter of the rotor and can be used to predict power requirements for any given fluid, rotor diameter, and rotational speed. Figure 14a shows the Reynolds number dependence of the power number as predicted using the SSG model and sliding mesh method. The Reynolds number in this case is defined as:
\[Re=\frac{\rho ND^{2}}{\mu}, \tag{13}\]
Note that the results at different Reynolds numbers for a given working fluid presented in Figure 14 are obtained by varying the rotor speed of the mixer. Linear variation of the power number with Reynolds number in the laminar regime and the invariance with Reynolds number in the turbulent regime are captured. The predicted power numbers are in good agreement with the experimental data of Cooke et al [7] as shown in Figure 14b. The laminar power number is predicted to within 8% and the turbulent power number to within 27% of the experimental values.
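Equations (2), (12) and (13) amount to a small post-processing routine, sketched below; the torque value and fluid properties are assumed inputs chosen for illustration rather than simulation output.

```python
import numpy as np

def power_number(torque, rpm, rho, D):
    """P = 2*pi*N*M (Eq. 2) and P0 = P / (rho*N^3*D^5) (Eq. 12), with N in rev/s."""
    N = rpm / 60.0
    P = 2.0 * np.pi * N * torque
    return P / (rho * N**3 * D**5)

def reynolds_number(rpm, rho, D, mu):
    """Rotational Reynolds number Re = rho*N*D^2/mu (Eq. 13)."""
    return rho * (rpm / 60.0) * D**2 / mu

# Illustrative values: water at 6000 rpm with the 63.5 mm rotor and an assumed torque.
rho, mu, D, rpm = 1000.0, 1.0e-3, 0.0635, 6000.0
M = 0.68                                   # assumed torque [N m], for illustration only
print(reynolds_number(rpm, rho, D, mu))    # ~4e5, i.e. well into the turbulent regime
print(power_number(M, rpm, rho, D))        # ~0.4, of the order of the turbulent power number
```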
## 6 Conclusions
It has been shown in this paper that computational fluid dynamics (CFD) simulations are a valuable tool in understanding the hydrodynamics in high shear rotor-stator mixers, and can be used to calculate the constants required for the prediction of power in these mixers. A Silverson 150/250 MS in-line mixer is used as a representative configuration for this investigation. Comparisons between solution methods using a sliding mesh and multiple reference frame (MRF) algorithms are made. The sliding mesh method is better able to capture the hydrodynamics within the mixer, resulting in improved power predictions. The choice of turbulence model used in the simulations is found to be critical. Two different classes of turbulence models are compared; the eddy viscosity models (\(k-\epsilon\) and \(k-\omega\) SST models) and the second moment closure model
(SSG model). It is shown that the SSG model is required to accurately capture the salient flow and mixing features within the mixer. This also results in improved prediction of the variation of power consumption with flow rate. CFD simulations have been conducted to capture the full characteristic power curve of the Silverson mixer and it is found that the second moment closure model coupled with the sliding mesh algorithm leads to results which are in good agreement with the experimental data. CFD simulations can therefore be a valuable tool for scale up calculations.

Figure 14: Full power curve for Silverson 150/250 mixer.
## Appendix
### Implementation of turbulence models
\(k-\epsilon\) model
The most common form of the model developed by Jones and Launder [20] is used here. The transport equations used in \(k-\epsilon\) model are:
\[\frac{\partial\overline{k}}{\partial t}+\overline{u_{i}}\frac{\partial \overline{k}}{\partial x_{i}}=\frac{\partial}{\partial x_{j}}\left[\left(\nu+ \frac{\nu_{t}}{\sigma_{k}}\right)\frac{\partial\overline{k}}{\partial x_{j}} \right]+P_{k-\epsilon}-\overline{\epsilon}, \tag{14}\]
\[\frac{\partial\overline{\epsilon}}{\partial t}+\overline{u_{i}}\frac{\partial \overline{\epsilon}}{\partial x_{i}}=\frac{\partial}{\partial x_{i}}\left[ \left(\nu+\frac{\nu_{t}}{\sigma_{\epsilon}}\right)\frac{\partial\overline{ \epsilon}}{\partial x_{i}}\right]+C_{\epsilon 1}\frac{\overline{\epsilon}}{ \overline{k}}P_{k-\epsilon}-C_{\epsilon 2}\frac{\overline{\epsilon}^{2}}{\overline{k}}, \tag{15}\]
where
\[P_{k-\epsilon}=-\overline{u_{i}^{\prime}u_{j}^{\prime}}\frac{\partial \widetilde{u}_{i}}{\partial x_{j}}. \tag{16}\]
The turbulent viscosity \(\mu_{t}\) is calculated as :
\[\nu_{t}=C_{\mu}\frac{\overline{k}^{2}}{\overline{\epsilon}} \tag{17}\]
The model constants \(C_{\mu}\), \(C_{\epsilon 1}\) and \(C_{\epsilon 2}\) in Eq. (14) and Eq. (15) are given in Table 2.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(C_{\mu}\) & \(\sigma_{k}\) & \(\sigma_{\epsilon}\) & \(C_{\epsilon 1}\) & \(C_{\epsilon 2}\) \\ \hline
0.09 & 1.0 & 1.3 & 1.44 & 1.92 \\ \hline \end{tabular}
\end{table}
Table 2: Values of the empirical constants in the \(k-\epsilon\) model
\(k-\omega\) SST model
The standard \(k-\omega\) SST model proposed by Menter [21] is also used for comparison. It blends the \(k-\omega\) formulation in the boundary layer and the free stream independence of the \(k-\epsilon\) model in the far field. The governing equations for \(k-\omega\) SST model are :
\[\frac{\partial\overline{k}}{\partial t}+\overline{u}_{i}\frac{\partial \overline{k}}{\partial x_{j}}=\frac{\partial}{\partial x_{j}}\left[\left(\nu+ \frac{\nu_{t}}{\sigma_{k}}\right)\frac{\partial\overline{k}}{\partial x_{j}} \right]+P_{k-\omega}-\beta^{*}\overline{\omega}\overline{k}, \tag{18}\]
\[\frac{\partial\overline{\omega}}{\partial t}+\frac{\partial \overline{u}_{j}\overline{\omega}}{\partial x_{j}} =\frac{\partial}{\partial x_{j}}\left[\left(\nu+\frac{\nu_{t}}{ \sigma_{\omega}}\right)\frac{\partial\overline{\omega}}{\partial x_{j}} \right]+\gamma\left\|\overline{S}\right\|^{2}-\beta\overline{\omega}^{2}\] \[+2\left(1-F_{1}\right)\frac{1}{\sigma_{\omega_{2}}\overline{ \omega}}\frac{\partial\overline{k}}{\partial x_{j}}\frac{\partial\overline{ \omega}}{\partial x_{j}}, \tag{19}\]
where
\[P_{k-\omega}=min\left(-\overline{u_{i}^{{}^{\prime}}u_{j}^{{}^{\prime}}}\frac {\partial\overline{u}_{i}}{\partial x_{j}},10\beta^{*}\overline{k}\overline{ \omega}\right). \tag{20}\]
Any coefficient \(\alpha\) in this model is calculated from
\[\alpha=F_{1}\alpha_{1}+\left(1-F_{1}\right)\alpha_{2}, \tag{21}\]
where subscript 1 corresponds to the coefficients in the \(k-\omega\) model and subscript 2 corresponds to the coefficients in the \(k-\epsilon\) model. \(F_{1}\) is the blending function in Eq. (19), defined as:
\[F_{1}=tanh\left(arg_{1}^{4}\right), \tag{22}\]
where
\[arg_{1}=min\left[max\left(\frac{\sqrt{\overline{k}}}{\beta^{*}\overline{ \omega}y};\frac{500\nu}{y^{2}\overline{\omega}}\right);\frac{4\overline{k}}{ \sigma_{\omega_{2}}CD_{k\omega}y^{2}}\right] \tag{23}\]
\[CD_{k\omega}=max\left(2\frac{1}{\sigma_{\omega_{2}}\overline{\omega}}\frac{ \partial\overline{k}}{\partial x_{j}}\frac{\partial\overline{\omega}}{ \partial x_{j}},10^{-20}\right). \tag{24}\]
\(y\) in Eq. (24) represents the distance to the nearest wall, and \(CD_{k\omega}\) is the positive part of the cross diffusion term [21]. The eddy viscosity is calculated as [21]:
\[\nu_{t}=\frac{\overline{k}a_{1}}{max\left(a_{1}\overline{\omega};\left\|S \right\|F_{2}\right)} \tag{25}\]
where
\[\left\|\overline{S}\right\|=\sqrt{2S_{ij}S_{ij}} \tag{26}\]
\[F_{2}=tanh\left(arg_{2}^{2}\right) \tag{27}\]
\[arg_{2}^{2}=max\left(\frac{2\sqrt{k}}{\beta^{*}\overline{\omega}y};\frac{500 \nu}{y^{2}\overline{\omega}}\right) \tag{28}\]
The model constants for the \(k-\omega\) SST model are given in Table 3.
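A point-wise evaluation of the blending and limiting functions in Eqs. (22)-(28) is sketched below, following the standard form of Menter's model; the near-wall state used in the example is an arbitrary assumption.

```python
import numpy as np

BETA_STAR, A1, SIGMA_W2 = 0.09, 0.31, 1.168

def sst_point_values(k, omega, nu, y, grad_k, grad_w, strain_mag):
    """Evaluate F1, F2 and the limited eddy viscosity at a single point.

    grad_k and grad_w are the gradient vectors of k and omega;
    strain_mag is ||S|| = sqrt(2 S_ij S_ij), Eq. (26).
    """
    cd_kw = max(2.0 / (SIGMA_W2 * omega) * float(np.dot(grad_k, grad_w)), 1e-20)   # Eq. (24)
    arg1 = min(max(np.sqrt(k) / (BETA_STAR * omega * y), 500.0 * nu / (y**2 * omega)),
               4.0 * k / (SIGMA_W2 * cd_kw * y**2))                                # Eq. (23)
    F1 = np.tanh(arg1**4)                                                          # Eq. (22)
    arg2 = max(2.0 * np.sqrt(k) / (BETA_STAR * omega * y), 500.0 * nu / (y**2 * omega))
    F2 = np.tanh(arg2**2)                                                          # Eqs. (27)-(28)
    nu_t = A1 * k / max(A1 * omega, strain_mag * F2)                               # Eq. (25)
    return F1, F2, nu_t

# Arbitrary near-wall state for illustration.
print(sst_point_values(k=0.5, omega=5.0e3, nu=1.0e-6, y=2.0e-4,
                       grad_k=np.array([0.0, 50.0, 0.0]),
                       grad_w=np.array([0.0, -1.0e5, 0.0]),
                       strain_mag=2.0e4))
```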
_SSG model_
The standard SSG model proposed by Speziale et al [22] is used as the second moment closure model. This model uses six Reynolds stress transport equations and a turbulent dissipation transport equation. The governing equations for the model are :
\[\frac{\partial\overline{u_{i}^{{}^{\prime}}u_{j}^{{}^{\prime}}}}{\partial t}+ \overline{u_{k}}\frac{\partial\overline{u_{i}^{{}^{\prime}}u_{j}^{{}^{\prime}}} }{\partial x_{k}}=D_{ij}+P_{ij}+\phi_{ij}-\epsilon_{ij}, \tag{29}\]
where
\[D_{ij}=\frac{\partial}{\partial x_{k}}\left[\nu\frac{\partial\overline{u_{i}^ {{}^{\prime}}u_{j}^{{}^{\prime}}}}{\partial x_{k}}-C_{s}\frac{\overline{k}}{ \overline{\epsilon}}\overline{u_{k}^{{}^{\prime}}u_{l}^{{}^{\prime}}}\frac{ \partial\overline{u_{i}^{{}^{\prime}}u_{j}^{{}^{\prime}}}}{\partial x_{l}}\right] \tag{30}\]
\[P_{ij}=-\overline{u_{i}^{{}^{\prime}}u_{k}^{{}^{\prime}}}\frac{\partial \overline{u_{j}}}{\partial x_{k}}-\overline{u_{j}^{{}^{\prime}}u_{k}^{{}^{ \prime}}}\frac{\partial\overline{u_{i}}}{\partial x_{k}} \tag{31}\]
\[\phi_{ij} = -C_{1}\overline{\epsilon}\overline{b_{ij}}+C_{1}^{{}^{\prime}} \overline{\epsilon}\left(\overline{b_{ik}}\ \overline{b_{kj}}-\frac{1}{3}\overline{b_{mn}}\ \overline{b_{nm}}\right)+C_{2}\overline{k}\ \overline{S_{ij}} \tag{32}\] \[+C_{3}\overline{k}\left(\overline{b_{ik}}\ \overline{S_{jk}}+ \overline{b_{jk}}\ \overline{S_{ik}}-\frac{2}{3}\overline{b_{mn}}\ \overline{S_{mn}}\delta_{ij}\right)\] \[+C_{4}\overline{k}\left(\overline{b_{ik}}\ \overline{\Omega_{jk}}+ \overline{b_{jk}}\ \overline{\Omega_{ik}}\right).\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(\sigma_{k_{1}}\) & \(\sigma_{\omega_{1}}\) & \(\beta_{1}\) & \(a_{1}\) & \(\beta^{*}\) & \(\kappa\) & \(\gamma_{1}\) & \(\sigma_{k_{2}}\) & \(\sigma_{\omega_{2}}\) & \(\beta_{2}\) & \(\gamma_{2}\) \\ \hline
1.176 & 2.0 & 0.075 & 0.31 & 0.09 & 0.41 & \(\frac{\beta_{1}}{\beta^{*}}-\frac{\kappa^{2}}{\sigma_{\omega_{1}}\sqrt{\beta ^{*}}}\) & 1.0 & 1.168 & 0.0828 & \(\frac{\beta_{2}}{\beta^{*}}-\frac{\kappa^{2}}{\sigma_{\omega_{2}}\sqrt{\beta ^{*}}}\) \\ \hline \end{tabular}
\end{table}
Table 3: Model constants for the \(k-\omega\) SST model
\(b_{ij}\), \(\overline{\Omega_{ij}}\) and \(\overline{S_{ij}}\) in Eq. (32) are defined as:
\[\overline{b_{ij}}=\frac{\overline{a_{ij}}}{2\overline{k}}-\frac{1}{3}\delta_{ij}, \tag{33}\]
where \(a_{ij}\) is the anisotropy tensor,
\[\overline{S_{ij}}=\frac{1}{2}\left(\frac{\partial\overline{u_{i}}}{\partial x _{j}}+\frac{\partial\overline{u_{j}}}{\partial x_{i}}\right), \tag{34}\]
\[\overline{\Omega_{ij}}=\frac{1}{2}\left(\frac{\partial\overline{u_{i}}}{ \partial x_{j}}-\frac{\partial\overline{u_{j}}}{\partial x_{i}}\right). \tag{35}\]
\(\epsilon_{ij}\) in Eq. (29) is closed under the isotropic assumption and the transport equation proposed by Hanjalic and Launder [27] is used:
\[\frac{\partial\overline{\epsilon}}{\partial t}+\overline{u_{k}}\frac{ \partial\overline{\epsilon}}{\partial x_{k}}=\frac{\partial}{\partial x_{j} }\left(C_{\epsilon}\frac{\overline{k}}{\overline{\epsilon}}\overline{u_{i}^{ \prime}u_{j}^{\prime}}\frac{\partial\overline{\epsilon}}{\partial x_{j}} \right)+C_{\epsilon 1}\frac{P_{ii}\overline{\epsilon}}{2\overline{k}}-C_{ \epsilon 2}\frac{\overline{\epsilon}^{2}}{\overline{k}}. \tag{36}\]
The constants in the above equations are given in Table 4.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \(C_{1}\) & \(C_{1}^{{}^{\prime}}\) & \(C_{2}\) & \(C_{3}\) & \(C_{4}\) & \(C_{\epsilon 1}\) & \(C_{\epsilon 2}\) \\ \hline \(3.4+1.8P_{ii}/2\epsilon\) & \(4.2\) & \(0.8-1.3\left(b_{ij}b_{ij}\right)^{0.5}\) & \(1.25\) & \(0.4\) & \(1.44\) & \(1.83\) \\ \hline \end{tabular}
\end{table}
Table 4: Coefficients of the SSG model |
2307.00723 | Normalized clustering peak solutions for Schrödinger equations with
general nonlinearities | We are concerned with the normalized $\ell$-peak solutions to the nonlinear
Schr\"{o}dinger equation
\[
-\varepsilon^2\Delta v+V(x)v=f(v)+\lambda v,\quad
\int_{\mathbb{R}^N}v^2 =\alpha \varepsilon^N.
\] Here $\lambda \in \mathbb{R}$ will arise as a Lagrange multiplier, $V$ has
a local maximum point, and $f$ is a general $L^2$-subcritical nonlinearity
satisfying a nonlipschitzian property that $\lim_{s\to0} f(s)/s=-\infty$. The
peaks of solutions that we construct cluster near a local maximum of $V$ as
$\varepsilon\to0$. Since there is no information about the uniqueness or
nondegeneracy for the limiting system, a delicate lower gradient estimate
should be established when the local centers of mass of functions are away from
the local maximum of $V$. We introduce a new method to obtain this estimate,
which is significantly different from the ideas in del Pino and Felmer (Math.
Ann. 2002), where a special gradient flow with high regularity is used, and in
Byeon and Tanaka (J. Eur. Math. Soc. 2013 \& Mem. Amer. Math. Soc. 2014), where
an extra translation flow is introduced. We also give the existence of ground
state solutions for the autonomous problem, i.e., the case $V\equiv0$. The
ground state energy is not always negative and the strict subadditive property
of ground state energy here is achieved by strict concavity. | Chengxiang Zhang, Xu Zhang | 2023-07-03T03:05:30Z | http://arxiv.org/abs/2307.00723v1 | # Normalized clustering peak solutions for Schrodinger equations with general nonlinearities
Chengxiang Zhang\({}^{\ast}\), Xu Zhang\({}^{\dagger}\)
\({}^{\text{a}}\)Laboratory of Mathematics and Complex Systems (Ministry of Education), School of Mathematical Sciences,
Beijing Normal University, Beijing 100875, P. R. China
\({}^{\text{b}}\)School of Mathematics and Statistics, Central South University, Changsha 410083, P. R. China
[email protected]@163.com, corresponding author
**Abstract:** We are concerned with the normalized \(\ell\)-peak solutions to the nonlinear Schrodinger equation
\[\begin{cases}-\varepsilon^{2}\Delta v+V(x)v=f(v)+\lambda v,\\ \int_{\mathbb{R}^{N}}v^{2}=\alpha\varepsilon^{N}.\end{cases}\]
Here \(\lambda\in\mathbb{R}\) will arise as a Lagrange multiplier, \(V\) has a local maximum point, and \(f\) is a general \(L^{2}\)-subcritical nonlinearity satisfying a nonlipschitzian property that \(\lim_{s\to 0}f(s)/s=-\infty\). The peaks of solutions that we construct cluster near a local maximum of \(V\) as \(\varepsilon\to 0\). Since there is no information about the uniqueness or nondegeneracy for the limiting system, a delicate lower gradient estimate should be established when the local centers of mass of functions are away from the local maximum of \(V\). We introduce a new method to obtain this estimate, which is significantly different from the ideas in del Pino and Felmer [22] (Math. Ann. 2002), where a special gradient flow with high regularity is used, and in Byeon and Tanaka [7, 8] (J. Eur. Math. Soc. 2013 & Mem. Amer. Math. Soc. 2014), where an extra translation flow is introduced. We also give the existence of ground state solutions for the autonomous problem, i.e., the case \(V\equiv 0\). The ground state energy is not always negative and the strict subadditive property of ground state energy here is achieved by strict concavity.
**Keywords:** Nonlinear Schrodinger equation; Semiclassical stationary states; Normalized solutions.
**Mathematics Subject Classification:** 35J20 - 35J15 - 35J60
## 1 Introduction and main Results
We study the semiclassical states of the following nonlinear Schrodinger equation
\[\begin{cases}-\varepsilon^{2}\Delta v+V(x)v=f(v)+\lambda v,\\ \int_{\mathbb{R}^{N}}v^{2}=\alpha\varepsilon^{N},\end{cases} \tag{1}\]
where \(N\geq 1\), \(\varepsilon>0\) is a small parameter, \(f\) is a general nonlinearity, and \(V\) is a function having a local maximum point. The problem comes from the study of stationary states for the time-dependent nonlinear Schrodinger equation
\[i\hbar\frac{\partial\psi}{\partial t}=-\frac{\hbar^{2}}{2m}\Delta\psi+V(x)\psi-g(|\psi|)\psi. \tag{2}\]
Note that a stationary state possesses the form \(\psi(x,t)=v(x)e^{-\frac{i\lambda t}{\hbar}}\). Then \(\psi\) is a stationary solution to (2) if and only if \((\lambda,v)\) is a solution to (1) with \(\varepsilon=\frac{\hbar}{\sqrt{2m}}\), \(f(u)=g(u)u\). The \(L^{2}\) constraint in (1) comes from the mass conservation property of the stationary state. Remark that solutions under the \(L^{2}\) constraint are usually referred to as normalized solutions.
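For completeness, the computation behind this equivalence is short: with the ansatz \(\psi(x,t)=v(x)e^{-\frac{i\lambda t}{\hbar}}\) one has \(i\hbar\partial_{t}\psi=\lambda\psi\), so inserting \(\psi\) into (2) and cancelling the phase factor gives
\[-\frac{\hbar^{2}}{2m}\Delta v+V(x)v=g(|v|)v+\lambda v,\]
which is the first equation in (1) once \(\varepsilon^{2}=\frac{\hbar^{2}}{2m}\) and \(f(v)=g(|v|)v\).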
In the autonomous case \(V\equiv V_{0}\), by a transformation of variable \(u(x)=v(\varepsilon x)\), and by replacing the unknown number \(\lambda\) by \(\lambda+V_{0}\), problem (1) is equivalent to
\[\begin{cases}-\Delta u=f(u)+\lambda u,\\ \int_{\mathbb{R}^{N}}u^{2}=\alpha.\end{cases} \tag{3}\]
This autonomous problem has been extensively studied since [12, 32] in the \(L^{2}\)-subcritical case and [30] in the \(L^{2}\)-supercritical case. Existence results for more general nonlinearities have been established recently in [31, 36, 27]. On the other hand, the solvability of (1) with various nonconstant potentials and general nonlinearities is rather poorly understood so far. When \(\varepsilon=1\), [41, 29] give the existence of solutions in the \(L^{2}\)-subcritical case under the assumption \(\lim_{x\to\infty}V(x)=V_{\infty}\geq V\not\equiv V_{\infty}\); [23] considered a similar potential assumption together with Ambrosetti-Rabinowitz type conditions on the nonlinearity in the \(L^{2}\)-supercritical case; and [3] studied the \(L^{2}\)-supercritical problem with a power type nonlinearity \(f(u)=u^{p-1}\) and a positive potential vanishing at infinity. We also note that [1] studied solutions of multibump type with periodic assumptions under a strict nondegeneracy condition. Considering \(\varepsilon\) as a small parameter, [44, 2] studied the problem with \(V=0\) and a potential \(K\) on the nonlinearity, i.e., \(K(x)f(u)\) or \(K(x)u^{p-1}\). If \(K\) has local maximum points, [2] showed the existence of local minimizers for the \(L^{2}\)-subcritical problem; [44] constructed multibump solutions, with each bump concentrating at a local maximum point of \(K\), in the \(L^{2}\)-subcritical and \(L^{2}\)-supercritical cases by a local deformation argument. We also refer to [37], in which the authors studied problems in bounded sets with several components and problems in the whole space with a steep well potential; as the mass tends to some limit, those problems are transformed into singularly perturbed problems with two parameters, similar to (1). However, there are few results for (1) with a potential \(V\) having local maximum points.
For singular perturbation problems without an \(L^{2}\) constraint, that is, for the following equation
\[-\varepsilon^{2}\Delta v+V(x)v=f(v),\quad v\in H^{1}(\mathbb{R}^{N}),\]
there have been many studies on constructing solutions concentrated near local critical points of the potential, following the pioneering work of Floer and Weinstein [25]. In [25], a positive solution concentrating at a nondegenerate critical point of \(V\) was found by the Lyapunov-Schmidt reduction method, which requires some nondegeneracy conditions on the limiting problem. For problems with no uniqueness or nondegeneracy condition assumed on the limiting problem, solutions are usually found as critical points of corresponding functionals through the variational approach, in which the basic strategy is to obtain a Palais-Smale sequence through a deformation generated by a descending flow, usually the negative gradient flow. This method was initiated by Rabinowitz in [34]. See also [7, 8, 9, 10, 14, 17, 18, 19, 20, 21, 22] for further studies.
The motivation for this paper is that the known studies on (1) or (3) are mainly based on the assumption that \(f(s)/s\to 0\) as \(s\to 0\). This excludes some nonlipschitzian nonlinearities such as \(s\log s+s^{p-1}\) or \(-s^{q-1}+s^{p-1}\), where \(q\in(1,2)\) and \(p\in(2,2+\frac{4}{N})\). We first study the autonomous problem and consider a general class of nonlinearities such that \(f(s)/s\to-\infty\) as \(s\to 0\). More precisely, we impose the following assumptions on \(f\) (a concrete example is given after the list):
1. \(f\in C(\mathbb{R},\mathbb{R})\) and \(f(0)=0\).
2. \(\lim_{s\to 0^{+}}f(s)/s=-\infty\).
3. \(\limsup_{s\to+\infty}f(s)/s^{1+4/N}=c_{0}\).
4. \(s^{-1}f(s)\) is strictly increasing for \(s>0\).
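For instance, the logarithmic nonlinearity that underlies (7) below fits these assumptions: taking \(f(s)=s\log s^{2}\) for \(s>0\) (with \(f(0)=0\)), one has \(f(s)/s=\log s^{2}\to-\infty\) as \(s\to 0^{+}\), so (F2) holds; \(s^{-1}f(s)=\log s^{2}\) is strictly increasing, so (F4) holds; and \(f(s)/s^{1+4/N}=s^{-\frac{4}{N}}\log s^{2}\to 0\) as \(s\to+\infty\), so (F3) holds with \(c_{0}=0\). This example also satisfies (F5) below with \(\sigma=2\).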
Note that \((F4)\) implies \(c_{0}\geq 0\). Ground states are usually found by the following minimization problem
\[E_{\alpha}=\inf\Big{\{}J(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla u|^{2}- \int_{\mathbb{R}^{N}}F(u)\ \Big{|}\ u\in\mathcal{M}_{\alpha}\Big{\}}, \tag{4}\]
where \(\mathcal{M}_{\alpha}=\{\,u\in H^{1}(\mathbb{R}^{N})\mid\int_{\mathbb{R}^{N}}u ^{2}=\alpha\,\}\), \(F(s)=\int_{0}^{|s|}f(\tau)d\tau\). It is well known that the following Gagliardo-Nirenberg inequality plays an important role in determining whether the given infimum is well-defined:
\[|u|_{2+4/N}^{2+4/N}\leq S(N)|\nabla u|_{2}^{2}|u|_{2}^{4/N},\quad u\in H^{1}( \mathbb{R}^{N}), \tag{5}\]
where \(S(N)>0\) is the optimal constant for the Gagliardo-Nirenberg inequality. We set
\[\alpha_{N}:=\begin{cases}(2c_{0}S(N))^{-\frac{N}{2}},&\text{if}\quad c_{0}>0, \\ +\infty,&\text{if}\quad c_{0}=0.\end{cases}\]
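A short computation indicates why \(\alpha_{N}\) is the relevant mass threshold (compare the proof of Lemma 3.2 below). By (F1)-(F3), for any \(\tau>0\) there is \(C_{\tau}>0\) with \(F(t)\leq(c_{0}+\tau)t^{2+\frac{4}{N}}+C_{\tau}t^{2}\) (this is (12) below); combined with (5), for \(|u|_{2}^{2}=\alpha\) this gives
\[J(u)\geq\frac{1}{2}|\nabla u|_{2}^{2}-(c_{0}+\tau)S(N)\alpha^{\frac{2}{N}}|\nabla u|_{2}^{2}-C_{\tau}\alpha=\frac{1}{2}\left(1-2(c_{0}+\tau)S(N)\alpha^{\frac{2}{N}}\right)|\nabla u|_{2}^{2}-C_{\tau}\alpha.\]
When \(\alpha<\alpha_{N}\), one may choose \(\tau\) small enough that the coefficient of \(|\nabla u|_{2}^{2}\) is positive, so \(E_{\alpha}>-\infty\) and minimizing sequences are bounded in \(H^{1}(\mathbb{R}^{N})\).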
**Theorem 1.1**.: _Assume (F1)-(F4). For each \(\alpha\in(0,\alpha_{N})\), (3) has a solution \((\lambda,u)\), such that \(u\) is a nonnegative nontrivial function, and is a global minimizer for \(E_{\alpha}\). Moreover,_
1. \(E_{\alpha}\) _is continuous and strictly concave._
2. \(\lim_{\alpha\to 0}E_{\alpha}=0\) _and_ \(E_{\alpha}>0\) _for small_ \(\alpha\)_._
3. _Assume further_ \(c_{0}=0\)_. Then_ \(E_{\alpha}\) _has a unique zero in_ \((0,+\infty)\) _and_ \(\lim_{\alpha\to+\infty}E_{\alpha}=-\infty\) _if_ \(f\) _admits a zero in_ \((0,+\infty)\)_; and_ \(E_{\alpha}\) _is strictly increasing in_ \((0,+\infty)\) _if_ \(f\) _is negative in_ \((0,+\infty)\)_._
By the classical result of [12] for \(L^{2}\) subcritical problems, the attainability for the minimization problem (4), in some sense, is equivalent to the strict subadditive inequality
\[E_{\alpha+\beta}<E_{\alpha}+E_{\beta}.\]
In [36], for a class of general Berestycki-Lions type nonlinearities ([4, 5]) such that \(f(s)/s\to 0\) as \(s\to 0\), the energy was proved to be nonpositive and nonincreasing. Moreover, it seems that the strict subadditivity holds only when the energy \(E_{\alpha+\beta}\) is negative. In our setting, there is a difference: \(E_{\alpha}\) is positive and strictly increasing for small \(\alpha\). Our strategy to obtain the strict subadditivity is to use the strict concavity of \(E_{\alpha}\), which is the merit of (F4).
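Concretely (as carried out in Lemma 2.4 below), once \(E_{t\alpha}\leq tE_{\alpha}\) holds for \(t>1\) with \(t\alpha\in(0,\alpha_{N})\) (a consequence of concavity together with \(\lim_{\alpha\to 0}E_{\alpha}=0\)), taking \(t=\frac{\alpha+\beta}{\alpha}\) and then exchanging the roles of \(\alpha\) and \(\beta\) yields, for \(\alpha+\beta\in(0,\alpha_{N})\),
\[\alpha E_{\alpha+\beta}\leq(\alpha+\beta)E_{\alpha},\qquad\beta E_{\alpha+\beta}\leq(\alpha+\beta)E_{\beta},\]
and adding the two inequalities gives \(E_{\alpha+\beta}\leq E_{\alpha}+E_{\beta}\); the strict versions are obtained in Lemma 2.4 once the infimum is attained.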
Next we study (1). We will construct normalized solutions with \(\ell\)-peaks if the potential \(V\) has a local maximum point. In light of [44] and [37], the following limiting system for (1) is important
\[\begin{cases}-\Delta u_{j}=f(u_{j})+\lambda u_{j}\ \ \text{in}\ \ \mathbb{R}^{N},\\ u_{j}(x)>0,\ \lim_{|x|\to\infty}u_{j}(x)=0,\quad j=1,2,\cdots,\ell,\\ \sum_{j=1}^{\ell}|u_{j}|^{2}_{2}=\alpha.\end{cases} \tag{6}\]
It is clear from Theorem 1.1 that the system (6) has a solution \((\lambda,u_{1},\cdots,u_{\ell})\), obtained by setting \(u_{i}\equiv u_{0}\) for a solution \((\lambda,u_{0})\) to (3) with \(\int_{\mathbb{R}^{N}}u_{0}^{2}=\ell^{-1}\alpha\). However, there is no uniqueness or nondegeneracy result for this solution. In fact, we are not even sure whether a solution \((\lambda,u_{1},\cdots,u_{\ell})\) to (6) would satisfy \(\int_{\mathbb{R}^{N}}u_{i}^{2}=\ell^{-1}\alpha\) for each \(i=1,\cdots,\ell\). To state our result, we give the assumptions on \(V\) precisely:
1. \(V(x)\in C(\mathbb{R}^{N})\) and \(\liminf_{|x|\to\infty}V(x)|x|^{-2}>-\infty\);
2. There is a bounded domain \(\Omega\subset\mathbb{R}^{N}\) such that \(V\in C^{1}(\overline{\Omega})\) and \[V_{0}:=\max_{x\in\overline{\Omega}}V(x)>\max_{x\in\partial\Omega}V(x);\]
3. Let \(\mathcal{V}=\left\{x\in\Omega\ |\ V(x)=V_{0}\right\}.\) Then for any open neighborhood \(\widetilde{O}\) of \(\mathcal{V}\), there exists an open set \(O\subset\widetilde{O}\) such that \[\mathcal{V}\subset O\subset\overline{O}\subset\widetilde{O}\cap\Omega\quad \text{and}\quad\inf_{x\in\partial O}|\nabla V(x)|>0.\]
To construct solutions with \(\ell\)-peaks, we need another technical condition on the nonlinearity.
1. \(f\in C^{1}(0,+\infty)\), and for some \(\sigma>0\) there holds \[\limsup_{s\to 0^{+}}\Big{|}f^{\prime}(s)-\sigma\log s\Big{|}<+\infty.\]
We show the following result.
**Theorem 1.2**.: _Suppose that (F1)-(F5) and (V1)-(V3) hold. For any \(\alpha\in(0,\alpha_{N})\), \(\ell\in\mathbb{N}\setminus\{0\}\), there exists \(\varepsilon_{\ell}>0\) such that for each \(\varepsilon\in(0,\varepsilon_{\ell})\), equation (1) admits a solution \((\lambda_{\varepsilon},v_{\varepsilon})\) satisfying_
1. \(v_{\varepsilon}>0\) _has exactly_ \(\ell\) _peaks_ \(x_{\varepsilon}^{1},\cdots,x_{\varepsilon}^{\ell}\in\mathbb{R}^{N}\) _satisfying_ \(\lim_{\varepsilon\to 0}\operatorname{dist}\left(x_{\varepsilon}^{j},\mathcal{V} \right)=0\) _for all_ \(j\in\{1,\cdots,\ell\}\,,\)__
2. _setting_ \(u_{\varepsilon}(x)=v_{\varepsilon}(\varepsilon x),\) _there exists a subsequence_ \(\varepsilon_{j}\to 0\) _such that_ \[\lambda_{\varepsilon}\to\lambda+V_{0},\quad\text{and}\quad\left\|u_{\varepsilon _{j}}-\sum_{k=1}^{\ell}u_{k}\left(\cdot-x_{\varepsilon_{j}}^{k}/\varepsilon_{j }\right)\right\|_{H^{1}}\to 0\quad\text{as }j\to\infty,\] _where_ \((\lambda,u_{1},\cdots,u_{\ell})\in\mathbb{R}\times H^{1}(\mathbb{R}^{N})^{\ell}\) _is a solution to the system (_6_)._
3. _there exist_ \(C,c>0\) _such that_ \[v_{\varepsilon}(x)\leq C\sum_{j=1}^{\ell}e^{-c\varepsilon^{-2}|x-x_{\varepsilon}^{j}|^{2}}\quad\text{for}\;\;x\in\mathbb{R}^{N}.\]
To find critical points in a neighborhood of the approximate solutions, following the idea of [15, 16, 35], a crucial step in performing the deformation is to obtain a uniform gradient estimate for the functional in an annular domain, i.e., a uniform lower bound for the norm of the gradient of the functional in an annular domain. The uniform gradient estimate can be obtained when we search for critical points near local minimum points of \(V\). This is because, by the characterization of a local minimum and the monotonicity property of the least energy for the limiting problem, the functions near the approximate solutions with energy no greater than the least energy will concentrate to the local minimum of \(V\). The situation becomes more complicated for general saddle points or maximum points. Actually, the repelling property of such critical points makes it impossible to obtain the uniform gradient estimate, since the barycenters (or local centers of mass) of functions near the approximate solutions will tend to deviate from the critical points to decrease their energy. Here we refer to [8, CHAPTER 6] for a counterexample in this case. Therefore, another much more delicate lower gradient estimate should be obtained for functions whose barycenters are away from the critical point, in order that barycenters of functions along the negative gradient flow would not move too far away before the energy is deformed to a given lower level.
We explain two methods from [22] and [7, 8] to deal with this difficulty in nonlinear Schrodinger equations without an \(L^{2}\) constraint. In del Pino and Felmer [22], that lower gradient estimate is obtained for the energy functional only at functions having a uniform \(H^{2}\) bound. Thus, the authors defined a special negative gradient flow on the Nehari manifold, and they were able to show uniform \(H^{2}\) bounds for functions along the flow if the flow starts from a suitable test path with a well-chosen set of initial conditions. Another method was developed by Byeon and Tanaka in [7, 8]. They introduced another decreasing flow, i.e., the translation flow generated by \(-\nabla V\). They were able to bypass the obstacle in obtaining the lower gradient estimate through several steps of iterations among the negative pseudogradient flow of the energy functional, the tail-minimizing operator that keeps tails small, and the translation flow.
There are essential difficulties in applying those two methods in our setting. First, it is important to obtain the global \(H^{2}\) regularity uniformly for the special flow in [22]. This relies on some stronger conditions on the nonlinearity \(f\). However, the nonlinearity in this paper, having non-lipschitzian properties, is not regular enough for us to obtain the global \(H^{2}\) regularity uniformly along the pseudogradient flow. See Remark 4.3 (i) for more discussions. On the other hand, although the arguments in [7, 8] work well for nonlinear Schrodinger equations without an \(L^{2}\)-constraint under very weak conditions, they heavily depend on the use of a tail-minimizing operator, which is defined by solving a local minimization problem in an exterior domain with some prescribed boundary condition. However, in the situation with an \(L^{2}\) constraint, it is difficult to perform the minimization argument locally on the \(L^{2}\) spheres.
In this paper, we will develop another approach to deal with this problem. In fact, at every function whose local centers of mass are away from the maximum points of \(V\), we are able to obtain the desired lower gradient estimate, without assuming uniform \(H^{2}\) bounds on these functions. We explain our strategy as follows. We first introduce a new penalization functional so that an a priori decay estimate in some exterior region away from the local centers of mass of the functions can be obtained. This estimate implies that the exterior norms of the functions can be controlled by the gradient of the energy functional. Thus, we can get rather fine decay for the functions which do not meet the desired lower gradient estimate. We are able to find good replacements of these functions by introducing an elliptic equation which is defined by a minimization problem in a hyperplane of the Sobolev space. Then a contradiction can be obtained by checking the balance of the elliptic equation. We remark that our idea applies likewise to nonlinear Schrodinger equations without \(L^{2}\)-constraints under very weak conditions on the nonlinearity. In fact, it works well in the situation of [7, 8]. We will explain it later in Remark 4.3 (ii).
At last, we mention that our assumptions (F1)-(F5) cover nonlinearities of logarithmic type. By the shift invariance property of the logarithmic Schrodinger equation (see [28, 38]), we can give a multiplicity result in the setting without an \(L^{2}\)-constraint. By Theorem 1.2, it is easy to verify that \(w_{\varepsilon}=e^{\lambda_{\varepsilon}/2}v_{\varepsilon}\) is a solution to the following logarithmic Schrodinger equation (without an \(L^{2}\) constraint condition):
\[-\varepsilon^{2}\Delta w+V(x)w=w\log w^{2},\quad w\in H^{1}(\mathbb{R}^{N}). \tag{7}\]
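A sketch of the verification: if \((\lambda_{\varepsilon},v_{\varepsilon})\) solves (1) with \(f(s)=s\log s^{2}\), then for \(w_{\varepsilon}=e^{\lambda_{\varepsilon}/2}v_{\varepsilon}\) one has \(\log v_{\varepsilon}^{2}=\log w_{\varepsilon}^{2}-\lambda_{\varepsilon}\), and hence
\[-\varepsilon^{2}\Delta w_{\varepsilon}+V(x)w_{\varepsilon}=e^{\lambda_{\varepsilon}/2}\left(v_{\varepsilon}\log v_{\varepsilon}^{2}+\lambda_{\varepsilon}v_{\varepsilon}\right)=w_{\varepsilon}\left(\log w_{\varepsilon}^{2}-\lambda_{\varepsilon}\right)+\lambda_{\varepsilon}w_{\varepsilon}=w_{\varepsilon}\log w_{\varepsilon}^{2},\]
which is (7).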
**Corollary 1.3**.: _Assume (V1)-(V3). Then for any \(\ell\in\mathbb{N}\setminus\{0\}\), there is \(\varepsilon_{\ell}>0\) such that for each \(\varepsilon\in(0,\varepsilon_{\ell})\), equation (7) admits a positive solution \(v_{\varepsilon}\) with \(\ell\) peaks, which concentrate to \(\mathcal{V}\) as \(\varepsilon\to 0\)._
We also comment that our assumptions on \(V\) include a class of strongly repulsive potentials, for example, \(V(x)=-|x|^{2}\) (see [11, 43]). Corollary 1.3 in fact gives an existence result for multiple nonradial solutions for such repulsive potentials when \(\varepsilon\) is small.
**Notation**.: _Throughout this paper, \(2^{*}=+\infty\) for \(N=1,2\) and \(2^{*}=\frac{2N}{N-2}\) for \(N\geq 3\); \(L^{p}(\mathbb{R}^{N})\)\((1\leq p<+\infty)\) is the usual Lebesgue space with the norm \(|u|_{p}^{p}=\int_{\mathbb{R}^{N}}|u|^{p}\); \(H^{1}(\mathbb{R}^{N})\) denotes the Sobolev space with the norm \(\|u\|^{2}=\int_{\mathbb{R}^{N}}(|\nabla u|^{2}+|u|^{2})\); \(o_{n}(1)\) (resp. \(o_{\varepsilon}(1)\)) will denote a generic infinitesimal as \(n\to\infty\) (resp. \(\varepsilon\to 0^{+}\)); \(B(x,\rho)\) denotes an open ball centered at \(x\in\mathbb{R}^{N}\) with radius \(\rho>0\). \(a^{\pm}=\max\{0,\pm a\}\) for \(a\in\mathbb{R}\). Unless stated otherwise, \(C,C^{\prime}\) and \(c\) are general constants._
## 2 The least energy for the autonomous problem
In this section, we solve the following minimization problem
\[E_{\alpha}=\inf\Big{\{}J(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla u|^{2}- \int_{\mathbb{R}^{N}}F(u)\Bigm{|}u\in\mathcal{M}_{\alpha}\Big{\}}, \tag{8}\]
where \(\mathcal{M}_{\alpha}=\{\,u\in H^{1}(\mathbb{R}^{N})\mid\int_{\mathbb{R}^{N}}u ^{2}=\alpha\,\}\), \(\alpha\in(0,\alpha_{N})\), \(F(s)=\int_{0}^{|s|}f(\tau)d\tau\), and \(f\) satisfies (F1)-(F4). We first note that under the assumption (F1), either of the following conditions is equivalent to (F4).
* (F4') The function \(t\mapsto F(\sqrt{t})\) is strictly convex for \(t>0\).
* (F4") \(F(\sqrt{1-s}u)+F(\sqrt{1+s}u)>2F(u)\) for \(s\in(0,1)\), \(u\neq 0\).
In fact, if (F4) holds, we have
\[\frac{\mathrm{d}}{\mathrm{d}s}\left(F(\sqrt{1-s}u)+F(\sqrt{1+s}u)\right)= \frac{u^{2}}{2}\left(\frac{f(\sqrt{1+s}u)}{\sqrt{1+s}u}-\frac{f(\sqrt{1-s}u)} {\sqrt{1-s}u}\right)>0. \tag{9}\]
Then (F4") follows from (F4). On the other hand, (F4") implies that the function \(t\mapsto F(\sqrt{t})\) is strictly midpoint convex. Thus, it is strictly convex by continuity. Hence, (F4") implies (F4"). At last, (F4") implies that \(\frac{\mathrm{d}}{\mathrm{d}t}F(\sqrt{t})=\frac{f(\sqrt{t})}{2\sqrt{t}}\) is strictly increasing, which is exactly (F4).
By (F2) and (F4), \(f\) admits at most one zero in \((0,+\infty)\). Hence, we set \(t_{0}=+\infty\) if \(f\) is negative in \((0,+\infty)\), and \(t_{0}\) to be the unique zero of \(f\) if \(f\) changes its sign in \((0,+\infty)\). We set
\[f_{1}(t)=\begin{cases}f^{-}(t),&t\geq 0\\ -f^{-}(-t),&t<0,\end{cases}\quad f_{2}(t)=\begin{cases}f^{+}(t),&t\geq 0,\\ -f^{+}(-t),&t<0,\end{cases}\]
\[F_{1}(t)=\int_{0}^{t}f_{1}(s)\mathrm{d}s,\quad F_{2}(t)=\int_{0}^{t}f_{2}(s) \mathrm{d}s.\]
Then
\[F_{1}(t)=\begin{cases}-F(t),&|t|\in[0,t_{0}),\\ -F(t_{0}),&|t|\in[t_{0},+\infty),\end{cases}\quad F_{2}(t)=\begin{cases}0,&|t| \in[0,t_{0}),\\ F(t)-F(t_{0}),&|t|\in[t_{0},+\infty).\end{cases} \tag{10}\]
Remark that \(F_{1}(t)=-F(t)\) and \(F_{2}(t)=0\) in the case \(t_{0}=+\infty\).
**Lemma 2.1**.: _Assume (F1)-(F4). The following statements hold._
* _For_ \(t>0\)_,_ \(F_{1}(\sqrt{t})\) _is nondecreasing and concave, and_ \(F_{2}(\sqrt{t})\) _is nondecreasing and convex._
* _There is_ \(C>0\) _such that for each_ \(t>0\)__ \[f(t)\leq f_{2}(t)\leq Ct^{1+\frac{4}{N}}\quad\text{and}\quad F(t)\leq F_{2}(t) \leq Ct^{2+\frac{4}{N}}.\] (11) _Moreover, for any_ \(\tau>0\) _there is_ \(C_{\tau}>0\) _such that_ \[f(t)\leq(c_{0}+\tau)t^{1+\frac{4}{N}}+C_{\tau}t\quad\text{and}\quad F(t)\leq(c_ {0}+\tau)t^{2+\frac{4}{N}}+C_{\tau}t^{2}.\] (12)
* \(t\mapsto F(t)/t^{2}\) _is strictly increasing for_ \(t>0\) _and_ \(f(s)s>2F(s)\) _a.e._ \(s\in\mathbb{R}\setminus\{0\}\)_. Similarly,_ \(t\mapsto F_{1}(t)/t^{2}\) _is nonincreasing for_ \(t>0\) _and_ \(f_{1}(s)s\leq 2F_{1}(s)\)_,_ \(s\in\mathbb{R}\)_._
Proof.: (i) follows from the definition of \(F_{1}\) and \(F_{2}\), and (F4'). (ii) follows from (F2) and (F3). By (F4'), we have
\[t^{2}F(u)=t^{2}F(\sqrt{t^{-2}(tu)^{2}+(1-t^{-2})0})<F(tu)+(t^{2}-1)F(0)=F(tu), \quad\text{for}\quad t>1,u\neq 0, \tag{13}\]
implying that \(t\mapsto F(t)/t^{2}\) is strictly increasing for \(t>0\). Differentiating \(F(t)/t^{2}\), we know \(f(t)t>2F(t)\) a.e. \(t>0\). This inequality holds almost everywhere in \(\mathbb{R}\) by symmetry.
By (12) and the Gagliardo-Nirenberg inequality (5), it is clear that \(E_{\alpha}\) is well-defined for each \(\alpha\in(0,\alpha_{N})\), where \(\alpha_{N}:=(2c_{0}S(N))^{-\frac{N}{2}}\) if \(c_{0}>0\), \(\alpha_{N}=+\infty\) if \(c_{0}=0\).
**Lemma 2.2**.: _If \(E_{\alpha}\) is attained by some \(u\), then \(f(u)u\in L^{1}(\mathbb{R}^{N})\) and \(u\) satisfies_
\[-\Delta u=f(u)+\lambda u,\]
_where_
\[\lambda=\alpha^{-1}\left(\int_{\mathbb{R}^{N}}|\nabla u|^{2}-\int_{\mathbb{R }^{N}}f(u)u\right).\]
Proof.: By Lemma 2.1 (iii), \(\int_{\mathbb{R}^{N}}f(u)u\geq 2\int_{\mathbb{R}^{N}}F(u)>-\infty\). By this and (11), \(f(u)u\in L^{1}(\mathbb{R}^{N})\). Note that
\[E_{\alpha}\leq E_{n,\alpha}:=\inf\left\{\,J(u)\,\,\bigg{|}\,u\in H^{1}_{0}(B_{n}), \int_{B_{n}}u^{2}=\alpha\,\,\right\}.\]
Taking \(\varphi\in C^{\infty}_{0}(B_{1})\) with \(0\leq\varphi\leq 1\) in \(B_{1}\), \(\varphi=1\) in \(B_{1/2}\), we set
\[u_{n}=\alpha^{1/2}|\varphi(n^{-1}\cdot)u|_{2}^{-1}\varphi(n^{-1}\cdot)u.\]
Then it is easy to check that
\[u_{n}\to u\quad\text{in }H^{1}(\mathbb{R}^{N}),\quad\frac{1}{2}\int_{ \mathbb{R}^{N}}|\nabla u_{n}|^{2}-\int_{\mathbb{R}^{N}}F_{2}(u_{n})\to\frac{1 }{2}\int_{\mathbb{R}^{N}}|\nabla u|^{2}-\int_{\mathbb{R}^{N}}F_{2}(u).\]
On the other hand, since \(|u_{n}|\leq 2|u|\), we have \(F_{1}(u_{n})\leq F_{1}(2u)\). Similarly to (13), \(F_{1}(2u)\leq 4F_{1}(u)\in L^{1}(\mathbb{R}^{N})\). Hence, by the Lebesgue convergence theorem,
\[E_{n,\alpha}\leq J(u_{n})\to E_{\alpha}.\]
By the Ekeland variational principle, there is \(\lambda_{n}\in\mathbb{R}\) such that
\[\|J^{\prime}(u_{n})-\lambda_{n}u_{n}\|_{H^{-1}(B_{n})}\to 0.\]
Since \(f_{1}(u_{n})u_{n}\leq 2F_{1}(u_{n})\), we can conclude that
\[\lambda_{n}|u_{n}|_{2}^{2}=J^{\prime}(u_{n})u_{n}+o_{n}(1)\to\int_{\mathbb{R}^{N}}|\nabla u|^{2}-\int_{\mathbb{R}^{N}}f(u)u.\]
Hence,
\[\lambda_{n}\to\lambda=\alpha^{-1}\left(\int_{\mathbb{R}^{N}}|\nabla u|^{2}-\int_{\mathbb{R}^{N}}f(u)u\right).\]
On the other hand, for any \(\varphi\in C^{\infty}_{0}(\mathbb{R}^{N})\), we have \(\operatorname{supp}\varphi\subset B_{n}\) when \(n\) is sufficiently large, and hence
\[J^{\prime}(u_{n})\varphi-\lambda_{n}\int_{B_{n}}u_{n}\varphi\to 0.\]
Thus, \(u\) solves \(-\Delta u=f(u)+\lambda u\).
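We remark that the expression for \(\lambda\) in Lemma 2.2 is the one obtained formally by testing the equation with \(u\): since \(-\Delta u=f(u)+\lambda u\) and \(|u|_{2}^{2}=\alpha\),
\[\int_{\mathbb{R}^{N}}|\nabla u|^{2}=\int_{\mathbb{R}^{N}}f(u)u+\lambda\alpha,\qquad\text{i.e.,}\qquad\lambda=\alpha^{-1}\left(\int_{\mathbb{R}^{N}}|\nabla u|^{2}-\int_{\mathbb{R}^{N}}f(u)u\right).\]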
Note also that
\[J(u)\geq\frac{1}{2}|\nabla u|_{2}^{2}-\int_{\mathbb{R}^{N}}F_{2}(u).\]
Therefore,
\[E_{\alpha}\geq\widehat{E}_{\alpha}:=\inf\Big{\{}\frac{1}{2}\int_{\mathbb{R}^{N}} |\nabla u|^{2}-\int_{\mathbb{R}^{N}}F_{2}(u)\;\Big{|}\;u\in H^{1}(\mathbb{R}^{N }),\quad\int_{\mathbb{R}^{N}}u^{2}=\alpha\Big{\}}.\]
**Lemma 2.3**.: _The following statements hold._
1. \(E_{\alpha}\) _is nonnegative for small_ \(\alpha\)_._
2. \(\alpha\mapsto E_{\alpha}\) _is midpoint concave in_ \((0,\alpha_{N})\)_, i.e., for any_ \(\alpha\in(0,\alpha_{N})\) _and_ \(\theta\in(0,1)\) _with_ \(\alpha+\theta\alpha\in(0,\alpha_{N})\)_,_ \[\frac{1}{2}(E_{\alpha-\theta\alpha}+E_{\alpha+\theta\alpha})\leq E_{\alpha}.\] (14) _If_ \(E_{\alpha_{0}}\) _is attained for some_ \(\alpha_{0}>0\)_, then the inequality (_14_) is strict for_ \(\alpha_{0}\) _and every_ \(\theta\in(0,1)\) _with_ \(\alpha_{0}+\theta\alpha_{0}\in(0,\alpha_{N})\)_._
3. \(\alpha\mapsto E_{\alpha}\) _is continuous and concave in_ \((0,\alpha_{N})\)_, and_ \(\lim_{\alpha\to 0}E_{\alpha}=0\)_._
Proof.: (i) By (11) and the Gagliardo-Nirenberg inequality, we have
\[J(u)\geq\frac{1}{2}|\nabla u|_{2}^{2}-C\int_{\mathbb{R}^{N}}|u|^{2+\frac{4}{N}}\geq\left(\frac{1}{2}-C(N)\alpha^{\frac{2}{N}}\right)|\nabla u|_{2}^{2},\quad\text{ where }|u|_{2}^{2}=\alpha.\]
Then we can conclude that \(E_{\alpha}\geq 0\) if \(\alpha\) is sufficiently small.
(ii) Let \(\alpha>0\) and \(\theta\in(0,1)\). Assume \(\{u_{n}\}\subset H^{1}(\mathbb{R}^{N})\) is such that
\[J(u_{n})\leq E_{\alpha}+\frac{1}{n},\quad|u_{n}|_{2}^{2}=\alpha.\]
Then by (F4"),
\[E_{\alpha-\theta\alpha}+E_{\alpha+\theta\alpha}\leq J\left(\sqrt{1-\theta}u_{n}\right)+J\left(\sqrt{1+\theta}u_{n}\right)\] \[= |\nabla u_{n}|_{2}^{2}-\int_{\mathbb{R}^{N}}\left(F\left(\sqrt{1-\theta}u_{n}\right)+F\left(\sqrt{1+\theta}u_{n}\right)\right)\] \[< 2J(u_{n})\leq 2E_{\alpha}+\frac{2}{n}.\]
Hence, letting \(n\to\infty\), we have the midpoint concavity. Moreover, if \(E_{\alpha}\) is attained, then we just choose \(u\in H^{1}(\mathbb{R}^{N})\) such that \(J(u)=E_{\alpha}\) and \(|u|_{2}^{2}=\alpha\). Hence, the inequality holds strictly.
(iii) To see \(E_{\alpha}\) is continuous and concave, it suffices to show that \(E_{\alpha}\) is bounded on some interval (see [24]). Since \(f_{2}(s)\) is either identically zero or satisfies the assumptions in [36, Lemma 2.3], we conclude that the function \(\alpha\mapsto\widehat{E}_{\alpha}\) is continuous in \((0,+\infty)\). On the other hand, let \(u_{\alpha}=\sqrt{\alpha}u_{1}\), where \(u_{1}\in C_{0}^{\infty}(\mathbb{R}^{N})\) is chosen such that \(|u_{1}|_{2}^{2}=1\). We have
\[E_{\alpha}\leq J(u_{\alpha})=\frac{\alpha}{2}\int_{\mathbb{R}^{N}}|\nabla u_{1}|^{2}-\int_{\mathbb{R}^{N}}F(\sqrt{\alpha}u_{1})\leq\frac{\alpha}{2}\int_{\mathbb{R}^{N}}|\nabla u_{1}|^{2}+\int_{\mathbb{R}^{N}}F_{1}(\sqrt{\alpha}u_{1}).\]
Hence, \(E_{\alpha}\) is bounded in any finite subinterval of \((0,+\infty)\). Then \(E_{\alpha}\) must be continuous and concave in \((0,+\infty)\). Note that for \(\alpha\in(0,1)\), \(0\leq F_{1}(\sqrt{\alpha}u_{1})\leq F_{1}(u_{1})\). By the Lebesgue convergence theorem, \(\lim_{\alpha\to 0^{+}}\int_{\mathbb{R}^{N}}F_{1}(\sqrt{\alpha}u_{1})=0\). Therefore, \(\lim_{\alpha\to 0^{+}}E_{\alpha}=0\).
By Lemma 2.3 (iii), we have:
**Lemma 2.4**.: _Let \(\alpha,\beta>0\) and \(t>1\). Then \(E_{t\alpha}\leq tE_{\alpha}\) for \(t\alpha\in(0,\alpha_{N})\), and \(E_{\alpha+\beta}\leq E_{\alpha}+E_{\beta}\) for \(\alpha+\beta\in(0,\alpha_{N})\). Both inequalities hold strictly if \(E_{\alpha}\) is attained._
Proof.: By concavity, \(E_{\alpha+(1-t^{-1})\beta}\geq t^{-1}E_{t\alpha}+(1-t^{-1})E_{\beta}\) for \(t\geq 1\), \(\alpha,\beta>0\). Letting \(\beta\to 0\), we have \(E_{t\alpha}\leq tE_{\alpha}\). Then setting \(t=1+\frac{\beta}{\alpha}\), we have
\[\alpha E_{\alpha+\beta}\leq(\alpha+\beta)E_{\alpha}. \tag{15}\]
Interchanging \(\alpha\) and \(\beta\), we have \(\beta E_{\alpha+\beta}\leq(\alpha+\beta)E_{\beta}\). Hence, \(E_{\alpha+\beta}\leq E_{\alpha}+E_{\beta}\).
Now we assume further that \(E_{\alpha}\) is attained. We can take \(\delta\in(0,\alpha)\) so that by Lemma 2.3 (ii)
\[E_{2\alpha}\leq E_{2\alpha-\delta}+E_{\delta}<2E_{\alpha}.\]
When \(t=3,4,\cdots\), we have
\[E_{t\alpha}\leq E_{(t-2)\alpha}+E_{2\alpha}<tE_{\alpha}.\]
When \(t\in(1,2)\), we have
\[E_{(2-t)\alpha}+E_{t\alpha}<2E_{\alpha}\quad\text{and}\quad(2-t)E_{\alpha}=(2 -t)E_{(2-t)^{-1}(2-t)\alpha}\leq E_{(2-t)\alpha}.\]
Hence, \(E_{t\alpha}<tE_{\alpha}\) for \(t\in(1,2)\). On the other hand, when \(t\in(k+1,k+2)\), \(k=1,2,\cdots\), we have
\[E_{t\alpha}\leq E_{(t-k)\alpha}+E_{k\alpha}<tE_{\alpha}.\]
Hence, \(E_{t\alpha}<tE_{\alpha}\) for any \(t>0\). As a result, (15) holds strictly. Then \(E_{\alpha+\beta}<E_{\alpha}+E_{\beta}\).
**Lemma 2.5**.: _Assume \(\int_{\mathbb{R}^{N}}F_{1}(u_{n})+F_{2}(u_{n})\) is bounded. The following statements hold_
* _If_ \(\left|u_{n}\right|_{2+\frac{4}{N}}\to 0\)_, then_ \(\left|u_{n}\right|_{2}\to 0\)_._
* _If_ \(\left|u_{n}\right|_{2+\frac{4}{N}}\) _is bounded and_ \(u_{n}\to u\) _a.e., then_ \(F_{1}(u)\in L^{1}(\mathbb{R}^{N})\) _and_ \[\int_{\mathbb{R}^{N}}F_{1}(u_{n})-\int_{\mathbb{R}^{N}}F_{1}(u_{n}-u)\to\int_{ \mathbb{R}^{N}}F_{1}(u),\quad\int_{\mathbb{R}^{N}}F_{2}(u_{n})-\int_{\mathbb{ R}^{N}}F_{2}(u_{n}-u)\to\int_{\mathbb{R}^{N}}F_{2}(u).\]
Proof.: (i) By (F2), for any \(\tau>0\), there is \(\delta>0\) such that \(f_{1}(t)>\tau^{-1}t\) and \(F_{1}(t)>\frac{1}{2}\tau^{-1}t^{2}\) for \(t\in(0,\delta)\). Then
\[\int_{\mathbb{R}^{N}}u_{n}^{2}=\int_{\left|u_{n}\right|<\delta}u_{n}^{2}+\int_ {\left|u_{n}\right|\geq\delta}u_{n}^{2}\leq 2\tau\int_{\mathbb{R}^{N}}F_{1}(u_{n})+ \delta^{-\frac{4}{N}}\int_{\mathbb{R}^{N}}\left|u_{n}\right|^{2+\frac{4}{N}}.\]
Hence,
\[\limsup_{n\to\infty}\int_{\mathbb{R}^{N}}u_{n}^{2}\leq 2\tau\limsup_{n\to \infty}\int_{\mathbb{R}^{N}}F_{1}(u_{n}).\]
Since \(\tau>0\) is arbitrary, \(|u_{n}|_{2}\to 0\), which proves (i).
(ii) We only show the result for \(F_{1}\), because the result for \(F_{2}(\cdot)\) follows directly from the Brezis-Lieb lemma. Since \(F_{1}(\sqrt{\cdot})\) is concave in \((0,+\infty)\) and \(F_{1}(0)=0\), similar to the proof of Lemma 2.4, we have for \(r>1\), \(t>0\) and \(s>0\),
\[F_{1}(\sqrt{rt})\leq rF_{1}(\sqrt{t})\quad\text{and}\quad F_{1}(\sqrt{t+s})\leq F_{1}(\sqrt{t})+F_{1}(\sqrt{s}).\]
Now by the inequalities above and the nondecreasing property, for each \(s,t>0\) and \(\tau\in(0,1)\) we have
\[F_{1}(t+s)=F_{1}(\sqrt{(t+s)^{2}})\leq F_{1}(\sqrt{(1+\tau)t^{2}+(1+\tau^{-1})s^{2}})\leq(1+\tau)F_{1}(t)+(1+\tau^{-1})F_{1}(s).\]
Hence, when \(st\geq 0\)
\[0\leq F_{1}(t+s)-F_{1}(t)\leq\tau F_{1}(t)+(1+\tau^{-1})F_{1}(s). \tag{16}\]
If \(st<0\), then
\[F_{1}(t+s)=F_{1}(\sqrt{(|t|-|s|)^{2}})\leq F_{1}(\sqrt{t^{2}+s^{2}})\leq F_{1 }(t)+F_{1}(s).\]
When \(st<0\) with \(|s|\geq|t|\), we have
\[-F_{1}(s)\leq-F_{1}(t)\leq F_{1}(t+s)-F_{1}(t)\leq F_{1}(s). \tag{17}\]
On the other hand, when \(st<0\) with \(|t|>|s|\), we have
\[F_{1}(t)=F_{1}(|t|-|s|+|s|)\leq(1+\tau)F_{1}(|t|-|s|)+(1+\tau^{-1})F_{1}(s)\leq F _{1}(|t|-|s|)+\tau F_{1}(t)+(1+\tau^{-1})F_{1}(s).\]
Then for \(st<0\) with \(|t|>|s|\),
\[-\tau F_{1}(t)-(1+\tau^{-1})F_{1}(s)\leq F_{1}(|t|-|s|)-F_{1}(t)=F_{1}(t+s)-F_ {1}(t)\leq F_{1}(s). \tag{18}\]
By (16), (17) and (18), we have for each \(s,t\in\mathbb{R}\),
\[|F_{1}(t+s)-F_{1}(t)|\leq\tau F_{1}(t)+(1+\tau^{-1})F_{1}(s).\]
Then \(F_{1}\) satisfies the assumption of the general Brezis-Lieb lemma ([6, Theorem 2]).
Now we are ready to show Theorem 1.1.
Proof of Theorem 1.1.: Let \(u_{n}\in H^{1}(\mathbb{R}^{N})\) be such that \(|u_{n}|_{2}^{2}=\alpha\) and \(J(u_{n})\to E_{\alpha}\). Then by Gagliardo-Nirenberg inequality (5) and (F3), \(\{u_{n}\}\) are bounded in \(H^{1}(\mathbb{R}^{N})\). We claim that
\[\limsup_{n\to\infty}\sup_{y\in\mathbb{R}^{N}}\int_{B_{1}(y)}|u_{n}|^{2}>0.\]
Otherwise, by Lions' lemma, \(|u_{n}|_{2+\frac{4}{N}}\to 0\). However, \(\int_{\mathbb{R}^{N}}F_{1}(u_{n})\leq J(u_{n})+\int_{\mathbb{R}^{N}}F_{2}(u_{n})\) is bounded. Then we have \(|u_{n}|_{2}\to 0\), a contradiction. Hence, there is \(y_{n}\in\mathbb{R}^{N}\) such that, up to a subsequence, \(u_{n}(\cdot-y_{n})\rightharpoonup u\) for some \(u\in H^{1}(\mathbb{R}^{N})\setminus\{0\}\). Setting \(v_{n}=u_{n}(\cdot-y_{n})-u\), \(\beta=|u|_{2}^{2}\leq\alpha\), we have
\[|v_{n}|_{2}^{2}\to\alpha-\beta,\quad J(v_{n})\to E_{\alpha}-J(u)\leq E_{\alpha}-E_{\beta}.\]
When \(\beta<\alpha\) and \(J(u)=E_{\beta}\), then \(E_{\beta}\) is attained and
\[E_{\alpha-\beta}=\lim_{n\to\infty}E_{|v_{n}|_{2}^{2}}\leq\lim_{n\to\infty}J(v_{n})=E_{\alpha}-E_{\beta}<E_{\alpha-\beta}.\]
When \(\beta<\alpha\) and \(J(u)>E_{\beta}\), then
\[E_{\alpha-\beta}=\lim_{n\to\infty}E_{|v_{n}|_{2}^{2}}\leq\lim_{n\to\infty}J(v_{n})=E_{\alpha}-J(u)<E_{\alpha}-E_{\beta}\leq E_{\alpha-\beta}.\]
In either case, we have a contradiction. Hence, \(\beta=\alpha\), and we have \(u_{n}(\cdot-y_{n})\to u\) in \(L^{2}(\mathbb{R}^{N})\). So \(u_{n}(\cdot-y_{n})\to u\) in \(L^{2+4/N}(\mathbb{R}^{N})\) and \(\int_{\mathbb{R}^{N}}F_{2}(u_{n})\to\int_{\mathbb{R}^{N}}F_{2}(u)\). Then
\[E_{\alpha}\leq J(u)\leq\liminf_{n\to\infty}\left(\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla u_{n}|^{2}+\int_{\mathbb{R}^{N}}F_{1}(u_{n})\right)-\int_{\mathbb{R}^{N}}F_{2}(u)=\lim_{n\to\infty}J(u_{n})=E_{\alpha}.\]
Therefore, \(E_{\alpha}\) is attained by \(u\), and also by \(|u|\). Since \(E_{\alpha}\) is attained for each \(\alpha\), it is strictly midpoint concave, and thus strictly concave. (i) holds. (ii) follows from Lemma 2.3 (iii) and (5).
To see (iii), we assume further \(c_{0}=0\). If \(f\) admits a zero, then \(\lim_{s\to+\infty}f(s)=\lim_{s\to+\infty}F(s)=+\infty\). We can find \(u_{0}\in C_{0}^{\infty}(\mathbb{R}^{N})\) such that \(\int_{\mathbb{R}^{N}}F(u_{0})>0\). Hence, when \(c_{0}=0\),
\[J(u_{0}(t^{-\frac{1}{N}}\cdot))=\frac{1}{2}t^{1-\frac{2}{N}}\int_{\mathbb{R}^{N}}|\nabla u_{0}|^{2}-t\int_{\mathbb{R}^{N}}F(u_{0})\to-\infty\text{ as }t\to+\infty.\]
By \(|u_{0}(t^{-\frac{1}{N}}\cdot)|_{2}^{2}=t|u_{0}|_{2}^{2}\), we have \(E_{\alpha}\to-\infty\) as \(\alpha\to+\infty\). By this, the strict concavity and (ii), \(E_{\alpha}\) has a unique zero in \((0,+\infty)\).
If \(f\) is negative in \((0,+\infty)\), then \(F(s)=-F_{1}(s)\) for each \(s\). Hence, \(E_{\alpha}\geq 0\) for each \(\alpha\). By the strict concavity and (ii), \(E_{\alpha}\) is strictly increasing.
## 3 Preliminaries for the proof of Theorem 1.2
By a change of scaling, (1) becomes
\[\begin{cases}-\Delta u+V(\varepsilon x)u=f(u)+\lambda u,\\ \int_{\mathbb{R}^{N}}u^{2}=\alpha.\end{cases} \tag{19}\]
We will solve (19) under the assumptions (F1)-(F5) and (V1)-(V3). We assume that \(\Omega\subset B(0,M_{0}/2)\) for some \(M_{0}>0\), and without loss of generality,
\[\inf_{B(0,M_{0})}V=1.\]
For any \(O\) satisfying (V3) such that \(\mathcal{V}\subset O\subset\overline{O}\subset\Omega\), we can fix \(\delta_{0}\in(0,1)\) small such that \(O^{5\delta_{0}}\subset\Omega\) and
\[\inf_{O^{3\delta_{0}}\setminus O^{\delta_{0}}}|\nabla V|\geq\nu_{0},\]
for some \(\nu_{0}>0\), where
\[O^{\delta}:=\{\,x\in\mathbb{R}^{N}\mid\text{dist}(x,O)\leq\delta\,\}\quad \text{for}\quad\delta>0.\]
We now fix
\[\mu_{0}=\min_{x\in O^{3\delta_{0}}}V(x). \tag{20}\]
Let \(\widetilde{V}:\mathbb{R}^{N}\to[1,+\infty)\) be a function such that
\[\widetilde{V}(x)=\begin{cases}V(x),&|x|<M_{0};\\ \max\{V(x),|x|^{2}\},&|x|\geq M_{0}.\end{cases} \tag{21}\]
We will work on the Hilbert space
\[H_{\varepsilon}:=\left\{\,u\in H^{1}(\mathbb{R}^{N})\,\bigg{|}\,\int_{ \mathbb{R}^{N}}\widetilde{V}(\varepsilon x)u^{2}\mathrm{d}x<\infty\,\right\}, \tag{22}\]
with inner product
\[(u,v)_{\varepsilon}:=\int_{\mathbb{R}^{N}}\nabla u\nabla v+\widetilde{V}( \varepsilon x)uv,\]
and norm \(\|u\|_{\varepsilon}:=\sqrt{(u,u)_{\varepsilon}}\). We also denote the norm on dual space by \(\|\cdot\|_{H_{\varepsilon}^{-1}}\). Furthermore, we only prove the existence of \(\ell\)-peak solutions for \(\ell\geq 2\) since the case \(\ell=1\) is much simpler.
### The limit system
We first study the solution \((\lambda,\mathbf{v}):=(\lambda,v_{1},v_{2},\cdots,v_{\ell})\in\mathbb{R}\times H ^{1}(\mathbb{R}^{N})^{\ell}\) to the system
\[\begin{cases}-\Delta v_{i}=f(v_{i})+\lambda v_{i}\,\,\,\text{in}\,\,\,\mathbb{ R}^{N},\\ v_{i}(x)>0,\,\lim_{|x|\to\infty}v_{i}(x)=0,\quad i=1,2,\cdots,\ell,\\ \sum_{i=1}^{\ell}|v_{i}|_{2}^{2}=\alpha.\end{cases} \tag{23}\]
Let \(\ell^{-1}\alpha\in(0,\alpha_{N})\). Then there is a minimizer \(u_{0}\) for \(E_{\ell^{-1}\alpha}\). We may assume \(u_{0}>0\) and \(u_{0}(x)=u_{0}(|x|)\). It is clear that for some \(\lambda\in\mathbb{R}\), \((\lambda,u_{0},u_{0},\cdots,u_{0})\in\mathbb{R}\times H^{1}(\mathbb{R}^{N})^{\ell}\) is a solution to (23). Problem (23) is related to the following functional
\[\mathbb{J}(\mathbf{v}):=\sum_{i=1}^{\ell}J(v_{i})\]
defined on
\[\mathbb{M}_{\alpha}:=\{\,\mathbf{v}=(v_{1},\cdots,v_{\ell})\in H^{1}(\mathbb{R}^ {N})^{\ell}\,|\,\sum_{i=1}^{\ell}|v_{i}|_{2}^{2}=\alpha\,\}\,.\]
We call \(\mathbf{v}\) a critical point to \(\mathbb{J}\) on \(\mathbb{M}_{\alpha}\) if \((\lambda,\mathbf{v})\) solves (23) for some \(\lambda\). Set
\[S_{\ell-1}:=\left\{\mathbf{s}=(s_{1},\cdots,s_{\ell})\in[0,1]^{\ell}\,\bigg{|}\, \sum_{i=1}^{\ell}s_{i}=1\right\}.\]
For each \(\mathbf{s}\in S_{\ell-1}\), define
\[\gamma_{0}(\mathbf{s})=(\sqrt{\ell s_{1}}u_{0},\cdots,\sqrt{\ell s_{\ell}}u_{0}) \in\mathbb{M}_{\alpha}.\]
**Proposition 3.1**.: _For each closed neighborhood \(S\subset S_{\ell-1}\) of \(\mathbf{s}^{0}:=(\ell^{-1},\cdots,\ell^{-1})\), we define_
\[\Gamma=\{\,\gamma\in C(S,\mathbb{M}_{\alpha})\mid\gamma=\gamma_{0}\text{ on }\partial S\,\}.\]
_There holds_
\[\mathbb{J}(\gamma_{0}(\mathbf{s}))<\ell E_{\ell^{-1}\alpha}\quad\text{if}\quad\bm {s}\neq\mathbf{s}^{0}. \tag{24}\]
_Moreover,_
\[\ell E_{\ell^{-1}\alpha}=\inf_{\gamma\in\Gamma}\sup_{\mathbf{s}\in S}\mathbb{J}( \gamma(\mathbf{s})). \tag{25}\]
Proof.: By the strict convexity of \(F(\sqrt{\cdot})\), for each \(\mathbf{s}\neq\mathbf{s}^{0}\), we have
\[\mathbb{J}(\gamma_{0}(\mathbf{s}))= \frac{\ell}{2}\int_{\mathbb{R}^{N}}|\nabla u_{0}|^{2}-\sum_{j=1}^{ \ell}\int_{\mathbb{R}^{N}}F(\sqrt{\ell s_{j}}u_{0})\] \[= \frac{\ell}{2}\int_{\mathbb{R}^{N}}|\nabla u_{0}|^{2}-\ell\int_{ \mathbb{R}^{N}}\ell^{-1}\sum_{j=1}^{\ell}F(\sqrt{\ell s_{j}u_{0}^{2}})\] \[< \frac{\ell}{2}\int_{\mathbb{R}^{N}}|\nabla u_{0}|^{2}-\ell\int_{ \mathbb{R}^{N}}F\left(\sqrt{\ell^{-1}\sum_{j=1}^{\ell}\ell s_{j}u_{0}^{2}} \right)=\ell E_{\ell^{-1}\alpha}.\]
On the other hand, according to the Brouwer degree theory, for each \(\gamma\in\Gamma\), there exists \(\mathbf{s}\in S\) such that
\[|(\gamma(\mathbf{s}))_{i}|_{2}^{2}=\ell^{-1}\alpha,\quad i=1,\cdots,\ell.\]
This implies (25).
By Lemma 2.4, \(E_{t\beta}<tE_{\beta}\) for \(t>1\), \(t\beta\in(0,\alpha_{N})\). So we have
\[\ell E_{\ell^{-1}\alpha}<(\ell+1)E_{(\ell+1)^{-1}\alpha}\text{ for each }\ell\geq 1\text{ such that }\ell^{-1}\alpha\in(0,\alpha_{N}). \tag{26}\]
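Indeed, (26) follows by applying Lemma 2.4 with \(\beta=(\ell+1)^{-1}\alpha\) and \(t=\frac{\ell+1}{\ell}>1\): since \(E_{\beta}\) is attained by Theorem 1.1, \(E_{\ell^{-1}\alpha}=E_{t\beta}<tE_{\beta}=\frac{\ell+1}{\ell}E_{(\ell+1)^{-1}\alpha}\), and multiplying by \(\ell\) gives (26).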
In particular, (26) is true for each \(\ell\geq 1\) if \(\alpha\in(0,\alpha_{N})\).
Let \(\mu_{0}\in[1,V_{0})\) be the constant fixed in (20). For \(\mathbf{\mu}=(\mu_{1},\cdots,\mu_{\ell})\in[\mu_{0},V_{0}]^{\ell}\) and \(\mathbf{v}\in\mathbb{M}_{\alpha}\), we consider the functional \(\mathbb{J}_{\mathbf{\mu}}\) defined by
\[\mathbb{J}_{\mathbf{\mu}}(\mathbf{v})=\mathbb{J}(\mathbf{v})+\sum_{i=1}^{\ell}\frac{1}{2} \mu_{i}|v_{i}|_{2}^{2}.\]
Similarly, we say \(\mathbf{v}\) is a critical point of \(\mathbb{J}_{\mathbf{\mu}}\) on \(\mathbb{M}_{\alpha}\) if there is \(\lambda\in\mathbb{R}\) such that \((\lambda,\mathbf{v})\) solves the following problem:
\[\begin{cases}-\Delta v_{i}=f(v_{i})-\mu_{i}v_{i}+\lambda v_{i}\,\text{ in }\,\mathbb{R}^{N},\\ \lim_{|x|\to\infty}v_{i}(x)=0,\quad i=1,2,\cdots,\ell,\\ \sum_{i=1}^{\ell}|v_{i}|_{2}^{2}=\alpha.\end{cases} \tag{27}\]
**Lemma 3.2**.: _Assume \(\alpha\in(0,\alpha_{N})\). For \(\beta\in[\frac{1}{2}\alpha,\alpha]\), \(\mathbf{\mu}=(\mu_{1},\cdots,\mu_{\ell})\in[\mu_{0},V_{0}]^{\ell}\), let \(\mathbf{v}=(v_{1},\cdots,v_{\ell})\) be a critical point of \(\mathbb{J}_{\mathbf{\mu}}\) on \(\mathbb{M}_{\beta}\) with a corresponding Lagrange multiplier \(\lambda\). If \(\mathbb{J}(\mathbf{v})\leq C_{0}\) for some constant \(C_{0}\), then there is \(D_{1}>0\) depending only on \(\alpha,C_{0},\ell,\mu_{0}\) such that_
\[\sum_{i=1}^{\ell}\|v_{i}\|_{H^{1}}+|\lambda|\leq D_{1}.\]
Proof.: By (12), we have
\[C_{0}\geq\sum_{i=1}^{\ell}J(v_{i})= \sum_{i=1}^{\ell}\left(\frac{1}{2}|\nabla v_{i}|_{2}^{2}-\int_{ \mathbb{R}^{N}}F(v_{i})\right)\] \[\geq \sum_{i=1}^{\ell}\left(\frac{1}{2}|\nabla v_{i}|_{2}^{2}-\int_{ \mathbb{R}^{N}}(c_{0}+\tau)|v_{i}|^{2+\frac{4}{N}}-C_{\tau}\int_{\mathbb{R}^{N }}|v_{i}|^{2}\right)\] \[\geq \frac{1}{2}\sum_{i=1}^{\ell}\left(1-2S(N)(c_{0}+\tau)(|v_{i}|_{2} ^{2})^{\frac{2}{N}}\right)|\nabla v_{i}|_{2}^{2}-C_{\tau}\alpha.\]
Here we fix \(\tau>0\) sufficiently small such that
\[1-2S(N)(c_{0}+\tau)(|v_{i}|_{2}^{2})^{\frac{2}{N}}\geq 1-2S(N)(c_{0}+\tau) \alpha^{\frac{2}{N}}>0.\]
Then we see that \(|\nabla v_{i}|_{2}^{2}\) is bounded by some constant depending only on \(\ell\), \(N\), \(\alpha\) and \(C_{0}\).
On the other hand, since \((\lambda-\mu_{i})|v_{i}|_{2}^{2}=|\nabla v_{i}|_{2}^{2}-\int_{\mathbb{R}^{N}}f (v_{i})v_{i}\), \(f(v_{i})v_{i}\geq 2F(v_{i})\), and \(f(v_{i})v_{i}\leq C|v_{i}|_{2+\frac{4}{N}}^{2+\frac{4}{N}}\), we have \(|\lambda-\mu_{i}||v_{i}|_{2}^{2}\leq C\) for some constant \(C>0\). Summing up, we have
\[|\lambda|\beta\leq\ell C+\max_{1\leq i\leq\ell}|\mu_{i}|\beta.\]
Then the conclusion follows.
**Lemma 3.3**.: _If \(u\geq 0\) satisfies \(-\Delta u\leq f(u)+tu\), then either \(u=0\) or \(|u|_{2}^{2}\geq 1/C_{t}\) for some \(C_{t}>0\) depending only on \(t\)._
Proof.: The conclusion follows from the Gagliardo-Nirenberg inequality:
\[|\nabla u|_{2}^{2}\leq\int_{\mathbb{R}^{N}}\left(f(u)u+tu^{2}\right)\leq\int_{\mathbb{R}^{N}}\left(\frac{f(u)}{u}+t\right)^{+}u^{2}\leq C_{t}|u|_{2+\frac{4}{N}}^{2+\frac{4}{N}}\leq C_{t}C(N)|\nabla u|_{2}^{2}|u|_{2}^{\frac{4}{N}},\]
where \(C_{t}>0\) is a constant depending only on \(t\). Hence, if \(u\not\equiv 0\), then \(|\nabla u|_{2}>0\) and dividing by \(|\nabla u|_{2}^{2}\) gives \(|u|_{2}^{4/N}\geq(C_{t}C(N))^{-1}\), which yields the conclusion after renaming the constant.
For \(\alpha\in(0,\alpha_{N})\), set
\[K_{\alpha}=\left\{\begin{array}{c}\mathbf{v}\in\mathbb{M}_{\alpha}\left|\begin{array} []{c}\mathbf{v}\text{ is a critical point of }\mathbb{J}_{\mathbf{\mu}}\text{ on }\mathbb{M}_{\alpha}\text{ for some }\mathbf{\mu}\in[\mu_{0},V_{0}]^{\ell},\\ v_{i}>0\text{ and }v_{i}(0)=\max_{\mathbb{R}^{N}}v_{i}\text{ for }i=1,2,\cdots,\ell\text{, and }\mathbb{J}_{\mathbf{\mu}}(\mathbf{v})\leq\ell E_{\ell ^{-1}\alpha}+\frac{1}{2}V_{0}\alpha\end{array}\right.\right\}.\]
Clearly, \(K_{\alpha}\neq\emptyset\). Moreover, set
\[\rho_{1}=\frac{1}{2}\min\{C_{t}^{-1},\ell^{-1}\alpha\} \tag{28}\]
where \(C_{t}\) is the constant in Lemma 3.3 with \(t=D_{1}+V_{0}\), and \(D_{1}\) is the constant in Lemma 3.2 for some \(C_{0}\in(\ell E_{\ell^{-1}\alpha},(\ell+1)E_{(\ell+1)^{-1}\alpha})\) (see (26)). We have
**Lemma 3.4**.: _There are \(C,c>0\) such that for each \(\mathbf{v}\in K_{\alpha}\), there hold \(|v_{i}|_{2}^{2}\geq 2\rho_{1}\) and_
\[|v_{i}(x)|\leq Ce^{-c|x|^{2}},\quad i=1,\cdots,\ell.\]
_Moreover, \(K_{\alpha}\) is compact in \(H^{1}(\mathbb{R}^{N})^{\ell}\) and \(H_{\varepsilon}^{\ell}\)._
Proof.: The conclusion follows from Lemma 3.2, Lemma 3.3, and Lemma 6.1.
In what follows, we write \(\mathbf{p}=(p_{1},\cdots,p_{\ell})\in(\mathbb{R}^{N})^{\ell}\), and set
\[\xi(\mathbf{p})=\min_{1\leq i\neq j\leq\ell}|p_{i}-p_{j}|. \tag{29}\]
The following estimate, whose proof will be given in the Appendix, is essential to obtain a minimax geometry for the functional of (19).
**Proposition 3.5**.: _Assume (F1)-F(5). Let \(\mathbf{v}=(v_{1},v_{2},\cdots,v_{\ell})\in K_{\alpha}\). Then there is \(C>0\) such that for sufficiently large \(L\),_
\[J(B\sum_{j=1}^{\ell}v_{j}(\cdot-p_{j}))+\frac{V_{0}}{2}\int_{\mathbb{R}^{N}}|B \sum_{j=1}^{\ell}v_{j}(\cdot-p_{j})|^{2}\leq\mathbb{J}(\mathbf{v})+\frac{V_{0}}{2} \alpha-C\xi(\mathbf{p})e^{-\frac{\sigma\xi(\mathbf{p})^{2}}{8}},\]
_where \(\mathbf{p}=(p_{1},\cdots,p_{\ell})\in(\mathbb{R}^{N})^{\ell}\) with \(\xi(\mathbf{p})\geq\frac{L}{2}\), and \(B=\alpha^{\frac{1}{2}}|\sum_{j=1}^{\ell}v_{j}(\cdot-p_{j})|_{2}^{-1}\)._
### Local centers of mass
We will introduce \(\ell\) local centers of mass \((\Upsilon_{1}(U),\cdots,\Upsilon_{\ell}(U))\) as in [8]. First by Lemma 3.4, we can find \(R_{0}>1\) such that for each \(\mathbf{U}=(U_{1},\cdots,U_{\ell})\in K_{\alpha}\), there holds
\[\|U_{j}\|_{L^{2}(B(0,R_{0}/2))}>\frac{3}{4}\rho_{1},\quad\|U_{j}\|_{L^{2}(\mathbb{ R}^{N}\setminus B(0,R_{0}))}<\frac{\rho_{1}}{8\ell}. \tag{30}\]
Then we have
**Lemma 3.6**.: _For \(u\in H^{1}(\mathbb{R}^{N}),(y_{1},\cdots,y_{\ell})\in(\mathbb{R}^{N})^{\ell},( U_{1},\cdots,U_{\ell})\in K_{\alpha}\) such that_
\[\xi(y_{1},\cdots,y_{\ell})>12R_{0},\quad\|u-\sum_{j=1}^{\ell}U_{j}(\cdot-y_{j} )\|<\frac{\rho_{1}}{16},\]
_there hold_
\[\int_{B(P,R_{0})}u^{2}\geq\frac{1}{2}\rho_{1}^{2}\quad\text{for}\quad P\in \bigcup_{j=1}^{\ell}\overline{B}(y_{j},R_{0}/2),\quad\int_{B(P,R_{0})}u^{2} \leq\frac{1}{16}\rho_{1}^{2}\quad\text{for}\quad P\notin\bigcup_{j=1}^{\ell}B( y_{j},2R_{0}).\]
We define
\[Z=\left\{\;u\in H^{1}(\mathbb{R}^{N})\;\middle|\;\|u-\sum_{j=1}^{\ell}U_{j}( \cdot-y_{j})\|<\frac{\rho_{1}}{16},\xi(y_{1},\cdots,y_{\ell})\geq 12R_{0},(U_{1}, \cdots,U_{\ell})\in K_{\alpha}\;\right\}. \tag{31}\]
For \(u\in H^{1}(\mathbb{R}^{N})\) and \(P\in\mathbb{R}^{N}\), we define
\[d(u,P)=\psi\left(\int_{B(P,R_{0})}u^{2}\right), \tag{32}\]
with \(\psi\in C_{0}^{\infty}([0,\infty),[0,1])\) satisfying
\[\psi(r)=\begin{cases}0&r\in[0,\frac{1}{16}\rho_{1}^{2}],\\ 1&r\in[\frac{1}{2}\rho_{1}^{2},\infty).\end{cases}\]
By Lemma 3.6, for any \(u\in Z\) there exist \(\ell\) disjoint balls \(B_{j}\) satisfying
\[\begin{cases}\mathrm{diam}B_{j}=5R_{0}&\text{ for all }j\in\{1,2,\cdots,\ell\},\\ d(u,\cdot)\not\equiv 0&\text{ on }B_{j}\text{ for all }j\in\{1,2,\cdots,\ell\},\\ d(u,\cdot)\equiv 0&\text{ on }\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell}B_{j}. \end{cases}\]
For \(B_{j}\), we define
\[\Upsilon_{j}(u)=\frac{\int_{B_{j}}d(u,P)P\,\mathrm{d}P}{\int_{B_{j}}d(u,P)\,\mathrm{d}P}\in B_{j}. \tag{33}\]
It is clear that \((\Upsilon_{1}(u),\cdots,\Upsilon_{\ell}(u))\) is uniquely determined up to permutation and it is independent of the choice of each \(B_{j}\). Similar to the argument of [8], we can assume that
\[\Upsilon(u)=(\Upsilon_{1}(u),\cdots,\Upsilon_{\ell}(u)),\]
is continuous up to permutations. Note that for a continuous function \(\varphi(\mathbf{p})\) which is independent of permutation of \(p_{i}\), \(\varphi(\Upsilon(u))\) is well defined and continuous. Moreover, similarly to [45, Lemma 2.5], we have the following properties of \(\Upsilon\).
**Lemma 3.7**.: _The following statements hold._
1. _For_ \(u\in Z\)_, we have_ \(|\Upsilon_{j}(u)-y_{j}|\leq 2R_{0}\;(j=1,2,\cdots,\ell)\) _up to permutation._
2. \(\Upsilon_{j}(u)\) _is_ \(C^{1}\) _continuous for each_ \(u\in Z\)_. Moreover, there exists a constant_ \(D_{2}>0\) _such that_ \[\sup_{u\in Z}\|\Upsilon_{j}^{\prime}(u)\|\leq D_{2}.\]
3. _if_ \(u,v\in Z\) _satisfy for some_ \(j\in\{1,\cdot\cdot\cdot,\ell\}\) _and_ \(h\in\mathbb{R}^{N}\)__ \[v(x-h)=u(x)\ \ \ \mbox{in}\ B(\Upsilon_{j}(u),4R_{0}),\] _then_ \(\Upsilon_{j}(v)=\Upsilon_{j}(u)-h\)_._
4. \(\Upsilon^{\prime}(u)v=0\) _if_ \(\operatorname{supp}v\subset\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell}B(\Upsilon _{j}(u),4R_{0})\)_._
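For instance, property (3) can be read off from (32)-(33) by a change of variables (a sketch; we assume the relevant balls lie inside \(B(\Upsilon_{j}(u),4R_{0})\), so that the hypothesis gives \(d(v,P-h)=d(u,P)\) for all \(P\) in the ball \(B_{j}\) associated with \(u\)): substituting \(Q=P-h\),
\[\Upsilon_{j}(v)=\frac{\int d(v,Q)Q\,\mathrm{d}Q}{\int d(v,Q)\,\mathrm{d}Q}=\frac{\int_{B_{j}}d(u,P)(P-h)\,\mathrm{d}P}{\int_{B_{j}}d(u,P)\,\mathrm{d}P}=\Upsilon_{j}(u)-h.\]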
### Penalized functional
We use the notation \(\frac{1}{\varepsilon}O^{\delta}=\{\,x\in\mathbb{R}^{N}\mid\varepsilon x\in O^{\delta}\,\}\) for \(\varepsilon,\delta>0\). Take \(\phi\in C_{0}^{\infty}(\mathbb{R}^{N})\) such that \(0\leq\phi\leq 1\) and \(|\nabla\phi|\leq 4/\delta_{0}\) in \(\mathbb{R}^{N}\), \(\phi=1\) for \(|x|\leq\delta_{0}/2\), and \(\phi=0\) for \(|x|\geq\delta_{0}\). Set \(\phi_{\varepsilon}(x)=\phi(\varepsilon x)\). For \(L\geq 100R_{0}\), set
\[Z_{L,\varepsilon}=\left\{\,\sum_{j=1}^{\ell}(\phi_{\varepsilon}U_{j})(\cdot-y _{j})\,\Bigg{|}\,(U_{1},\cdots,U_{\ell})\in K_{\alpha},\,\{\,y_{1},y_{2}, \cdots,y_{\ell}\,\}\subset\frac{1}{\varepsilon}O^{4\delta_{0}},\;\xi(y_{1}, \cdots,y_{\ell})\geq L\,\right\}.\]
By compactness of \(K_{\alpha}\) and the decay estimate of \(U_{j}\in K_{\alpha}\) (see Lemma 6.1), we know that
\[\|\sum_{j=1}^{\ell}(\phi_{\varepsilon}U_{j})(\cdot-y_{j})-\sum_{j=1}^{\ell}U_{ j}(\cdot-y_{j})\|\leq Ce^{-c\varepsilon^{-1}},\]
for some \(C,c>0\) independent of the choice of \(\varepsilon\), \(y_{j}\) and \(U_{j}\), \(j=1,\cdots,\ell\). Therefore, for \(\rho\leq\frac{1}{32}\rho_{1}\), if \(u\in H_{\varepsilon}\) is such that \(\operatorname{dist}_{H_{\varepsilon}}(u,Z_{L,\varepsilon})<\rho\), then \(\Upsilon(u)\) is well defined for small \(\varepsilon\), since \(\|w\|\leq\|w\|_{\varepsilon}\) holds for each \(w\in H_{\varepsilon}\).
For \(0<\rho\leq\frac{1}{32}\rho_{1}\), \(\delta\in[\delta_{0},3\delta_{0}]\), set
\[Z(\rho,\delta)=\left\{\,u\in\mathcal{M}_{\alpha}^{\varepsilon}\,\,\bigg{|} \,\operatorname{dist}_{H_{\varepsilon}}(u,Z_{L,\varepsilon})<\rho,\quad\max_{1 \leq j\leq\ell}\operatorname{dist}(\varepsilon\Upsilon_{j}(u),O)<\delta\, \right\}, \tag{34}\]
where \(\mathcal{M}_{\alpha}^{\varepsilon}:=\{\,u\in H_{\varepsilon}\mid\int_{\mathbb{R}^{N}}u^{2}=\alpha\,\}\). Note that \(Z(\rho,\delta)\) depends on \(L\) and \(\varepsilon\); we omit them for the sake of brevity. It is sufficient to impose
\[0<\varepsilon<\varepsilon_{L}:=(\frac{\delta}{4L})^{4}, \tag{35}\]
so that \(Z(\rho,\delta)\) is nonempty when \(L\) is large enough. In what follows, we will always assume \(\varepsilon\in(0,\varepsilon_{L})\) and \(L\) is sufficiently large.
**Remark 3.8**.: _Let \(\rho<\rho^{\prime}\) and \(\delta<\delta^{\prime}\). Then,_
\[\operatorname{dist}_{H_{\varepsilon}}(\partial Z(\rho^{\prime},\delta^{\prime }),Z(\rho,\delta))>0.\]
_In fact, if \(\operatorname{dist}_{H_{\varepsilon}}(u,Z_{L,\varepsilon})=\rho^{\prime}\), then \(\operatorname{dist}_{H_{\varepsilon}}(u,Z(\rho,\delta))\geq\rho^{\prime}-\rho\). If \(\operatorname{dist}_{H_{\varepsilon}}(u,Z_{L,\varepsilon})\leq\rho^{\prime}\) and \(\operatorname{dist}(\varepsilon\Upsilon_{j}(u),O)=\delta^{\prime}\) for some \(j\), then by Lemma 3.4 and Lemma 3.7 (i), for \(\varepsilon\) sufficiently small, \(\operatorname{dist}_{H_{\varepsilon}}(u,Z(\rho,\delta))>\|U_{j}\|_{ \varepsilon}/2\geq\rho_{1}/2\)._
As in [43], we choose \(H(s)\in C_{0}^{\infty}([-3,3];[0,1])\), with \(H(s)=1\) for \(|s|\leq 1\), \(H^{\prime}(s)\) is odd and \(-1\leq H^{\prime}(s)\leq 0\) for \(s\geq 0\). Denoting
\[\widetilde{V}_{\varepsilon}(x)=\widetilde{V}(\varepsilon x),\quad V_{ \varepsilon}(x)=V(\varepsilon x),\quad\overline{V}_{\varepsilon}=V_{ \varepsilon}-\widetilde{V}_{\varepsilon},\]
we define \(\Psi_{\varepsilon}\) by
\[\Psi_{\varepsilon}(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}\overline{V}_{ \varepsilon}(x)H(e^{\varepsilon|x|^{2}}u)u^{2}\mathrm{d}x. \tag{36}\]
Note that \(\Psi_{\varepsilon}\) is well-defined on \(H_{\varepsilon}\), and if \(u(x)\leq e^{-\varepsilon|x|^{2}}\) for \(x\in\mathbb{R}^{N}\setminus B(0,M_{0}/\varepsilon)\), then
\[\int_{\mathbb{R}^{N}}(\nabla u\nabla v+\widetilde{V}(\varepsilon x)uv)+\Psi_{ \varepsilon}^{\prime}(u)v=\int_{\mathbb{R}^{N}}(\nabla u\nabla v+V(\varepsilon x )uv),\quad u,v\in H_{\varepsilon}. \tag{37}\]
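A sketch of why (37) holds (reading the decay condition for \(|u|\)): wherever \(|e^{\varepsilon|x|^{2}}u|\leq 1\) one has \(H(e^{\varepsilon|x|^{2}}u)=1\) and \(H^{\prime}(e^{\varepsilon|x|^{2}}u)=0\), so the derivative of (36) there acts as \(v\mapsto\int\overline{V}_{\varepsilon}uv\); moreover \(\overline{V}_{\varepsilon}=0\) on \(B(0,M_{0}/\varepsilon)\) by (21). Hence, under the stated decay,
\[\Psi_{\varepsilon}^{\prime}(u)v=\int_{\mathbb{R}^{N}}\overline{V}_{\varepsilon}uv,\qquad\text{so}\qquad\int_{\mathbb{R}^{N}}(\nabla u\nabla v+\widetilde{V}_{\varepsilon}uv)+\Psi_{\varepsilon}^{\prime}(u)v=\int_{\mathbb{R}^{N}}(\nabla u\nabla v+V_{\varepsilon}uv),\]
since \(\widetilde{V}_{\varepsilon}+\overline{V}_{\varepsilon}=V_{\varepsilon}\).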
We have the following lemma.
**Lemma 3.9** (Corollary 2.2 of [43]).: _For some \(C,c>0\) independent of \(\varepsilon\), there holds_
\[\sup_{u\in H_{\varepsilon}}|\Psi_{\varepsilon}(u)|+\sup_{u\in H_{\varepsilon}} \|\Psi^{\prime}_{\varepsilon}(u)\|_{H_{\varepsilon}^{-1}}\leq Ce^{-c\varepsilon ^{-1}},\]
_where \(\|\cdot\|_{H_{\varepsilon}^{-1}}\) denotes the norm on the dual space of \(H_{\varepsilon}\)._
Let \(\xi(\boldsymbol{p})\) be the function in (29) for \(\boldsymbol{p}=(p_{1},\cdot\cdot\cdot,p_{\ell})\in(\mathbb{R}^{N})^{\ell}\). We note that \(\boldsymbol{p}\mapsto\min\left\{\,\xi(\boldsymbol{p}),\varepsilon^{-\frac{3} {4}}\,\right\}\) is Lipschitz continuous and independent of permutations of \(p_{i}\).
By the integral convolution with mollifiers, we can find a smooth function \(\xi_{1}(\boldsymbol{p})\in C^{1}((\mathbb{R}^{N})^{\ell})\) independent of permutations of \(p_{i}\), such that for some constant \(C(N,\ell)>0\) depending only on \(N,\ell\),
\[|\xi_{1}(\boldsymbol{p})-\min\left\{\,\xi(\boldsymbol{p}),\varepsilon^{-\frac{3}{4}}\,\right\}|\leq 1\text{ and }|\nabla\xi_{1}(\boldsymbol{p})|\leq C(N,\ell),\quad\boldsymbol{p}\in(\mathbb{R}^{N})^{\ell}.\]
Then \(u\mapsto\xi_{1}(\Upsilon(u))\) is well-defined and \(C^{1}\) continuous. Take \(\chi\in C^{\infty}(\mathbb{R}^{N};[0,1])\) such that
\[\chi=1\text{ in }\mathbb{R}^{N}\setminus B(0,\tfrac{1}{5})\text{, }\chi=0\text{ in }B(0,\tfrac{1}{10})\text{ and }|\nabla\chi|\leq 20.\]
Setting \(\chi_{u}(x)=\Pi_{j=1}^{\ell}\chi\left(\tfrac{x-\Upsilon_{j}(u)}{\xi_{1}( \Upsilon(u))}\right)\), we note that \(\chi_{u}\) is independent of permutations of \(\Upsilon_{j}(u)\). Define
\[\Phi_{\varepsilon}(u)=\left(\xi_{1}(\Upsilon(u))\int_{\mathbb{R}^{N}}\chi_{u} u^{2}\mathrm{d}x-1\right)_{+}^{2}.\]
Then, by Lemma 3.7, we can check that
**Lemma 3.10**.: _There is \(C_{0}>0\) independent of \(L,\varepsilon\) such that for \(u\in Z(\tfrac{\rho_{1}}{32},3\delta_{0})\) and any \(v\in H_{\varepsilon}\),_
\[\left|\Phi^{\prime}_{\varepsilon}(u)v-4\Phi_{\varepsilon}(u)^{\frac{1}{2}}\xi _{1}(\Upsilon(u))\int_{\mathbb{R}^{N}}\chi_{u}uv\right|\leq C_{0}\Phi_{ \varepsilon}(u)^{\frac{1}{2}}\|v\|_{\varepsilon}\int_{\mathbb{R}^{N}\setminus \cup_{j=1}^{\ell}B(\Upsilon_{j}(u),\tfrac{1}{10}\xi_{1}(\Upsilon(u)))}u^{2}.\]
_Moreover, if \(\operatorname{supp}v\subset\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell}B( \Upsilon_{j}(u),4R_{0})\), then_
\[\Phi^{\prime}_{\varepsilon}(u)v=4\Phi_{\varepsilon}(u)^{\frac{1}{2}}\xi_{1}( \Upsilon(u))\int_{\mathbb{R}^{N}}\chi_{u}uv.\]
We also modify the nonlinearity. Recalling the definitions of \(f_{1}\) and \(f_{2}\), we define the odd function
\[f_{2,K}(t):=\min\{f_{2}(t),f_{2}(K)\}\ \text{ for any }K>0\text{ and }t\geq 0.\]
Set \(f_{K}(t):=-f_{1}(t)+f_{2,K}(t)\), \(F_{2,K}(t):=\int_{0}^{t}f_{2,K}(s)\mathrm{d}s\) and \(F_{K}(t):=-F_{1}(t)+F_{2,K}(t)\). Then necessarily,
\[f_{K}(t)=\min\{f(t),f(K)\}.\]
Define the functional:
\[\Gamma_{\varepsilon,K}(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}(|\nabla u|^{2}+ \widetilde{V}(\varepsilon x)u^{2})-\int_{\mathbb{R}^{N}}F_{K}(u)+\Phi_{ \varepsilon}(u)+\Psi_{\varepsilon}(u),\quad u\in Z(\frac{\rho_{1}}{32},3 \delta_{0}). \tag{38}\]
We note that by [43, Lemma 2.3], \(\Gamma_{\varepsilon,K}\) is well-defined and is of class \(C^{1}\) on \(Z_{L}(3\delta_{0},\rho_{1})\). For \(u\in H_{\varepsilon}\), we also set
\[G(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}u^{2}\mathrm{d}x.\]
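Recall that, as \(\mathcal{M}_{\alpha}^{\varepsilon}\) is used throughout this section (for instance in the bound (40) below and in the normalization of \(\gamma_{0}\) in Section 4), one has \(\int_{\mathbb{R}^{N}}u^{2}=\alpha\) for \(u\in\mathcal{M}_{\alpha}^{\varepsilon}\); we record the corresponding derivative of \(G\), which is the source of the factor \(\alpha^{-1}\) in the estimates for \(\lambda\) below:
\[G^{\prime}(u)v=\int_{\mathbb{R}^{N}}uv\,\mathrm{d}x,\qquad G^{\prime}(u)u=\int_{\mathbb{R}^{N}}u^{2}\,\mathrm{d}x=\alpha\ \text{ for }u\in\mathcal{M}_{\alpha}^{\varepsilon}.\]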
**Lemma 3.11**.: _If \(u\in Z(\tfrac{\rho_{1}}{32},3\delta_{0})\) satisfies \(\Gamma_{\varepsilon,K}(u)\leq(\ell+1)E_{(\ell+1)^{-1}\alpha}+\tfrac{1}{2}V_{0}\alpha\), then the following quantities are bounded by a constant \(C_{0}>0\) independent of \(\varepsilon\), \(L\) or \(K\):_
\[\|u\|_{\varepsilon},\,\Phi_{\varepsilon}(u),\,\int_{\mathbb{R}^{N}}f_{1}(u)u, \,\int_{\mathbb{R}^{N}}f_{2,K}(u)u,\,\int_{\mathbb{R}^{N}}F_{1}(u),\,\int_{ \mathbb{R}^{N}}F_{2,K}(u),\,\xi_{1}(\Upsilon(u))\int_{\mathbb{R}^{N}}\chi_{u} u^{2}.\]
_If we assume additionally that \(\|\Gamma_{\varepsilon,K}^{\prime}(u)-\lambda G^{\prime}(u)\|_{H_{\varepsilon}^ {-1}}\leq 1\), then making \(C_{0}\) larger if necessary, it holds \(|\lambda|\leq C_{0}\)._
Proof.: Clearly, \(\|u\|_{\varepsilon}\leq C\) for some \(C>0\) independent of \(L,\varepsilon,K\). Hence, by \(|f_{2,K}(t)|\leq|f_{2}(t)|\leq C|t|^{1+\frac{4}{N}}\), we have \(\|f_{2,K}(u)u\|_{L^{1}(\mathbb{R}^{N})}+\|F_{2,K}(u)\|_{L^{1}(\mathbb{R}^{N})}\leq C\). Then, we have
\[\Phi_{\varepsilon}(u)+\int_{\mathbb{R}^{N}}F_{1}(u)\leq\Gamma_{\varepsilon,K}( u)-\frac{1}{2}\|u\|_{\varepsilon}^{2}+\int_{\mathbb{R}^{N}}F_{2,K}(u)-\Psi_{ \varepsilon}(u)\leq C,\quad\xi_{1}(\Upsilon(u))\int_{\mathbb{R}^{N}}\chi_{u}u^ {2}\leq\Phi_{\varepsilon}(u)^{\frac{1}{2}}+1\leq C.\]
By Lemma 2.1 (iii), there holds
\[\frac{1}{2}\int_{\mathbb{R}^{N}}f_{1}(u)u\leq\int_{\mathbb{R}^{N}}F_{1}(u) \leq C.\]
Thus the first conclusion follows. To show \(|\lambda|\leq C_{0}\), we see that
\[|\Gamma_{\varepsilon,K}^{\prime}(u)u|\leq\|u\|_{\varepsilon}^{2}+\int_{\mathbb{R}^{N}}|f_{K}(u)u|+|\Phi_{\varepsilon}^{\prime}(u)u|+|\Psi_{\varepsilon}^{\prime}(u)u|\leq C,\]
\[|\lambda|\leq\alpha^{-1}(\|u\|_{\varepsilon}+|\Gamma_{\varepsilon,K}^{\prime} (u)u|)\leq C_{0}.\]
which completes the proof.
**Remark 3.12**.: _Let \(u\in H^{1}(\mathbb{R}^{N})\) weakly solves the following equation_
\[-\Delta|u|\leq f_{2}(|u|)+|\lambda u|\quad\text{in }B(x,1),\]
_with \(\|u\|\leq C_{0}\) and \(|\lambda|\leq C_{0}\), where \(x\in\mathbb{R}^{N}\) is arbitrary and \(C_{0}\) is the constant in Lemma 3.11. Then by the fact \(f_{2}(t)\leq Ct^{1+\frac{4}{N}}\quad\text{for any }t\geq 0,\) and the subsolution estimates [26], it follows \(\|u\|_{L^{\infty}(B(x,1/2))}\leq K_{0}\) for some constant \(K_{0}>0\). Making \(K_{0}\) larger if necessary, then_
\[2\|u_{0}\|_{L^{\infty}(\mathbb{R}^{N})}\leq K_{0}. \tag{39}\]
_From now on, we fix \(K=K_{0}\), and denote \(\Gamma_{\varepsilon}(u):=\Gamma_{\varepsilon,K_{0}}(u)\). Moreover, we set \(\bar{f}_{2}=f_{2,K_{0}}\), \(\bar{F}_{2}=F_{2,K_{0}}\), \(\bar{f}=f_{K_{0}}\) and \(\bar{F}=F_{K_{0}}\), and hence there always hold \(\bar{f}_{2}(t)\leq f_{2}(K_{0})\) and \(\bar{f}(t)\leq f(t)\) for \(t\geq 0\)._
### A priori decay estimate
The following lemma is useful for obtaining a priori decay estimates.
**Lemma 3.13**.: _Let \(\theta>1\), \(b\geq 0\), \(R_{1},R>0\) be such that \(R>R_{1}+1\). Assume \(Q(r)\) is a nonincreasing function in \([R_{1},R]\) satisfying_
\[Q(r)\leq\theta^{-1}Q(r-1)+b\quad\text{for }r\in[R_{1}+1,R].\]
_Then_
\[Q(R)\leq\theta^{R_{1}+1}Q(R_{1})e^{-R\ln\theta}+\frac{\theta b}{\theta-1}.\]
Proof.: By the assumptions, we can get the conclusion from
\[(Q(R)-\frac{\theta b}{\theta-1})^{+}\leq\theta^{-1}(Q(R-1)-\frac{\theta b}{\theta-1})^{+}\leq\theta^{-\lfloor R-R_{1}\rfloor}(Q(R_{1})-\frac{\theta b}{\theta-1})^{+}.\qed\]
**Proposition 3.14**.: _There is \(\rho_{0}\in(0,\rho_{1}/96)\) and \(L_{1}\geq 100R_{0}\) such that the following statements hold for each \(L\geq L_{1}\) and \(\varepsilon\in(0,\varepsilon_{L})\). If \(u\in Z(3\rho_{0},3\delta_{0})\) and \(\lambda\in\mathbb{R}\) satisfy_
\[\Gamma_{\varepsilon}(u)\leq(\ell+1)E_{(\ell+1)^{-1}\alpha}+\frac{1}{2}V_{0}\alpha,\]
\[\|\Gamma_{\varepsilon}^{\prime}(u)-\lambda G^{\prime}(u)\|_{H_{\varepsilon}^{- 1}}\leq b_{\varepsilon}\quad\text{for some }b_{\varepsilon}\geq 0,\]
_then there is \(C,c>0\) independent of \(\varepsilon,L,b_{\varepsilon}\) such that \(|\lambda|\leq C(1+b_{\varepsilon})\) and for each \(R\geq 8R_{0}\),_
\[\int_{\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell}B(\Upsilon_{j}(u),R)}\big{(}|\nabla u|^{2}+u^{2}\big{)}\leq C(b_{\varepsilon}+e^{-cR}+e^{-\frac{c}{\varepsilon}}).\]
Proof.: By Lemma 3.11, we have
\[|\Gamma^{\prime}_{\varepsilon}(u)u|\leq\|u\|_{\varepsilon}^{2}+\int_{ \mathbb{R}^{N}}|\bar{f}(u)u|+|\Phi^{\prime}_{\varepsilon}(u)u|+|\Psi^{\prime}_{ \varepsilon}(u)u|\leq C,\] \[|\lambda|\leq\alpha^{-1}(b_{\varepsilon}\|u\|_{\varepsilon}+| \Gamma^{\prime}_{\varepsilon}(u)u|)\leq C(1+b_{\varepsilon}). \tag{40}\]
First note that, by Lemma 3.7 and the compactness of \(K_{\alpha}\), for each given \(\rho_{0}\in(0,\rho_{1}/96)\), there is \(R_{1}>4R_{0}\) such that
\[\sup_{u\in Z(3\rho_{0},3\delta_{0})}\int_{\mathbb{R}^{N}\setminus B(\Upsilon_{ j}(u),R_{1})}\big{(}|\nabla u|^{2}+u^{2}\big{)}\leq 10\rho_{0}^{2}. \tag{41}\]
We fix \(L_{1}>R_{1}+1\) and consider \(L\geq L_{1}\). For \(R\in[R_{1}+1,L]\) and \(r\in[R_{1}+1,R]\), we take \(\psi_{r}\in C^{1}(\mathbb{R}^{N},[0,1])\) such that \(|\nabla\psi_{r}|\leq 2\) and
\[\psi_{r}(x)=\begin{cases}0&\text{if}\quad x\in\cup_{j=1}^{\ell}B(\Upsilon_{j} (u),r-1),\\ 1&\text{if}\quad x\in\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell}B(\Upsilon_{j} (u),r),\end{cases}\]
Since \(u\in Z(\rho_{1}/32,3\delta_{0})\), there is \(C>0\) independent of \(\varepsilon\), \(L\), \(r\) and \(u\) such that
\[\|\psi_{r}u\|_{\varepsilon}\leq C.\]
We have
\[\Gamma^{\prime}_{\varepsilon}(u)(\psi_{r}u)-\lambda\int_{\mathbb{R}^{N}}\psi_ {r}u^{2}\leq b_{\varepsilon}\|\psi_{r}u\|_{\varepsilon}\leq Cb_{\varepsilon}.\]
By Lemma 3.10 and \(\operatorname{supp}\left(\psi_{r}u\right)\subset\mathbb{R}^{N}\setminus\cup_{ j=1}^{\ell}B(\Upsilon_{j}(u),4R_{0})\), we have
\[\Phi^{\prime}_{\varepsilon}(u)(\psi_{r}u)=4\Phi_{\varepsilon}(u)^{\frac{1}{2}} \xi_{1}(\Upsilon(u))\int_{\mathbb{R}^{N}}\chi_{u}\psi_{r}u^{2}\geq 0.\]
Together with Lemma 3.9, we have
\[\begin{split} Cb_{\varepsilon}\geq&\int_{\mathbb{R}^{N}}\psi_{r}(|\nabla u|^{2}+\widetilde{V}_{\varepsilon}u^{2}-\bar{f}(u)u-\lambda u^{2})+\int_{\mathbb{R}^{N}}u\nabla\psi_{r}\nabla u+O(e^{-\frac{c}{\varepsilon}})\\ \geq&\int_{\mathbb{R}^{N}}\psi_{r}(|\nabla u|^{2}+u^{2}-f(u)u-\lambda u^{2})-\int_{\operatorname{supp}|\nabla\psi_{r}|}(|\nabla u|^{2}+u^{2})+O(e^{-\frac{c}{\varepsilon}}).\end{split} \tag{42}\]
By (40) and (F2),
\[f(u)u+\lambda u^{2}\leq(\frac{f(u)}{u}+C)^{+}u^{2}+Cb_{\varepsilon}u^{2}\leq C|u|^{2+\frac{4}{N}}+Cb_{\varepsilon}u^{2}.\]
Setting
\[Q(r)=\int_{\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell}B(\Upsilon_{j}(u),r)}| \nabla u|^{2}+u^{2},\]
we conclude from (42) and the Sobolev inequality that
\[\begin{split} C(b_{\varepsilon}+e^{-\frac{c}{\varepsilon}})\geq& 2Q(r)-Q(r-1)-C\int_{\mathbb{R}^{N}}\psi_{r}|u|^{2+\frac{4}{N}}\\ \geq& 2Q(r)-Q(r-1)-C_{N}C(Q(r-1))^{2+\frac{4}{N}}, \end{split}\]
where \(C_{N}>0\) is a constant depending only on \(N\). By (41), \(Q(r-1)\leq\sqrt{10}\rho_{0}\). Taking \(\rho_{0}>0\) small such that \(C_{N}C(\sqrt{10}\rho_{0})^{1+4/N}<1\), we complete the proof by Lemma 3.13.
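To spell out this last absorption step, following the constants as written above: setting \(\kappa:=C_{N}C(\sqrt{10}\rho_{0})^{1+\frac{4}{N}}<1\) and using \(Q(r-1)\leq\sqrt{10}\rho_{0}\), the previous inequality becomes
\[Q(r)\leq\frac{1+\kappa}{2}\,Q(r-1)+C(b_{\varepsilon}+e^{-\frac{c}{\varepsilon}}),\qquad r\in[R_{1}+1,R],\]
so Lemma 3.13, applied with \(\theta=\frac{2}{1+\kappa}>1\), yields \(Q(R)\leq Ce^{-cR}+C(b_{\varepsilon}+e^{-\frac{c}{\varepsilon}})\) after enlarging \(C\) and adjusting \(c\).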
A direct corollary is that, when \(L\geq L_{1}\) and \(\varepsilon\) is sufficiently small, \(\Phi_{\varepsilon}(u_{\varepsilon})\) vanishes at a critical point \(u_{\varepsilon}\) of \(\Gamma_{\varepsilon}\) on \(\mathcal{M}^{\varepsilon}_{\alpha}\).
In what follows, we denote by \(\Gamma_{\varepsilon}|_{\mathcal{M}^{\varepsilon}_{\alpha}}^{\prime}(u)\) the derivative of \(\Gamma_{\varepsilon}\) restricted on \(\mathcal{M}^{\varepsilon}_{\alpha}\) at \(u\). We denote by \(T_{u}\mathcal{M}^{\varepsilon}_{\alpha}:=\{\,v\in H_{\varepsilon}\mid\int_{ \mathbb{R}^{N}}vu=0\,\}\) the tangent space of \(\mathcal{M}^{\varepsilon}_{\alpha}\) at \(u\in\mathcal{M}^{\varepsilon}_{\alpha}\). We also denote by \(\|\cdot\|_{*}\) the norm of the cotangent space. Note that
\[\|\Gamma_{\varepsilon}|_{\mathcal{M}^{\varepsilon}_{\alpha}}^{\prime}(u)\|_{*}= \inf_{\lambda\in\mathbb{R}}\|\Gamma^{\prime}_{\varepsilon}(u)-\lambda G^{ \prime}(u)\|_{H^{-1}_{\varepsilon}}.\]
**Corollary 3.15**.: _For \(u_{\varepsilon}\in Z(3\rho_{0},3\delta_{0})\) with \(\limsup_{\varepsilon\to 0}\Gamma_{\varepsilon}\left(u_{\varepsilon}\right)\leq \ell E_{(\ell+1)^{-1}\alpha}+\frac{1}{2}V_{0}\alpha\), if_
\[\xi_{1}(\Upsilon(u_{\varepsilon}))\|\Gamma_{\varepsilon}|^{\prime}_{\mathcal{ M}_{\alpha}^{\varepsilon}}\left(u_{\varepsilon}\right)\|_{*}\to 0\,\text{ as }\varepsilon\to 0,\]
_then \(\Phi_{\varepsilon}(u_{\varepsilon})=0\) and \(\Phi^{\prime}_{\varepsilon}(u_{\varepsilon})=0\) for \(L\geq L_{1}\) and small \(\varepsilon\)._
By the compact embedding from \(H_{\varepsilon}\) to \(L^{q}(\mathbb{R}^{N})\) for \(q\in(\frac{2N}{N+2},2^{*})\) ([43, Lemma 2.3]), it is standard to show the Palais-Smale condition for fixed \(\varepsilon\), i.e.,
**Proposition 3.16**.: _For \(L\geq L_{1}\), if \(\{u_{n}\}\subset Z(3\rho_{0},3\delta_{0})\) is such that \(\lim_{n\to\infty}\Gamma_{\varepsilon}(u_{n})\leq\ell E_{(\ell+1)^{-1}\alpha} +\frac{1}{2}V_{0}\alpha\) and \(\|\Gamma_{\varepsilon}|^{\prime}_{\mathcal{M}_{\alpha}^{\varepsilon}}(u_{n}) \|_{*}\to 0\) as \(n\to+\infty\), then \(\{u_{n}\}\) has a convergent subsequence._
We can also show the following \(\varepsilon\)-dependent concentration compactness result.
**Proposition 3.17**.: _For \(L\geq L_{1}\), suppose \(\varepsilon_{n}\to 0,u_{n}\in Z(3\rho_{0},3\delta_{0})\) satisfying_
\[\limsup_{n\to\infty}\Gamma_{\varepsilon_{n}}(u_{n})\leq\ell E_{\ell^{-1} \alpha}+\frac{1}{2}V_{0}\alpha,\quad\lim_{n\to\infty}\|\Gamma_{\varepsilon_{n} }|^{\prime}_{\mathcal{M}_{\alpha}^{\varepsilon}}(u_{n})\|_{*}=0. \tag{43}\]
_Then there exist \(\mathbf{U}\in K_{\alpha}\) and \((z_{n,j})\subset\mathbb{R}^{N},j=1,2,\cdots,\ell\) such that as \(n\to\infty\) (after extracting a subsequence if necessary)_
* \(|z_{n,j}-\Upsilon_{j}(u_{n})|\leq 2R_{0}\) _for_ \(j=1,2,\cdots,\ell\)_,_
* \(|z_{n,i}-z_{n,j}|\to\infty\,\text{ for }1\leq i<j\leq\ell\)_,_
* \(\|u_{n}-\sum_{j=1}^{\ell}(\phi_{\varepsilon_{n}}U_{j})(\cdot-z_{n,j})\|_{ \varepsilon_{n}}\to 0\)_, where_ \(U_{j}\) _is the_ \(j\)_-th component of_ \(\mathbf{U}\)_._
Proof.: Let \(\varepsilon_{n},u_{n}\) satisfy (43). By the compactness of \(K_{\alpha}\), we can write
\[u_{n}=\sum_{j=1}^{\ell}(\phi_{\varepsilon_{n}}\tilde{U}_{j})(\cdot-y_{n}^{j})+ w_{n},\quad\|w_{n}\|_{\varepsilon_{n}}\leq 3\rho_{0},\quad\varepsilon_{n} \Upsilon_{j}(u_{n})\in O^{3\delta_{0}},\quad\xi(y_{n}^{1},\cdots,y_{n}^{\ell}) \geq L, \tag{44}\]
where \((\tilde{U}_{1},\cdots,\tilde{U}_{\ell})\in K_{\alpha}\). By Lemma 3.7 (i), \(\operatorname{dist}(\varepsilon_{n}y_{n}^{j},O^{3\delta_{0}})\leq 2R_{0} \varepsilon_{n}\to 0\). The second equation in (43) implies that there is \(\lambda_{n}\in\mathbb{R}\) such that
\[\|\Gamma^{\prime}_{\varepsilon_{n}}(u_{n})-\lambda_{n}G^{\prime}(u_{n})\|_{H_ {\varepsilon}^{-1}}\to 0. \tag{45}\]
Hence, by Lemma 3.11 and Proposition 3.14, for constant \(C_{0}>0\) in Lemma 3.11 and some \(C,c>0\) independent of \(L,n\), there hold
\[\|u_{n}\|_{\varepsilon_{n}},\,\int_{\mathbb{R}^{N}}f_{1}(u_{n})u_{n},\,\int_{\mathbb{R}^{N}}F_{1}(u_{n}),\,\int_{\mathbb{R}^{N}}\bar{f}_{2}(u_{n})u_{n},\,\int_{\mathbb{R}^{N}}\bar{F}_{2}(u_{n}),\,|\lambda_{n}|\leq C_{0}, \tag{46}\] \[\int_{\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell}B(\Upsilon_{j}(u_{n}),\frac{1}{2}\xi_{1}(\Upsilon(u_{n})))}\left(|\nabla u_{n}|^{2}+u_{n}^{2}\right)\mathrm{d}x\leq Ce^{-c\xi_{1}(\Upsilon(u_{n}))}+o_{n}(1). \tag{47}\]
By (47) and \(\xi_{1}(\Upsilon(u_{n}))\geq\xi(\Upsilon(u_{n}))-1\geq L-4R_{0}-1\), we can assume \(L_{1}\) is so large that
\[\xi_{1}(\Upsilon(u_{n}))\int_{\mathbb{R}^{N}}\chi_{u_{n}}u_{n}^{2}\leq CLe^{- cL}+o_{n}(1)\xi_{1}(\Upsilon(u_{n}))\leq\frac{1}{2}+o_{n}(1)\xi_{1}(\Upsilon(u_{n})). \tag{48}\]
Up to a subsequence, we assume for \(j=1,\cdots,\ell\), \(\lambda_{n}\to\lambda_{0}\), \(\varepsilon_{n}y_{n}^{j}\to y^{j}\in\overline{O^{3\delta_{0}}}\) and \(u_{n}(\cdot+y_{n}^{j})\rightharpoonup W_{j}\neq 0\) in \(H^{1}(\mathbb{R}^{N})\). Note that by (48), if \(\xi(\Upsilon(u_{n}))\) is bounded, then \(\Phi^{\prime}_{\varepsilon_{n}}(u_{n})=0\) for every large \(n\). So in either case that \(\xi(\Upsilon(u_{n}))\) is bounded or \(\xi(\Upsilon(u_{n}))\to+\infty\), we can verify that \(W_{j}\) satisfies \(-\Delta u=\bar{f}(u)+(\lambda_{0}-V(y^{j}))u\) in \(\mathbb{R}^{N}\). Applying Kato's lemma, we deduce that \(|W_{j}|\) satisfies
\[-\Delta v\leq-f_{1}(v)+\bar{f}_{2}(v)+(\lambda_{0}-V(y^{j}))v\leq f_{2}(v)+ \lambda_{0}v.\]
By this, (46) and Remark 3.12, we get \(|W_{j}|\leq K_{0}\), and hence \(\bar{f}(W_{j})=f(W_{j})\). Thus \(W_{j}\) satisfies
\[-\Delta u=f(u)+(\lambda_{0}-V(y^{j}))u\quad\text{in }\mathbb{R}^{N}.\]
**Step 1.** We show that \(\xi(\Upsilon(u_{n}))\to+\infty\) as \(n\to+\infty\).
Since
\[\sum_{j=1}^{\ell}|W_{j}|_{2}^{2}\geq\liminf_{n\to+\infty}\|u_{n}\|_{L^{2}(\cup_{ j=1}^{\ell}B(y_{n}^{j},AR_{0}))}^{2}\geq\sum_{j=1}^{\ell}(\|\tilde{U}_{j}\|_{L^{2}(B(0, AR_{0}))}-3\rho_{0})^{2}\geq\frac{1}{2}\alpha,\]
we obtain that by Lemma 3.2, \(|\lambda_{0}|\leq D_{1}\). Hence, \(W_{j}^{-}\) satisfies
\[-\Delta W_{j}^{-}\leq f(W_{j}^{-})+D_{1}W_{j}^{-}.\]
Then it follows from \(|W_{j}^{-}|_{L^{2}}\leq\limsup_{n\to+\infty}\|w_{n}\|_{\varepsilon_{n}}\leq 3 \rho_{0}<\rho_{1}/16\), Lemma 3.3 and the definition of \(\rho_{1}\) in (28), that \(W_{j}^{-}=0\). Hence, by Lemma 6.1, \(W_{j}\) is positive and radially symmetric about some point.
Suppose, for contradiction, that \(\xi(\Upsilon(u_{n}))\) stays bounded along a subsequence; by Lemma 3.7 (i), at least two of the points \(y_{n}^{j}\) then remain within bounded distance of each other. Up to a further subsequence and a relabeling, we may assume that the index set \(\{1,\cdots,\ell_{1}\}\) with \(\ell_{1}\geq 2\) satisfies \(\lim_{n\to\infty}|y_{n}^{i}-y_{n}^{j}|<+\infty\) for \(1\leq i<j\leq\ell_{1}\) and \(\lim_{n\to\infty}|y_{n}^{i}-y_{n}^{k}|=+\infty\) for \(1\leq i\leq\ell_{1}\) and \(k\geq\ell_{1}+1\). Assume \(y_{n}^{j}-y_{n}^{1}\to z_{j}\in\mathbb{R}^{N}\) for \(j=2,\cdots,\ell_{1}\). Then we have
\[\|W_{1}\|_{L^{2}(B(0,R_{0}))}\geq\liminf_{n\to\infty}\|u_{n}\|_{L^{2}(B(y_{n} ^{1},R_{0}))}\geq\|\tilde{U}_{1}\|_{L^{2}(B(0,R_{0}))}-\sum_{j=2}^{\ell}\| \tilde{U}_{j}\|_{L^{2}(\mathbb{R}^{N}\setminus B(0,R_{0}))}-3\rho_{0}>\frac{ \rho_{1}}{2}.\]
Similarly,
\[\|W_{1}\|_{L^{2}(B(z_{j},R_{0}))}>\frac{\rho_{1}}{2},\ \ j=2,\cdots,\ell_{1}.\]
Setting \(z_{1}=0\), by (30)
\[\|W_{1}\|_{L^{2}(\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell_{1}}B(z_{j},R_{0}))}\leq\sum_{j=1}^{\ell_{1}}\|\tilde{U}_{j}\|_{L^{2}(\mathbb{R}^{N}\setminus B(z_{j},R_{0}))}+3\rho_{0}\leq\frac{\ell_{1}\rho_{1}}{8\ell}+3\rho_{0}<\frac{\rho_{1}}{4}.\]
Then \(W_{1}\) can not be radially symmetric about any point, which is a contradiction.
**Step 2.** Setting \(v_{n}:=u_{n}-\sum_{j=1}^{\ell}(\phi_{\varepsilon_{n}}W_{j})(\cdot-y_{n}^{j})\), we show \(|v_{n}|_{p}\to 0\) for \(p\in(2,2^{*})\).
Otherwise, by Lions' Lemma, there is \(y_{n}\) such that \(|y_{n}-y_{n}^{j}|\to\infty\) for each \(j=1,\cdots,\ell\) and
\[\limsup_{n\to\infty}\|u_{n}(\cdot+y_{n})\|_{L^{2}(B(0,1))}>0. \tag{49}\]
By Lemma 3.10, (47), and \(\xi_{1}(\Upsilon(u_{n}))\to+\infty\), there holds
\[\Phi_{\varepsilon_{n}}^{\prime}(u_{n})v-4\Phi_{\varepsilon_{n}}(u_{n})^{\frac{ 1}{2}}\xi_{1}(\Upsilon(u_{n}))\int_{\mathbb{R}^{N}}\chi_{u_{n}}u_{n}v=o_{n}(1) \|v\|_{\varepsilon_{n}},\ \ v\in H_{\varepsilon_{n}}.\]
Set
\[R_{n}:=\frac{1}{2}\min_{1\leq j\leq\ell}\{|y_{n}-y_{n}^{j}|\},\]
and let \(\eta_{n}\in C_{0}^{\infty}(\mathbb{R}^{N},[0,1])\) be such that \(\eta_{n}=1\) in \(B(y_{n},1)\), \(\eta_{n}=0\) in \(\mathbb{R}^{N}\setminus B(y_{n},R_{n})\) and \(|\nabla\eta_{n}|\leq 2/R_{n}\). We have
\[o_{n}(1)= \Gamma_{\varepsilon_{n}}^{\prime}(u_{n})(\eta_{n}^{2}u_{n})-\int_ {\mathbb{R}^{N}}\lambda_{n}\eta_{n}^{2}u_{n}^{2}\mathrm{d}x\] \[= \int_{\mathbb{R}^{N}}\left(\nabla u_{n}\nabla(\eta_{n}^{2}u_{n})+ \widetilde{V}_{\varepsilon}\eta_{n}^{2}u_{n}^{2}-\eta_{n}^{2}\bar{f}(u_{n})u_{n }-\lambda_{0}\eta_{n}^{2}u_{n}^{2}\right)\mathrm{d}x\] \[+4\Phi_{\varepsilon_{n}}(u_{n})^{\frac{1}{2}}\xi_{1}(\Upsilon(u_{n }))\int_{\mathbb{R}^{N}}\chi_{u_{n}}\eta_{n}^{2}u_{n}^{2}\mathrm{d}x+o_{n}(1)\] \[\geq \int_{\mathbb{R}^{N}}\left(|\nabla(\eta_{n}u_{n})|^{2}+\eta_{n}^{ 2}u_{n}^{2}-|\nabla\eta_{n}|^{2}u_{n}^{2}-(\frac{f(u_{n})}{u_{n}}+\lambda_{0})^ {+}\eta_{n}^{2}u_{n}^{2}\right)\mathrm{d}x+o_{n}(1)\] \[\geq \int_{\mathbb{R}^{N}}\left(|\nabla(\eta_{n}u_{n})|^{2}+\eta_{n}^{ 2}u_{n}^{2}-\frac{4}{R_{n}^{2}}u_{n}^{2}-C_{N}\eta_{n}^{2}u_{n}^{2+\frac{4}{N} }\right)\mathrm{d}x+o_{n}(1)\] \[= \int_{\mathbb{R}^{N}}\left(|\nabla(\eta_{n}u_{n})|^{2}+\eta_{n}^{ 2}u_{n}^{2}-C_{N}\eta_{n}^{2}u_{n}^{2+\frac{4}{N}}\right)\mathrm{d}x+o_{n}(1).\]
By (44), \(\|u_{n}\|_{H^{1}(B(y_{n},R_{n}))}\leq\sum_{j=1}^{\ell}\|\tilde{U}_{j}(\cdot-y_{n}^{j})\|_{H^{1}(B(y_{n},R_{n}))}+3\rho_{0}\leq 4\rho_{0}\) for large \(n\). Therefore,
\[\|\eta_{n}u_{n}\|^{2}\leq C\int_{\mathbb{R}^{N}}\eta_{n}^{2}u_{n}^{2+\frac{4}{N }}+o_{n}(1) \leq C_{N}\|\eta_{n}u_{n}\|^{2}\|u_{n}\|_{H^{1}(B(y_{n},R_{n}))}^{ \frac{4}{N}}+o(1)\]
where \(C_{N}\) is a constant. Decreasing \(\rho_{0}\) if necessary, there holds \(C_{N}4^{\frac{4}{N}}\rho_{0}^{\frac{4}{N}}<1\). Therefore,
\[\limsup_{n\to\infty}\|u_{n}(\cdot+y_{n})\|_{L^{2}(B(0,1))}\leq\limsup_{n\to+ \infty}\|\eta_{n}u_{n}\|^{2}=0,\]
which is a contradiction to (49).
**Step 3.**\(\|v_{n}\|_{\varepsilon_{n}}\to 0\).
We test (45) by \(v_{n}\) to get
\[\begin{split} I+II:=&\int_{\mathbb{R}^{N}}\left( \nabla u_{n}\nabla v_{n}+\widetilde{V}_{\varepsilon}u_{n}v_{n}-\bar{f}(u_{n}) v_{n}-\lambda_{n}u_{n}v_{n}\right)\\ &+4\Phi_{\varepsilon_{n}}(u_{n})^{\frac{1}{2}}\xi_{1}(\Upsilon( u_{n}))\int_{\mathbb{R}^{N}}\chi_{u_{n}}u_{n}v_{n}\mathrm{d}x=o_{n}(1).\end{split} \tag{50}\]
By Lemma 6.1,
\[\begin{split}\int_{\mathbb{R}^{N}}\chi_{u_{n}}u_{n}v_{n}\mathrm{d }x=&\int_{\mathbb{R}^{N}}\chi_{u_{n}}|u_{n}|^{2}-\chi_{u_{n}}u_{n} \sum_{j=1}^{\ell}(\phi_{\varepsilon_{n}}W_{j})(\cdot-y_{n}^{j})dx\\ &\geq-Ce^{-\xi_{1}(\Upsilon(u_{n}))}.\end{split}\]
Hence, \(II\geq-o_{n}(1)\), which implies \(I\leq o_{n}(1)\). Then we have
\[\|v_{n}\|_{\varepsilon_{n}}^{2}=I-\int_{\mathbb{R}^{N}}\sum_{j=1}^{\ell}\left( \nabla(\phi_{\varepsilon_{n}}W_{j})(\cdot-y_{n}^{j})\nabla v_{n}+\widetilde{ V}_{\varepsilon}(\phi_{\varepsilon_{n}}W_{j})(\cdot-y_{n}^{j})v_{n}\right)+\int_{ \mathbb{R}^{N}}(\bar{f}(u_{n})+\lambda_{n}u_{n})v_{n}.\]
We have, by \(v_{n}(\cdot+y_{n}^{j})\rightharpoonup 0\) in \(H^{1}(\mathbb{R}^{N})\) and the decay property of \(W_{j}\),
\[\int_{\mathbb{R}^{N}}\nabla(\phi_{\varepsilon_{n}}W_{j})(\cdot-y_{n}^{j})\nabla v_{n}=\int_{\mathbb{R}^{N}}\nabla W_{j}\nabla(v_{n}(\cdot+y_{n}^{j}))+o_{n}(1)=o_{n}(1),\]
\[\int_{\mathbb{R}^{N}}|V_{\varepsilon}(\phi_{\varepsilon_{n}}W_{j})(\cdot-y_{n}^{j})v_{n}|\leq\int_{\mathbb{R}^{N}}V_{0}|W_{j}v_{n}(\cdot+y_{n}^{j})|=o_{n}(1).\]
Then
\[\|v_{n}\|_{\varepsilon_{n}}^{2}\leq o_{n}(1)+\int_{\mathbb{R}^{N}}(\frac{\bar {f}(u_{n})}{u_{n}}+\lambda_{n})u_{n}v_{n}.\]
Note that \(\bar{f}(t)=f(t)\) for \(|t|\leq K_{0}\) and \(\bar{f}(t)\leq f(t)\) for \(t\geq 0\). By (F5), there is \(\delta>0\) such that \(|\bar{f}(t)/t|\leq|t|^{-\frac{1}{2}}\) for \(|t|\leq\delta\). By (F2), making \(\delta\) smaller if necessary, \(\bar{f}(t)/t+\lambda_{n}\leq 0\) for \(|t|\leq\delta\). By (F3), \(|\bar{f}(t)/t+\lambda_{n}|\leq C|t|^{\frac{4}{N}}\) for \(|t|\geq\delta\). So we have
\[\int_{\{|u_{n}|\geq\delta\}}(\frac{\bar{f}(u_{n})}{u_{n}}+\lambda_{n})u_{n}v_{n}\leq C\int_{\mathbb{R}^{N}}|u_{n}|^{1+\frac{4}{N}}|v_{n}|\to 0.\]
On the other hand, for any \(R>0\), setting \(B_{R}=\cup_{j=1}^{\ell}B(y_{n}^{j},R)\), we have that
\[\begin{split}\int_{\{|u_{n}|\leq\delta\}\cap B_{R}}(\frac{\bar{f} (u_{n})}{u_{n}}+\lambda_{n})u_{n}v_{n}\leq&\int_{B_{R}}|u_{n}|^{ \frac{1}{2}}|v_{n}|\leq R^{N(\frac{3}{4}-\frac{N}{2N+4})}|u_{n}|_{2}^{\frac{1}{ 2}}|v_{n}|_{2+\frac{4}{N}}\to 0,\\ \int_{\{|u_{n}|\leq\delta\}\setminus B_{R}}(\frac{\bar{f}(u_{n})} {u_{n}}+\lambda_{n})u_{n}v_{n}\leq&\int_{\{|u_{n}|\leq\delta\} \setminus B_{R}}|\frac{\bar{f}(u_{n})}{u_{n}}+\lambda_{n}||u_{n}|\sum_{j=1}^{ \ell}W_{j}(\cdot-y_{n}^{j})\\ \leq& C\int_{\mathbb{R}^{N}\setminus B_{R}}|u_{n}|^{ \frac{1}{2}}\sum_{j=1}^{\ell}W_{j}(\cdot-y_{n}^{j})\leq C|u_{n}|_{2}^{\frac{1}{ 2}}e^{-R}.\end{split}\]
Hence, there holds \(\lim_{n\to\infty}\|v_{n}\|_{\varepsilon_{n}}^{2}=0\).
**Step 4.** Completion of the proof. Let \(z^{j}\) be the unique maximum point of \(W_{j}\), we set \(\mathbf{U}=(U_{1},\cdots,U_{\ell})=(W_{1}(\cdot+z^{1}),\cdots,W_{\ell}(\cdot+z^{ \ell}))\in H_{r}^{1}(\mathbb{R}^{N})^{\ell}\). Since
\[\int_{\mathbb{R}^{N}\setminus B(0,2R_{0})}W_{j}^{2}=\lim_{n\to\infty}\int_{B(y_{n}^{j},\frac{1}{2}\xi_{1}(\Upsilon(u_{n})))\setminus B(y_{n}^{j},2R_{0})}u_{n}^{2}\leq\frac{\rho_{1}^{2}}{16\ell^{2}},\]
we have \(|z^{j}|\leq 2R_{0}\). By Step 3 and similarly to Lemma 2.5 (ii), we have
\[\lim_{n\to+\infty}\int_{\mathbb{R}^{N}}\bar{F}(u_{n})=\sum_{j=1}^{\ell}\int_{ \mathbb{R}^{N}}\bar{F}(W_{j})=\sum_{j=1}^{\ell}\int_{\mathbb{R}^{N}}F(W_{j})= \sum_{j=1}^{\ell}\int_{\mathbb{R}^{N}}F(U_{j}).\]
Therefore, for \(\mathbf{\mu}=(V(y^{1}),\cdots,V(y^{\ell}))\in[\mu_{0},V_{0}]^{\ell}\),
\[\mathbb{J}_{\mathbf{\mu}}(\mathbf{U})\leq\lim_{n\to\infty}\Gamma_{\varepsilon_{n}}(u_ {n})\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha.\]
Then \(\mathbf{U}\in K_{\alpha}\). Setting \(z_{n,j}=y_{n}^{j}+z^{j}\), we have completed the proof.
## 4 Existence of critical points
### Gradient estimates
Let \(d_{\varepsilon}>0\) be such that \(d_{\varepsilon}\to 0\) as \(\varepsilon\to 0\). By Proposition 3.17, there are \(\nu_{L}>0\), \(\varepsilon_{L}>0\) such that if \(\varepsilon\in(0,\varepsilon_{L})\), then
\[\|\Gamma_{\varepsilon}|_{\mathcal{M}_{\alpha}^{\varepsilon}}^{\prime}(u)\|_{ \ast}\geq 2\nu_{L},\text{ provided that }u\in Z(3\rho_{0},2\delta_{0})\setminus Z(\rho_{0},3\delta_{0})\cap[ \Gamma_{\varepsilon}\leq\ell E_{\ell^{-1}\alpha}+\tfrac{1}{2}V_{0}\alpha+2d_{ \varepsilon}]. \tag{51}\]
Here we use notation \([\Gamma_{\varepsilon}\leq a]:=\{\,u\in\mathcal{M}_{\alpha}^{\varepsilon}\mid \Gamma_{\varepsilon}(u)\leq a\,\}\). To prove the existence of a critical point for all small \(\varepsilon\), we assume to the contrary that
* **(A)** For any small \(\varepsilon_{1}\in(0,\varepsilon_{L})\), there exists \(\varepsilon\in(0,\varepsilon_{1})\) such that \(\Gamma_{\varepsilon}\) has no critical points in \(Z(3\rho_{0},3\delta_{0})\cap[\Gamma_{\varepsilon}\leq\ell E_{\ell^{-1}\alpha}+\tfrac{1}{2}V_{0}\alpha+2d_{\varepsilon}]\).
Then by Proposition 3.16, there is \(\nu_{\varepsilon}>0\) such that
\[\|\Gamma_{\varepsilon}|_{\mathcal{M}_{\alpha}^{\varepsilon}}^{\prime}(u)\|_{ \ast}\geq 2\nu_{\varepsilon},\text{ provided }u\in Z(3\rho_{0},3\delta_{0})\cap[\Gamma_{ \varepsilon}\leq\ell E_{\ell^{-1}\alpha}+\tfrac{1}{2}V_{0}\alpha+2d_{ \varepsilon}]. \tag{52}\]
Next we give a gradient estimate when \(\varepsilon\Upsilon_{j}(u)\in O^{3\delta_{0}}\setminus O^{\delta_{0}}\) for some \(j\). In fact, we show
**Proposition 4.1**.: _Assume (A). Decreasing \(\nu_{L}\) if necessary, it holds that_
\[\|\Gamma_{\varepsilon}|_{\mathcal{M}_{\alpha}^{\varepsilon}}^{\prime}(u)\|_{ \ast}\geq 2\nu_{L}\varepsilon\]
_for all small \(\varepsilon\), provided that \(u\in Z(3\rho_{0},3\delta_{0})\cap[\Gamma_{\varepsilon}\leq\ell E_{\ell^{-1} \alpha}+\tfrac{1}{2}V_{0}\alpha+2d_{\varepsilon}]\) and \(\varepsilon\Upsilon_{j}(u)\in O^{3\delta_{0}}\setminus O^{\delta_{0}}\) for some \(j\)._
Proof.: To prove Proposition 4.1, we consider \(u_{\varepsilon}\in Z(3\rho_{0},3\delta_{0})\) such that \(\varepsilon\Upsilon_{j_{\varepsilon}}(u_{\varepsilon})\in O^{3\delta_{0}} \setminus O^{\delta_{0}}\) and \(\|\Gamma_{\varepsilon}|_{\mathcal{M}_{\alpha}^{\varepsilon}}^{\prime}(u_{ \varepsilon})\|_{\ast}=o_{\varepsilon}(\varepsilon)\) to get a contradiction as \(\varepsilon\to 0\). From \(\|\Gamma_{\varepsilon}|_{\mathcal{M}_{\alpha}^{\varepsilon}}^{\prime}(u_{ \varepsilon})\|_{\ast}=o_{\varepsilon}(\varepsilon)\) and Proposition 3.14, it follows that
\[\int_{\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell}B(\Upsilon_{j}(u_{\varepsilon}), \frac{1}{2}\delta_{0}\xi_{1}(\Upsilon(u_{\varepsilon})))}\big{(}|\nabla u_{ \varepsilon}|^{2}+u_{\varepsilon}^{2}\big{)}\leq Ce^{-c\xi_{1}(\Upsilon(u_{ \varepsilon}))}+o_{\varepsilon}(\varepsilon).\]
Hence, \(\Phi_{\varepsilon}(u_{\varepsilon})=0\) and \(\Phi_{\varepsilon}^{\prime}(u_{\varepsilon})=0\) for small \(\varepsilon\).
We set
\[\tilde{f}_{1}(t)=\begin{cases}-\sigma t\log|t|,&t\in(-e^{-1},e^{-1}),\\ \sigma e^{-1}\text{sgn}(t),&t\in(-\infty,-e^{-1}]\cup[e^{-1},+\infty),\end{cases} \quad\tilde{f}_{2}=\bar{f}+\tilde{f}_{1},\]
and \(\tilde{F}_{1}(u)=\int_{0}^{|u|}\tilde{f}_{1}(t)\mathrm{d}t\), \(\tilde{F}_{2}(u)=\int_{0}^{|u|}\tilde{f}_{2}(t)\mathrm{d}t\), where sgn is the signum. In \((0,+\infty)\), \(\tilde{f}_{1}\) is increasing; \(\tilde{F}_{1}\) is convex; and by (F5) and \(\bar{f}(t)=\min\{f(K_{0}),f(t)\}\),
\[|\tilde{f}_{2}(t)|\leq Ct,\quad|\tilde{F}_{2}(t)|\leq Ct^{2}. \tag{53}\]
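For completeness, the monotonicity and convexity just asserted can be checked directly:
\[\tilde{f}_{1}^{\prime}(t)=-\sigma(\log t+1)>0\ \text{ for }t\in(0,e^{-1}),\qquad\tilde{f}_{1}(t)\to\sigma e^{-1}\ \text{ as }t\uparrow e^{-1},\]
so \(\tilde{f}_{1}\) is continuous and nondecreasing on \([0,+\infty)\); since \(\tilde{f}_{1}\) is odd, it is nondecreasing on all of \(\mathbb{R}\), and hence \(\tilde{F}_{1}\) is convex.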
**Step 1**. In this step, we show the following result.
**Lemma 4.2**.: _There is a unique \((\lambda_{\varepsilon},w_{\varepsilon})\in\mathbb{R}\times H_{\varepsilon}\) such that \(\int_{\mathbb{R}^{N}}w_{\varepsilon}u_{\varepsilon}=\alpha\) and_
\[-\Delta w_{\varepsilon}+\widetilde{V}(\varepsilon x)w_{\varepsilon}+\tilde{f}_ {1}(w_{\varepsilon})=\tilde{f}_{2}(u_{\varepsilon})+\lambda_{\varepsilon}u_{ \varepsilon}. \tag{54}\]
_Moreover, the following statements hold._
* _There is a positive constant_ \(C\) _independent of_ \(\varepsilon\) _such that_ \(|\lambda_{\varepsilon}|+\|w_{\varepsilon}\|_{\varepsilon}+\|\tilde{F}_{1}(w _{\varepsilon})\|_{L^{1}(\mathbb{R}^{N})}\leq C\)_._
* _There hold_ \[\|w_{\varepsilon}-u_{\varepsilon}\|_{\varepsilon}+|\Upsilon_{j_{\varepsilon }}(w_{\varepsilon})-\Upsilon_{j_{\varepsilon}}(u_{\varepsilon})|=o_{ \varepsilon}(\varepsilon).\]
* \(\|\tilde{f}_{2}(u_{\varepsilon})\|_{L^{2}(\mathbb{R}^{N})}\) _is bounded,_ \(w_{\varepsilon}\in H^{2}_{loc}(\mathbb{R}^{N})\) _and_ \(\tilde{f}_{1}(w_{\varepsilon})\in L^{\infty}(\mathbb{R}^{N})\)__
Proof.: Consider the minimization problem
\[e_{\varepsilon}=\inf\Big{\{}\mathcal{I}(w)=\frac{1}{2}\int_{\mathbb{R}^{N}}| \nabla w|^{2}+\widetilde{V}(\varepsilon x)w^{2}+\int_{\mathbb{R}^{N}}\tilde{ F}_{1}(w)-\int_{\mathbb{R}^{N}}\tilde{f}_{2}(u_{\varepsilon})w\ \Big{|}\ \int_{\mathbb{R}^{N}}wu_{\varepsilon}=\alpha\Big{\}}.\]
By the compact embedding from \(H_{\varepsilon}\) to \(L^{q}(\mathbb{R}^{N})\) for \(q\in(\frac{2N}{N+2},2^{*})\) ([43, Lemma 2.3]) and the fact that \(\mathcal{I}\) is continuous, convex and coercive, \(e_{\varepsilon}\) is attained at some \(w_{\varepsilon}\). The uniqueness follows from the convexity of \(\tilde{F}_{1}\). Since \(e_{\varepsilon}\leq\mathcal{I}(u_{\varepsilon})<C\) for some \(C>0\), we can conclude from (53) that
\[\frac{1}{2}\|w_{\varepsilon}\|_{\varepsilon}^{2}+\|\tilde{F}_{1}(w_{ \varepsilon})\|_{L^{1}(\mathbb{R}^{N})}\leq C(1+\|u_{\varepsilon}\|_{ \varepsilon}\|w_{\varepsilon}\|_{\varepsilon})\leq C^{\prime}(1+\|u_{ \varepsilon}\|_{\varepsilon}^{2})+\frac{1}{4}\|w_{\varepsilon}\|_{ \varepsilon}^{2}.\]
Then \(\|w_{\varepsilon}\|_{\varepsilon}\) is bounded and we can prove (i) by testing (54) by \(w_{\varepsilon}\).
To see (ii), we first note that \(u_{\varepsilon}-w_{\varepsilon}\in T_{u_{\varepsilon}}\mathcal{M}_{\alpha}^{\varepsilon}\). By assumption (A), \(u_{\varepsilon}-w_{\varepsilon}\neq 0\). We have by (54) and the monotonicity of \(\tilde{f}_{1}\) that
\[o_{\varepsilon}(\varepsilon)\|u_{\varepsilon}-w_{\varepsilon}\|_{\varepsilon}\geq \Gamma_{\varepsilon}^{\prime}(u_{\varepsilon})(u_{\varepsilon}-w_{\varepsilon})\] \[= (u_{\varepsilon},u_{\varepsilon}-w_{\varepsilon})_{\varepsilon}+\int_{\mathbb{R}^{N}}\tilde{f}_{1}(u_{\varepsilon})(u_{\varepsilon}-w_{\varepsilon})-\int_{\mathbb{R}^{N}}\tilde{f}_{2}(u_{\varepsilon})(u_{\varepsilon}-w_{\varepsilon})+O(e^{-\frac{c}{\varepsilon}})\] \[= \|u_{\varepsilon}-w_{\varepsilon}\|_{\varepsilon}^{2}+\int_{\mathbb{R}^{N}}(\tilde{f}_{1}(u_{\varepsilon})-\tilde{f}_{1}(w_{\varepsilon}))(u_{\varepsilon}-w_{\varepsilon})+O(e^{-\frac{c}{\varepsilon}})\] \[\geq \|u_{\varepsilon}-w_{\varepsilon}\|_{\varepsilon}^{2}+O(e^{-\frac{c}{\varepsilon}}).\]
Hence, \(\|u_{\varepsilon}-w_{\varepsilon}\|_{\varepsilon}=o_{\varepsilon}(\varepsilon)\). By Lemma 3.7 (ii), \(|\Upsilon_{j_{\varepsilon}}(w_{\varepsilon})-\Upsilon_{j_{\varepsilon}}(u_{ \varepsilon})|=o_{\varepsilon}(\varepsilon)\).
Then, setting \(v_{\varepsilon}:=\alpha^{1/2}w_{\varepsilon}/|w_{\varepsilon}|_{2}\in\mathcal{M}_{\alpha}^{\varepsilon}\), which satisfies \(\|v_{\varepsilon}-w_{\varepsilon}\|_{\varepsilon}=o_{\varepsilon}(\varepsilon)\) by (ii), we have for any \(\varphi\in T_{v_{\varepsilon}}\mathcal{M}_{\alpha}^{\varepsilon}\) with \(\|\varphi\|_{\varepsilon}\leq 1\),
\[\Gamma_{\varepsilon}^{\prime}(v_{\varepsilon})\varphi= (v_{\varepsilon},\varphi)_{\varepsilon}+\int_{\mathbb{R}^{N}}\tilde{f}_{1}(v_{\varepsilon})\varphi-\int_{\mathbb{R}^{N}}\tilde{f}_{2}(v_{\varepsilon})\varphi+O(e^{-\frac{c}{\varepsilon}})\] \[= (w_{\varepsilon},\varphi)_{\varepsilon}+\int_{\mathbb{R}^{N}}\tilde{f}_{1}(w_{\varepsilon})\varphi-\int_{\mathbb{R}^{N}}\tilde{f}_{2}(w_{\varepsilon})\varphi+O(e^{-\frac{c}{\varepsilon}})+o_{\varepsilon}(\varepsilon)\] \[= \int_{\mathbb{R}^{N}}(\tilde{f}_{2}(u_{\varepsilon})-\tilde{f}_{2}(w_{\varepsilon}))\varphi+\lambda_{\varepsilon}\int_{\mathbb{R}^{N}}u_{\varepsilon}\varphi+O(e^{-\frac{c}{\varepsilon}})+o_{\varepsilon}(\varepsilon)\] \[= \lambda_{\varepsilon}\int_{\mathbb{R}^{N}}(u_{\varepsilon}-w_{\varepsilon})\varphi+o_{\varepsilon}(\varepsilon)=o_{\varepsilon}(\varepsilon).\]
Therefore, \(\|\Gamma_{\varepsilon}|_{\mathcal{M}_{\varepsilon}^{\prime}}^{\prime}(v_{ \varepsilon})\|_{*}=o_{\varepsilon}(\varepsilon)\).
To see (iii), by (53), \(\|\tilde{f}_{2}(u_{\varepsilon})\|_{L^{2}(\mathbb{R}^{N})}\) is bounded. Together with \(|\tilde{f}_{1}|\leq\sigma e^{-1}\), we can get from the elliptic estimate that \(w_{\varepsilon}\in H^{2}_{loc}(\mathbb{R}^{N})\).
**Step 2.** By Lemma 4.2 (i) (ii), and Proposition 3.14, we have
\[\int_{\mathbb{R}^{N}\setminus\cup_{j=1}^{\ell}B(\Upsilon_{j}(u_{\varepsilon}),\frac{1}{\sqrt{\varepsilon}})}\big{(}|\nabla u_{\varepsilon}|^{2}+u_{\varepsilon}^{2}+|\nabla w_{\varepsilon}|^{2}+w_{\varepsilon}^{2}\big{)}=o_{\varepsilon}(\varepsilon). \tag{55}\]
Since \(|\tilde{f}_{1}(t)t|+|\tilde{F}_{1}(t)|\leq C(|t|^{\frac{3N}{N+1}}+t^{2})\), by Holder inequality, we get
\[\int_{\frac{1}{\varepsilon}\Omega\setminus\cup_{j=1}^{\ell}B(\Upsilon_{j}(u_{\varepsilon}),\frac{1}{\sqrt{\varepsilon}})}\left(\tilde{f}_{1}(u_{\varepsilon})u_{\varepsilon}+\tilde{f}_{1}(w_{\varepsilon})w_{\varepsilon}+\tilde{F}_{1}(u_{\varepsilon})+\tilde{F}_{1}(w_{\varepsilon})\right)=o_{\varepsilon}(1). \tag{56}\]
Since \(\varepsilon\Upsilon_{j_{\varepsilon}}(u_{\varepsilon})\in O^{3\delta_{0}}\setminus O^{\delta_{0}}\), up to a subsequence, we may assume that \(j_{\varepsilon}\equiv 1\), \(u_{\varepsilon}(\cdot+\Upsilon_{1}(u_{\varepsilon}))\rightharpoonup u_{0}\neq 0\), \(\varepsilon\Upsilon_{i}(u_{\varepsilon})\to y_{i}\), \(i=1,\cdots,\ell\) and \(\frac{\partial V}{\partial x_{1}}(y_{1})>\nu_{0}>0\). We take
\[\delta_{1}\in(0,\frac{1}{4}\min_{y_{i}\neq y_{1}}|y_{1}-y_{i}|)\quad\text{ small enough such that }\frac{\partial V}{\partial x_{1}}>\frac{\nu_{0}}{2}\text{ in }B(y_{1},2\delta_{1})\subset\Omega.\]
Choose a smooth map \(\psi_{\varepsilon}\in C_{0}^{\infty}(\mathbb{R}^{N},[0,1])\) satisfying \(|\nabla\psi_{\varepsilon}|\leq 2\varepsilon/\delta_{1}\) and
\[\psi_{\varepsilon}(x)=\begin{cases}1,&\quad|x-y_{1}/\varepsilon|\leq\delta_{1 }\varepsilon^{-1},\\ 0,&\quad|x-y_{1}/\varepsilon|\geq 2\delta_{1}\varepsilon^{-1}.\end{cases}\]
By (55) and (56),
\[\int_{\mathbb{R}^{N}}|\nabla\psi_{\varepsilon}|(|\nabla u_{\varepsilon}|^{2} +u_{\varepsilon}^{2}+|\nabla w_{\varepsilon}|^{2}+w_{\varepsilon}^{2}+ \tilde{F}_{1}(u_{\varepsilon})+\tilde{f}_{1}(u_{\varepsilon})u_{\varepsilon} +\tilde{F}_{1}(w_{\varepsilon})+\tilde{f}_{1}(w_{\varepsilon})w_{\varepsilon })=o_{\varepsilon}(\varepsilon).\]
Testing (54) by \(\frac{\partial(\psi_{\varepsilon}w_{\varepsilon})}{\partial x_{1}}\in H_{\varepsilon}\), we get
\[\int_{\mathbb{R}^{N}}\left\{\nabla w_{\varepsilon}\nabla\frac{\partial(\psi_ {\varepsilon}w_{\varepsilon})}{\partial x_{1}}+\tilde{f}_{1}(w_{\varepsilon} )\frac{\partial(\psi_{\varepsilon}w_{\varepsilon})}{\partial x_{1}}-\tilde{ f}_{2}(u_{\varepsilon})\frac{\partial(\psi_{\varepsilon}w_{\varepsilon})}{ \partial x_{1}}-\lambda_{\varepsilon}u_{\varepsilon}\frac{\partial(\psi_{ \varepsilon}w_{\varepsilon})}{\partial x_{1}}\right\}=-\int_{\mathbb{R}^{N}} \widetilde{V}(\varepsilon x)w_{\varepsilon}\frac{\partial(\psi_{\varepsilon} w_{\varepsilon})}{\partial x_{1}}.\]
Integrating by parts, we get
\[\int_{\mathbb{R}^{N}}\nabla w_{\varepsilon}\nabla\frac{\partial( \psi_{\varepsilon}w_{\varepsilon})}{\partial x_{1}}= \int_{\mathbb{R}^{N}}\left\{\frac{1}{2}\frac{\partial(\psi_{ \varepsilon}|\nabla w_{\varepsilon}|^{2})}{\partial x_{1}}+\frac{1}{2}|\nabla w _{\varepsilon}|^{2}\frac{\partial\psi_{\varepsilon}}{\partial x_{1}}+w_{ \varepsilon}\nabla w_{\varepsilon}\nabla\frac{\partial\psi_{\varepsilon}}{ \partial x_{1}}+\nabla w_{\varepsilon}\nabla\psi_{\varepsilon}\frac{\partial w _{\varepsilon}}{\partial x_{1}}\right\}=o_{\varepsilon}(\varepsilon),\] \[\int_{\mathbb{R}^{N}}\tilde{f}_{1}(w_{\varepsilon})\frac{\partial( \psi_{\varepsilon}w_{\varepsilon})}{\partial x_{1}}= \int_{\mathbb{R}^{N}}\left\{\frac{\partial(\psi_{\varepsilon} \tilde{F}_{1}(w_{\varepsilon}))}{\partial x_{1}}+\frac{\partial\psi_{ \varepsilon}}{\partial x_{1}}|\tilde{f}_{1}(w_{\varepsilon})w_{\varepsilon}- \tilde{F}_{1}(w_{\varepsilon})|\right\}=o_{\varepsilon}(\varepsilon),\] \[\int_{\mathbb{R}^{N}}\tilde{f}_{2}(u_{\varepsilon})\frac{\partial( \psi_{\varepsilon}w_{\varepsilon})}{\partial x_{1}}= \int_{\mathbb{R}^{N}}\tilde{f}_{2}(u_{\varepsilon})\frac{\partial(\psi_{ \varepsilon}u_{\varepsilon})}{\partial x_{1}}+o_{\varepsilon}(\varepsilon)= \int_{\mathbb{R}^{N}}\left\{\frac{\partial\psi_{\varepsilon}}{\partial x_{1}} [\tilde{f}_{2}(u_{\varepsilon})u_{\varepsilon}-\tilde{F}_{2}(u_{\varepsilon}) ]\right\}+o_{\varepsilon}(\varepsilon)=o_{\varepsilon}(\varepsilon),\] \[\int_{\mathbb{R}^{N}}\lambda_{\varepsilon}u_{\varepsilon}\frac{ \partial(\psi_{\varepsilon}w_{\varepsilon})}{\partial x_{1}}= \int_{\mathbb{R}^{N}}\lambda_{\varepsilon}w_{\varepsilon}\frac{\partial( \psi_{\varepsilon}w_{\varepsilon})}{\partial x_{1}}+o_{\varepsilon}(\varepsilon)= \frac{\lambda_{\varepsilon}}{2}\int_{\mathbb{R}^{N}}\frac{\partial\psi_{ \varepsilon}}{\partial x_{1}}w_{\varepsilon}^{2}=o_{\varepsilon}(\varepsilon),\]
and
\[\int_{\mathbb{R}^{N}}\widetilde{V}(\varepsilon x)w_{\varepsilon}\frac{\partial( \psi_{\varepsilon}w_{\varepsilon})}{\partial x_{1}}=\frac{1}{2}\int_{\mathbb{R}^{N }}\left\{\frac{\partial(\widetilde{V}_{\varepsilon}\psi_{\varepsilon}w_{ \varepsilon}^{2})}{\partial x_{1}}+\widetilde{V}_{\varepsilon}\frac{\partial\psi_{ \varepsilon}}{\partial x_{1}}w_{\varepsilon}^{2}-\frac{\partial\widetilde{V}_{ \varepsilon}}{\partial x_{1}}\psi_{\varepsilon}w_{\varepsilon}^{2}\right\}=-\frac{ \varepsilon}{2}\int_{\mathbb{R}^{N}}\frac{\partial\widetilde{V}(\varepsilon x)}{ \partial x_{1}}\psi_{\varepsilon}w_{\varepsilon}^{2}+o_{\varepsilon}(\varepsilon).\]
Therefore,
\[\int_{\mathbb{R}^{N}}\frac{\partial\widetilde{V}(\varepsilon x)}{\partial x_{1}} \psi_{\varepsilon}w_{\varepsilon}^{2}=o_{\varepsilon}(1).\]
Taking limits as \(\varepsilon\to 0\), we have
\[\frac{\nu_{0}}{2}\int_{\mathbb{R}^{N}}u_{0}^{2}\leq\liminf_{\varepsilon\to 0} \int_{\mathbb{R}^{N}}\frac{\partial\widetilde{V}(\varepsilon x)}{\partial x_{1}} \psi_{\varepsilon}w_{\varepsilon}^{2}=0.\]
This is a contradiction.
**Remark 4.3**.:
1. _To deal with the non-Lipschitz character of the nonlinearity, we have considered the problem in the suitable Hilbert space \(H_{\varepsilon}\) to recover the smoothness of the energy functional. However, the global \(W^{2,p}\) estimate is not applicable to the corresponding operator \(-\Delta+\widetilde{V}_{\varepsilon}\) for \(w_{\varepsilon}\), since \(\widetilde{V}_{\varepsilon}\) is unbounded._
2. _We explain how our arguments work in the setting of [8]. In fact, in their setting there is no restriction on the \(L^{2}\) norm of \(u_{\varepsilon}\) and \(f(u)/u\) has no singularity, so we can simply consider the following equation to carry out our arguments_ \[-\Delta w_{\varepsilon}+V(\varepsilon x)w_{\varepsilon}=f(u_{\varepsilon}).\]
### Deformation along negative pseudogradient flow
By (51), (52) and Proposition 4.1, there exists a pseudogradient field on \(\mathcal{M}_{\alpha}^{\varepsilon}\).
**Lemma 4.4**.: _There is a locally lipschitzian continuous vector field \(\mathcal{W}:\mathcal{M}_{\alpha}^{\varepsilon}\to H_{\varepsilon}\) such that the following statements hold._
1. \(\mathcal{W}(u)\in T_{u}\mathcal{M}_{\alpha}^{\varepsilon}\)_,_ \(\Gamma_{\varepsilon}^{\prime}(u)\mathcal{W}(u)\geq 0\) _and_ \(\|\mathcal{W}(u)\|_{\varepsilon}\leq 1\) _for_ \(u\in\mathcal{M}_{\alpha}^{\varepsilon}\)_._
2. \(\Gamma_{\varepsilon}^{\prime}(u)\mathcal{W}(u)=0\) _if_ \(u\notin Z(3\rho_{0},3\delta_{0})\)_._
3. \(\Gamma_{\varepsilon}^{\prime}(u)\mathcal{W}(u)\geq\nu_{\varepsilon}\)_, provided that_ \(u\in Z(2\rho_{0},2\delta_{0})\cap[\Gamma_{\varepsilon}\leq\ell E_{\ell^{-1} \alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}]\)_._
4. \(\Gamma_{\varepsilon}^{\prime}(u)\mathcal{W}(u)\geq\nu_{L}\varepsilon\)_, provided that_ \(u\in Z(2\rho_{0},2\delta_{0})\setminus Z(\rho_{0},\delta_{0})\cap[\Gamma_{ \varepsilon}\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}]\)_._
5. \(\Gamma_{\varepsilon}^{\prime}(u)\mathcal{W}(u)\geq\nu_{L}\)_, provided that_ \(u\in Z(2\rho_{0},2\delta_{0})\setminus Z(\rho_{0},3\delta_{0})\cap[\Gamma_{ \varepsilon}\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}]\)_._
By this lemma, we have
**Lemma 4.5**.: _Let \(\nu_{0}=\min\{\frac{\delta_{0}\nu_{L}}{4D_{2}},\frac{\rho_{0}\nu_{L}}{8}\}\), where \(D_{2}\) is the constant given in Lemma 3.7. For any \(\nu\in(0,\nu_{0})\), there is a descending flow \(\eta\in C([0,+\infty)\times\mathcal{M}_{\alpha}^{\varepsilon},\mathcal{M}_{\alpha}^{\varepsilon})\) such that_
1. \(\eta(0,u)=u\)_, and_ \(\Gamma_{\varepsilon}(\eta(t,u))\leq\Gamma_{\varepsilon}(u)\) _for any_ \(t\in[0,+\infty)\) _and_ \(u\in\mathcal{M}_{\alpha}^{\varepsilon}\)_._
2. _For any_ \(t\geq 0\)_,_ \(\eta(t,u)=u\) _provided that_ \(u\notin Z(3\rho_{0},3\delta_{0})\) _or_ \(\Gamma_{\varepsilon}(u)\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-2\nu\)_._
3. _For any_ \(t\geq 0\)_,_ \(\eta(t,u)\in Z(3\rho_{0},3\delta_{0})\) _if_ \(u\in Z(3\rho_{0},3\delta_{0})\)_._
4. _There is_ \(t_{\varepsilon}>0\) _such that_ \(\Gamma_{\varepsilon}(\eta(t_{\varepsilon},u))<\ell E_{\ell^{-1}\alpha}+\frac{1 }{2}V_{0}\alpha-\nu\) _if_ \(u\in Z(\rho_{0},\delta_{0})\cap[\Gamma_{\varepsilon}\leq\ell E_{\ell^{-1} \alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}]\)_._
Proof.: Let \(\psi:\mathcal{M}_{\alpha}^{\varepsilon}\to[0,1]\) be locally Lipschitz continuous such that \(\psi(u)=1\) if \(\Gamma_{\varepsilon}(u)\geq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-\nu\) and \(\psi(u)=0\) if \(\Gamma_{\varepsilon}(u)\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-2\nu\). For \(t\geq 0\), \(u\in\mathcal{M}_{\alpha}^{\varepsilon}\), define \(\eta(t,u)\) by the following initial value problem
\[\frac{\mathrm{d}}{\mathrm{d}t}\eta(t,u)=-\psi(\eta(t,u))\mathcal{W}(\eta(t,u)), \quad\eta(0,u)=u.\]
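Note also that the flow preserves the constraint: since \(\mathcal{W}(\eta)\in T_{\eta}\mathcal{M}_{\alpha}^{\varepsilon}\) by Lemma 4.4 (i),
\[\frac{\mathrm{d}}{\mathrm{d}t}G(\eta(t,u))=-\psi(\eta(t,u))\int_{\mathbb{R}^{N}}\eta(t,u)\,\mathcal{W}(\eta(t,u))\,\mathrm{d}x=0,\]
so \(G(\eta(t,u))=G(u)\) and the flow indeed stays on \(\mathcal{M}_{\alpha}^{\varepsilon}\).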
Then (i), (ii) and (iii) follow from Lemma 4.4 (i) and (ii). To show (iv), we assume without loss of generality that \(d_{\varepsilon}<\nu_{0}\), and set \(t_{\varepsilon}=\frac{\nu+\nu_{0}}{\nu_{\varepsilon}}\). There are three cases.
**Case 1.**\(\eta(t,u)\in Z(2\rho_{0},2\delta_{0})\) for any \(t\in[0,t_{\varepsilon}]\).
In this case, by Lemma 4.4 (iii),
\[\Gamma_{\varepsilon}(\eta(t_{\varepsilon},u))\leq \Gamma_{\varepsilon}(u)+\int_{0}^{t_{\varepsilon}}\frac{\mathrm{d}}{\mathrm{d}s}\Gamma_{\varepsilon}(\eta(s,u))\mathrm{d}s\] \[\leq \ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}-\int_{0}^{t_{\varepsilon}}\Gamma_{\varepsilon}^{\prime}(\eta(s,u))\mathcal{W}(\eta(s,u))\mathrm{d}s\] \[\leq \ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}-\nu_{\varepsilon}t_{\varepsilon}<\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-\nu.\]
**Case 2.** There is \(t\in[0,t_{\varepsilon}]\) such that \(\mathrm{dist}(\varepsilon\Upsilon_{j}(\eta(t,u)),O)=2\delta_{0}\) for some \(j\) and \(\eta(s,u)\in Z(2\rho_{0},2\delta_{0})\) for \(s\in[0,t)\).
Let \(t_{2}>t_{1}>0\) be such that \(\mathrm{dist}(\varepsilon\Upsilon_{j}(\eta(t_{1},u)),O)=\delta_{0}\), \(\mathrm{dist}(\varepsilon\Upsilon_{j}(\eta(t_{2},u)),O)=2\delta_{0}\), and \(\eta(t,u)\in Z(2\rho_{0},2\delta_{0})\setminus Z(\rho_{0},\delta_{0})\) for \(t\in(t_{1},t_{2})\). By Lemma 3.7, \(|t_{1}-t_{2}|\geq\frac{\delta_{0}}{\varepsilon D_{2}}\). Then by Lemma 4.4 (iv)
\[\Gamma_{\varepsilon}(\eta(t_{\varepsilon},u))\leq \Gamma_{\varepsilon}(u)+\int_{t_{1}}^{t_{2}}\frac{\mathrm{d}}{\mathrm{d}s}\Gamma_{\varepsilon}(\eta(s,u))\mathrm{d}s\] \[\leq \ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}-\int_{t_{1}}^{t_{2}}\Gamma_{\varepsilon}^{\prime}(\eta(s,u))\mathcal{W}(\eta(s,u))\mathrm{d}s\] \[\leq \ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}-\frac{\delta_{0}}{\varepsilon D_{2}}\nu_{L}\varepsilon<\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-\nu.\]
**Case 3.** There is \(t\in[0,t_{\varepsilon}]\) such that \(\operatorname{dist}_{H_{\varepsilon}}(\eta(t,u),Z_{L,\varepsilon})\geq 2\rho_{0}\), and \(\varepsilon\Upsilon_{j}(\eta(s,u))\in O^{2\delta_{0}}\) for any \(j\) and \(s\in[0,t]\).
In this case, there are \(t_{2}>t_{1}>0\) such that \(\operatorname{dist}_{H_{\varepsilon}}(\eta(t_{1},u),Z_{L,\varepsilon})=\rho_{0}\), \(\operatorname{dist}_{H_{\varepsilon}}(\eta(t_{2},u),Z_{L,\varepsilon})\geq 2\rho_{0}\), and \(\eta(t,u)\in Z(2\rho_{0},2\delta_{0})\setminus Z(\rho_{0},2\delta_{0})=Z(2\rho_{0},2\delta_{0})\setminus Z(\rho_{0},3\delta_{0})\) for \(t\in(t_{1},t_{2})\). Then \(\|\eta(t_{1},u)-\eta(t_{2},u)\|\geq\rho_{0}\). By Lemma 4.4 (i), \(|t_{1}-t_{2}|\geq\rho_{0}\). Then by Lemma 4.4 (v),
\[\Gamma_{\varepsilon}(\eta(t_{\varepsilon},u))\leq \Gamma_{\varepsilon}(u)+\int_{t_{1}}^{t_{2}}\frac{\mathrm{d}}{\mathrm{d}s}\Gamma_{\varepsilon}(\eta(s,u))\mathrm{d}s\] \[\leq \ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}-\int_{t_{1}}^{t_{2}}\Gamma_{\varepsilon}^{\prime}(\eta(s,u))\mathcal{W}(\eta(s,u))\mathrm{d}s\] \[\leq \ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+d_{\varepsilon}-\rho_{0}\nu_{L}<\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-\nu.\qed\]
### Existence of a critical point
In this section, we assume (A) and get a contradiction. Set
\[S=\left\{\,\boldsymbol{s}=(s_{1},\cdots,s_{\ell})\in S_{\ell-1}\mid|s_{j}-\ell ^{-1}|\leq\delta,\;j=1,\cdots,\ell\,\right\},\]
where \(\delta>0\) is a constant such that \(\delta\ell\leq 1/2\). Define
\[\gamma_{0}(\boldsymbol{p},\boldsymbol{s}):=B\sum_{j=1}^{\ell}\sqrt{\ell s_{j }}(\phi_{\varepsilon}u_{0})(\cdot-p_{j})\in\mathcal{M}_{\alpha}^{\varepsilon},\]
for each
\[(\boldsymbol{p},\boldsymbol{s})\in A(L):=\left\{\,\boldsymbol{p}=(p_{1},\cdots,p_{\ell})\in\big{(}\frac{1}{\varepsilon}O^{\delta_{0}}\big{)}^{\ell}\,\,\bigg{|}\,\,\xi(\boldsymbol{p})\geq L\,\right\}\times S,\]
where \(B:=\alpha^{1/2}|\sum_{j=1}^{\ell}\sqrt{\ell s_{j}}(\phi_{\varepsilon}u_{0})(\cdot-p_{j})|_{2}^{-1}\). We have the following proposition.
**Proposition 4.6**.: _There is \(L_{2}>L_{1}\) such that the following statements hold for \(L>L_{2}\) and \(\varepsilon\in(0,\varepsilon_{L})\)._
1. \(\gamma_{0}(\boldsymbol{p},\boldsymbol{s})\in Z(\rho_{0},\delta_{0})\) _for_ \((\boldsymbol{p},\boldsymbol{s})\in A(L)\)_._
2. _For any permutation_ \(\sigma\) _of_ \(1,2,\cdots,\ell\)_,_ \[\gamma_{0}(p_{\sigma(1)},\cdots,p_{\sigma(\ell)},s_{\sigma(1)},\cdots,s_{ \sigma(\ell)})=\gamma_{0}(p_{1},\cdots,p_{\ell},s_{1},\cdots,s_{\ell}).\]
3. \(|p_{j}-\Upsilon_{j}(\gamma_{0}(\boldsymbol{p},\boldsymbol{s}))|\leq 3R_{0}\) _up to a permutation._
4. _There is_ \(\nu\in(0,\nu_{0})\) _independent of_ \(\varepsilon\) _such that for any_ \((\boldsymbol{p},\boldsymbol{s})\in\partial A(L)\)_,_ \[\Gamma_{\varepsilon}(\gamma_{0}(\boldsymbol{p},\boldsymbol{s}))\leq\ell E_{ \ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-2\nu.\]
5. _There is_ \(d_{\varepsilon}>0\) _with_ \(d_{\varepsilon}\to 0\) _such that_ \[\sup_{(\boldsymbol{p},\boldsymbol{s})\in A(L)}\Gamma_{\varepsilon}(\gamma_{0}( \boldsymbol{p},\boldsymbol{s}))\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0} \alpha+d_{\varepsilon}.\]
Proof.: (i) follows from the fact that \(|B^{2}-1|\to 0\) uniformly as \(L\to\infty\). (ii) and (iii) are clear.
To prove (iv), we first note the fact that for large \(L>0\), there uniformly holds
\[|\gamma_{0}(\boldsymbol{p},\boldsymbol{s})|\leq 2\|u_{0}\|_{L^{\infty}(\mathbb{R }^{N})},\;\;(\boldsymbol{p},\boldsymbol{s})\in A(L).\]
So by (39), \(\bar{F}(\gamma_{0}(\boldsymbol{p},\boldsymbol{s}))=F(\gamma_{0}(\boldsymbol{p},\boldsymbol{s}))\). Then we consider any sequence \((p(L),s(L))\in\partial\left(\big{(}\frac{1}{\varepsilon}O^{\delta_{0}}\big{)}^{\ell}\times S\right)\). Since \(\varepsilon\to 0^{+}\) as \(L\to+\infty\), we have, up to a subsequence, \(s_{j}(L)\to s_{j},\widetilde{V}(\varepsilon p_{j}(L))\to V_{j}\leq V_{0}\).
In the case \((p(L),s(L))\in\partial\big{(}\frac{1}{\varepsilon}O^{\delta_{0}}\big{)}^{\ell}\times S\), we have \(V_{j_{0}}\leq\sup V(\partial O^{\delta_{0}})<V_{0}\) for some \(j_{0}\). Therefore,
\[\begin{split}\limsup_{L\to\infty}\sup_{\varepsilon\in(0, \varepsilon_{L})}\Gamma_{\varepsilon}(\gamma_{0}(\mathbf{p},\mathbf{s}))& =\mathbb{J}(\sqrt{\ell s_{1}}u_{0},\cdots,\sqrt{\ell s_{\ell}}u_{ 0})+\sum_{j=1}^{\ell}\frac{V_{j}s_{j}}{2}\alpha\\ &\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-\frac{V_{0}- V_{j_{0}}}{2}s_{j_{0}}.\end{split} \tag{57}\]
When \((p(L),s(L))\in\big{(}\frac{1}{\varepsilon}O^{\delta_{0}}\big{)}^{\ell}\times\partial S\), by (24) and similar to (57), we have \(\limsup_{L\to\infty}\sup_{\varepsilon\in(0,\varepsilon_{L})}\Gamma_{ \varepsilon}(\gamma_{0}(\mathbf{p},\mathbf{s}))<\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_ {0}\alpha\).
Lastly, if \(\xi(p)=L\), setting \(u=\gamma_{0}(p,s)\), we have
\[\int_{\mathbb{R}^{N}}\chi_{u}u^{2}\mathrm{d}x\leq Ce^{-c\xi_{1}(\Upsilon(u))^ {2}},\]
for some \(C,c>0\) independent of \(L,\varepsilon\). Then \(\Phi_{\varepsilon}(u)=0\) for large \(L\). On the other hand, by Lemma 3.9 and (35), \(\sup_{H_{\varepsilon}}|\Psi_{\varepsilon}|\leq Ce^{-cL^{4}}\) for some \(C,c>0\) independent of \(L,\varepsilon\). Then by the proof of Proposition 3.5, \(\Gamma_{\varepsilon}(\gamma_{0}(p,s))\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-C(L)\) when \(L\) is large.
(v) follows from Proposition 3.5 as well.
As in [8], we define an equivalence relation \(\approx\) in \((\mathbb{R}^{N})^{\ell}\times S\) as follows:
\[(\mathbf{p}_{1},\cdots\mathbf{p}_{\ell},s_{1},\cdots,s_{\ell})\approx(\mathbf{p}^{\prime }_{1},\cdots\mathbf{p}^{\prime}_{\ell},s^{\prime}_{1},\cdots,s^{\prime}_{\ell})\]
if and only if there is a permutation \(\sigma\) of \(\{\,1,\cdots,\ell\,\}\) such that \(p_{j}=p^{\prime}_{\sigma(j)}\) and \(s_{j}=s^{\prime}_{\sigma(j)}\) for \(j=1,\cdots,\ell\).
Fixing \(x_{0}\in O\), we set
\[p_{j}^{\varepsilon}=\frac{1}{\varepsilon}(x_{0}+4\sqrt{\varepsilon}(j-1)e_{0} )\ \text{ with }e_{0}=(1,0,\cdots),\]
and
\[Q^{\varepsilon}:=\big{[}(p_{1}^{\varepsilon},\cdots,p_{\ell}^{\varepsilon}, \ell^{-1},\cdots,\ell^{-1})\big{]}\in((\mathbb{R}^{N})^{\ell}\times S)/\approx.\]
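For later reference (cf. the proof of Lemma 4.8), the points \(p_{j}^{\varepsilon}\) are collinear and equally spaced; assuming, as its use in (44) and below suggests, that \(\xi(\boldsymbol{p})\) denotes the minimal mutual distance \(\min_{i\neq j}|p_{i}-p_{j}|\), one has
\[|p_{i}^{\varepsilon}-p_{j}^{\varepsilon}|=\frac{4|i-j|}{\sqrt{\varepsilon}},\qquad\xi(p_{1}^{\varepsilon},\cdots,p_{\ell}^{\varepsilon})=\frac{4}{\sqrt{\varepsilon}}\geq L\quad\text{for small }\varepsilon.\]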
Define a map \(\mathcal{F}:Z(3\rho_{0},3\delta_{0})\to((\mathbb{R}^{N})^{\ell}\times S)/\approx\)
\[\mathcal{F}(u)=\big{[}\Upsilon_{1}(u),\cdots,\Upsilon_{\ell}(u),N_{1,\xi( \Upsilon(u))/4}(u),\cdots,N_{\ell,\xi(\Upsilon(u))/4}(u)\big{]}\,,\]
where
\[N_{j,t}(u)=\frac{\int_{B(\Upsilon_{j}(u),t)}u^{2}}{\int_{\cup_{i=1}^{\ell}B(\Upsilon_{i}(u),t)}u^{2}}.\]
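With \(\xi\) as in the preceding remark, the balls \(B(\Upsilon_{j}(u),\xi(\Upsilon(u))/4)\) are pairwise disjoint (their radii are a quarter of the minimal mutual distance of the points \(\Upsilon_{j}(u)\)), so
\[\sum_{j=1}^{\ell}N_{j,\xi(\Upsilon(u))/4}(u)=1,\]
and the last \(\ell\) entries of \(\mathcal{F}(u)\) indeed form a point of the simplex.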
By Proposition 4.6 (ii) \(\mathcal{F}\circ\gamma_{0}\) can be considered as a map from \(A(L)/\approx\) to \(((\mathbb{R}^{N})^{\ell}\times S)/\approx\).
**Proposition 4.7**.: _There is \(L_{3}>L_{2}\) such that for each \(L\geq L_{3}\), there hold_
\[\deg(\mathcal{F}\circ\gamma_{0},A(L)/\approx,Q^{\varepsilon})=1.\]
Proof.: We show that if \(L\) is sufficiently large
\[Q^{\varepsilon}\neq(1-t)[(\mathbf{p},\mathbf{s})]+t\mathcal{F}\circ\gamma_{0}(\mathbf{p}, \mathbf{s}) \tag{58}\]
for any \(t\in[0,1]\) and \((\mathbf{p},\mathbf{s})\in\partial A(L)\). For \((\mathbf{p},\mathbf{s})\in\partial A(L)\), one of the following takes place.
(i) \(|p_{i}-p_{j}|=L\) for some \(i\neq j\); (ii) \(p_{j}\in\partial\big{(}\frac{1}{\varepsilon}O^{\delta_{0}}\big{)}\) for some \(j\); (iii) \(\mathbf{s}\in\partial S\).
If (i) or (ii) happens, by Proposition 4.6 (iii), we have \(\xi((1-t)\mathbf{p}+t\Upsilon(\gamma_{0}(\mathbf{p},\mathbf{s})))\leq 2L<\frac{4}{\sqrt{\varepsilon}}\) or \(\operatorname{dist}(\varepsilon(1-t)p_{j}+\varepsilon t\Upsilon_{j}(\gamma_{0}(\mathbf{p},\mathbf{s})),x_{0})\geq\delta_{0}/2>4\ell\sqrt{\varepsilon}\). Hence, (58) holds. On the other hand, if (iii) holds, by \(\xi(\Upsilon(u))\geq\xi(\mathbf{p})-2R_{0}\geq L/2\) and the decay estimate for \(u_{0}\), there holds
\[\lim_{L\to\infty}|N_{j,\xi(\Upsilon(u))/4}(\gamma_{0}(\mathbf{p},\mathbf{s}))-s_{j}|=0.\]
Therefore, we can also get (58).
**Lemma 4.8**.: _For fixed \(L\geq L_{3}\), there holds_
\[\liminf_{\varepsilon\to 0}\inf\{\Gamma_{\varepsilon}(u)\mid u\in Z(3\rho_{0},3 \delta_{0}),\;\mathcal{F}(u)=Q^{\varepsilon}\}\geq\ell E_{\ell^{-1}\alpha}+ \frac{1}{2}V_{0}\alpha.\]
Proof.: For \(u\) such that \(\mathcal{F}(u)=Q^{\varepsilon}\), we have by Lemma 3.2, \(\xi(\Upsilon(u))=\xi(p_{1}^{\varepsilon},\cdot\cdot\cdot,p_{\ell}^{ \varepsilon})=\frac{4}{\sqrt{\varepsilon}}\). Note that if \(\Gamma_{\varepsilon}(u)\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+1\), we have
\[\lim_{\varepsilon\to 0}\sum_{j=1}^{\ell}\int_{\mathbb{R}^{N}\setminus B(p_{j}^{\varepsilon},\varepsilon^{-\frac{1}{2}})}u^{2}\mathrm{d}x=0.\]
By Gagliardo-Nirenberg inequality,
\[\lim_{\varepsilon\to 0}\sum_{j=1}^{\ell}\int_{\mathbb{R}^{N}\setminus B(p_{j}^{\varepsilon},\varepsilon^{-\frac{1}{2}})}\bar{F}_{2}(u)\mathrm{d}x=0.\]
Take \(\zeta_{\varepsilon}\in C_{0}^{\infty}(\mathbb{R}^{N},[0,1])\) such that \(\zeta_{\varepsilon}=1\) in \(B(0,\varepsilon^{-\frac{1}{2}})\), \(\zeta_{\varepsilon}=0\) in \(\mathbb{R}^{N}\setminus B(0,2\varepsilon^{-\frac{1}{2}})\) and \(|\nabla\zeta_{\varepsilon}|\leq 10\varepsilon^{\frac{1}{2}}\). We have
\[\lim_{\varepsilon\to 0}\|\zeta_{\varepsilon}(\cdot-p_{j}^{\varepsilon})u-u\|_{L^{2}(B(p_{j}^{\varepsilon},\xi(\Upsilon(u))/4))}^{2}=0,\quad\lim_{\varepsilon\to 0}N_{j,\xi(\Upsilon(u))/4}(u)=\ell^{-1}.\]
Moreover,
\[\int_{B(p_{j}^{\varepsilon},\xi(\Upsilon(u))/4)}|\nabla(\zeta_{ \varepsilon}(\cdot-p_{j}^{\varepsilon})u)|^{2}= \int_{\mathbb{R}^{N}}|\nabla\zeta_{\varepsilon}(\cdot-p_{j}^{ \varepsilon})|^{2}u^{2}+\nabla\zeta_{\varepsilon}(\cdot-p_{j}^{\varepsilon}) \nabla u\zeta_{\varepsilon}(\cdot-p_{j}^{\varepsilon})u+\zeta_{ \varepsilon}(\cdot-p_{j}^{\varepsilon})|\nabla u|^{2}\] \[\leq \int_{B(p_{j}^{\varepsilon},\xi(\Upsilon(u))/4)}|\nabla u|^{2}+o_ {\varepsilon}(1),\] \[\int_{B(p_{j}^{\varepsilon},\xi(\Upsilon(u))/4)}F_{1}(\zeta_{ \varepsilon}(\cdot-p_{j}^{\varepsilon})u)\leq \int_{B(p_{j}^{\varepsilon},\xi(\Upsilon(u))/4)}F_{1}(u).\]
Then, we have
\[\liminf_{\varepsilon\to 0}\Gamma_{\varepsilon}(u)\geq \liminf_{\varepsilon\to 0}\sum_{j=1}^{\ell}\Gamma_{\varepsilon}( \zeta_{\varepsilon}(\cdot-p_{j}^{\varepsilon})u)\] \[\geq \liminf_{\varepsilon\to 0}\sum_{j=1}^{\ell}J(\zeta_{\varepsilon}( \cdot-p_{j}^{\varepsilon})u)+\frac{V_{0}}{2}\alpha=\ell E_{\ell^{-1}\alpha}+ \frac{1}{2}V_{0}\alpha.\qed\]
Proof of the existence of a critical point of \(\Gamma_{\varepsilon}\).: By Proposition 4.6 (v), there holds
\[\max_{(p,s)\in A(L)}\Gamma_{\varepsilon}(\gamma_{0}(\boldsymbol{p}, \boldsymbol{s}))\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+d_{ \varepsilon}.\]
By Proposition 4.6 (iv), there exists \(\nu\in(0,\nu_{0})\) such that
\[\max_{(p,s)\in\partial A(L)}\Gamma_{\varepsilon}(\gamma_{0}(\boldsymbol{p}, \boldsymbol{s}))\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-2\nu.\]
Assume now that (A) holds. Then from Lemma 4.5,
\[\Gamma_{\varepsilon}(\gamma_{1}(\boldsymbol{p},\boldsymbol{s}))\leq\ell E_{ \ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha-\nu,\quad(p,s)\in A(L), \tag{59}\]
where \(\gamma_{1}(\boldsymbol{p},\boldsymbol{s}):=\eta(t_{\varepsilon},\gamma_{0}(\boldsymbol{p},\boldsymbol{s}))\), which satisfies \(\gamma_{1}=\gamma_{0}\) on \(\partial A(L)\) by Lemma 4.5 (ii). On the other hand, by Proposition 4.7,
\[\deg(\mathcal{F}\circ\gamma_{1},A(L)/\approx,Q^{\varepsilon})=\deg(\mathcal{F }\circ\gamma_{0},A(L)/\approx,Q^{\varepsilon})\neq 0,\]
which means that \(\mathcal{F}(\gamma_{1}(\boldsymbol{p}_{\varepsilon},\boldsymbol{s}_{ \varepsilon}))=Q^{\varepsilon}\) for some \((\boldsymbol{p}_{\varepsilon},\boldsymbol{s}_{\varepsilon})\in A(L)\). By Lemma 4.8,
\[\liminf_{\varepsilon\to 0}\Gamma_{\varepsilon}(\gamma_{1}(\boldsymbol{p}_{ \varepsilon},\boldsymbol{s}_{\varepsilon}))\geq\ell E_{\ell^{-1}\alpha}+ \frac{1}{2}V_{0}\alpha,\]
which contradicts (59).
## 5 Completion of the proof for Theorem 1.2
We choose a decreasing sequence of positive numbers \(\{\delta_{i}\}\) and a sequence of open sets \(\{O_{i}\}\) such that \(\delta_{i}\to 0\) and
\[O_{i+1}\subset O_{i},\quad\bigcap_{i=1}^{\infty}O_{i}=\mathcal{V},\quad\inf_{O_ {i}^{3\delta_{i}}\setminus O_{i}^{\delta_{i}}}|\nabla V|\geq\tilde{\nu}_{i}>0.\]
Then for each \(i\), there exist positive numbers \(\nu_{i}\to 0\) and a positive decreasing sequence \(\varepsilon_{i}\to 0\) such that \(\Gamma_{\varepsilon}\) has a nontrivial critical point \((\lambda_{\varepsilon,i},u_{\varepsilon,i})\in\mathbb{R}\times Z(3\rho_{0},3\delta_{i})\cap[\Gamma_{\varepsilon}\leq\ell E_{\ell^{-1}\alpha}+\frac{1}{2}V_{0}\alpha+2\nu_{i}]\) when \(\varepsilon\in(0,\varepsilon_{i})\). Define
\[(\lambda_{\varepsilon},u_{\varepsilon})=(\lambda_{\varepsilon,i},u_{ \varepsilon,i})\ \ \text{for}\ \varepsilon\in[\varepsilon_{i+1},\varepsilon_{i}).\]
Then for any subsequence of \(\varepsilon\to 0\), \(u_{\varepsilon}\) satisfies the assumptions of Proposition 3.17, because \(Z(3\rho_{0},3\delta_{i})\subset Z(3\rho_{0},3\delta_{0})\) for each \(i\). We also have \(\varepsilon\Upsilon_{j}(u_{\varepsilon})\in O_{i}^{3\delta_{i}}\) if \(\varepsilon\in[\varepsilon_{i+1},\varepsilon_{i})\), \(j=1,\cdots,\ell\). Applying Proposition 3.17 to \(u_{\varepsilon}\), we obtain \(\mathbf{U}\in K_{\alpha}\) and \((z_{\varepsilon,j})\subset\mathbb{R}^{N},j=1,2,\cdots,\ell\) such that as \(\varepsilon\to 0\) (after extracting a subsequence if necessary)
1. \(|z_{\varepsilon,j}-\Upsilon_{j}(u_{\varepsilon})|\leq 2R_{0}\) for \(j=1,2,\cdots,\ell\),
2. \(\|u_{\varepsilon}-\sum_{j=1}^{\ell}U_{j}(\cdot-z_{\varepsilon,j})\|_{ \varepsilon}\to 0\), where \(U_{j}\) is the \(j\)-th component of \(\mathbf{U}\).
Then necessarily, for \(j=1,\cdots,\ell\),
\[\operatorname{dist}(\varepsilon z_{\varepsilon,j},O_{i}^{3\delta_{i}})\leq \varepsilon|z_{\varepsilon,j}-\Upsilon_{j}(u_{\varepsilon})|\leq 2R_{0} \varepsilon,\quad\varepsilon\in[\varepsilon_{i+1},\varepsilon_{i}). \tag{60}\]
By the choice of \(O_{i}\) and \(\delta_{i}\), we have \(\operatorname{dist}(\varepsilon z_{\varepsilon,j},\mathcal{V})\to 0\) as \(\varepsilon\to 0\) for \(j=1,\cdots,\ell\). Hence, \(\mathbf{U}\) is a solution to system (6) with \(\lambda_{\varepsilon}\to\lambda+V_{0}\).
By Corollary 3.15, we can conclude that \(\Phi_{\varepsilon}(u_{\varepsilon})=0\), \(\Phi_{\varepsilon}^{\prime}(u_{\varepsilon})=0\). Hence, \(u_{\varepsilon}\) weakly solves
\[-\Delta u+\widetilde{V}_{\varepsilon}u+\overline{V}_{\varepsilon}T(x,u)u= \bar{f}(u)+\lambda_{\varepsilon}u,\]
where
\[T(x,u)=H(e^{\varepsilon|x|^{2}}u)+\frac{1}{2}H^{\prime}(e^{\varepsilon|x|^{2} }|u|)e^{\varepsilon|x|^{2}}|u|,\ \ \ |\lambda_{\varepsilon}|\leq C.\]
By Kato's inequality and (F5), for constant \(C>0\) independent of \(\varepsilon\), \(|u_{\varepsilon}|\) weakly solves
\[-\Delta u+\widetilde{V}_{\varepsilon}u+\overline{V}_{\varepsilon}T(x,u)u\leq \frac{1}{2}\sigma u\log u+Cu^{p-1},\ \ \text{for some}\ p\in(2,2^{*}).\]
Since \(\widetilde{V}_{\varepsilon}\geq 1\), \(\overline{V}_{\varepsilon}\leq 0\), and \(H^{\prime}(t)\leq 0\) for \(t\geq 0\), we have \(|u_{\varepsilon}|\) solves
\[-\Delta u+u+\overline{V}_{\varepsilon}H(e^{\varepsilon|x|^{2}}u)u\leq\frac{1 }{2}\sigma u\log u+Cu^{p-1},\ \ \text{for some}\ p\in(2,2^{*}).\]
By this and a comparison argument (see [43, Remark 2.4 (i), Corollary 2.7, Proposition 3.3]), we have
\[|u_{\varepsilon}(x)|\leq C\sum_{j=1}^{\ell}e^{-c\varepsilon^{-2}|x-z_{ \varepsilon,j}|^{2}},\quad\text{for some}\ C,c>0\ \text{independent of}\ \varepsilon.\]
Therefore, by (37) and (60), \(u_{\varepsilon}\) solves \(-\Delta u+V_{\varepsilon}u=\bar{f}(u)+\lambda_{\varepsilon}u\). By Lemma 3.11, \(\|u_{\varepsilon}\|\leq C_{0}\) and \(|\lambda_{\varepsilon}|\leq C_{0}\). Since \(V(x)\geq 1\) on \(B(0,M_{0})\), we apply Remark 3.12 to \(|u_{\varepsilon}|\) in \(B(0,\varepsilon^{-1}M_{0})\), and obtain that \(|u_{\varepsilon}(x)|\leq K_{0}\) for \(x\in B(0,\varepsilon^{-1}M_{0}-1/2)\). While \(|u_{\varepsilon}(x)|\leq Ce^{-c\varepsilon^{-2}}\leq K_{0}\), \(x\not\in B(0,\varepsilon^{-1}M_{0}-1/2)\), for small \(\varepsilon>0\). Thus \(\bar{f}(u_{\varepsilon})=f(u_{\varepsilon})\). By Lemma 3.3 and the choice of \(\rho_{1}\), \(u_{\varepsilon}\geq 0\). Hence \(u_{\varepsilon}\) solves the original problem. At last, \(u_{\varepsilon}>0\) by the maximum principle [39].
Appendix
### Symmetry and decay properties of the autonomous problem
**Lemma 6.1**.: _Assume (F1), (F4) and (F5). Let \(u\in H^{1}(\mathbb{R}^{N})\) and \(\lambda\in\mathbb{R}\) satisfy_
\[-\Delta u=f(u)+\lambda u,\quad u\geq 0,\quad u\not\equiv 0.\]
_Then \(u\in C^{2}(\mathbb{R}^{N})\), \(u>0\) in \(\mathbb{R}^{N}\), and it is radially symmetric about some point. Moreover, if \(|\lambda|+\|u\|_{H^{1}}\leq C_{0}\), then there is \(C_{1}>0\), \(C_{2}>0\) independent of \(\lambda\) such that_
\[C_{1}e^{\frac{\lambda}{\sigma}}e^{-\frac{\sigma}{4}\tau^{2}}\leq u(r)\leq C_{2 }e^{\frac{\lambda}{\sigma}}e^{-\frac{\sigma}{4}\tau^{2}}, \tag{61}\]
_and_
\[\frac{u^{\prime}}{ru}\to-\frac{\sigma}{2}\quad\text{as}\;r\to+\infty. \tag{62}\]
Proof.: By the maximum principle [39], we have \(u>0\). It is clear that \(u\in C^{2}\) and \(|u(x)|+|\nabla u(x)|\to 0\) as \(|x|\to\infty\). To show radial symmetry, we apply moving plane arguments. (See e.g. [19, 33, 42].) Denote \(x=(x_{1},x^{\prime})\), and for \(t\in\mathbb{R}\), set
\[\Sigma_{t}=\{\,x\in\mathbb{R}^{N}\mid x_{1}<t\,\}\,,\]
\[x_{t}=(2t-x_{1},x^{\prime}),\quad u_{t}(x)=u(x_{t}),\quad w_{t}=u_{t}-u.\]
Then in \(\Sigma_{t}\), we have
\[-\Delta w_{t}=\lambda w_{t}+f(u_{t})-f(u). \tag{63}\]
**Step 1.** By (F5), there is \(\tau>0\) such that \(f^{\prime}(s)<-\lambda-1\) for \(s\in(0,\tau)\). Take \(R>1\) such that \(u(x)<\min\{\tau,u(0)\}\) if \(|x|\geq R\). We show that \(w_{t}\geq 0\) in \(\Sigma_{t}\setminus B_{R}(0)\) for each \(t\).
Otherwise, since \(w_{t}(x)\to 0\) as \(|x|\to+\infty\) and \(w_{t}|_{\partial\Sigma_{t}}=0\), we assume \(w_{t}\) reaches its negative minimum at some \(\hat{x}\in\Sigma_{t}\setminus B_{R}(0)\). By \(u_{t}(\hat{x})<u(\hat{x})\), we have \(-\Delta w_{t}=\lambda w_{t}+f(u_{t})-f(u)=\int_{u}^{u_{t}}(f^{\prime}(s)+ \lambda)\mathrm{d}s>0\) at \(\hat{x}\). This is a contradiction since \(\hat{x}\) is the minimum point of \(w_{t}\).
Note that Step 1 implies that \(w_{t}\geq 0\) in \(\Sigma_{t}\) for each \(t\leq-R\).
**Step 2.** Set \(t_{0}=\sup\,\{\,t\mid w_{t^{\prime}}\geq 0\text{ in }\Sigma_{t^{\prime}}\text{ for any }t^{\prime}\in(-\infty,t]\,\}<\infty\). We claim that \(w_{t_{0}}\equiv 0\). By continuity, \(w_{t_{0}}\geq 0\). For \(t\leq t_{0}\) by (63), there holds
\[-\Delta w_{t}+\left[\frac{f_{1}(u_{t})-f_{1}(u)}{u_{t}-u}-\lambda\right]w_{t} =\frac{f_{2}(u_{t})}{u_{t}}u_{t}-\frac{f_{2}(u)}{u}u\geq\frac{f_{2}(u)}{u}w_{t }\geq 0, \tag{64}\]
where \(\frac{f_{1}(u_{t})-f_{1}(u)}{u_{t}-u}-\lambda\) is bounded from below in \(\Sigma_{t}\). By maximum principle ([13, 26]), \(w_{t}\equiv 0\) or \(w_{t}>0\) in \(\Sigma_{t}\). If \(w_{t_{0}}\not\equiv 0\) then \(w_{t_{0}}>0\).
To finish this step, we prove that there exists \(\delta_{0}>0\) such that for any \(\delta\in(0,\delta_{0}]\)
\[w_{\lambda_{0}+\delta}\geq 0\text{ in }\Sigma_{\lambda_{0}+\delta}.\]
Arguing by contradiction, for \(\delta_{i}\to 0^{+}\), we let \(x^{i}\in\Sigma_{\lambda_{0}+\delta_{i}}\) be the negative minimum point of \(w_{\lambda_{0}+\delta_{i}}\). We note that by Step 1, \(|x^{i}|\leq R\) for all \(i\). We assume \(x^{i}\to x^{0}\). Then
\[w_{\lambda_{0}}(x^{0})\leq 0,\quad\nabla w_{\lambda_{0}}(x^{0})=0,\]
which implies \(x^{0}\in\partial\Sigma_{\lambda_{0}}\). By (64) and Hopf Lemma ([13, 26]), we get a contradiction
\[\frac{\partial w_{\lambda_{0}}(x^{0})}{\partial x_{1}}<0.\]
Now we have shown that \(u_{t_{0}}=u\) and \(\frac{\partial u}{\partial x_{1}}>0\) in \(\Sigma_{t_{0}}\) by Step 2. Then we can complete the proof since similar arguments hold for any direction in \(\mathbb{R}^{N}\).
To proceed, we can get (61) by comparing with the unique positive solution
\[v=e^{\frac{u}{\sigma}+\frac{N}{2}}e^{-\frac{\sigma}{4}|x|^{2}}\]
to
\[\begin{cases}-\Delta v=\sigma v\log|v|+av&\text{ in }\ \mathbb{R}^{N},\\ v(x)\to 0&\text{ as }\ |x|\to\infty,\end{cases}\]
where \(a\in\mathbb{R}\). Here we only give the details for the proof of (62). Set \(z=-\frac{v^{\prime}}{ru}\). We have
\[z^{\prime}=rz^{2}+r^{-1}\frac{f(u)}{u}-Nr^{-1}z:=d(r,z).\]
By (F6), as \(r\to+\infty\),
\[d(r,z)=rz^{2}+r^{-1}\sigma\log u-Nr^{-1}z+O(r^{-1})=r(z^{2}-\frac{\sigma^{2}}{4 })-Nr^{-1}z+O(r^{-1}).\]
For each \(\tau\in(0,1)\), there is \(r_{1,\tau}>0\) such that if \(r\geq r_{1,\tau}\) and \(z\geq\frac{\sigma}{2(1-\tau)}\), then
\[d(r,z)\geq rz^{2}(1-(1-\tau)^{2})-Nr^{-1}z+O(r^{-1})\geq z^{2}.\]
On the other hand, there is \(r_{1,\tau}>0\) such that if \(r\geq r_{2,\tau}\) and \(0<z\leq\frac{\sigma((1-\tau))}{2}\), then
\[d(r,z)\leq-r\frac{\sigma^{2}}{4}(1-(1-\tau)^{2})-Nr^{-1}z+O(r^{-1})\leq-1.\]
Once the solution curve \((r,z(r))\) enters \([r_{1,\tau},+\infty)\times[\frac{\sigma}{2(1-\tau)},+\infty)\) or \([r_{2,\tau},+\infty)\times(0,\frac{\sigma((1-\tau))}{2}]\), it either blows up at some finite \(r\) or touches the \(r-\)axis. This is impossible since \(z(r)>0\) exists in \((0,+\infty)\). Hence we have
\[\frac{\sigma((1-\tau))}{2}\leq z(r)\leq\frac{\sigma}{2(1-\tau)}\quad\text{for each }r\geq\max\{r_{1,\tau},r_{2,\tau}\}.\qed\]
### Proof of Proposition 3.5
Proof.: Let \((\lambda,\mathbf{v})\) be a solution to (27). Then by Lemma 3.2, \(\sum_{i=1}^{\ell}(|\lambda|+\|v_{i}\|_{H^{1}})\) is bounded. Setting \(w_{j}=v_{j}(\cdot-p_{j})\) and \(\lambda_{i}=\lambda-\mu_{i}\), we have
\[J(\sum_{j=1}^{\ell}w_{j})= \frac{1}{2}\int_{\mathbb{R}^{N}}|\sum_{j=1}^{\ell}\nabla w_{j}|^{ 2}-\int_{\mathbb{R}^{N}}F(\sum_{j=1}^{\ell}w_{j})\] \[= \sum_{j=1}^{\ell}J(w_{j})+\frac{1}{2}\sum_{i=1}^{\ell}\sum_{j \neq i}\int_{\mathbb{R}^{N}}\nabla w_{i}\nabla w_{j}+\sum_{i=1}^{\ell}\int_{ \mathbb{R}^{N}}F(w_{i})-\int_{\mathbb{R}^{N}}F(\sum_{j=1}^{\ell}w_{j})\] \[= \mathbb{J}(\mathbf{v})+\frac{1}{2}\sum_{i=1}^{\ell}\int_{\mathbb{R}^ {N}}(f(w_{i})+\lambda_{i}w_{i})\sum_{j\neq i}w_{j}+\sum_{i=1}^{\ell}\int_{ \mathbb{R}^{N}}F(w_{i})-\int_{\mathbb{R}^{N}}F(\sum_{j=1}^{\ell}w_{j}).\]
Note that by Lemma 2.1 (iii),
\[F(\sum_{j=1}^{\ell}w_{j}) =\frac{F(\sum_{j=1}^{\ell}w_{j})}{(\sum_{j=1}^{\ell}w_{j})^{2}}( \sum_{j=1}^{\ell}w_{j})^{2}\] \[=\frac{F(\sum_{j=1}^{\ell}w_{j})}{(\sum_{j=1}^{\ell}w_{j})^{2}}( \sum_{k=1}^{\ell}w_{k}^{2}+\sum_{i=1}^{\ell}\sum_{k\neq i}w_{i}w_{k})\] \[>\sum_{k=1}^{\ell}\frac{F(w_{k})}{w_{k}^{2}}w_{k}^{2}+\frac{F( \sum_{j=1}^{\ell}w_{j})}{(\sum_{j=1}^{\ell}w_{j})^{2}}\sum_{i=1}^{\ell}\sum_{k \neq i}w_{i}w_{k}\] \[=\sum_{k=1}^{\ell}F(w_{k})+\frac{F(\sum_{j=1}^{\ell}w_{j})}{(\sum _{j=1}^{\ell}w_{j})^{2}}\sum_{i=1}^{\ell}\sum_{k\neq i}w_{i}w_{k}.\]
By (F5), there is \(C>0\) such that for \(s\in(0,1+\max_{1\leq i\leq\ell}\|v_{i}\|_{L^{\infty}})\),
\[f(s)\leq\sigma s\log s+Cs,\quad F(s)\geq\frac{\sigma}{2}s^{2}\log s-Cs^{2}.\]
Therefore,
\[J(\sum_{j=1}^{\ell}w_{j})-\mathbb{J}(\mathbf{v})<\frac{1}{2}\sum_{i=1}^{\ell}\sum_ {k\neq i}\int_{\mathbb{R}^{N}}w_{i}w_{k}\left(\sigma\log w_{i}-\sigma\log(\sum_ {j=1}^{\ell}w_{j})+4C\right).\]
Without loss of generality, for some \(i\neq k\), we assume that \(|p_{i}-p_{k}|=\xi(\mathbf{p})\), and up to a transformation of coordinates, \(p_{i}=(-\xi(\mathbf{p})/2,0^{\prime})\in\mathbb{R}^{N}\) and \(p_{k}=(\xi(\mathbf{p})/2,0^{\prime})\in\mathbb{R}^{N}\) with \(0^{\prime}\in\mathbb{R}^{N-1}\). By Lemma 6.1,
\[C_{1}e^{-\frac{\sigma}{4}|x-p_{i}|^{2}}\leq w_{i}\leq C_{2}e^{-\frac{\sigma}{4}|x-p_{i}|^{2}}, \quad C_{1}e^{-\frac{\sigma}{4}|x-p_{k}|^{2}}\leq w_{k}\leq C_{2}e^{-\frac{ \sigma}{4}|x-p_{k}|^{2}}.\]
Then we have
\[\int_{\mathbb{R}^{N}}w_{i}w_{k}\mathrm{d}x\leq C\int_{\mathbb{R}^{N}}e^{-\frac{\sigma}{4}(|x_{1}+\frac{\xi(\mathbf{p})}{2 }|^{2}+|x_{1}-\frac{\xi(\mathbf{p})}{2}|^{2}+2|x^{\prime}|^{2})}\mathrm{d}x_{1} \mathrm{d}x^{\prime} \tag{65}\] \[= C\int_{\mathbb{R}^{N}}e^{-\frac{\sigma}{4}(2x_{1}^{2}+2|x^{\prime }|^{2}+\frac{\xi(\mathbf{p})^{2}}{2})}\mathrm{d}x_{1}\mathrm{d}x^{\prime}=Ce^{- \frac{\sigma\xi(\mathbf{p})^{2}}{8}},\]
where \(x=(x_{1},x^{\prime})\) with \(x_{1}\in\mathbb{R}\) and \(x^{\prime}\in\mathbb{R}^{N-1}\). On the other hand,
\[\int_{\mathbb{R}^{N}}w_{i}w_{k}\left(\log w_{i}-\log(\sum_{j=1}^{ \ell}w_{j})\right)\] \[\leq \int_{[0,1]\times\mathbb{R}^{N-1}}w_{i}w_{k}\left(\log w_{i}-\log (\sum_{j=1}^{\ell}w_{j})\right)\leq\int_{[0,1]\times\mathbb{R}^{N-1}}w_{i}w_{k }\log\frac{w_{i}}{w_{k}}\] \[\leq -C\int_{[0,1]\times\mathbb{R}^{N-1}}(|x_{1}+\frac{\xi(\mathbf{p})}{2 }|^{2}-|x_{1}-\frac{\xi(\mathbf{p})}{2}|^{2})e^{-\frac{\sigma}{4}(|x_{1}+\frac{\xi (\mathbf{p})}{2}|^{2}+|x_{1}-\frac{\xi(\mathbf{p})}{2}|^{2}+2|x^{\prime}|^{2})} \mathrm{d}x_{1}\mathrm{d}x^{\prime}+C\int_{[0,1]\times\mathbb{R}^{N-1}}w_{i}w_ {k}\] \[\leq -C\int_{[0,1]\times\mathbb{R}^{N-1}}2\xi(\mathbf{p})x_{1}e^{-\frac{ \sigma}{4}(2x_{1}^{2}+2|x^{\prime}|^{2}+\frac{\xi(\mathbf{p})^{2}}{2})}\mathrm{d}x_ {1}\mathrm{d}x^{\prime}+C\int_{\mathbb{R}^{N}}w_{i}w_{k}\] \[\leq -C\xi(\mathbf{p})e^{-\frac{\sigma\xi(\mathbf{p})^{2}}{8}}\int_{0}^{1}x_{1 }e^{-\frac{\sigma}{2}x_{1}^{2}}\mathrm{d}x_{1}\int_{\mathbb{R}^{N-1}}e^{- \frac{\sigma}{2}|x^{\prime}|^{2}}\mathrm{d}x^{\prime}+Ce^{-\frac{\sigma\xi( \mathbf{p})^{2}}{8}}.\]
By (65) again, we deduce
\[\alpha<|\sum_{j=1}^{\ell}w_{j}|_{2}^{2}\leq\alpha+Ce^{-\frac{\sigma\xi(\mathbf{p}) ^{2}}{8}},\quad.\]
Then,
\[J(\sum_{j=1}^{\ell}w_{j})+\frac{V_{0}}{2}\int_{\mathbb{R}^{N}}|\sum_{j=1}^{ \ell}w_{j}|^{2}\leq\mathbb{J}(\mathbf{v})+\frac{V_{0}}{2}\alpha-C^{\prime}\xi( \mathbf{p})e^{-\frac{\sigma\xi(\mathbf{p})^{2}}{8}}.\]
Since \(0\leq 1-B^{2}=|\sum_{j=1}^{\ell}w_{j}|_{2}^{2}-2(|\sum_{j=1}^{\ell}w_{j}|_{2}^{2}- \alpha)\leq Ce^{-\frac{\sigma\xi(\mathbf{p})^{2}}{8}}\), we have \(|B-1|\leq Ce^{-\frac{\sigma\xi(\mathbf{p})^{2}}{8}}\). Hence,
\[\int_{\mathbb{R}^{N}}|F(B\sum_{j=1}^{\ell}w_{j})-F(\sum_{j=1}^{ \ell}w_{j})|\leq |B-1|\int_{\mathbb{R}^{N}}\sum_{j=1}^{\ell}w_{j}|f(\theta\sum_{j=1}^{ \ell}w_{j})|\leq Ce^{-\frac{\sigma\xi(\mathbf{p})^{2}}{8}},\]
where \(\theta\in(B,1)\). Then,
\[J(B\sum_{j=1}^{\ell}w_{j})+\frac{V_{0}}{2}\int_{\mathbb{R}^{N}}|B \sum_{j=1}^{\ell}w_{j}|^{2}\leq J(\sum_{j=1}^{\ell}w_{j})+\frac{V_{0}}{2}\int_{\mathbb{R}^{N}}| \sum_{j=1}^{\ell}w_{j}|^{2}+Ce^{-\frac{\sigma\xi(\mathbf{p})^{2}}{8}}\] \[\leq \mathbb{J}(\mathbf{v})+\frac{V_{0}}{2}\alpha-C\xi(\mathbf{p})e^{-\frac{ \sigma\xi(\mathbf{p})^{2}}{8}}.\qed\]
**Acknowledgement.** The research was supported by NSFC-12001044, NSFC-12071036, NSFC-11901582. |
2302.05085 | Pathological exponential asymptotics for a model problem of an
equatorially trapped Rossby wave | We examine a misleadingly simple linear second-order eigenvalue problem (the
Hermite-with-pole equation) that was previously proposed as a model problem of
an equatorially-trapped Rossby wave. In the singularly perturbed limit
representing small latitudinal shear, the eigenvalue contains an
exponentially-small imaginary part; the derivation of this component requires
exponential asymptotics. In this work, we demonstrate that the problem contains
a number of pathological elements in exponential asymptotics that were not
remarked upon in the original studies. This includes the presence of dominant
divergent eigenvalues, non-standard divergence of the eigenfunctions, and
inactive Stokes lines due to the higher-order Stokes phenomenon. The techniques
developed in this work can be generalised to other linear or nonlinear
eigenvalue problems involving asymptotics beyond-all-orders where such
pathologies are present. | Josh Shelton, S. Jonathan Chapman, Philippe H. Trinh | 2023-02-10T07:08:11Z | http://arxiv.org/abs/2302.05085v1 | # Pathological exponential asymptotics for a model problem of an equatorially trapped Rossby wave +
###### Abstract
We examine a misleadingly simple linear second-order eigenvalue problem (the Hermite-with-pole equation) that was previously proposed as a model problem of an equatorially-trapped Rossby wave. In the singularly perturbed limit representing small latitudinal shear, the eigenvalue contains an exponentially-small imaginary part; the derivation of this component requires exponential asymptotics. In this work, we demonstrate that the problem contains a number of pathological elements in exponential asymptotics that were not remarked upon in the original studies. This includes the presence of dominant divergent eigenvalues, non-standard divergence of the eigenfunctions, and inactive Stokes lines due to the higher-order Stokes phenomenon. The techniques developed in this work can be generalised to other linear or nonlinear eigenvalue problems involving asymptotics beyond-all-orders where such pathologies are present.
Exponential asymptotics, beyond-all-orders analysis, Stokes phenomenon
## 1 Introduction
The motivation of this work stems from an interesting mathematical model that was proposed by Boyd & Natarov (1998) in order to describe equatorially-trapped Rossby waves when the mean shear flow is only a function of the latitude. In such cases, the eigenfunctions are modelled by the so-called _Hermite-with-pole_ equation
\[\frac{\mathrm{d}^{2}u}{\mathrm{d}z^{2}}+\left[\frac{1}{z}-\lambda -\left(z-\frac{1}{\epsilon}\right)^{2}\right]u=0, \tag{1a}\] \[u(z)\to 0\quad\text{as}\quad z\to\pm\infty,\] (1b) \[u(0)=1. \tag{1c}\]
Here, \(\epsilon\) corresponds to the shear strength, \(u\) corresponds to a normal mode amplitude, and \(\lambda\) is an eigenvalue determined by the boundary condition at \(z=0\). Although this resembles the standard parabolic cylinder equation with Hermite functions as eigenfunctions, the pole at \(z=0\) lies in the interval of consideration. Boyd & Natarov consider the pole at \(z=0\) as emerging from a singularity in the analytic continuation, which approaches the real axis as viscosity tends to zero. As it turns out, the associated eigenvalue to (1a) is complex-valued; in the limit \(\epsilon\to 0\), the eigenvalue contains an exponentially-small imaginary part, \(\mathrm{Im}[\lambda]=O(\mathrm{e}^{-1/\epsilon^{2}})\). One of the aims of the analysis is to derive this exponentially-small eigenvalue component.
In their work, Boyd & Natarov (1998) note that an asymptotic expansion of \(u(z)\) in integer powers of \(\epsilon\) diverges, and they develop a procedure for approximating \(\mathrm{Im}[\lambda]\) with the use of an integral property from Sturm-Louville theory. Their approach relies upon the use of special functions theory and the niceties of the linear differential equation. In contrast, the emphasis of our work here will be on developing a framework that is applicable for more general differential equations--particularly for nonlinear problems where special functions theory is unavailable. Our goal is to study the divergence of
the asymptotic expansion for the eigenfunction and examine its connection, via the Stokes phenomenon, to the exponentially-small components. In SS7.1 we discuss the significance in the application of techniques developed in this paper, both to the more complete geophysical problem discussed by Natarov & Boyd (2001), as well as other problems involving singular perturbations.
For analysis, there is a more convenient form of (1a), which is found by shifting
\[y=z-\frac{1}{\epsilon}, \tag{2}\]
where now \(y=0\) corresponds to the equator. Then we have, for \(u=u(y)\),
\[\frac{\mathrm{d}^{2}u}{\mathrm{d}y^{2}}+\left[\frac{\epsilon}{1+\epsilon y}-y ^{2}\right]u=\lambda u. \tag{3}\]
This intermediary equation contains a turning point at \(y=-1/\epsilon\) which we study by rescaling with \(y=Y/\epsilon\). We then set \(u(y)=\mathrm{e}^{-y^{2}/2}\psi(Y)\) which yields the system
\[\epsilon^{2}\psi^{\prime\prime}-2Y\psi^{\prime}+\frac{\epsilon \psi}{1+Y}=(\lambda+1)\psi, \tag{4a}\] \[\mathrm{e}^{-Y^{2}/2\epsilon^{2}}\psi(Y)\to 0\quad\text{as} \quad Y\to\pm\infty,\] (4b) \[\psi(0)=1. \tag{4c}\]
In (4a) and henceforth, we use primes (\({}^{\prime}\)) to denote differentiation in \(Y\).
## 2 A roadmap of the methodology and main results
As it turns out, the Hermite-with-pole problem (4a) has a number of non-trivial elements that were not remarked upon in the original studies; the treatment of which has required the development of new techniques in exponential asymptotics. We explain some of
Figure 1: The imaginary component of the eigenvalue, \(\lambda\), is shown for the numerical solutions of Boyd & Natarov (1998) (circles) and the analytical prediction of \(\mathrm{Im}[\lambda]=\sqrt{\pi}[1-2\epsilon\log\epsilon+\epsilon(\log 2+\gamma)] \mathrm{e}^{-1/\epsilon^{2}}\) (line). Here, \(\gamma\approx 0.577\) is the Euler-Macheroni constant.
these aspects in the context of singularly perturbed linear eigenvalue problems of the form (1.4a), \(\mathcal{L}(\psi;\epsilon)=\lambda\psi\), although many of the same ideas apply more generally to nonlinear eigenvalue problems.
Firstly, asymptotic expansions for \(\psi=\psi_{0}+\epsilon\psi_{1}+\cdots\) and \(\lambda=\lambda_{0}+\epsilon\lambda_{1}+\cdots\) are sought, but these expansions are divergent and must be optimally truncated. The solution is then expressed as a truncated series with a remainder by considering
\[\psi(Y)=\sum_{n=0}^{N-1}\epsilon^{n}\psi_{n}(Y)+\mathcal{R}_{N}(Y), \tag{1}\]
with a similar expression for the eigenvalue, \(\lambda\). When \(N\) is chosen optimally [later shown to be of \(O(\epsilon^{-2})\)] the remainder \(\mathcal{R}_{N}(Y)\) is exponentially-small, and satisfies the linear eigenvalue problem of
\[\mathcal{L}(\mathcal{R}_{N};\epsilon)\sim-\epsilon^{N}\psi_{N-2}^{\prime\prime}. \tag{2}\]
The remainder, \(\mathcal{R}_{N}\), will exhibit the Stokes phenomenon, in which its magnitude rapidly varies across certain contours in the complex \(Y\)-plane. Indeed, as we shall show, this behaviour can be predicted by estimating the growth of the forcing term \(\psi_{N\,-2}^{\prime\prime}\). Thus, the late-term behaviour of the divergent series, \(\psi_{N}\) with \(N\to\infty\), is required in order to correctly resolve the Stokes phenomenon on the remainder \(\mathcal{R}_{N}(Y)\). This 'decoding' of divergence is one of the hallmarks of exponential asymptotics.
One of our main results of this paper is that for the Hermite-with-pole problem, additional components of the late-term divergence, \(\psi_{n}\), are required. It is well known, according to the principles of exponential asymptotics (cf. Chapman & Vanden-Broeck (2002)) that the \(n\)th-order approximation of most singularly perturbed differential equations exhibits a factorial-power-divergence similar to
\[\psi_{n}\sim\frac{Q(Y)\Gamma\left(\frac{n}{2}+\alpha\right)}{\chi(Y)^{\frac{n} {2}+\alpha}}\quad\text{as $n\to\infty$}, \tag{3}\]
where different problems may involve slight modifications to the above form. Thus for instance, the fractional coefficient of \(n\) that appears above may be modified to ensure the correct dominant balance arises in the equation. The functions \(Q\) and \(\chi\) and the constant \(\alpha\) prescribe the divergent behaviour.
However in this work we demonstrate that the Hermite-with-pole problem exhibits an atypical divergence of the form
\[\psi_{n}\sim\left\{\begin{aligned} &\mathcal{S}(Y)\Big{[}L(Y)\log \left(n\right)+Q(Y)\Big{]}\frac{\Gamma(\frac{n}{2}+\alpha_{0})}{\chi^{n/2+ \alpha_{0}}}\\ &\qquad\qquad\qquad\qquad\qquad+Q_{0}^{(\lambda_{n})}(Y)\log^{2} \left(n\right)\Gamma\!\left(\frac{n+1}{2}+\alpha_{0}\right)& \text{for $n$ even},\\ &\underbrace{\mathcal{S}(Y)}_{\text{HOSP}}\underbrace{R(Y)\frac{ \Gamma(\frac{n}{2}+\alpha_{1})}{\chi^{n/2+\alpha_{1}}}}_{\text{naive divergence}}+\underbrace{R_{1}^{(\lambda_{n})}(Y)\log \left(n\right)\Gamma\!\left(\frac{n+1}{2}+\alpha_{1}\right)}_{\lambda_{n}\text { divergence}}&\text{for $n$ odd}.\end{aligned}\right. \tag{4}\]
Here, the singular, \(\chi(Y)\), takes a value of zero at singularities in the early orders of the asymptotic expansion, and \(\mathcal{S}(Y)\) is a higher-order Stokes multiplier which takes the values of \(\mathcal{S}=1\) for \(\text{Re}[Y]<0\) and \(\mathcal{S}=0\) for \(\text{Re}[Y]>0\). This change in \(\mathcal{S}\) occurs smoothly across a boundary layer, surrounding the imaginary axis, of diminishing
width as \(n\to\infty\). The solution divergence (4) is also associated with a divergent eigenvalue, of the form
\[\lambda_{n}\sim\begin{cases}\Big{[}\delta_{0}\log{(n)}+\delta_{1}\Big{]}\Gamma \Big{(}\frac{n+1}{2}+\alpha_{0}\Big{)}&\quad\text{for $n$ even},\\ \delta_{2}\Gamma\Big{(}\frac{n+1}{2}+\alpha_{1}\Big{)}&\quad\text{for $n$ odd}.\end{cases} \tag{5}\]
Once these late-term components of the solution and eigenvalue are known, a procedure for the derivation of the exponentially-small components can be followed.
We now comment on the following pathologies related to (4) and (5):
1. _Divergent eigenvalues._ Although exponential asymptotics has been applied to other eigenvalue problems (cf. Tanveer (1987), Kruskal & Segur (1991), Chapman & Kozyreff (2009), Shelton & Trinh (2022)), in such cases, the eigenvalue divergence has not been noted as significant. In the present work, the divergence of \(\lambda_{n}\) affects the leading-order prediction of the eigenfunction divergence in (4), and is required to satisfy the associated boundary conditions on the late-term solution.
2. _Spurious singularities in the late-term approximation_. It is known (cf. Dingle (1973), Berry (1989), Chapman _et al._ (1998)) that typically, divergence of the late terms is captured by a factorial-over-power ansatz of the form displayed in (3). This factorial-over-power divergence is often taken as a universality of many problems in singularly perturbed asymptotics. However, we find that in the Hermite-with-pole problem, an additional singularity beyond that of \(Y=-1\) is predicted by the divergent ansatz. This misleadingly suggests that the late-order divergence of the asymptotic series is attributed to a point where no singularity appears in the early orders. This unusual aspect is associated with the following item.
3. _The higher-order Stokes phenomenon (HOSP)._ The Hermite-with-pole problem exhibits a pathology where the anticipated Stokes phenomenon is suppressed in certain regions of the complex plane. This complexity is an example of the higher-order Stokes phenomena, for which a general analytic understanding from the viewpoint of the divergent series has remained elusive (c.f. Howls _et al._ (2004), Daalhuis (2004), Body _et al._ (2005), Chapman & Mortimer (2005)). Only the consequences of this phenomena will be discussed in this work, and we refer the reader to Shelton _et al._ (2023_b_) for a detailed derivation of HOSP from the perspective of the divergent series.
4. _Atypical boundary layers in the late terms._ The naive factorial-over-power divergence (4) is unable to satisfy boundary condition (4c) at \(Y=0\), due to the functional prefactor growing without bound as \(Y\to 0\). A boundary layer of vanishing size as \(n\to\infty\) must be introduced, in which the two divergences shown in (4) interact, which also drives the HOSP of the previous point.
5. _Even-and-odd pairing of the late terms._ Consecutive terms in the asymptotic expansion exhibit different singular behaviour at \(Y=-1\): one is purely algebraic, and the other is the product of a logarithmic and an algebraic singularity. Consequently the late-term representation (4) requires a different ansatz for \(n\) even and \(n\) odd.
It is the resolution of these complicated issues within that separates our work from the previous work by Boyd & Natarov (1998). In the end, despite its misleadingly
simple form, the Hermite-with-pole problem turns out to be quite a pathological investigation of beyond-all-orders asymptotics.
## 3 An Initial Asymptotic Expansion
We begin by considering the asymptotic expansions
\[\psi(Y)=\sum_{n=0}^{\infty}\epsilon^{n}\psi_{n}(Y)\qquad\text{and}\qquad\lambda= \sum_{n=0}^{\infty}\epsilon^{n}\lambda_{n}. \tag{10}\]
At leading order in equation (4a) we find the solution \(\psi_{0}=C_{0}Y^{-(1+\lambda_{0})/2}\), where \(C_{0}\) is a constant of integration. In general this solution is singular or contains a branch point at \(Y=0\). In order to apply the leading-order boundary condition of \(\psi_{0}(0)=1\) at the same location, a boundary layer should typically be considered. However, we can verify through an inner-matching procedure that the leading-order eigenvalue is \(\lambda_{0}=-1\). Then the boundary condition at \(Y=0\) gives \(C_{0}=1\) and no boundary-layer theory is required. This yields our leading-order solution of
\[\psi_{0}=1\qquad\text{and}\qquad\lambda_{0}=-1. \tag{11}\]
We emphasise that the singularity at \(Y=0\) in the leading-order solution has been removed by the choice of the eigenvalue, \(\lambda_{0}=-1\). A similar argument will be applied in subsequent orders to enforce regularity of the solution at \(Y=0\).
At the next order, \(O(\epsilon)\), of equation (4a), we find the solution
\[\psi_{1}=C_{1}+\frac{(1-\lambda_{1})}{2}\log(Y)-\frac{1}{2}\log(1+Y), \tag{12}\]
which contains singularities at both \(Y=0\) and \(Y=-1\). To apply the boundary condition \(\psi_{1}(0)=0\), we require \(\lambda_{1}=1\), which then determines the constant of integration as \(C_{1}=0\). Thus, our \(O(\epsilon)\) solution is
\[\psi_{1}=-\frac{1}{2}\log(1+Y)\qquad\text{and}\qquad\lambda_{1}=1. \tag{13}\]
Note that the above is singular at \(Y=-1\). Since successive terms in the asymptotic series for \(\psi\) in (10) rely on repeated differentiation of previous terms, the logarithmic singularity will result in the divergence of the series for \(\psi_{n}\) as \(n\to\infty\). It is this divergence that we wish to characterise. Note that in the \(n\to\infty\) limit, on the assumption that \(\psi_{n}\) is divergent, there exists a dominant balance between the two terms \(\epsilon^{2}\psi^{\prime\prime}\) and \(-2Y\psi^{\prime}\) of (4a). Thus, we must continue to derive additional early orders of the solution until the effects of the \(\epsilon^{2}\psi^{\prime\prime}\) term become apparent. Since the singularity at \(Y=-1\) in \(\psi_{1}\) first appears at \(O(\epsilon)\), the effects of this term will begin at \(O(\epsilon^{3})\).
The same procedure is applied at \(O(\epsilon^{2})\) and \(O(\epsilon^{3})\), for which we find the solutions
\[\psi_{2}=\frac{1}{8}\log^{2}(1+Y),\qquad\lambda_{2}=0, \tag{14a}\] \[\psi_{3}=-\frac{Y}{4(1+Y)}-\frac{1}{48}\log^{3}(1+Y)-\frac{1}{4} \log(1+Y),\qquad\lambda_{3}=\frac{1}{2}. \tag{14b}\]
Note that while the singularities at \(Y=-1\) in \(\psi_{1}\) and \(\psi_{2}\) were logarithmic, the dominant singularity in \(\psi_{3}\) is algebraic and of order unity. Typically the order of the
singular behaviour of successive terms in the asymptotic series would increase linearly in a predictable fashion (see _e.g._ the work by Chapman _et al._ (1998)). This is not the case for our current problem, which can be seen by progressing to the next order, which has the solution
\[\psi_{4}=-\frac{\log(1+Y)}{8(1+Y)}-\frac{Y}{8(1+Y)}+\frac{\log^{4}(1+Y)}{384}+ \frac{\log^{2}(1+Y)}{8}\quad\mbox{and}\quad\lambda_{4}=\frac{1}{4}. \tag{10}\]
From (10b) and (10), we find the singular scalings, as \(Y\to-1\), of
\[\psi_{3}\sim\frac{1}{4(1+Y)}\qquad\mbox{and}\qquad\psi_{4}\sim\frac{-\log(1+Y) }{8(1+Y)}. \tag{11}\]
From this, we anticipate that the singular behaviour as \(Y\to-1\) of the asymptotic series will proceed in the pairwise fashion of
\[\psi_{2k-1}=O\biggl{(}\frac{1}{(1+Y)^{k-1}}\biggr{)}\qquad\mbox{and}\qquad \psi_{2k}=O\biggl{(}\frac{\log(1+Y)}{(1+Y)^{k-1}}\biggr{)} \tag{12}\]
for integer \(k\geq 2\), and hence the order of the algebraic singularity increases every other term. As it turns out, the above form in (12), which predicts the behaviour of the late-order terms as \(Y\to-1\) and \(n\to\infty\) also hints at the proper ansatz for \(n\to\infty\) in general. In the late-term analysis that follows we will employ separate divergent predictions for \(\psi_{n}\), distinguishing between the cases of \(n\) even and \(n\) odd. The decoupling of the even and odd terms in the expansion as \(n\to\infty\) essentially arises because (1a) without the \(\epsilon\psi/(1+Y)\) term would have a natural expansion in powers of \(\epsilon^{2}\), but the addition of this term forces an expansion in powers of \(\epsilon\); similar behaviour has been observed in Chapman (1999).
## 4 Typical exponential asymptotics and the naive divergence
The goal of the exponential asymptotics procedure is to predict the exponentially-small eigenvalue and eigenfunction solutions. We shall see in SS7 that these exponentially-small terms are connected to the divergence of the expansion (10).
Our task in this section is to derive the analytical form of the late terms of (10) in the limit of \(n\to\infty\). For this, we follow the procedure of introducing an ansatz for the factorial-over-power divergence. However, this ansatz, given in equation (9) below, takes an unusual form due to the inclusion of a \(\log{(n)}\) divergent scaling for even values of \(n\). It is demonstrated in SS4.1.2, through an inner analysis at the singularity, why the divergent ansatz must take this form.
At \(O(\epsilon^{n})\) in (1a), we have
\[\psi_{n-2}^{\prime\prime}-2Y\psi_{n}^{\prime}-\frac{Y}{1+Y}\psi_{n-1}=\lambda_ {3}\psi_{n-3}+\cdots+\lambda_{n-1}\psi_{1}+\lambda_{n}, \tag{13a}\]
and the boundary-condition of (1c) yields at \(O(\epsilon^{n})\)
\[\psi_{n}(0)=0. \tag{13b}\]
The late-order solutions, \(\psi_{n}\), will contain a singularity at \(Y=-1\) in the manner prescribed by equation (12). Moreover, since subsequent orders are determined by differentiation of earlier terms in the expansion, we anticipate that the divergence of the solution, introduced in (4), will be captured by the factorial-over-power ansatz,
\[\psi_{n}\sim\begin{cases}\Bigl{[}L(Y)\log{(n)}+Q(Y)\Bigr{]}\frac{\Gamma(\frac{ n}{2}+\alpha_{0})}{[\chi(Y)]^{n/2+\alpha_{0}}}&\mbox{for $n$ even},\\ &R(Y)\frac{\Gamma(\frac{n}{2}+\alpha_{1})}{[\chi(Y)]^{n/2+\alpha_{1}}}&\mbox{ for $n$ odd}.\end{cases} \tag{14}\]
As we have warned, the analysis to follow is quite involved. In essence, our first task is to derive the so-called _naive divergence_ that appears in (4) and above in (4). This is performed in SS4.2 by neglecting the late-terms of the eigenvalue in the \(O(\epsilon^{n})\) equation. Before we do this, however, we shall motivate the unusual form of (4) in the next section by considering the outer limit of an inner solution at the boundary-layer near \(Y=-1\).
### Inner problem for the singularity of \(Y=-1\)
First, we note that the early orders of expansion (1) reorder as we approach the singularity at \(Y=-1\). Instead of consecutive terms in the outer expansion reordering, those with an odd and even powers of \(\epsilon\) will reorder amongst themselves. For instance, the reordering occurs between odd terms for \(\epsilon^{3}\psi_{3}\sim\epsilon^{5}\psi_{5}\) and even terms for \(\epsilon^{4}\psi_{4}\sim\epsilon^{6}\psi_{6}\). Since \(\psi_{3}\sim(1+Y)^{-1}\) and \(\psi_{5}\sim(1+Y)^{-2}\) from the singular behaviour introduced in equation (11), we balance \((1+Y)^{-1}\sim\epsilon^{2}(1+Y)^{-2}\) to find the width of the boundary layer to be of \(O(\epsilon^{2})\). The same width is found by considering the even reordering. We thus introduce the inner-variable, \(\hat{y}\), by setting
\[1+Y=\epsilon^{2}\hat{y}, \tag{12}\]
with \(\hat{y}\) of \(O(1)\) in the inner region. The inner equation may then be derived by substituting for \(\hat{y}\), giving
\[\frac{\mathrm{d}^{2}\hat{\psi}}{\mathrm{d}\hat{y}^{2}}+2(1-\epsilon^{2}\hat{y} )\frac{\mathrm{d}\hat{\psi}}{\mathrm{d}\hat{y}}+\frac{\epsilon\hat{\psi}}{ \hat{y}}=\epsilon^{2}(1+\lambda)\hat{\psi}, \tag{13}\]
where we denote the inner solution by \(\hat{\psi}\).
#### 4.1.1 Inner limit of the early orders
To motivate the correct form for the inner solution, we take the inner limit of the outer solution by substituting for \(\hat{y}\) and expanding as \(\epsilon\to 0\). This yields
\[\psi_{\mathrm{outer}}\sim 1-\epsilon\log{(\epsilon)}+\epsilon \bigg{[}-\frac{\log{(\hat{y})}}{2}+\frac{1}{4\hat{y}}+\cdots\bigg{]}+\frac{ \epsilon^{2}\log^{2}{(\epsilon)}}{2}\] \[+\epsilon^{2}\log{(\epsilon)}\bigg{[}\frac{\log{(\hat{y})}}{2}- \frac{1}{4\hat{y}}+\cdots\bigg{]}+\epsilon^{2}\bigg{[}\frac{\log^{2}{(\hat{y}) }}{8}-\frac{\log{(\hat{y})}}{8\hat{y}}+\frac{1}{8\hat{y}}+\cdots\bigg{]}+\cdots. \tag{14}\]
#### 4.1.2 Outer limit of the inner solution
In Appendix B, we solve the inner equation (13) by considering an inner solution, motivated by (14), of the form \(\hat{\psi}=\hat{\psi}_{0}+\epsilon\log{(\epsilon)}\hat{\psi}_{(1,1)}+\epsilon \hat{\psi}_{1}+\epsilon^{2}\log^{2}{(\epsilon)}\hat{\psi}_{(2,2)}+\epsilon^{2 }\log{(\epsilon)}\hat{\psi}_{(2,1)}+\epsilon^{2}\hat{\psi}_{2}+\cdots\). We write the inner solution from (12) in outer variables by substituting for \(\hat{y}=(1+Y)/\epsilon^{2}\) to give the outer limit (of the first six terms of the inner series) as
\[\hat{\psi}\sim 1-\epsilon\frac{\log{(1+Y)}}{2}+\epsilon^{2}\frac{ \log^{2}{(1+Y)}}{8}+\sum_{k=1}^{\infty}\frac{\epsilon^{1+2k}}{2}\frac{\Gamma(k )}{[2(1+Y)]^{k}} \tag{15}\] \[+\sum_{k=1}^{\infty}\frac{\epsilon^{2+2k}}{4}\frac{[4b_{k}-\log{ (1+Y)}\Gamma(k)]}{[2(1+Y)]^{k}}.\]
The divergent constant \(b_{k}\) is determined by the recurrence relation (14), which may be solved in the limit of \(k\to\infty\) (as performed in (15)) to give \(b_{k}\sim\frac{1}{2}(\log{(k)}+\gamma)\Gamma(k)\). It is this extra factor of \(\log{(k)}\) in the expansion for \(b_{k}\) that causes the unusual \(\log{(n)}\) divergent form introduced in (13). In order to compare (15) with the late-terms of the
outer solution at \(O(\epsilon^{n})\), we substitute \(n=1+2k\) for the first sum on the right-hand side of (4.6) and \(n=2+2k\) for the second. The \(O(\epsilon^{n})\) of this outer limit is then
\[\hat{\psi}\sim\begin{cases}\epsilon^{n}\bigg{[}\frac{1}{2}\log \left(n\right)+\frac{\gamma-\log\left(2\right)}{2}-\frac{1}{4}\log\left(1+Y \right)\bigg{]}\frac{\Gamma(\frac{n}{2}-1)}{[2(1+Y)]^{\frac{n}{2}-1}}&\text{for $n$ even,}\\ \frac{\epsilon^{n}}{2}\frac{\Gamma(\frac{n-1}{2})}{[2(1+Y)]^{\frac{n-1}{2}}}& \text{for $n$ odd,}\end{cases} \tag{4.7}\]
where we expanded \(b_{n/2-1}\sim\frac{1}{2}[\log\left(n\right)+\gamma-\log\left(2\right)]\Gamma( \frac{n}{2}-1)\) for \(n\to\infty\) as in (B.12). Equation (4.7) motivates the slightly unusual form of the factorial-over-power ansatz we had previously introduced in (4.2). We are now ready to return to study the divergence of the outer solution.
### Divergence of the homogeneous late-term equation
In this section, we derive the _naive divergence_, which is obtained as a solution to the the \(O(\epsilon^{n})\) equation (4.1a) when the late-terms of the eigenvalue are neglected. We thus study the equation
\[\psi_{n-2}^{\prime\prime}-2Y\psi_{n}^{\prime}-\frac{Y}{1+Y}\psi_{n-1}=\lambda_ {3}\psi_{n-3}+\cdots, \tag{4.8}\]
where the lower order terms on the right hand side are of orders \(\psi_{n-4}\), \(\psi_{n-5}\), and so forth. Later in SS6, we demonstrate that the late-terms of the eigenvalue produces particular solutions that are subdominant as \(n\to\infty\) near the singularity of \(Y=-1\), but which are crucially responsible for the higher-order Stokes phenomenon.
Substituting the factorial-over-power ansatz (4.2) into the homogeneous equation (4.8), the dominant terms in the equation are of \(O(\log\left(n\right)\Gamma(n/2+\alpha_{0}+1)/\chi^{n/2+\alpha_{0}+1})\) for \(n\) even and \(O(\Gamma(n/2+\alpha_{1}+1)/\chi^{n/2+\alpha_{1}+1})\) for \(n\) odd. Dividing out this dominant behaviour gives terms of order \(O(1)\), \(O(n^{-1})\), etc. for \(n\) odd, and \(O(1)\), \(O(\log^{-1}n)\), \(O(n^{-1})\), \(O(n^{-1}\log^{-1}n)\) etc. for \(n\) even. At leading order as \(n\to\infty\), both cases give
\[\chi^{\prime}(\chi^{\prime}+2Y)=0. \tag{4.9}\]
The singular behaviour of \(\psi_{n}\) will be captured by the non-trivial solution, \(\chi^{\prime}=-2Y\). Since we require \(\chi(-1)=0\) in order to match with the inner solution near the singularity from (4.7), we find
\[\chi(Y)=1-Y^{2}. \tag{4.10}\]
Equations for the prefactor functions \(L\), \(Q\), and \(R\) are found at the following orders of \(n\) in equation (4.8). Since even and odd components of the divergence now interact, between \(\psi_{n}\) and \(\psi_{n-1}\) for instance, it is necessary to specify
\[\alpha_{1}-\alpha_{0}=1/2, \tag{4.11}\]
in keeping with the different rates of divergence in (4.7). At \(O(n^{-1})\) for \(n\) even we find an equation for \(L(Y)\). Similarly, the \(R(Y)\) and \(Q(Y)\) equations are found at \(O(n^{-1}\log^{-1}n)\) for the cases of \(n\) odd and \(n\) even, respectively. These equations are
\[L^{\prime}(Y)+\frac{1}{Y}L(Y)=0,\qquad R^{\prime}(Y)+\frac{1}{Y} R(Y)=0, \tag{4.12a}\] \[Q^{\prime}(Y)+\frac{1}{Y}Q(Y)=\frac{R(Y)}{2(1+Y)}+\frac{2YL(Y)}{ 1-Y^{2}}, \tag{4.12b}\]
which may be integrated directly to find the solutions
\[L(Y)= \frac{\Lambda_{\rm L}}{Y},\qquad R(Y)=\frac{\Lambda_{\rm R}}{Y}, \tag{4.13a}\] \[Q(Y)= \frac{\Lambda_{\rm Q}}{Y}+\frac{\Lambda_{\rm R}}{2Y}\log(1+Y)- \frac{\Lambda_{\rm L}}{Y}\log{(1-Y^{2})}, \tag{4.13b}\]
where \(\Lambda_{\rm L}\), \(\Lambda_{\rm R}\), and \(\Lambda_{\rm Q}\) are constants of integration.
Substitution of solutions (4.13a) and (4.13b) into the ansatz (4.2) gives our divergent prediction for \(\psi_{n}\), with \(n\to\infty\) as
\[\psi_{n}\sim\left\{\begin{aligned} \bigg{[}\frac{\Lambda_{\rm L}}{Y}\log{(n)}+ \bigg{(}\frac{\Lambda_{\rm Q}}{Y}+\frac{\Lambda_{\rm R}}{2Y}\log(1+Y)\\ -\frac{\Lambda_{\rm L}}{Y}\log{(1-Y^{2})}\bigg{)}\bigg{]}\frac{ \Gamma(\frac{n}{2}+\alpha_{0})}{(1-Y^{2})^{n/2+\alpha_{0}}}\qquad\text{for $n$ even},\\ \frac{\Lambda_{\rm R}}{Y}\frac{\Gamma(\frac{n}{2}+\alpha_{0}+ \frac{1}{2})}{(1-Y^{2})^{n/2+\alpha_{0}+1/2}}\qquad\text{for $n$ odd}.\end{aligned}\right. \tag{4.14}\]
We refer the late-order form of (4.14) as corresponding to the _naive divergence_, for which two noticeable issues are present:
1. The boundary condition, \(\psi_{n}(0)=0\), is unable to be satisfied as our current form is unbounded at \(Y=0\);
2. There are additional locations at which the singulant, \(\chi(Y)\), is equal to zero. Since \(\chi(Y)=1-Y^{2}\), our late term expression predicts singularities at both \(Y=-1\) and \(Y=1\). This is in contrast to the early orders of the expansion, which are singular at \(Y=-1\) only.
The first of these issues will be resolved in SS6.1. There, we demonstrate that as \(n\to\infty\), a boundary layer emerges in the late-order solution near \(Y=0\). This boundary layer is of diminishing width as \(n\to\infty\). A matched asymptotic approach then allows us to develop an inner solution that satisfies the boundary condition of \(\psi_{n}(0)=0\). Regarding the the second issue, the late terms (4.14) in fact switch off across a higher-order Stokes line along the imaginary axis. This is known as the higher-order Stokes phenomenon, which is in fact generated by the singularity discussed in item 1. For a derivation of this phenomenon from the perspective of the divergent series, we refer the reader to the work by Shelton _et al._ (2023_b_).
### Determination of the unknown constants
It remains to find values for the constants \(\Lambda_{\rm L}\), \(\Lambda_{\rm R}\), \(\Lambda_{\rm Q}\) and \(\alpha_{0}\), that appear in the late-term solution for \(\psi_{n}\) in (4.14). These are determined through matching with the outer limit of the inner solution about the singularity at \(Y=-1\) given in equation (4.7). Expanding the outer solution for \(\psi_{n}\) from (4.14) as \(Y\to-1\), we have
\[\psi_{n}\sim\left\{\begin{aligned} \bigg{[}-\Lambda_{\rm L}\log{(n)}+ \bigg{(}\Lambda_{\rm L}\log{(2)}-\Lambda_{\rm Q}\\ +\Big{[}\Lambda_{\rm L}-\frac{\Lambda_{\rm R}}{2}\Big{]}\log(1+Y) \bigg{)}\bigg{]}\frac{\Gamma(\frac{n}{2}+\alpha_{0})}{[2(1+Y)]^{n/2+\alpha_{0} }}\quad\text{for $n$ even},\\ -\Lambda_{\rm R}\frac{\Gamma(\frac{n}{2}+\alpha_{0}+\frac{1}{2})} {[2(1+Y)]^{n/2+\alpha_{0}+1/2}}\quad\text{for $n$ odd}.\end{aligned}\right. \tag{4.15}\]
This form may now be compared to the outer limit of the inner solution in (4.7) to find
\[\Lambda_{\rm R}=-\frac{1}{2},\qquad\Lambda_{\rm L}=-\frac{1}{2},\qquad\Lambda _{\rm Q}=-\frac{\gamma}{2},\qquad\alpha_{0}=-1, \tag{4.16}\]
where \(\gamma\approx 0.577\) is the Euler-Macheroni constant.
## 5 Late-term divergence of the eigenvalue expansion
### The boundary layer near \(Y=0\)
We saw in SS3 that each term in the expansion of the eigenvalue was determined by imposing that the outer solution had no singularity at \(Y=0\). We can find this expansion more readily by considering a local expansion in the vicinity of \(Y=0\). Writing \(Y=\epsilon y\) we find
\[\frac{\mathrm{d}^{2}\psi}{\mathrm{d}y^{2}}-2y\frac{\mathrm{d}\psi}{\mathrm{d} y}+\frac{\epsilon\psi}{1+\epsilon y}=(\lambda+1)\psi.\]
Expanding in powers of \(\epsilon\) as usual,
\[\psi=\sum_{n=0}^{\infty}\epsilon^{n}\psi_{n}\]
gives
\[\frac{\mathrm{d}^{2}\psi_{n}}{\mathrm{d}y^{2}}-2y\frac{\mathrm{d}\psi_{n}}{ \mathrm{d}y}=-\sum_{k=1}^{n}(-y)^{k-1}\psi_{n-k}+\sum_{k=1}^{n}\lambda_{k} \psi_{n-k}, \tag{10}\]
where we have expanded
\[\frac{\epsilon}{1+\epsilon y}=\sum_{n=1}^{\infty}\epsilon^{n}(-y)^{n-1}.\]
We find \(\psi_{0}=1\), \(\psi_{1}=0\), \(\lambda_{1}=1\), and in general
\[\psi_{n}=\sum_{m=1}^{n-1}a_{m,n}y^{m}\qquad\text{ for }n\geq 2,\]
where the series coefficient satisfies the recurrence relation
\[2ra_{r,n}=(r+2)(r+1)a_{r+2,n}-\sum_{k=1}^{n-r}\lambda_{k}a_{r,n-k}+\sum_{k=1}^ {r+1}(-1)^{k-1}a_{r+1-k,n-k} \tag{11}\]
with \(a_{r,n}=0\) if \(r\geq n-1\) or \(r=0\). It is straightforward to solve (11) numerically, stepping down from \(r=n-1\) to \(r=0\) for each \(n\). When \(r=0\) the left-hand side is zero; that the right-hand side must vanish then gives the equation for \(\lambda_{n}\), which is
\[\lambda_{n}=2a_{2,n}.\]
These numerical solutions are later compared to the divergent prediction for \(\lambda_{n}\) in figure 1.
It is not so straightforward to determine the divergence of \(\lambda_{n}\) as \(n\to\infty\) from (11), but we can make some progress by observing that the solution of the homogeneous adjoint to (10) is \(\mathrm{e}^{-y^{2}}\). Multiplying by this and integrating gives
\[\lambda_{n}\sqrt{\pi} =\sum_{k=1}^{n}\int_{-\infty}^{\infty}\mathrm{e}^{-y^{2}}(-y)^{k- 1}\psi_{n-k}\,\mathrm{d}y-\sum_{k=1}^{n-1}\lambda_{k}\int_{-\infty}^{\infty} \mathrm{e}^{-y^{2}}\psi_{n-k}\,\mathrm{d}y\] \[\sim\frac{(1-(-1)^{n})}{2}\Gamma(n/2)+\cdots.\]
When \(n\) is odd this gives
\[\lambda_{n}=\frac{1}{\sqrt{\pi}}\Gamma(n/2). \tag{5.3}\]
However, when \(n\) is even the first term vanishes, and the correction term is much harder to determine.
### Solution divergence forced by the eigenvalue
We now consider the particular solution of (4.1a) generated by the divergent eigenvalue expansion \(\lambda_{n}\). We will see (and motivated by (5.3)) that the correct form of the eigenvalue divergence is
\[\lambda_{n}\sim\begin{cases}\Big{[}\delta_{0}\log{(n)}+\delta_{1}\Big{]} \Gamma\Big{(}\frac{n-1}{2}\Big{)}&\text{for $n$ even,}\\ \delta_{2}\Gamma\Big{(}\frac{n}{2}\Big{)}&\text{for $n$ odd,}\end{cases} \tag{5.4}\]
where we expect to find \(\delta_{2}=1/\sqrt{\pi}\). We find that this generates a particular solution in \(\psi_{n}\) of the form
\[\psi_{n}(Y)\sim\begin{cases}\bigg{[}Q_{0}^{(\lambda_{n})}(Y)\log^{2}{(n)}+Q_{ 1}^{(\lambda_{n})}(Y)\log(n)+Q_{2}^{(\lambda_{n})}(Y)\bigg{]}\Gamma\Big{(} \frac{n-1}{2}\Big{)}&\text{for $n$ even,}\\ \bigg{[}R_{1}^{(\lambda_{n})}(Y)\log(n)+R_{2}^{(\lambda_{n})}(Y)\bigg{]}\Gamma \Big{(}\frac{n}{2}\Big{)}&\text{for $n$ odd.}\end{cases} \tag{5.5}\]
Substituting ansatz (5.5) into the \(O(\epsilon^{n})\) equation (4.1a), we divide out by the dominant behaviour, which is \(\log^{2}{(n)}\Gamma((n-1)/2)\) for \(n\) even and \(\log{(n)}\Gamma(n/2)\) for \(n\) odd. At \(O(n^{0})\) for \(n\) odd and \(n\) even, we then find
\[R_{1}^{(\lambda_{n})^{\prime}}(Y)=0,\qquad Q_{0}^{(\lambda_{n})^{\prime}}(Y)=0, \tag{5.6}\]
with solution
\[R_{1}^{(\lambda_{n})}(Y)=A_{1},\qquad Q_{0}^{(\lambda_{n})}(Y)=B_{0}, \tag{5.7}\]
where \(A_{1}\) and \(B_{0}\) are constants. Next, at \(O(\log^{-1}{(n)})\), for \(n\) odd and even respectively, we find
\[R_{2}^{(\lambda_{n})^{\prime}}(Y)=-\frac{\delta_{2}}{2Y}\qquad\text{and} \qquad Q_{1}^{(\lambda_{n})^{\prime}}(Y)=-\frac{\delta_{0}}{2Y}-\frac{A_{1}}{ 2(1+Y)}, \tag{5.8}\]
with solution
\[R_{2}^{(\lambda_{n})}(Y)=A_{2}-\frac{\delta_{2}}{2}\log{(Y)},\quad Q_{1}^{( \lambda_{n})}(Y)=B_{1}-\frac{\delta_{0}}{2}\log{(Y)}-\frac{A_{1}}{2}\log{(1+Y )}, \tag{5.9}\]
where \(A_{2}\) and \(B_{1}\) are constants. At the next order of \(O(\log^{-2}(n))\) for \(n\) even, we find
\[Q_{2}^{(\lambda_{n})^{\prime}}(Y)=-\frac{\delta_{1}}{2Y}-\frac{\delta_{2}}{2Y }\psi_{1}(Y)-\frac{R_{2}^{(\lambda_{n})}(Y)}{2(1+Y)}, \tag{5.10}\]
with solution
\[Q_{2}^{(\lambda_{n})}(Y)=B_{2}-\frac{\delta_{1}}{2}\log(Y)-\frac{A_{2}}{2}\log (1+Y)+\frac{\delta_{2}}{4}\log(Y)\log(1+Y). \tag{5.11}\]
Overall, the divergence of \(\psi_{n}\) is given by combining (4.14) with (5.5) to give
\[\psi_{n}\sim\begin{cases}\Big{[}L(Y)\log{(n)}+Q(Y)\Big{]}\frac{\Gamma(\frac{n}{2} -1)}{\chi^{n/2-1}}\\ +\Big{[}Q_{0}^{(\lambda_{n})}(Y)\log^{2}{(n)}+Q_{1}^{(\lambda_{n})}(Y)\log(n) +Q_{2}^{(\lambda_{n})}(Y)\Big{]}\Gamma\Big{(}\frac{n-1}{2}\Big{)}&\text{for $n$ even,}\\ \qquad\qquad R(Y)\frac{\Gamma(\frac{n-1}{2})}{\chi^{(n-1)/2}}+\Big{[}R_{1}^{( \lambda_{n})}(Y)\log(n)+R_{2}^{(\lambda_{n})}(Y)\Big{]}\Gamma\Big{(}\frac{n}{ 2}\Big{)}&\text{for $n$ odd.}\end{cases}\]
In the next section, we demonstrate how these divergences interact in a boundary layer near \(Y=0\), justifying the ansatzes (5.4) and (5.5), resolving issues 1 and 2 in SS4.2, and determining the coefficients \(\delta_{0}\), \(\delta_{1}\) and \(\delta_{2}\).
## 6 The late-term boundary layer at \(Y=0\)
Recall that in the early orders of the expansion, each order of the eigenvalue was determined by enforcing the boundary condition at \(Y=0\). However, late term expansion (5.12) is unbounded at \(Y=0\) and cannot satisfy the condition \(\psi_{n}(0)=0\). If we continue the expansion (5.12) to higher orders (in \(1/n\)) we find that the singularity at leading order (for instance \(R_{0}\sim Y^{-1}\)) forces a stronger singularity at the next order (so that \(R_{1}\sim Y^{-3}\)). Thus, this series reorders as \(Y\to 0\), so that there is a boundary layer in the late-term approximation near \(Y=0\), for which an inner analysis is required. Note the distinction between this boundary layer and that of SS5.1. There the boundary layer was due a nonuniformity in the expansion of \(\psi\) in \(\epsilon\), and involved rescaling \(Y\) with \(\epsilon\). Here the boundary later is due to a nonuniformity in the expansion of \(\psi_{n}\) in \(n\), and involves rescaling \(Y\) with \(n\).
### Reordering of the late-terms as \(Y\to 0\)
In order to determine the width of this boundary layer in the late-term solution, we introduce in Appendix A a factorial-over-power ansatz of the form
\[\psi_{n}\sim\begin{cases}\Big{[}L_{0}(Y)\log{(n)}+Q_{0}(Y)+\frac{\log{(n)}}{n }L_{1}(Y)+\cdots\Big{]}\frac{\Gamma(\frac{n}{2}-1)}{\chi^{n/2-1}}&\text{for $n$ even,}\\ \qquad\qquad\Big{[}R_{0}(Y)+\frac{\log{(n)}}{n}M_{1}(Y)+\frac{R_{1}(Y)}{n}+ \cdots\Big{]}\frac{\Gamma(\frac{n-1}{2})}{\chi^{(n-1)/2}}&\text{for $n$ odd.}\end{cases} \tag{6.1}\]
Here, the leading order solutions of \(L_{0}(Y)\), \(R_{0}(Y)\), and \(Q_{0}(Y)\) are the same as \(L(Y)\), \(R(Y)\), and \(Q(Y)\) derived previously in (4.13a) and (4.13b). The solutions of \(M_{1}(Y)\), \(L_{1}(Y)\), and \(R_{1}(Y)\) are given in equations (A.3) and (A.4). For the purposes of observing the reordering of these series near \(Y=0\), it is sufficient to display only their singular behaviour here, which is given by
\[L_{0}\sim\frac{\Lambda_{\text{L}}}{Y},\qquad L_{1}\sim\frac{\Lambda_{\text{L} }}{Y^{3}},\qquad R_{0}\sim\frac{\Lambda_{\text{R}}}{Y},\qquad R_{1}\sim\frac {\Lambda_{\text{R}}}{Y^{3}}. \tag{6.2}\]
The series expansions of \(\psi_{n}\) reorder when the two consecutive terms in each of (6.1) are of the same order as \(n\to\infty\). Since this occurs for \(Y=O(n^{-1/2})\), we introduce the inner variable \(\bar{y}=n^{1/2}Y\). Substituting this inner variable into the \(O(\epsilon^{n})\) equation (4.1a) gives the inner equation as
\[n\frac{\text{d}^{2}\bar{\psi}_{n-2}}{\text{d}\bar{y}^{2}}-2\bar{y}\frac{\text{ d}\bar{\psi}_{n}}{\text{d}\bar{y}}+\frac{\bar{y}}{n^{1/2}}\bigg{(}1+\frac{\bar{y}}{n ^{1/2}}\bigg{)}^{-1}\bar{\psi}_{n-1}=\lambda_{3}\bar{\psi}_{n-3}+\cdots+ \lambda_{n-1}\bar{\psi}_{1}+\lambda_{n}, \tag{6.3}\]
where \(\bar{\psi}_{1}=-\frac{1}{2}\log(1+n^{-1/2}\bar{y})\sim-\frac{1}{2}n^{-1/2}\bar {y}\).
### Inner limit of the late-term divergence
We now take the inner limit of the outer divergent solution to motivate the correct form for the inner solution. We begin by substituting the inner variable \(\bar{y}\) in the naive divergence (4.14) and taking the limit \(n\to\infty\). For the singulant we find
\[(1-Y^{2})^{-n/2}=\left(1-\frac{\bar{y}^{2}}{n}\right)^{-n/2}\sim\;\mathrm{e}^{ \bar{y}^{2}/2}\quad\text{as}\quad n\to\infty. \tag{6.4}\]
Furthermore, the scaling of \(Q(Y)\sim Y^{-1}\) will increase the argument of the gamma function by one half. Together we find
\[\psi_{n}\,\sim\begin{cases}\bigg{[}-\frac{\log\left(n\right)}{\sqrt{2}}-\frac {\gamma}{\sqrt{2}}\bigg{]}\frac{\mathrm{e}^{\bar{y}^{2}/2}}{\bar{y}}\Gamma \Big{(}\frac{n-1}{2}\Big{)}&\text{for $n$ even},\\ -\frac{1}{\sqrt{2}}\frac{\mathrm{e}^{\bar{y}^{2}/2}}{\bar{y}}\Gamma\Big{(} \frac{n}{2}\Big{)}&\text{for $n$ odd}.\end{cases} \tag{6.5}\]
We now take the inner limit of the particular solution generated by the divergent eigenvalue (5.5), which yields
\[\psi_{n}\sim\begin{cases}\bigg{[}\bigg{(}B_{0}+\frac{\delta_{0}}{4}\bigg{)} \log^{2}\left(n\right)+\bigg{(}B_{1}+\frac{\delta_{1}}{4}-\frac{\delta_{0}}{2} \log\left(\bar{y}\right)\bigg{)}\log(n)\\ +\bigg{(}B_{2}-\frac{\delta_{1}}{2}\log(\bar{y})\bigg{)}\bigg{]}\Gamma \Big{(}\frac{n-1}{2}\Big{)}&\text{for $n$ even},\\ \bigg{[}\bigg{(}A_{1}+\frac{\delta_{2}}{4}\bigg{)}\log(n)+\bigg{(}A_{2}-\frac {\delta_{2}}{2}\log(\bar{y})\bigg{)}\bigg{]}\Gamma\Big{(}\frac{n}{2}\Big{)}& \text{for $n$ odd}.\end{cases} \tag{6.6}\]
Together we find for \(n\) even
\[\psi_{n}\sim\bigg{[}\bigg{(}B_{0}+\frac{\delta_{0}}{4}\bigg{)} \log^{2}\left(n\right)+\bigg{(}-\frac{\mathrm{e}^{\bar{y}^{2}/2}}{\sqrt{2}\bar {y}}+B_{1}+\frac{\delta_{1}}{4}-\frac{\delta_{0}}{2}\log\left(\bar{y}\right) \bigg{)}\log(n) \tag{6.7a}\] \[+\bigg{(}\frac{-\gamma\mathrm{e}^{\bar{y}^{2}/2}}{\sqrt{2}\bar{y}}+B_{2}- \frac{\delta_{1}}{2}\log\left(\bar{y}\right)\bigg{)}\bigg{]}\Gamma\bigg{(} \frac{n-1}{2}\bigg{)},\]
and for \(n\) odd
\[\psi_{n}\sim\bigg{[}\bigg{(}A_{1}+\frac{\delta_{2}}{4}\bigg{)}\log(n)+\bigg{(} -\frac{\mathrm{e}^{\bar{y}^{2}/2}}{\sqrt{2}\bar{y}}+A_{2}-\frac{\delta_{2}}{2} \log\left(\bar{y}\right)\bigg{)}\bigg{]}\Gamma\bigg{(}\frac{n}{2}\bigg{)}. \tag{6.7b}\]
### An inner solution
We now look for a solution to the inner equation (6.3). Motivated by the form of the inner limit in (6.7), we make the ansatz
\[\bar{\psi}_{n}\sim\begin{cases}\Big{[}\bar{L}(\bar{y})\log\left(n\right)+\bar {Q}(\bar{y})\Big{]}\Gamma\Big{(}\frac{n-1}{2}\Big{)}&\text{for $n$ even},\\ \bar{R}(\bar{y})\Gamma\Big{(}\frac{n}{2}\Big{)}&\text{for $n$ odd}.\end{cases} \tag{6.8}\]
Substituting (6.8) and (5.4) into (6.3), and isolating the dominant factorial divergence of \(\Gamma(\frac{n}{2})\) for \(n\) odd and \(\Gamma(\frac{n-1}{2})\) for \(n\) even, yields at leading order the equations
\[\bar{R}^{\prime\prime}-\bar{y}\bar{R}^{\prime}=\frac{\delta_{2}}{2},\qquad\bar {L}^{\prime\prime}-\bar{y}\bar{L}^{\prime}=\frac{\delta_{0}}{2},\qquad\bar{Q}^ {\prime\prime}-\bar{y}\bar{Q}^{\prime}=\frac{\delta_{1}}{2}. \tag{6.9}\]
These three equations all have solutions of a similar form. We will now focus on the equation for \(\bar{R}\), and adapt the following results analogously for \(\bar{L}\) and \(\bar{Q}\). Integrating (6.9) we find
\[\bar{R}(\bar{y})=\bar{B}_{R}+\bar{A}_{R}\int_{0}^{\bar{y}}\mathrm{e}^{t^{2}/2} \;\mathrm{d}t+\frac{\delta_{2}}{2}\int_{0}^{\bar{y}}\mathrm{e}^{t^{2}/2}\bigg{[} \int_{0}^{t}\mathrm{e}^{-p^{2}/2}\,\mathrm{d}p\bigg{]}\,\mathrm{d}t, \tag{6.10}\]
with constants of integration \(\bar{A}_{R}\) and \(\bar{B}_{R}\). We are now able to apply the condition \(\bar{\psi}_{n}(0)=0\) (resolving issue 1 of SS4.2), which gives \(\bar{B}_{R}=0\). The remaining constants are determined by matching with the the outer solution.
We see that in the outer limit of \(|\bar{y}|\to\infty\) (6.9a) itself exhibits Stokes phenomenon. There is a Stokes line on the imaginary axis, across which the asymptotic behaviour of the term proportional to \(\delta_{2}\) changes from
\[\frac{\log(-\bar{y})}{2}+\cdots-\frac{\pi^{1/2}}{2^{3/2}}\frac{\mathrm{e}^{ \bar{y}^{2}/2}}{\bar{y}}\qquad\text{ to }\qquad\frac{\log(-\bar{y})}{2}+\cdots+\frac{\pi^{1/2}}{2^{3/2}}\frac{ \mathrm{e}^{\bar{y}^{2}/2}}{\bar{y}}.\]
This is an example of what is known as higher-order Stokes phenomenon, which is a Stokes phenomenon in the asymptotic approximation of the late terms of the expansion. Additionally there is a second Stokes line on the real axis, across which the asymptotic behaviour of the term proportional to \(\bar{A}_{R}\) picks up an additional constant (the complementary function is just an error function of imaginary argument). Altogether, on the real axis, as \(\bar{y}\to\infty\),
\[\bar{R}(\bar{y})\sim\bigg{[}\bar{A}_{R}+\frac{1}{2}\Big{(}\frac{\pi}{2}\Big{)} ^{\frac{1}{2}}\delta_{2}\bigg{]}\frac{\mathrm{e}^{\bar{y}^{2}/2}}{\bar{y}}+ \cdots-\frac{\delta_{2}}{4}\bigg{(}\log(2)+\gamma+\log(-\bar{y}^{2})+\cdots \bigg{)}, \tag{6.11}\]
where \(\gamma\approx 0.577\) is the Euler-Mascheroni constant, while as \(\bar{y}\to-\infty\),
\[\bar{R}(\bar{y})\sim\bigg{[}\bar{A}_{R}-\frac{1}{2}\Big{(}\frac{\pi}{2}\Big{)} ^{\frac{1}{2}}\delta_{2}\bigg{]}\frac{\mathrm{e}^{\bar{y}^{2}/2}}{\bar{y}}+ \cdots-\frac{\delta_{2}}{4}\bigg{(}\log(2)+\gamma+\log(-\bar{y}^{2})+\cdots \bigg{)}. \tag{6.12}\]
Exactly similar expressions hold for \(\bar{L}\) and \(\bar{Q}\).
### Matching
We now match the inner limit of the outer solution (6.7) with the outer-limit of the inner solution as \(\bar{y}\to-\infty\) given by (6.12). Firstly, since the inner solution contains no terms of \(O(\log n)\) for \(n\) odd and \(O(\log^{2}{(n)})\) for \(n\) even, we require
\[A_{1}=-\frac{\delta_{2}}{4}\qquad\text{and}\qquad B_{0}=-\frac{\delta_{0}}{4}. \tag{6.13}\]
Next, matching each of the coefficients of \(\mathrm{e}^{\bar{y}^{2}/2}/\bar{y}\) as \(\bar{y}\to-\infty\) yields
\[\bar{A}_{L}-\delta_{0}\sqrt{\frac{\pi}{8}}=-\frac{1}{\sqrt{2}},\qquad\bar{A}_{ Q}-\delta_{1}\sqrt{\frac{\pi}{8}}=-\frac{\gamma}{\sqrt{2}},\qquad\bar{A}_{R}- \delta_{2}\sqrt{\frac{\pi}{8}}=-\frac{1}{\sqrt{2}}. \tag{6.14a}\]
As \(\bar{y}\to\infty\) we need the coefficients of \(\mathrm{e}^{\bar{y}^{2}/2}/\bar{y}\) to be zero, in order that the naive divergence is not present near the phantom singularity at \(Y=+1\) (resolving issue 2 of SS4.2). Thus matching as \(\bar{y}\to\infty\) gives
\[\bar{A}_{L}+\delta_{0}\sqrt{\frac{\pi}{8}}=0,\qquad\bar{A}_{Q}+\delta_{1}\sqrt {\frac{\pi}{8}}=0,\qquad\bar{A}_{R}+\delta_{2}\sqrt{\frac{\pi}{8}}=0. \tag{6.14b}\]
Solving (6.13), (6.14) gives
\[\delta_{0}=\delta_{2}=\frac{1}{\sqrt{\pi}},\qquad\delta_{1}=\frac{\gamma}{\sqrt{ \pi}},\qquad\bar{A}_{L}=\bar{A}_{R}=-\frac{1}{\sqrt{8}},\qquad\bar{A}_{Q}=- \frac{\gamma}{\sqrt{8}}, \tag{6.15}\]
which is consistent with \(\delta_{0}=1/\sqrt{\pi}\) from (5.3). In figure 6 we compare the asymptotic behaviour (5.4) with \(\lambda_{n}\) determined numerically following the procedure described in SS5.1; the agreement validates our predictions for \(\delta_{0}\), \(\delta_{1}\) and \(\delta_{2}\).
## 7 Stokes smoothing and determination of \(\mathbf{Im}[\lambda]\)
Having determined the form of the late terms we now truncate the divergent expansions for the solution and eigenvalue after \(N\) terms and study the remainder, by writing
\[\psi(Y)=\underbrace{\sum_{n=0}^{N-1}\epsilon^{n}\psi_{n}(Y)}_{\psi_{\mathrm{ reg}}(Y)}+\mathcal{R}_{N}(Y)\quad\text{and}\quad\lambda=\underbrace{\sum_{n=0}^{N-1} \epsilon^{n}\lambda_{n}}_{\lambda_{\mathrm{reg}}}+\lambda_{\mathrm{exp}}, \tag{7.1}\]
where the truncated series are denoted by \(\psi_{\mathrm{reg}}(Y)\) and \(\lambda_{\mathrm{reg}}\). We truncate optimally by setting
\[N=\frac{2|\chi|}{\epsilon^{2}}+\rho, \tag{7.2}\]
where \(0\leq\rho<1\) ensures that \(N\) takes integer values. Substituting into (1.4a) gives
\[\epsilon^{2}\mathcal{R}_{N}^{\prime\prime}-2Y\mathcal{R}_{N}^{\prime}+\bigg{[} \frac{\epsilon}{1+Y}-(1+\lambda_{\mathrm{reg}})\bigg{]}\mathcal{R}_{N}=\psi_{ \mathrm{reg}}\lambda_{\mathrm{exp}}+\xi_{\mathrm{eq}}+O(\lambda_{\mathrm{exp} }\mathcal{R}_{N}), \tag{7.3}\]
where the forcing term \(\xi_{\mathrm{eq}}\) is of \(O(\epsilon^{N})\) and is defined by
\[\xi_{\mathrm{eq}}=(1+\lambda_{\mathrm{reg}})\psi_{\mathrm{reg}}-\epsilon^{2} \psi_{\mathrm{reg}}^{\prime\prime}+2Y\psi_{\mathrm{reg}}^{\prime}-\epsilon \frac{\psi_{\mathrm{reg}}}{1+Y}. \tag{7.4}\]
Figure 6.1: The coefficient \(\lambda_{n}\), numerically calculated by the scheme of §5.1 is compared to the asymptotic prediction (5.4). Comparison occurs for even \(n\) in \((a)\), and odd \(n\) in \((b)\).
As \(\epsilon\to 0\),
\[\xi_{\rm eq}\sim-\epsilon^{N+2}\psi_{N}^{\prime\prime}-\epsilon^{N+3}\psi_{N+1}^{ \prime\prime}+\cdots. \tag{7.5}\]
The procedure now is to:
1. Expand (7.5) as \(\epsilon\to 0\), \(N\to\infty\) using (7.2);
2. Write \({\cal R}_{N}\) as a Stokes multiplier \({\cal S}(Y)\) multiplied by a homogeneous solution by setting, in this case, \({\cal R}_{N}={\cal S}(Y)\psi_{\rm exp}\) with \[\psi_{\rm exp}=\left(-\frac{1}{2Y}-\frac{\epsilon}{2Y}\left[\log(2/\epsilon^{ 2})+\gamma+\frac{\log{(1+Y)}}{2}\right]+\cdots\right){\rm e}^{-(1-Y^{2})/ \epsilon^{2}};\]
3. Localise in a boundary layer near the Stokes lines where \(\chi=1-Y^{2}\) is real and positive;
4. Solve for \({\cal S}\) to explicitly observe the rapid jump across the Stokes line.
Since these steps are fairly standard (see e.g. Chapman _et al._ (1998)) we omit the details here; the interested reader may refer to the geophysical study for the Kelvin wave problem Shelton _et al._ (2023_a_) where more details are given. The upshot is that there is a jump in \({\cal S}\) of \(2\pi{\rm i}\epsilon\) as the Stokes line \(-1\leq Y<0\) is crossed, so that a multiple of \(\psi_{\rm exp}\) is turned on, as shown in figure 7.1.
While \(\chi=1-Y^{2}\) is also real and positive on the imaginary axis, the Stokes line there is coincident with the higher-order Stokes line across which the relevant contribution to \(\psi_{n}\), including the right-hand side of (7.3), is switched off. The upshot is that the Stokes multiplier is multiplied by \(1/2\) on this segment of the Stokes line. Finally, on the strip \(0<Y<1\) the relevant terms in the expansion of \(\psi_{n}\) are no longer present, having been turned off by the higher-order Stokes phenomenon, so that this prospective Stokes line is inactive and no switching occurs.
The additional term \(\psi_{\rm exp}\) switched on across the Stokes lines does not satisfy the decay condition as \(Y\to\infty\). This term is cancelled by an additional contribution to \({\cal R}_{N}\) generated by the forcing term \(\psi_{\rm reg}\lambda_{\rm exp}\) due to the exponentially small correction
Figure 7.1: The Stokes lines generated by the divergent series expansion for our problem are shown (bold). Inactive Stokes lines are shown dashed, and along the imaginary axis the Stokes line has a multiplier of half the usual value. This inactivity is caused by the higher-order Stokes phenomenon, which switches off the naïve divergence across the imaginary axis.
to the eigenvalue; indeed it is this requirement of cancellation which determines \(\lambda_{\rm exp}\). This additional particular solution satisfies (to two orders in \(\epsilon\))
\[\epsilon^{2}{\cal R}_{N}^{\prime\prime}-2Y{\cal R}_{N}^{\prime}\sim\lambda_{\rm exp}, \tag{7.6}\]
which may be solved in terms of special functions to find
\[{\cal R}_{N}\sim\frac{\epsilon\lambda_{\rm exp}\sqrt{\pi}}{2Y}{\rm e}^{Y^{2}/\epsilon^{2}} \tag{7.7}\]
as \(Y\to\infty\).
We now determine \(\lambda_{\rm exp}\) by imposing that the coefficient of \({\rm e}^{Y^{2}/\epsilon^{2}}/Y\) as \(Y\to\infty\) is zero. First we note that the decay condition as \(Y\to-\infty\) may be enforced on different Riemann sheets generated by the singularity at \(Y=-1\); essentially as we move from \(Y=-\infty\) to \(Y=+\infty\) we have to decide whether we pass above or below the point \(Y=-1\). If we pass above it the Stokes switching associated with the base expansion gives \(-\pi{\rm i}\epsilon\psi_{\rm exp}\) at \(Y=\infty\), while if we pass below it gives \(\pi{\rm i}\epsilon\psi_{\rm exp}\). This must cancel with the contribution from (7.7), which gives
\[\lambda_{\rm exp}\sim\pm\sqrt{\pi}\,{\rm i}\,[1-2\epsilon\log\epsilon+(\gamma+\log 2)\,\epsilon]\,{\rm e}^{-1/\epsilon^{2}}. \tag{7.8}\]
These are the complex-conjugate pairs for \({\rm Im}[\lambda]\), which correspond to growing and decaying temporal instabilities in the solution. We note that (7.8) is consistent with a direct application of Borel summation to the divergent series for \(\lambda\), as we would expect.
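At leading order the prefactor in (7.8) can be recovered directly from this cancellation: writing \(\psi_{\rm exp}\sim-{\rm e}^{-(1-Y^{2})/\epsilon^{2}}/(2Y)\) and requiring the coefficient of \({\rm e}^{Y^{2}/\epsilon^{2}}/Y\) in the sum of (7.7) and the switched-on term \(\mp\pi{\rm i}\epsilon\,\psi_{\rm exp}\) to vanish as \(Y\to\infty\) gives
\[\frac{\epsilon\lambda_{\rm exp}\sqrt{\pi}}{2Y}\,{\rm e}^{Y^{2}/\epsilon^{2}}\pm\frac{\pi{\rm i}\,\epsilon}{2Y}\,{\rm e}^{-1/\epsilon^{2}}\,{\rm e}^{Y^{2}/\epsilon^{2}}=0\qquad\Rightarrow\qquad\lambda_{\rm exp}\sim\mp\sqrt{\pi}\,{\rm i}\,{\rm e}^{-1/\epsilon^{2}},\]
with the sign fixed by whether the path passes above or below \(Y=-1\); the logarithmic corrections in (7.8) then arise from the higher-order terms in the prefactor of \(\psi_{\rm exp}\).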
### Conclusion and discussion
We have derived the exponentially-small component of the eigenvalue,
\[{\rm Im}[\lambda]\sim\pm\sqrt{\pi}\Big{[}1-2\epsilon\log\epsilon+(\gamma+\log 2)\,\epsilon\Big{]}{\rm e}^{-1/\epsilon^{2}}, \tag{7.9}\]
by considering the Stokes phenomenon displayed by the solution, \(\psi(Y)\), throughout the complex plane. Since this exponentially-small component of \(\lambda\) is imaginary, it corresponds to a growing temporal instability of the solution associated with weak shear, and is known as a critical layer instability.
As we noted in §2, the Hermite-with-pole problem, posed by Boyd & Natarov (1998) as a model for weak latitudinal shear of the equatorial Kelvin wave, is an unusually difficult problem in exponential asymptotics.
Some of the issues we have had to confront, such as the differing asymptotic behaviours for even and odd terms in the expansion, arise from an unfortunate choice of model equation, forcing the expansion to proceed in powers of \(\epsilon\) when it would more naturally proceed in powers of \(\epsilon^{2}\). Some, such as the divergence of the asymptotic series for the eigenvalue, and its associated exponentially small imaginary component, are more generic.
The logarithmic factors of \(n\) in the behaviour of the late terms are associated with the logarithmic factors of \(\epsilon\) in the expansion of the imaginary part of the eigenvalue (7.9). It is not clear to what extent we were just unlucky to have to confront these, although we note that had we only wanted the leading term in (7.9) we could have avoided most (but not all) of the logs by considering only the dominant (i.e. odd \(n\)) terms in the expansions of \(\psi\) and \(\lambda\).
The most interesting aspect of the problem has been the phantom singularity in the naive expansion of the late terms, and its resolution via a higher-order Stokes
phenomenon driven by the divergent eigenvalue expansion. At the moment it is not clear to us whether this behaviour is unusual or generic, but we hope the analysis we have presented will act as a road map for similar problems. Although we only considered the higher-order Stokes line in the vicinity of \(Y=0\), it is possible to show that it extends along the whole imaginary axis, and to smooth it in a similar manner to the smoothing of regular Stokes lines; the interested reader is referred to Shelton _et al._ (2023_b_).
## Acknowledgments
The authors would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Applicable Resurgent Asymptotics when work on this paper was undertaken. This work was supported by EPSRC Grant Number EP/R014604/1. We also thank Dr. Stephen Griffiths (Leeds) for many useful discussions and for hosting a short research visit, funded by the UK Fluids Network, where this work was initialised. PHT gratefully acknowledges support from EPSRC Grant Number EP/V012479/1.
|
2306.17176 | News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT
3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking | This study aimed to evaluate the proficiency of prominent Large Language
Models (LLMs), namely OpenAI's ChatGPT 3.5 and 4.0, Google's Bard(LaMDA), and
Microsoft's Bing AI in discerning the truthfulness of news items using black
box testing. A total of 100 fact-checked news items, all sourced from
independent fact-checking agencies, were presented to each of these LLMs under
controlled conditions. Their responses were classified into one of three
categories: True, False, and Partially True/False. The effectiveness of the
LLMs was gauged based on the accuracy of their classifications against the
verified facts provided by the independent agencies. The results showed a
moderate proficiency across all models, with an average score of 65.25 out of
100. Among the models, OpenAI's GPT-4.0 stood out with a score of 71,
suggesting an edge in newer LLMs' abilities to differentiate fact from
deception. However, when juxtaposed against the performance of human
fact-checkers, the AI models, despite showing promise, lag in comprehending the
subtleties and contexts inherent in news information. The findings highlight
the potential of AI in the domain of fact-checking while underscoring the
continued importance of human cognitive skills and the necessity for persistent
advancements in AI capabilities. Finally, the experimental data produced from
the simulation of this work is openly available on Kaggle. | Kevin Matthe Caramancion | 2023-06-18T04:30:29Z | http://arxiv.org/abs/2306.17176v1 | News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking
###### Abstract
This study aimed to evaluate the proficiency of prominent Large Language Models (LLMs)--OpenAI's ChatGPT 3.5 and 4.0, Google's Bard/LaMDA, and Microsoft's Bing AI--in discerning the truthfulness of news items using black box testing. A total of 100 fact-checked news items, all sourced from independent fact-checking agencies, were presented to each of these LLMs under controlled conditions. Their responses were classified into one of three categories: True, False, and Partially True/False. The effectiveness of the LLMs was gauged based on the accuracy of their classifications against the verified facts provided by the independent agencies. The results showed a moderate proficiency across all models, with an average score of 65.25 out of 100. Among the models, OpenAI's GPT-4.0 stood out with a score of 71, suggesting an edge in newer LLMs' abilities to differentiate fact from deception. However, when juxtaposed against the performance of human fact-checkers, the AI models, despite showing promise, lag in comprehending the subtleties and contexts inherent in news information. The findings highlight the potential of AI in the domain of fact-checking while underscoring the continued importance of human cognitive skills and the necessity for persistent advancements in AI capabilities. Finally, the experimental data produced from the simulation of this work is openly available on Kaggle.
ChatGPT, Fake News, Misinformation, Disinformation, Information Warfare, Cyber Deception
## I Introduction
One promising application of large language models (LLMs) is fact-checking news information. As an emerging authoritative technology across multiple domains, LLMs will undisputedly attract a mass of users seeking information confirmation. Preliminary findings on the accuracy of these models, however, are conflicting: performance data suggest that, on small sample sizes, they perform almost perfectly [1]. On the other hand, these models are known to hallucinate on both simple and complicated question prompts, showcasing their unpredictability [2].
The currently dominant LLMs in the US market are OpenAI's ChatGPT (based on the Generative Pre-training Transformer or "_GPT_" 3.5 and 4.0 models), Google's Bard/_LaMDA_--short for "Language Model for Dialogue Applications"--and Microsoft's Bing AI, also known as _Sydney_, anchored on the Prometheus AI model. One of the striking distinctions among these LLMs is the training data used to arrive at the generative response provided by each model. There is no unison or common design between these LLMs; as such, generalizations on their performance in the very endeavor of _fake news_ detection may themselves be _misleading_.
For instance, one striking limitation (or feature) of ChatGPT's legacy models up to 3.5, as well as the standard 4.0, is that they do not incorporate web data published after the knowledge cutoff date [3]. As of this writing, a _beta_ version that allows it to browse the web for additional input is available as a supplement to the GPT 4.0 model (i.e., the _browsing_ mode) [3]. The two chatbots, Bard and Sydney, are both without such restrictions in accessing the web for their generative responses. Fascinatingly though, a quick inspection of their responses reveals visible differences in formatting, such as length, topic saturation, and correctness. On top of this, ChatGPT's premium version is not free and imposes a cap on response count per time period.
Multiple metrics for comparative evaluations between the existing and future LLMs can be considered to decide which one performs most optimally. As a constitutive reference, this paper's sole focus, however, is on the dimension of misinformation and disinformation detection _accuracy_[4]. An investigative style of testing will be applied to the three prevailing LLMs discussed above, and the findings are all based on the experiment performed in this work. No existing assumptions nor performance expectations are considered prior to the inception of this project.
This paper straightforwardly tests the accuracy of the three LLMs (ChatGPT, Bard, and Bing AI) in a simulation where each discerns facts from deceptive news information using a common prompt. This paper anchors on authoritative, independent fact-checking agencies in the United States, such as PolitiFact and Snopes; we then cross-reference each model's response against the currently established "truth" as vouched for by these independent fact-checkers. We then analyze and present the performance evaluation of these models on a single metric: the accuracy in correctly classifying news headline articles according to their integrity.
The following are the two main research questions we seek to substantiate. This paper will center on illuminating the two inquiries below:
RQ1: How accurately do leading large language models, specifically OpenAI's ChatGPT, Google's Bard/LaMDA, and Microsoft's Bing AI, discern facts from deceptive news information in a controlled simulation?
RQ2: How does their performance vary in relation to established fact-checking agencies such as PolitiFact and Snopes?
The literary contribution of this paper is the addendum it provides to the continually and rapidly growing body of LLM applications. This paper's main topical application, _fake news_, is inherently grounded in several interdisciplinary domains, including but not limited to media studies, artificial intelligence, and education [5]. Additionally, this paper utilizes fundamental principles and constructs in its method, such as psychological experimental designs and mathematical modeling--touching on both qualitative and quantitative perspectives in its analysis and interpretations.
The practical contribution of this paper is the insight it provides into the potential role of LLMs as _the solution_ for combatting cyber deceptions--misinformation and disinformation--and their advanced representations in the form of deepfakes and other AI-powered content. Dependency on human skills or information and media literacy alone is insufficient to counter such falsehoods. An even more effective way of refuting these deceptions requires supplementary technological interventions, as dictated in the disinformation ecosystem model proposed and theorized by [6] and proven in [7].
Finally, this paper is formatted as follows: the subsequent sections provide the necessary background to understand the simulation process utilized in this paper, covering (a) the fundamentals of _fake news_ tests/experiments, (b) mis/disinformation cyber risk modeling, and (c) prior works related to LLMs involving fake news. Afterward, the third chapter thoroughly explains the steps performed in the simulation so readers may re-create this experiment if desired; limitations inherent to this paper's simulation are also contained therein. The findings of the simulation are then presented in a separate chapter following the methods and materials section. A discussion section that analyzes the results in depth is the penultimate chapter of this paper. Finally, the paper concludes with recommendations and directions for future iterations of this research area.
## II Literary Background
### _Psychometry of Fake News Assessment Devices_
In the context of misinformation and disinformation, _vulnerability_ refers to the likelihood of an _agent_, typically human, falling prey to believing that disseminated false news information is correct [8]. The primary motivation for experiments seeking to gauge the capability of _agents_ to thwart deceptive social media content is that there are few to virtually no publicly available datasets exhibiting such vulnerability [9]. Furthermore, the social networking giant Facebook banned researchers involved in similar and related works, suggesting that such activities go against the terms and policies of the company, which critics have argued was a move made to preserve the public image and branding of the platform [10]. As such, the experiments attempt to mimic an actual social platform by presenting non-synthetic news items, that is, actual content found on social media sites [11].
The actual simulation consists of both legitimate and misleading news items. The format of the experiments typically varies; more controlled studies hold in-person tests where printouts of news headlines are given to the participants, who then decide the legitimacy of the items. Stricter study designs totally prohibit access to electronic devices and the web to confirm the information presented in the test items. More modern variations of the study design have transitioned to electronic testing, with most deployed over the web to distribute the experiment survey across a wider group of participants with diverse backgrounds [9].
### _Mis/Disinformation Cyber Risk Modeling_
The performance simulations involving the agents yield two types of performance data: first, the accuracy of their performance, that is, the number of correct detections (as count data) among the total count of presented items; secondly, the time it takes for the agents to finish their assessments, typically measured in seconds. In the succeeding analyses, these performance data are treated as dependent variables against a wide set of possible predictors--all in the name of attributing the possible correlative factors that may have caused performance gains or losses in the resulting experimental data [9].
The tested areas include demographic and socioeconomic factors, including the agent's age [12], sex assigned at birth [12], household income [9], native language [13], religion [14], veteran status [15], etc. Other areas include psychological perceptions [16] and political positions [17] as possible predictors, although these proposals currently yield non-significant outcomes. In the information sciences, studies focus on the test items and not directly on the agents; that is, some examples of the proposed predictors are the modality or textuality of news content [18], contextual clues that come with the news items [11], and other metadata such as the time of circulation and exposure of news items to social media users [19]. On the other hand, predictors in the behavioral sciences include the reporting behavior of users when doubting social media content [20] and the preferred device type when browsing a social media website [21]. Finally, it has been argued that the design of the information environment itself, including but not limited to social media policies and terms-of-use agreements, can either promote or demote mis/disinformation content [22][23].
### _Large Language Models (LLMs) in Fake News Management_
The philosophical rooting of the proposal that LLMs be used to combat misinformation and disinformation [1] is the argument that there are forms of _fake news_ that are too advanced for typical social media users to recognize easily and quickly, regardless of their information literacy skills. These deceptions are regarded as inherent cybersecurity threats [6][7] as they are carefully engineered for the sole purpose of swaying public opinion and causing societal polarization. An important but practical study likened them to distributed denial of service (DDoS) attacks as they typically come en masse and in waves, especially at times of interest such as elections and public emergencies where civil unrest is highly likely to affect the national security of a nation [22].
Naturally, since cyber deceptions powered by advanced technologies are difficult for humans to combat with skills alone, the proposed ecosystem of [1] promotes the idea that the same technologies be used to combat such powerful mistruths. The use of machine and deep learning, for instance, to detect _deepfakes_ is currently a rich and thriving stream of inquiry in this area. Additionally, disinformation-as-a-service disseminated by bots is challenging for independent fact-checkers alone to combat, and thus LLMs take on the role of balancing the scales of the information ecosystem.
### _The Literary Gap: Comparative Evaluation of the Performance of LLMs in Mis/Disinformation Detection_
Given these underpinnings, the following corollaries are evident:
1. That the existing proposed _disinformation ecosystem model_ of [6] to use technologically powered solutions to technologically powered deceptions is not without its merit and appears to be very compelling.
2. That an existing study, initiated by [1], showcases and gauges the ability of an _LLM as an agent_ instead of actual humans, and that--
3. Although the said study presented promising results and outcomes--the study design and sample size are premature and have been called on to be improved upon in future studies.
4. This paper answers the open call of the previous paper and addresses the currently existing limitations and gaps. We improve upon the study design of [] by making a more comprehensive and controlled experimental simulation, adding more LLMs, and expanding the count of included test items.
## III Experimental Design & Simulations
### _Overview_
This chapter details the methodological approach and materials used in conducting this study. It outlines the selection of large language models (LLMs), data collection procedures, simulation setup, metrics for evaluation, data analysis process, and the limitations of the study design. In addition, it also addresses reproducibility and ethical considerations. Note that the experimental data produced from this simulation is openly available on Kaggle [25].
### _Selection of the Large Language Models (LLMs)_
#### III-B1 Brief Description of Selected LLMs
**OpenAI's ChatGPT 3.5 and 4.0:** This LLM, trained by OpenAI, is built on Generative Pre-training Transformer (GPT) technology and provides generative responses. Its unique characteristic is its knowledge cutoff, which limits the inclusion of web data beyond a certain date in its training.
The primary difference between GPT-3.5 and GPT-4 lies in their capacity for understanding and generating text. As a newer iteration, GPT-4 is more powerful, trained on a broader dataset, and has improved capabilities in generating coherent and contextually relevant text. This enhanced capability is due to its larger number of parameters, enabling it to learn more complex patterns in language. Additionally, GPT-4 may exhibit better performance in tasks that require a deep understanding of context and abstraction. However, both versions have a similar knowledge cutoff, meaning they can't generate responses based on real-world events or information beyond their respective training periods.
**Google's Bard/LaMDA:** Bard/LaMDA is Google's LLM designed for dialogue applications. Unlike ChatGPT, it doesn't have restrictions regarding accessing the web for generating responses.
**Microsoft's Bing AI or Sydney:** Anchored on the Prometheus AI model, Bing AI or Sydney is Microsoft's LLM offering. It doesn't have web data access restrictions, making it different from ChatGPT.
### _Data Collection_
#### III-C1 Collection of News Headlines
_a) Sourcing from News Outlets_
We collected fact-checked and verified content from independent fact-checking agencies and cross-referenced them. As a form of additional validation, we verified and checked the posting and content modification dates of the items to confirm that no updates were added to their legitimacy. The reason for this is that a news item may change its truth value over time when refuted (i.e., True to False) or when proven otherwise (i.e., False to True).
_b) Criteria for Inclusion_
All the news items presented to the LLMs are up to September 2021 to level the playing field for ChatGPT since its knowledge cutoff date is only until the specified date. Furthermore, news items that are unverified by independent fact-checking agencies, albeit posted by any media, are not included in the pool of news items presented. This control is put in place to limit the framing bias that may be yielded in the simulation by media, including state-owned, publicly owned, or even private, for-profit organizations; We only consider the independent fact-checkers as the sole source of the truth for this experiment.
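A minimal sketch of such a filtering step is given below; the field names, the cutoff constant, and the example records are illustrative assumptions rather than the study's actual data schema.

```python
from datetime import date

KNOWLEDGE_CUTOFF = date(2021, 9, 30)      # assumed cutoff: end of September 2021
FACT_CHECKERS = {"PolitiFact", "Snopes"}  # assumed set of independent agencies

def is_eligible(item: dict) -> bool:
    """Keep only items verified by an independent fact-checker and dated
    no later than ChatGPT's knowledge cutoff."""
    return (
        item["source"] in FACT_CHECKERS
        and item["date"] <= KNOWLEDGE_CUTOFF
        and item["verdict"] is not None   # unverified items are excluded
    )

collected = [
    {"headline": "...", "source": "Snopes", "date": date(2021, 5, 3), "verdict": "False"},
    {"headline": "...", "source": "BlogX", "date": date(2022, 1, 9), "verdict": "True"},
]
news_pool = [item for item in collected if is_eligible(item)]  # keeps only the first record
```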
_c) Procedure for Classification as True or False_
The legitimacy labels attached to the test items are threefold--(1) True, (2) False, and (3) Partially True/False. Unlike its predecessor study, this simulation doesn't rely solely on binary classification/option responses. We observed that the fact-checked content of the third-party agencies typically falls into four categories--(1) True, (2) False, (3) Partially True, or (4) Partially False. In this experiment, we combine the latter two on the grounds that the original categories of "Partially True" and "Partially False" can be ambiguous and potentially confusing. We strive to eliminate this ambiguity and make it clear that these items contain a mix of true and false information. Practically, in many cases, it may be difficult to distinguish between "Partially True" and "Partially False" items.
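Collapsing the four agency verdicts into these three classes amounts to a simple mapping; the verdict strings below are assumptions for illustration, since the agencies' exact label wording varies.

```python
VERDICT_MAP = {
    "True": "True",
    "False": "False",
    "Partially True": "Partially True/False",   # merged category
    "Partially False": "Partially True/False",  # merged category
}

def normalize_verdict(agency_verdict: str) -> str:
    """Map a fact-checker ruling onto the three classes used in this study."""
    return VERDICT_MAP[agency_verdict]

print(normalize_verdict("Partially False"))  # -> "Partially True/False"
```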
### _Simulation Setup_
#### III-D1 Testing Environment
Details of the Control Environment
The testing environment was carefully designed to ensure a fair and controlled evaluation of the Large Language Models (LLMs). The LLMs were run on identical hardware configurations to avoid any performance discrepancies due to hardware differences. The software environment was also standardized, with all LLMs running on the same operating system and using the same versions of necessary libraries and dependencies. This ensured that any differences in the LLMs' performance could be attributed to the models rather than external factors.
Control Measures to Prevent Uncontrolled Variables
Several control measures were implemented to prevent uncontrolled variables from influencing the results. First, the LLMs were all tested under the same conditions, at the same time of day, to avoid any potential effects of network traffic or server load. Second, the same set of news headlines was presented to each LLM, ensuring that all models were evaluated on the same tasks. Finally, any updates or modifications to the LLMs were prohibited during the testing period to maintain consistency.
#### III-D2 Testing Procedure
Prompts Used for Testing LLMs
The LLMs were tested using a set of prompts derived from the collected news headlines. Each prompt was designed to elicit a response that could be classified as true, false, or partially true/false. The prompts were presented to the LLMs in a random order to avoid any potential order effects.
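A sketch of such a prompt-presentation loop is shown below; the prompt template and the `query_llm` callable are placeholders for however each chatbot was actually queried, so both are assumptions for illustration.

```python
import random

PROMPT_TEMPLATE = (
    "Classify the following news item as True, False, or Partially True/False: {headline}"
)

def present_items(headlines, query_llm, seed=0):
    """Present all headlines to one LLM in a random order and collect its answers."""
    order = list(range(len(headlines)))
    random.Random(seed).shuffle(order)      # random order to avoid order effects
    responses = {}
    for idx in order:
        prompt = PROMPT_TEMPLATE.format(headline=headlines[idx])
        responses[idx] = query_llm(prompt)  # placeholder for the actual chatbot call
    return responses
```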
Timeline of Simulation
The simulation was conducted over a period of one month, from mid-May to mid-June 2023. This timeline was chosen to allow sufficient time for the LLMs to process the prompts and for the researchers to analyze the results. Each LLM was tested individually, with a one-week interval between each testing session to allow for any necessary adjustments or troubleshooting.
### _Metric for Evaluation_
#### III-E1 Definition of Accuracy
Calculation of Detection Accuracy
Accuracy was defined as the proportion of correct classifications (true, false, partial) by the LLMs compared to the established truth from the fact-checking agencies.
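Under this definition, accuracy reduces to a simple proportion, as in the following minimal sketch:

```python
def detection_accuracy(predictions, ground_truth):
    """Proportion of items whose predicted class (True / False / Partially True/False)
    matches the verdict established by the fact-checking agencies."""
    assert len(predictions) == len(ground_truth)
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# e.g. 71 correct classifications out of 100 items -> 0.71
```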
We did not include the second metric, the time it takes for an LLM to produce the response, as it is part of a different project. As a possible confounding factor, the time it takes for an LLM to produce a response could be influenced by the complexity of the input, the processing power of the computer, and the specific implementation of the LLM. By not including this metric, we strive to avoid the need to control for these potentially confounding factors.
### _Limitations of the Study Design_
1. To eliminate the classification ambiguity of the LLMs, we explicitly asked each LLM in the prompts to choose among three options (True, False, Partially True/False) in order to elicit a more explicit response.
2. Additionally, our evaluation metric focuses solely on the accuracy of the LLMs' classifications. This means that this study does not consider other important aspects of performance, such as the speed of response or the quality of the generated text.
3. Finally, we repeatedly highlighted the independent fact-checking agencies as our source; however, these agencies are not infallible, and there is a potential for misclassification. If an agency incorrectly classifies a news item, this could unfairly penalize the LLMs in our evaluation.
### _Ethical Considerations_
1. Ensuring the protection of user data was paramount during the simulation, adhering strictly to privacy policies and confidentiality principles.
2. The study was conducted with a keen emphasis on responsible AI principles, such as fairness, accountability, transparency, and ethics in AI usage.
## IV Findings & Results
Note: The performance data of the LLMs are openly available and can be accessed on Kaggle. The succeeding analyses are based on [25].
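The per-model scores could be recomputed from the released data along the following lines; the file name and column names (`model`, `prediction`, `gold`) are assumptions about the Kaggle export rather than its documented schema.

```python
import pandas as pd

df = pd.read_csv("llm_factcheck_results.csv")   # assumed export of the Kaggle dataset
df["correct"] = df["prediction"] == df["gold"]  # exact match against the fact-checker verdict
per_model = df.groupby("model")["correct"].mean().sort_values(ascending=False)
print(per_model)  # one accuracy value per LLM; their mean gives the overall score
```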
### _Descriptive Statistics_
#### IV-A1 Average Accuracy Rates
[MISSING_PAGE_POST]
must continue to foster human skills and literacy, nurturing a complementary symbiosis between man and machine.
The juxtaposition of AI capabilities and human expertise offers a poignant reflection of our times. AI's emergence as a powerful tool in the fight against misinformation marks a turning point in our journey, yet it also highlights the irreplaceable value of human cognition, judgment, and emotional intelligence. The growth of AI, therefore, should not be perceived as a journey towards human redundancy but rather as an opportunity for harmonious collaboration.
In an era increasingly shaped by information integrity, our survival and success hinge not only on technological innovation but also on our ability to integrate these advancements with the cognitive capacities that make us uniquely human. It is in this synergy that we can foster a robust defense against the relentless onslaught of misinformation, ensuring a future where truth triumphs over deception.
|
2310.05294 | Hi Guys or Hi Folks? Benchmarking Gender-Neutral Machine Translation
with the GeNTE Corpus | Gender inequality is embedded in our communication practices and perpetuated
in translation technologies. This becomes particularly apparent when
translating into grammatical gender languages, where machine translation (MT)
often defaults to masculine and stereotypical representations by making undue
binary gender assumptions. Our work addresses the rising demand for inclusive
language by focusing head-on on gender-neutral translation from English to
Italian. We start from the essentials: proposing a dedicated benchmark and
exploring automated evaluation methods. First, we introduce GeNTE, a natural,
bilingual test set for gender-neutral translation, whose creation was informed
by a survey on the perception and use of neutral language. Based on GeNTE, we
then overview existing reference-based evaluation approaches, highlight their
limits, and propose a reference-free method more suitable to assess
gender-neutral translation. | Andrea Piergentili, Beatrice Savoldi, Dennis Fucci, Matteo Negri, Luisa Bentivogli | 2023-10-08T21:44:00Z | http://arxiv.org/abs/2310.05294v1 | # _Hi Guys_ or _Hi Folks_? Benchmarking Gender-Neutral Machine Translation with the GeNTE Corpus
###### Abstract
Gender inequality is embedded in our communication practices and perpetuated in translation technologies. This becomes particularly apparent when translating into grammatical gender languages, where machine translation (MT) often defaults to masculine and stereotypical representations by making undue binary gender assumptions. Our work addresses the rising demand for inclusive language by focusing head-on on gender-neutral translation from English to Italian. We start from the essentials: proposing a dedicated benchmark and exploring automated evaluation methods. First, we introduce GeNTE, a natural, bilingual test set for gender-neutral translation, whose creation was informed by a survey on the perception and use of neutral language. Based on GeNTE, we then overview existing reference-based evaluation approaches, highlight their limits, and propose a reference-free method more suitable to assess gender-neutral translation.
## 1 Introduction
Societal gender asymmetries and inequalities are reflected and perpetuated through language (Stahlberg et al., 2007; Menegatti and Rubini, 2017). Such awareness has grown also within the Natural Language Processing (NLP) field (Blodgett et al., 2020), where extensive research has highlighted how several applications suffer from gender bias (Sun et al., 2019; Sheng et al., 2021). As also noted by MT users themselves (Olson, 2018; Dev et al., 2021), among these applications are translation systems used at large scale, which pose the concrete risk of misrepresenting gender minorities by over-producing masculine forms, while reinforcing binary gendered expectations and stereotypes (Savoldi et al., 2021; Lardelli and Gromann, 2022).
To foster greater inclusivity and break free from the constraints of masculine/feminine language, neutral strategies have emerged and are increasingly adopted in academia (APA, 2020), institutions (Hoglund and Flinkfeldt, 2023), and industry alike (Langston, 2020). These strategies aim to overcome marked forms that treat the masculine gender as the conceptually generic, default human prototype (e.g., _humankind_ vs. _mankind_) (Silveira, 1980; Bailey et al., 2022). Thus, they challenge gender norms and embrace all gender identities by avoiding gendered terms when unnecessary (e.g. _chair_ vs. _chairman/chairwoman_) (Hord, 2016).
English, being at the forefront of inclusive language changes and with its limited gendered grammar (Ackerman, 2019), has faced fewer obstacles in adapting to neutral forms, which have already been modeled into monolingual generative tasks (Sun et al., 2021; Vanmassenhove et al., 2021). As recently underscored by Amrhein et al. (2023), however, the resources and approaches made available for English are not portable to grammatical gender languages. Such need for dedicated efforts is exemplified in Italian, where neutral solutions must navigate the extensive encoding of masculine/feminine marking (e.g. _the doctors are qualified_\(\rightarrow\) it: _i/le dottori/esse sono qualificati/e_) through synonymy or more complex rephrasing (Papadimoulis, 2018) (e.g. \(\rightarrow\)_il personale medico_ [the medical staff]). While indeed more challenging, pursuing inclusivity in Italian is relevant exactly because sexist attitudes are more visible and impactful in grammatical gender languages (Wasserman and Weseley, 2009). Nonetheless, the implementation of neutral language in MT remains to date a basically uncharted territory, despite the desirability of neutral outputs under several circumstances where gender is ambiguous or irrelevant.
In light of the above, by focusing on English\(\rightarrow\)Italian as an exemplary and representative translation pair and direction, we hereby lay the groundwork toward gender-neutral MT. Starting from a survey aimed to understand the challenges
of neutral translation in cross-lingual settings, we provide the necessary tools and resources to foster research on evaluating gender-neutral translation in MT. Hence, our main contributions are: **(1)** A study on the feasibility of neutral translation, by surveying the potential trade-off among fluency, adequacy, and neutrality; **(2)** The creation of GeNTE,1 the first natural, parallel corpus designed to test MT systems' ability to generate neutral translations; **(3)** A comprehensive analysis of the (un)suitability of existing automatic metrics to evaluate neutral translation. As an inherent benchmark component, we indicate an alternative solution capable of better assessing the task.
Footnote 1: **Gender-Neutral Translation Evaluation**. In Italian, _gente_ means _folks_, a term used for inclusive greetings in lieu of "_guys_".
We make the GeNTE dataset freely available at [https://mt.fbk.eu/gente/](https://mt.fbk.eu/gente/) and release the evaluation code under Apache License 2.0 at [https://github.com/hlt-mt/fbk-NEUTR-evAL](https://github.com/hlt-mt/fbk-NEUTR-evAL).
## 2 Background
Emerging research has highlighted the importance of reshaping gender in NLP technologies in a more inclusive manner (Dev et al., 2021), also through the representation of non-binary identities and language (Wagner and Zarriess, 2022; Lauscher et al., 2022; Ovalle et al., 2023). Foundational works in this area have included several applications, such as coreference resolution systems (Cao and Daume III, 2020; Brandl et al., 2022), intra-lingual fair rewriters (Amrhein et al., 2023), and automatic classification of gender-neutral text (Attanasio et al., 2021).
In MT, the research agenda has mainly focused on the improvement of masculine/feminine gender translation. Along this line, different mitigation methods have been devised to ensure that unambiguous gendered referents (e.g. _he/she is a doctor_) are properly resolved in the target language (Costa-jussà and de Jorge, 2020; Choubey et al., 2021; Saunders et al., 2022). These methods are often tested on synthetic template-based datasets such as WinoMT (Stanovsky et al., 2019) or Simple-GEN (Renduchintala and Williams, 2022). As also stressed by Saunders and Olsen (2023), however, in realistic scenarios MT systems are also confronted with ambiguous input sentences that do not convey any gender distinction (e.g., en: _I called the doctor_). Nonetheless, to date the resources and solutions envisioned for resolving such cases into grammatical gender languages like Arabic (Alhafni et al., 2022), Italian (Vanmassenhove and Monti, 2021), Spanish, or French (Rarrick et al., 2023) entail offering two possible translation outputs, still constrained to binary gender forms (e.g., it: _Ho chiamato il dottore_ masc vs. _la dottoressa_ fem).2
Footnote 2: Such double-outputs are currently offered for short, ambiguous queries also by Google Translate and Bing.
As an exception within the current MT landscape, Cho et al. (2019) and Ghosh and Caliskan (2023) investigate the preservation of gender-ambiguous pronouns for Korean/Bengali\(\rightarrow\)English. Since English can already boast the well-established neutral pronoun _they_, their study does not face the additional challenges of preserving such unmarked vagueness into grammatical gender languages. Such challenges are exemplified by Saunders et al. (2020), who created parallel test and fine-tuning data to develop MT systems able to generate non-binary translations for English\(\rightarrow\)German/Spanish. However, their target sentences are artificial - created by replacing gendered morphemes and articles with synthetic placeholders - thus serving only as a proof-of-concept. To the best of our knowledge, Piergentili et al. (2023) are the first to advocate the use of target gender-neutral rephrasings and synonyms as a viable paradigm toward more inclusive MT when gender is unknown or simply irrelevant. Despite this call to action, no concrete steps have been taken yet to actually facilitate research in this direction, not even toward suitable benchmarks to recognize the neutral forms occasionally generated by current systems (Savoldi et al., 2022).
In light of the above, the path toward gender-neutral translation in MT is bottlenecked by the lack of dedicated datasets and automated evaluations. Here, we fill this gap so as to guide and allow research on this novel topic. To this aim, we start in §3 by first ensuring that gender-neutral language can enable acceptable translations, not being perceived as inappropriate or intrusive.
## 3 Surveying Gender-Neutral Translation
Neutralization is a form of linguistic gender inclusivity that relies on the retooling of established forms and grammar (Gabriel et al., 2018). According to the review of several gender-inclusive public guidelines by Piergentili et al. (2023), these can range from _i)_ simple word changes, like omissions or article/noun replacements with epicene alternatives (e.g. _il maestro_ vs. _l'insegnante_3), to _ii)_ more complex reformulations, which might involve altering the sentence structure (e.g. _i miei colleghi_ vs. _le persone con cui lavoro_).4 As such, to ensure neutrality, these solutions might have an effect in terms of brevity or perceived fluency.
Footnote 3: en: “the teacher”.
Footnote 4: en: “my colleagues” vs. “the people I work with”.
While widespread in monolingual, institutional contexts (Papadimoulis, 2018), the use of neutral forms in cross-lingual settings requires weighing additional non-negligible factors. First, translations are bound to a source text, whose meaning must be properly rendered in the target language. Thus, more creative reformulations might collide with this intrinsic constraint. Also, it might not always be clear-cut when neutral translations ought to be performed. This is the case of masculine generics in the source language (e.g. _All firemen_): while they do not neatly fall under the idea of "ambiguous" input, their propagation to a target language clashes with the goal of inclusive MT itself.
These issues stand unaddressed: the study by Lardelli and Gromann (2023) represents the only empirical investigation on the feasibility of gender-neutral translation, but it is concerned with the cognitive effort that its realization poses to post-editors. Therefore, to better understand the implications of gender-neutral translation for a wider range of stakeholders, we carried out a preliminary analysis on English\(\rightarrow\)Italian by surveying the opinions of potential MT end-users.
**Questionnaire.** Our survey was structured into two main parts. In part _(i)_, we indirectly assessed linguistic acceptability: given a source English sentence paired with both a gendered (GT) and a neutral (NT) translation, we asked participants to indicate whether they had a preference or found them to be equivalent5 (see Table 1). Then, in part _(ii)_ we asked direct questions to gauge participants' use of and attitude toward gender-neutral language. The questionnaire was distributed online and received 98 responses from eligible participants. While all details are provided in Appendix A, here we summarize our main insights.
Footnote 5: i.e. equally adequate and fluent.
First, the linguistic acceptability of gender-neutral translations was positively judged, perhaps at a higher rate than our own expectations. In fact, overall results indicate that in the majority of cases the NT was deemed preferable (42.5%) rather than equivalent to the GT (36.5%), while only a minority favoured gendered translations (21%). Possibly, such a trend is explained by participants' ideological preference for inclusive language rather than by purely linguistic factors. As shown in Table 1, this seems to be confirmed by disaggregating responses for each translated example sentence, from which we can extract more qualitative insights. Indeed, example A attests to one of the strongest preferences for NT, signalling a negative attitude toward the propagation of default masculine forms in GT. Then, concerning the use of more complex neutral rephrasings (example B), we found that slightly longer and sentence-altering neutralization strategies were still considered largely acceptable. Instead, literal NTs with limited changes, which however sacrificed more of the source meaning or altered its tone (example C), were comparatively penalized. This trend was also confirmed in part _(ii)_ of the survey (see Appendix A). Finally, as shown in Figure 1, participants' responses to direct questions attest that neutralization strategies are accepted/used differently depending on the speech situation, with a preference for their use in formal communicative situations. Having confirmed the feasibility and overall acceptability of neutral translations, we embed the gathered qualitative considerations in the design of the GeNTE corpus (§4).
\begin{table}
\begin{tabular}{l l|c|c|c} & Questionnaire example sentences & **Eq.** & **NT** & **GT** \\ \hline & **tot. responses** & 36.5 & 42.5 & 21 \\ \hline A & _Some metals may be toxic to man._ & 39.6 & 50 & 10.4 \\ \end{tabular}
\end{table} Table 1: Percentage of responses judging the neutral translation (NT) or the gendered translation (GT) as preferable, or the two as equivalent (Eq.), overall and per example sentence (rows for examples B and C are not recoverable from this copy).
## 4 The GeNTE corpus
GeNTE is the first test set designed to evaluate MT models' ability to perform gender-neutral translations, but only under desirable circumstances. In fact, when referents' gender is unknown or irrelevant, undue gender inferences should not be made and translation should be neutral. However, neutralization should not always be enforced; for instance, when a referent's gender is relevant and known, MT should not over-generalize to neutral translations. The corpus hence consists of 1,500 English-Italian parallel sentences with mentions of human referents that equally represent two translation scenarios: **1)** Set-N, featuring gender-ambiguous source sentences that need to be rendered neutrally in translation; **2)** Set-G, featuring gender-unambiguous source sentences, which shall be properly rendered with gendered (masculine or feminine) forms in translation. Altogether, these sets allow benchmarking of whether systems are able to perform gender-neutral translation, and whether they do so when appropriate.
We build GeNTE on naturally occurring instances of both scenarios retrieved from Europarl (Koehn, 2005). We chose this corpus because, besides being a widely popular and high-quality MT resource, it represents formal communicative situations from the administrative/institutional domain. Accordingly, it reflects the context for which gender-neutral forms are traditionally intended, also in line with the stakeholders' preference highlighted in §3. Also, as examined by Saunders (2022), Europarl exhibits a large amount of gender-ambiguous cases that - although translated with gendered forms in the original references of the corpus - lend themselves as suitable candidates for neutralization. As explained in the forthcoming paragraphs (§4.2), for each of these original Europarl gendered target sentences, we create an additional gender-neutral reference translation.
### Data selection and annotation
Data extraction. To retrieve Europarl6 segments representing our two translation scenarios of interest, we crafted regular expressions to: _i)_ identify source sentences containing mentions of human referents, _ii)_ maximize the variability of linguistic phenomena included in the corpus, and _iii)_ ensure a balanced distribution of both unambiguous and ambiguous gender translation cases. To this aim, we targeted Set-G segments by matching source English sentences that contained explicit gender cues, e.g. lexically gendered words (_sister_, _woman_), titles (_Mr_, _Mrs_) and marked pronouns (_him_, _her_). Set-N, instead, was populated by matching several word classes that do not convey any gender distinction in English (e.g. _you_, _citizens_, _went_), but typically correspond to masculine/feminine expressions in the target language. Also, we searched for masculine terms used generically, such as _man_ and its derived compounds (e.g., _chairman_, _layman_). In fact, masculine generics are unreliable gender cues and, following the survey findings (§3), should not be propagated in MT.
Footnote 6: [https://www.statmt.org/europarl/archives.html](https://www.statmt.org/europarl/archives.html)
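The cue-matching step can be illustrated with a couple of regular expressions; the word lists below are assumptions for illustration only, not the patterns actually used to build GeNTE, and the final Set-N/Set-G assignment was carried out manually with sentence context.

```python
import re

# Illustrative (not the authors' actual) cue lists.
GENDERED_CUES = re.compile(
    r"\b(Mr|Mrs|Ms|he|she|him|her|his|hers|woman|women|man|men|sister|brother)\b",
    re.IGNORECASE,
)
MASCULINE_GENERICS = re.compile(r"\b(chairman|chairmen|layman|laymen|mankind)\b", re.IGNORECASE)

def candidate_set(english_sentence: str) -> str:
    """Rough pre-sorting of a source sentence into Set-G or Set-N candidates;
    masculine generics are treated as unreliable cues and routed to Set-N."""
    if MASCULINE_GENERICS.search(english_sentence):
        return "Set-N"
    if GENDERED_CUES.search(english_sentence):
        return "Set-G"
    return "Set-N"

print(candidate_set("I would like to thank Commissioner Byrne for his cooperation."))  # Set-G
```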
Sentence editing. On the collected material, a first intervention was carried out to streamline the evaluation of gender-neutral translation. In fact, some of the source sentences contained mentions of multiple referents, which required the combination of different forms in translation (i.e. neut/masc/fem). In those cases, the parallel sentences were manually edited so as to ensure that they only include referents that require the same type of (either neutral or gendered) forms. In this way, each sentence pair can be handled as a whole coherent unit, thus avoiding the complexities of evaluating intricate combinations of phenomena. To ensure a balanced distribution of instances from both Set-N and Set-G, a second intervention was required to compensate for the under-representation of unambiguous cases.7 Although these edits slightly reduce the naturalness of the data, they allow for a simpler and sound evaluation, crucial to shed light on a complex task such as gender-neutral MT. Instead, other edits were made to enhance the quality of the corpus; all of them are reported in Appendix §B.1. Once the editing phase was concluded, all sentence pairs were annotated as N in Set-N, and as F or M in Set-G. In the annotation process, it was verified that the initial pool of - automatically extracted - candidate sentences were correctly assigned to Set-N and Set-G by accounting for the sentence context. In this way, we could differentiate between the use of gendered words as either masculine generics (e.g. _It is up to an accused employer to prove **his** innocence_ - identified as N) or as informative of a referent's gender (e.g. _I would like to thank Commissioner Byrne for **his** cooperation._ - identified as G).
2306.16268 | Emotion Analysis of Tweets Banning Education in Afghanistan | This paper introduces the first emotion annotated dataset for the Dari
variant of Persian spoken in Afghanistan. The LetHerLearn dataset contains
7,600 tweets posted in reaction to the Taliban's ban of women's rights to education
in 2022 and has been manually annotated according to Ekman emotion categories.
We here detail the data collection and annotation process, present relevant
dataset statistics as well as initial experiments on the resulting dataset,
benchmarking a number of different neural architectures for the task of Dari
emotion classification. | Mohammad Ali Hussiny, Lilja Øvrelid | 2023-06-28T14:50:49Z | http://arxiv.org/abs/2306.16268v1 | # Emotion Analysis of Tweets Banning Education in Afghanistan
###### Abstract
This paper introduces the first emotion-annotated dataset for the Dari variant of Persian spoken in Afghanistan. The LetHerLearn dataset contains 7,600 tweets posted in reaction to the Taliban's ban of women's rights to education in 2022 and has been manually annotated according to Ekman's emotion categories. We here detail the data collection and annotation process, present relevant dataset statistics as well as initial experiments on the resulting dataset, benchmarking a number of different neural architectures for the task of Dari emotion classification.
## 1 Introduction
Expression and recognition of feelings are crucial aspects of human communication and social interaction (Dolan, 2002). They significantly influence our experiences and shape our cognitive abilities, making emotional intelligence an essential component of artificial intelligence (Dolan, 2002). Emotion analysis is a growing research area that aims to enable machines to effectively recognize, analyze and understand human feelings and thinking (Mirzaee et al., 2022). Unlike sentiment analysis, emotion detection usually covers a broader range of responses, detecting a variety of emotions such as Anger, Sadness, Fear, Disgust, Happiness and more.
Online social media platforms allow people to express their views on a wide range of topics, whether personal, social, political, or even commercial. Twitter is one of the richest online sources for text analysis tasks as it is concise yet abundant in emotional context. On Twitter, communication is unrestricted by politics, age, culture, gender, and other barriers (Ghosh et al., 2020). In the current media landscape, knowledge about people's opinions and emotions as expressed on social media can be important for various objectives, such as customer service, online sales, the analysis of political and cultural events, etc.
On December 20, 2022, the Taliban regime banned girls and women from pursuing education and employment in Afghanistan. This announcement shocked the world and the people of Afghanistan, and it was met with a serious and swift reaction from politicians and citizens of different countries, as well as the United Nations, political and civil figures, women activists, and citizens of Afghanistan. Many expressed their feelings against the Taliban's decision on Twitter, Facebook and other social media. In this paper, we present LetHerLearn: a Persian Dari corpus of emotion-annotated Twitter data based on the collection and analysis of tweets related to the ban of education in Afghanistan by the Taliban regime. The goal of this work is to provide insights into people's real-time perspectives, attitudes, concerns and reactions in the face of this oppression.
The paper is structured as follows. Section 2 discusses related work, focusing in particular on previous work for Persian, Section 3 then goes on to describe the creation of the LetHerLearn dataset, detailing the motivation for this work, data collection, annotation and relevant statistics. Section 4 presents details on modeling and results for experimental evaluations of a number of neural architectures trained and evaluated on LetHerLearn, and finally, Section 5 concludes the paper and describes some possible avenues for future work.
## 2 Related work
In recent years, research on emotion recognition from text has received increasing attention in the research community, and several annotated corpora have been created for this purpose (Ghosh et al., 2020). These corpora serve as valuable resources for researchers to develop and build emotion recognition models (Nandwani and Verma,
2021). While there has been significant progress in emotion recognition research from text, there are still some languages for which there is relatively little research. Persian is one such language, with currently little research and limited availability of such datasets. Despite the relatively limited previous work on emotion detection in the Persian language, there is some work on resource creation in the related area of Sentiment Analysis, such as the SentiPers dataset (Hosseini et al., 2018), the Digikala dataset (Zobeidi et al., 2019) and the Pars-ABSA dataset (Ataei et al., 2019), all based on Iranian user comments.
When it comes to the task of Persian Emotion Detection, the ARMANEMO dataset (Mirzaee et al., 2022) contains user opinions from social media and the dataset is annotated using a mixture of manual and automatic steps, labeling 7500 comments into the 7 classes of Anger, Fear, Joy, Hatred, Sadness, Surprise and Others. The authors trained and evaluated a number of neural models (CNN, RNN, ParsBERT, XLM-RoBERTa-base and XLM-RoBERTa-large) on the dataset, and the best-performing model was XLM-RoBERTa-large, achieving a macro-averaged F1 score of 75.39%. The EmoPars dataset (Sabri et al., 2021) contains 30,000 emotional tweets collected from Twitter using specific emotion-related keywords, manually annotated into the Anger, Fear, Happiness, Hatred, Sadness and Wonder classes. This constitutes the existing dataset most similar to the one presented here. In the following we will discuss the rationale behind our data creation effort.
## 3 Dataset creation
Below we detail the creation of the LetHerLearn dataset. We begin by discussing the demand that motivated this dataset (3.1) and the data collection method (3.2), before describing the labeling and annotation process (3.3) and finally providing some relevant statistics of our dataset (3.4).
### Demand and Importance
Despite the previous research on emotion detection in Persian, as detailed in Section 2 above, there is still a lack of research and resources for different Persian varieties. The Persian language is an Indo-European language which has more than 110 million speakers worldwide and is an official language in Iran, Afghanistan and Tajikistan (Heydari, 2019). The Persian variant spoken in Iran is called Farsi, in Afghanistan it is called Dari and in Tajikistan Tajiki (Spooner, 2012). Farsi, Dari and Tajik have the same alphabet and grammar with different accents on words in each country. There are, however, clear differences in vocabulary, where Farsi tends to have more borrowings from French and Dari from English. Crucially, however, all the datasets described above are developed based on Iranian social media and speakers, and none of them are based on textual data from Afghanistan or Tajikistan. The lack of an emotion-annotated dataset from Dari speakers of Persian has motivated the creation of the Dari LetHerLearn dataset described here. As mentioned earlier, the events of December 20, 2022, when the Taliban banned education and all work activities for girls and women in Afghanistan, caused a massive emotional reaction on social media. We decided to base the first emotion-annotated Dari dataset on social media data in order to analyse the reaction and opinion of the people faced with this event.
### Data collection
The data constituting the LetHerLearn dataset was collected using Twitter's official developer API. We used the Tweepy library and Python to extract Persian tweets from the Twitter API. We collected tweets using several relevant hashtags such as #LetHerLearn, #AllOrNone, #LetHerwork, #LetAfghanistanGirlLearn and #letAfghangirllearn, which were used by Twitter users in support of education and work for the women of Afghanistan. The included tweets were all posted from December 20, 2022 up to March 10, 2023. Using these hashtags, we collected around fifty thousand tweets over this period. Following removal of duplicated tweets, we selected 7,600 tweets for manual labeling.
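A minimal sketch of this collection step is shown below; it assumes academic-track access to the full-archive search endpoint of the Twitter API v2, and the bearer token, query details and field choices are placeholders rather than the exact configuration used.

```python
import tweepy

# Hypothetical credentials; search_all_tweets (full-archive search) requires academic-track access.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN", wait_on_rate_limit=True)

query = "(#LetHerLearn OR #AllOrNone OR #LetHerwork) lang:fa -is:retweet"
tweets = []
for page in tweepy.Paginator(client.search_all_tweets,
                             query=query,
                             start_time="2022-12-20T00:00:00Z",
                             end_time="2023-03-10T00:00:00Z",
                             tweet_fields=["created_at", "lang"],
                             max_results=500):
    tweets.extend(page.data or [])

unique_texts = sorted({t.text for t in tweets})  # crude de-duplication before manual labeling
print(len(unique_texts), "unique tweets collected")
```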
### Data annotation
Two annotators were involved in labeling the LetHerLearn corpus. Both of the annotators are Dari native speakers with good knowledge and understanding of Dari grammar. We annotated based on Ekman's (1992) set of fundamental emotions, which is widely used for the annotation of emotions in text. The corpus includes 6 fundamental emotions (Anger, Disgust,
Fear, Happiness, Sadness and Surprise) and we used the 'Other' category for tweets that do not fall into any of the six basic emotions. Each tweet was assigned a maximum of one emotion. In the case of tweets containing several emotions, the annotators were instructed to assign the emotion they felt was dominant. The annotators were provided with a set of annotation guidelines written in Dari. The annotators were instructed to remove tweets in languages like Pashto and Uzbek, even if they were written in the Persian script. Incomplete tweets, for example, those missing parts of the content along with hashtags or external links, should also be removed. The full set of guidelines (in Dari and English translation) is distributed along with the dataset; however, we provide a brief summary of the guidelines below.
Annotation guidelines. The guidelines provided to the annotators contain detailed descriptions of the six emotions with example words typically associated with the different emotions. For instance, the Anger class was described as comprising tweets reflecting emotions of anger, criticism, or frustration where the text may be confrontational, express strong negative feelings, or carry a tone of harsh criticism. Words symbolizing anger might include terms such as 'lying', 'spy', 'traitor', 'hypocrite', 'oppression', etc.
In addition to instructions describing each emotion class, care was taken to delimit the class of "Other" which represents tweets that do not display any particular emotion and convey a neutral tone. For instance, tweets about mundane activities or more fact-based posts would fall under this category. Annotators were further instructed to do their best to not let personal agreement or disagreement with the opinions stated in the tweets influence the labeling process and to label without any bias or directionality. Rather, they were instructed to depart from their interpretation of the speaker's emotional state and attempt to describe it as accurately as possible using one of the provided emotion labels.
Table 1 shows some examples of tweets (with English translations) from the LetHerLearn dataset to further illustrate the annotation effort.
Inter-annotator agreement. We further assess the consistency of annotations and measure the agreement between the two annotators using Cohen's Kappa [10] for the double labeling of 100 tweets. The agreement attained over the 100 tweets was 0.80.
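For reference, the agreement figure can be computed with scikit-learn's implementation of Cohen's Kappa; the short label lists below are placeholders standing in for the 100 doubly annotated tweets.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical example: the two annotators' labels for the doubly annotated tweets.
annotator_a = ["Anger", "Sadness", "Other", "Happiness", "Anger"]
annotator_b = ["Anger", "Sadness", "Anger", "Happiness", "Anger"]

print(round(cohen_kappa_score(annotator_a, annotator_b), 2))
```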
### Dataset statistics
The total number of words in our dataset after removal of hashtags, URLs and mentions is 88,875, of which 16,276 words are unique, and the average tweet length is 4.82 words. Figure 1 shows the occurrences of tweets for each emotion class. Examining the content of the LetHerLearn dataset, we can see that Anger is the most observed emotion, followed by Happiness, and we find that Surprise is the least observed emotion, with only 503 occurrences. The dataset was further split into train-dev-test splits using an 80:10:10 split for experimentation. Table 2 shows the detailed class-wise distribution of the train, validation, and test sets.
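One straightforward way to produce such an 80:10:10 split is sketched below; whether the original split was stratified by emotion class is not stated, so the stratification (and the variable names texts and labels) are assumptions.

```python
from sklearn.model_selection import train_test_split

# texts, labels: the 7,600 annotated tweets and their emotion labels (assumed to be lists of equal length).
train_x, rest_x, train_y, rest_y = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42)
dev_x, test_x, dev_y, test_y = train_test_split(
    rest_x, rest_y, test_size=0.5, stratify=rest_y, random_state=42)
```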
Next, and in order to get some more insight into the contents of our dataset, we examine the distribution of most frequent words per class following stop word removal, as shown in table 3 which displays the top frequent words for each of the emotion classes. We observe that some words frequently occur in all classes such as 'Taliban', 'Afghanistan', 'girls', 'women', 'everyone'. There are also clear lexical indicators associated with each class, such as 'f
\begin{table}
\begin{tabular}{l c} \hline \hline Tweet & Label \\ \hline \hline \end{tabular}
\end{table}
Table 1: Example tweets (with English translations) from the LetHerLearn dataset and their emotion labels.
for Fear and 'pain' for Sadness. We also observe lexical items describing the cause of emotion, e.g. 'explosion' and 'arrest' for Fear and 'justice' for Happiness.
## 4 Modeling
We evaluate a number of classic neural models on our dataset:
* Long Short-Term Memory Network (LSTM)
* Bi-directional Long Short-Term Memory Network (Bi-LSTM)
* Gated Recurrent Unit (GRU)
* Convolutional Neural Network (CNN)
All models made use of fastText Grave et al. (2018) word embeddings with 300 dimensions for Persian. Further hyperparameters of the models are specified in Appendix A.
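A minimal Keras sketch of one of these architectures (the Bi-LSTM) on top of frozen 300-dimensional fastText embeddings is given below; the layer sizes and dropout rate are illustrative assumptions, since the actual hyperparameters are listed in Appendix A.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, emb_dim, n_classes = 16276, 300, 7
embedding_matrix = np.zeros((vocab_size, emb_dim))  # placeholder: fill with Persian fastText vectors

model = keras.Sequential([
    layers.Embedding(vocab_size, emb_dim,
                     embeddings_initializer=keras.initializers.Constant(embedding_matrix),
                     trainable=False),
    layers.Bidirectional(layers.LSTM(128)),        # assumed hidden size
    layers.Dropout(0.3),                           # assumed dropout rate
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```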
Ensemble Model. After generating predicted probabilities from the LSTM, BiLSTM and GRU models, we develop an ensemble model (Dashtipour et al., 2021) using the scikit-learn library's VotingClassifier (Leon et al., 2017) class to combine the prediction results of the LSTM, BiLSTM, and GRU models.
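Since the base networks are Keras models rather than fitted scikit-learn estimators, the combination can equivalently be sketched as soft voting over their predicted probabilities; the arrays below are random placeholders for each network's softmax outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder probability arrays standing in for each network's softmax outputs (n_samples, n_classes).
probs_lstm, probs_bilstm, probs_gru = (rng.dirichlet(np.ones(7), size=761) for _ in range(3))

avg_probs = np.mean([probs_lstm, probs_bilstm, probs_gru], axis=0)  # soft (averaging) vote
ensemble_pred = avg_probs.argmax(axis=1)
```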
ParsBERT. We use a pre-trained language model for Persian, ParsBERT (Farahani et al., 2021), which is a monolingual BERT model. Hyperparameters are found in Appendix A.
XLM-RoBERTa-large. XLM-RoBERTa is a multilingual transformer-based language model pre-trained on data from over 100 different languages (Conneau et al., 2019). Hyperparameters are specified in the appendix.
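A compact fine-tuning sketch with the Hugging Face transformers library is shown below; the training hyperparameters are assumptions (the actual values are in the appendix), and train_ds/dev_ds stand for datasets.Dataset objects holding the tweet texts and integer labels.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=7)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

args = TrainingArguments(output_dir="xlmr-letherlearn",
                         num_train_epochs=3,               # assumed
                         per_device_train_batch_size=16,   # assumed
                         learning_rate=2e-5,               # assumed
                         evaluation_strategy="epoch")
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds.map(tokenize, batched=True),
                  eval_dataset=dev_ds.map(tokenize, batched=True))
trainer.train()
```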
### Results
The results of our experiments are summarized in Table 4, which shows the evaluation results of the different models described above. The results show that the ensemble model achieves better results than the LSTM, BiLSTM, GRU and CNN models on their own, as has also been shown in previous work (Onishi and Natsume, 2014).
We further find that the XLM-RoBERTa-large model outperforms all the other models.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Precision & Recall & F1 \\ \hline LSTM & 0.67 & 0.63 & 0.65 \\ BiLSTM & 0.66 & 0.63 & 0.64 \\ GRU & 0.65 & 0.62 & 0.60 \\ CNN & 0.66 & 0.60 & 0.62 \\ Ensemble & 0.69 & 0.64 & 0.66 \\ \hline ParsBERT & 0.65 & 0.65 & 0.65 \\ XLM-RoBERTa & **0.70** & **0.70** & **0.70** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Macro Average Precision, Recall and F1 result of all models on the LetHerLearn test set.
Figure 1: Number of tweets for each emotion class in LetHerLearn.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Type & Train & Dev & Test \\ \hline Anger & 1366 & 174 & 187 \\ Disgust & 462 & 50 & 57 \\ Fear & 483 & 64 & 59 \\ Happiness & 1266 & 179 & 152 \\ Sadness & 1032 & 120 & 128 \\ Surprise & 394 & 46 & 50 \\ Other & 1082 & 128 & 128 \\ \hline Total & 6085 & 761 & 761 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Data distribution for experiments
The individual class results, shown in Table 6, show that the scores vary across the different emotion classes, with the highest results obtained for the Disgust and Fear classes, and the most difficult classes being the Other class, as well as the Anger class.
### Error analysis
We perform an error analysis on the outputs of our model in order to gain further insight into the classifications on the LetHerLearn dataset. It is clear that there is not a direct correlation between low-frequency classes (such as Disgust) and prediction performance. Figure 2 provides a confusion matrix heat map of the predictions. We find that Surprise is often mistaken for other categories, such as Happiness, Other and Anger. Not surprisingly perhaps, the Other class is also often mistaken for other classes.
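A heat map of this kind can be produced directly from the gold and predicted labels; y_true and y_pred below hold dummy values standing in for the test-set labels and the model's predictions.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

labels = ["Anger", "Disgust", "Fear", "Happiness", "Sadness", "Surprise", "Other"]
# Placeholder gold and predicted labels; in practice these come from the test set and the model.
y_true = ["Anger", "Surprise", "Other", "Fear", "Surprise"]
y_pred = ["Anger", "Happiness", "Anger", "Fear", "Other"]

ConfusionMatrixDisplay.from_predictions(y_true, y_pred, labels=labels,
                                        xticks_rotation=45, cmap="Blues")
plt.tight_layout()
plt.show()
```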
Following our analysis of the misclassified predictions, we can infer some of the reasons: the assignment of a maximum of one emotion for each tweet is problematic for some of the tweets that have more than one emotion. We also analyze the word overlap between the tweets and find that classes with a high degree of overlap tend to also suffer from misclassification. Table 5 shows some examples of misclassified predictions.
## 5 Conclusion
We have presented LetHerLearn: the first Dari emotion-annotated dataset of tweets collected following the Taliban's ban of women's education in 2022. All data and code will be made available.1 In future work, we would like to experiment with cross-variant Persian emotion detection as well as multitask learning of sentiment and emotion.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Class & Precision & Recall & F1\_Score \\ \hline Anger & 0.52 & 0.57 & 0.54 \\ Disgust & **0.86** & 0.84 & **0.85** \\ Fear & 0.84 & **0.86** & **0.85** \\ Happiness & 0.67 & 0.71 & 0.69 \\ Sadness & 0.58 & 0.61 & 0.59 \\ Surprise & 0.82 & 0.85 & 0.84 \\ Other & 0.62 & 0.44 & 0.52 \\ \hline Macro Average & 0.70 & 0.70 & 0.70 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Individual class performance using XLM-RoBERTa-large model. |
2304.00273 | Zinbiel superalgebras | Throughout the current paper, we extend the study of Zinbiel algebras to
Zinbiel superalgebras. In particular, we show that all the Zinbiel
superalgebras over an arbitrary field are nilpotent in the same way as occurs
for Zinbiel algebras. Moreover and since the most important cases of nilpotent
algebras or superalgebras are those with maximal nilpotency index, we study the
complex null-filiform Zinbiel superalgebra, i.e. the only one single generated,
proving that it is unique up to isomorphism. After that, we characterise the
naturally graded filiform ones and obtain low-dimensional classifications. | Luisa María Camacho, Amir Fernández Ouaridi, Ivan Kaygorodov, Rosa Navarro | 2023-04-01T09:40:26Z | http://arxiv.org/abs/2304.00273v2 | # Zinbiel superalgebras+
Footnote †: The first part of this work is supported by Ministerio de Economía y Competitividad (Spain), grant PID2020-115155GB-I00 (European FEDER support included, EU); FCT UIDB/MAT/00212/2020, UIDP/MAT/00212/2020 and by the Spanish Government through the Ministry of Universities grant ‘Margarita Salas’, funded by the European Union - NextGenerationEU. The second part of this work is supported by the Russian Science Foundation under grant 22-71-10001.
Luisa Maria Camacho1
Amir Fernandez Ouaridi2
Ivan Kaygorodov4
Rosa M. Navarro5
Footnote 2: Dpto. Matematica Aplicada I, Universidad de Sevilla, Sevilla, Spain; [email protected]
Footnote 3: Centro de Matematica, Universidade de Coimbra, Coimbra, Portugal; University of Cadiz, Puerto Real, Spain; [email protected]
Footnote 4: CMA-UBI, Universidade da Beira Interior, Covilha, Portugal; Moscow Center for Fundamental and Applied Mathematics, Moscow, Russia; Saint Petersburg University, Russia; [email protected]
Footnote 5: Dpto. de Matematica, Universidad de Extremadura, Caceres, Spain; [email protected]
**Abstract:**_Throughout the current paper, we extend the study of Zinbiel algebras to Zinbiel superalgebras. In particular, we show that all the Zinbiel superalgebras over an arbitrary field are nilpotent in the same way as occurs for Zinbiel algebras. Moreover, since the most important cases of nilpotent algebras or superalgebras are those with maximal nilpotency index, we study the complex null-filiform Zinbiel superalgebra, i.e. the only single-generated one, proving that it is unique up to isomorphism. After that, we characterise the naturally graded filiform ones and obtain low-dimensional classifications._
**Keywords**: _Zinbiel superalgebra, dual Leibniz superalgebra, nilpotent superalgebra, algebraic classification._
**MSC2020**: primary 17A30; secondary 17A70.
###### Contents
* 1 Preliminaries and basic definitions
* 1.1 Zinbiel algebras
* 1.2 Zinbiel superalgebras
* 2 Null-filiform Zinbiel superalgebras
* 3 Naturally graded filiform Zinbiel superalgebras
* 4 Classification of low-dimensional complex Zinbiel superalgebras
* 5 Finite-dimensional Zinbiel superalgebras are nilpotent
## Introduction
Loday introduced a class of symmetric operads generated by one bilinear operation subject to one relation making each left-normed product of three elements equal to a linear combination of right-normed products: \((a_{1}a_{2})a_{3}=\sum\limits_{\sigma\in\mathbb{S}_{3}}x_{\sigma}a_{\sigma(1)}( a_{\sigma(2)}a_{\sigma(3)});\) such an operad is called a parametrized one-relation operad. For a particular choice of parameters \(\{x_{\sigma}\}\), this operad is said to be regular if each of its components is the regular representation of the symmetric group; equivalently, the corresponding free algebra on a vector space \(V\) is, as a graded vector space, isomorphic to the tensor algebra of \(V\). Bremner and Dotsenko classified, over an algebraically closed field of characteristic zero, all regular parametrized one-relation operads. In fact, they proved that each such operad is isomorphic to one of the following five operads: the left-nilpotent operad, the associative operad, the Leibniz operad, the Zinbiel operad, and the Poisson operad [6]. Then, an algebra \(\mathbf{Z}\) is called a (left) _Zinbiel algebra_ if it satisfies the identity \((xy)z=x(yz+zy)\). Zinbiel algebras were introduced by Loday in [26]. Under the Koszul duality, the operad of Zinbiel algebras is dual to the operad of Leibniz algebras. Zinbiel algebras are also known as pre-commutative algebras [25] and chronological algebras [24]. Remark that a Zinbiel algebra is equivalent to a commutative dendriform algebra [3]. Also, the variety of Zinbiel algebras is a proper subvariety in the variety of right commutative algebras and each Zinbiel algebra with the commutator multiplication gives a Tortkara algebra [16]. Zinbiel algebras also give an example of algebras of slowly growing length [19]. Recently, the notion of matching Zinbiel algebras was introduced in [18] and the defined identities for mono and binary Zinbiel algebras are studied in [21]. Moreover, Zinbiel algebras also appeared in a study of rack cohomology [14], number theory [12] and in the construction of a Cartesian differential category [20]. Thus, we can assert that in recent years, there has been a strong interest in the study of Zinbiel algebras in the algebraic and the operad context, see for instance [1, 2, 4, 5, 7, 9, 11, 15, 16, 17, 18, 21, 22, 23, 27, 28, 29, 30].
Free Zinbiel algebras were shown to be precisely the shuffle product algebras [27], which continue to attract interest [13]. Naurazbekova proved that, over a field of characteristic zero, free Zinbiel algebras are the free associative-commutative algebras (without unity) with respect to the symmetrization multiplication, and she found their free generators; she also constructed examples of subalgebras of the two-generated free Zinbiel algebra that are free Zinbiel algebras of countable rank [28]. Nilpotent algebras play an important role in the class of Zinbiel algebras. So, Dzhumadildaev and Tulenbaev proved that each complex finite-dimensional Zinbiel algebra is nilpotent [17]; this result was generalized by Towers for an arbitrary field [30]. Naurazbekova and Umirbaev proved that in characteristic zero any proper subvariety of the variety of Zinbiel algebras is nilpotent [29]. Finite-dimensional Zinbiel algebras with a "big" nilpotency index are classified in [1, 7]. Central extensions of three-dimensional Zinbiel algebras were calculated in [22] and of filiform Zinbiel algebras in [9]. The description of all degenerations in the variety of complex four-dimensional Zinbiel algebras is given in [23] and the geometric classification of complex five-dimensional Zinbiel algebras is given in [4]. After that, Ceballos and Towers studied abelian subalgebras and ideals of maximal dimension in Zinbiel algebras [11].
Our main goal, then, for the present paper is to extend the study of Zinbiel algebras to Zinbiel superalgebras. Thus, we prove that all the Zinbiel superalgebras over an arbitrary field are nilpotent, just as occurs for Zinbiel algebras. Let us remark that the most important cases of nilpotent superalgebras are those with maximal nilpotency index; therefore, we consider the null-filiform and filiform cases. For the former, i.e. the only single-generated one, we show that it is unique up to isomorphism. For the latter, we characterise the naturally graded ones in arbitrary dimension. Note that among all the gradations, the most important for nilpotent algebras or superalgebras is the natural gradation which comes from the filtration defined by the descending central sequence. Finally, we complete the study of Zinbiel superalgebras by providing low-dimensional classifications.
## 1. Preliminaries and basic definitions
### Zinbiel algebras
We recall first some definitions and basic results regarding Zinbiel algebras.
**Definition 1**.: _An algebra \(\mathbf{Z}\) is called Zinbiel algebra if it satisfies the identity_
\[(xy)z=x(yz+zy).\]
For a given Zinbiel (super)algebra \(\mathbf{Z}\), it is defined the following sequence:
\[\mathbf{Z}^{1}=\mathbf{Z},\quad\mathbf{Z}^{k+1}=\mathbf{Z}\mathbf{Z}^{k}.\]
**Definition 2**.: _A Zinbiel (super)algebra \(\mathbf{Z}\) is called nilpotent if there exists \(s\in\mathbb{N}\) such that \(\mathbf{Z}^{s}=0\). The minimal number \(s\) satisfying this property is called the nilpotency index of the (super)algebra \(\mathbf{Z}\)._
It is not difficult to see that the index of nilpotency of an arbitrary \(n\)-dimensional nilpotent Zinbiel (super)algebra does not exceed the number \(n+1\). Since every finite-dimensional Zinbiel algebra over a field is nilpotent [30], it made perfect sense to start studying those with the maximal index of nilpotency, i.e. null-filiform.
**Definition 3**.: _An \(n\)-dimensional Zinbiel (super)algebra \(\mathbf{Z}\) is called null-filiform if \(\dim\mathbf{Z}^{i}=n+1-i\)._
Let us note that a Zinbiel algebra is null-filiform if and only if it is one-generated. The classification of complex null-filiform Zinbiel algebras was given in [2].
**Theorem 4**.: _An arbitrary \(n\)-dimensional null-filiform Zinbiel algebra is isomorphic to the following algebra:_
\[e_{i}e_{j}=C^{j}_{i+j-1}e_{i+j}.\]
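The identity can be checked mechanically on these structure constants; the short sketch below (in Python, assuming the usual convention that \(e_{k}=0\) for \(k>n\)) verifies the Zinbiel identity \((xy)z=x(yz+zy)\) on all basis triples.

```python
from math import comb
from itertools import product

n = 8  # dimension of the null-filiform Zinbiel algebra being checked

def c(i, j):
    """Coefficient of e_{i+j} in e_i e_j (Theorem 4), with e_k = 0 whenever k > n."""
    return comb(i + j - 1, j) if i + j <= n else 0

for i, j, k in product(range(1, n + 1), repeat=3):
    lhs = c(i, j) * c(i + j, k)              # coefficient of e_{i+j+k} in (e_i e_j) e_k
    rhs = c(i, j + k) * (c(j, k) + c(k, j))  # coefficient of e_{i+j+k} in e_i (e_j e_k + e_k e_j)
    assert lhs == rhs, (i, j, k)
print("Zinbiel identity verified for the null-filiform algebra of dimension", n)
```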
After having obtained the aforementioned Zinbiel algebras, the next case for consideration was filiform. Let us denote by \(L_{x}\) the operator of left multiplication on an element \(x\). Then, for the operator \(L_{x}\), one defines the descending sequence \(C(x)=(n_{1},n_{2},\ldots,n_{k})\) with \(n_{1}+\ldots+n_{k}=n\), which consists of the dimensions of the Jordan blocks of the operator \(L_{x}\). In the set of such sequences, we consider the lexicographic order, that is, \(C(x)=(n_{1},n_{2},\ldots,n_{k})<C(y)=(m_{1},m_{2},\ldots,m_{s})\) if there exists \(i\) such that \(n_{i}<m_{i}\) and \(n_{j}=m_{j}\) for \(j<i\).
**Definition 5**.: _The sequence \(C(\mathbf{Z})=\text{max}\{C(x):x\in\mathbf{Z}^{1}\backslash\mathbf{Z}^{2}\}\) is called the characteristic sequence of the Zinbiel algebra \(\mathbf{Z}\)._
**Definition 6**.: _The Zinbiel algebra \(\mathbf{Z}\) is called \(p\)-filiform if \(C(\mathbf{Z})=(n-p,\underbrace{1,\ldots,1}_{p})\). If \(p=0\) (respectively, \(p=1\)), then \(\mathbf{Z}\) is called null-filiform (respectively, filiform) Zinbiel algebra._
Let \(\mathbf{Z}\) be a finite-dimensional complex Zinbiel algebra with the nilpotency index equal to \(s\). Let us consider \(\mathbf{Z}_{i}=\mathbf{Z}^{i}/\mathbf{Z}^{i+1}\) and it is denoted by \(\mathrm{gr}(\mathbf{Z})=\mathbf{Z}_{1}\oplus\mathbf{Z}_{2}\oplus\ldots\oplus \mathbf{Z}_{s-1}\). It can be easily checked that \(\mathrm{gr}(\mathbf{Z})\) is a graded Zinbiel algebra. If \(\mathbf{Z}\) and \(\mathrm{gr}(\mathbf{Z})\) are isomorphic, then \(\mathbf{Z}\) is said to be naturally graded. The classification of complex naturally graded filiform Zinbiel algebras was given in [2]:
**Theorem 7**.: _An arbitrary \(n\)-dimensional (\(n\geq 5\)) naturally graded complex filiform Zinbiel algebra is isomorphic to the following algebra:_
\[e_{i}e_{j}=C^{j}_{i+j-1}e_{i+j},\quad\text{for}\quad\quad 2\leq i+j\leq n-1.\]
Recently, symmetric (left and right) Zinbiel superalgebras have been studied in [5]. Thus, we extract the following results concerning (left) Zinbiel superalgebras, which we will refer to simply as Zinbiel superalgebras.
### Zinbiel superalgebras
The superversion of the Zinbiel algebras can be obtained in the usual way.
**Definition 8**.: _Let \(\mathbf{Z}=\mathbf{Z}_{\bar{0}}\oplus\mathbf{Z}_{\bar{1}}\) be a \(\mathbb{Z}_{2}\)-graded vector space with a bilinear map on \(\mathbf{Z}\) such that \(\mathbf{Z}_{i}\mathbf{Z}_{j}\subset\mathbf{Z}_{i+j}\). \(\mathbf{Z}\) is called a Zinbiel superalgebra if, for all homogeneous \(x,y,z\in\mathbf{Z}_{\bar{0}}\cup\mathbf{Z}_{\bar{1}}\) it satisfies_
\[(xy)z=x\big{(}yz+(-1)^{|y||z|}zy\big{)}.\]
_As usual, for \(x\in\mathbf{Z}_{\bar{0}}\cup\mathbf{Z}_{\bar{1}}\), it is defined the corresponding endomorphism of \(\mathbf{Z}\) by \(L_{x}(y)=xy\) for all \(y\in\mathbf{Z}_{\bar{0}}\cup\mathbf{Z}_{\bar{1}}\) which is called the left multiplication by \(x\)._
**Remark 9**.: _In the same way that any Zinbiel algebra is a right-commutative algebra, any Zinbiel superalgebra is a right-commutative superalgebra. Namely, for any homogeneous \(x,y\) and \(z\), it satisfies the superidentity:_
\[(xy)z=(-1)^{|y||z|}(xz)y.\]
Next, we extend the definitions and first results of Zinbiel algebras to Zinbiel superalgebras. Thus, for a given Zinbiel superalgebra \(\mathbf{Z}=\mathbf{Z}_{\bar{0}}\oplus\mathbf{Z}_{\bar{1}}\), we define the following sequence:
\[\mathbf{Z}^{1}=\mathbf{Z},\quad\mathbf{Z}^{k+1}=\mathbf{Z}\mathbf{Z}^{k}.\]
Let us note that also they can be defined two sequences as follows:
\[\mathbf{Z}_{\bar{0}}^{1}=\mathbf{Z}_{\bar{0}},\quad\mathbf{Z}_{\bar{0}}^{k+1}= \mathbf{Z}_{\bar{0}}\mathbf{Z}_{\bar{0}}^{k}\quad\text{ and }\quad\mathbf{Z}_{\bar{1}}^{1}=\mathbf{Z}_{\bar{1}},\quad\mathbf{Z}_{\bar{1} }^{k+1}=\mathbf{Z}_{\bar{0}}\mathbf{Z}_{\bar{1}}^{k}.\]
Along the last section of the present paper, we will show that all Zinbiel superalgebras over an arbitrary field are nilpotent, therefore the study of null-filiform and filiform is crucial for the understanding of finite-dimensional Zinbiel superalgebras.
Next, we introduce, following the spirit of the theory of nilpotent superalgebras for Lie and Leibniz cases (see for example [8] and reference therein), the concepts of characteristic sequence and filiform for superalgebras. Firstly, let us denote by \(L_{x}\) the operator of left multiplication on a homogeneous
even element \(x\in\mathbf{Z}_{\bar{0}}\). Therefore, we have: \(L_{x}:\mathbf{Z}_{\bar{0}}\rightarrow\mathbf{Z}_{\bar{0}}\) and we denote by \(C_{0}(x)\) the corresponding descending sequence of the dimensions of Jordan blocks of the operator \(L_{x}\) acting on \(\mathbf{Z}_{\bar{0}}\). Analogously and since \(L_{x}:\mathbf{Z}_{\bar{1}}\rightarrow\mathbf{Z}_{\bar{1}}\) we denote by \(C_{1}(x)\) the corresponding descending sequence of the dimensions of Jordan blocks of the operator \(L_{x}\) acting on \(\mathbf{Z}_{\bar{1}}\). Then, with regard to the lexicographic order we have the following definition.
**Definition 10**.: _The sequence_
\[C(\mathbf{Z})=\left(\max_{x\in\mathbf{Z}_{\bar{0}}\setminus\mathbf{Z}_{\bar{0 }}^{2}}C_{0}(x)\,\Bigg{|}\max_{y\in\mathbf{Z}_{\bar{0}}\setminus\mathbf{Z}_{ \bar{0}}^{2}}C_{1}(y)\right),\]
_is called the characteristic sequence of the Zinbiel superalgebra \(\mathbf{Z}\)._
Along the present work we assume that both characteristic sequences of the definition are obtained by the same generator element \(x\in\mathbf{Z}_{\bar{0}}\backslash\mathbf{Z}_{\bar{0}}^{2}\) which is usually called characteristic element.
**Definition 11**.: _A Zinbiel superalgebra \(\mathbf{Z}=\mathbf{Z}_{\bar{0}}\oplus\mathbf{Z}_{\bar{1}}\), with \(\dim(\mathbf{Z}_{\bar{0}})=n\) and \(\dim(\mathbf{Z}_{\bar{1}})=m\), is said to be filiform if its characteristic sequence is exactly \(C(\mathbf{Z})=(n-1|\,m)\)._
**Remark 12**.: _Note that if \(\mathbf{Z}\) is a filiform Zinbiel superalgebra, then \(\mathbf{Z}_{\bar{0}}\) is a filiform Zinbiel algebra._
Let us remark that among all the gradations, the most important for nilpotent structures is the natural gradation which comes from the filtration defined by the descending central sequence. Recently, the concept of naturally graded nilpotent superalgebras has been defined for both the Lie and the Leibniz cases [10]. Next, we extend this concept to complex Zinbiel superalgebras, all of which are nilpotent.
Consider \(\mathbf{Z}=\mathbf{Z}_{\bar{0}}\oplus\mathbf{Z}_{\bar{1}}\) to be a complex Zinbiel superalgebra. It can be seen that the sequences aforementioned \(\{\mathbf{Z}_{\bar{0}}^{k}\}\) and \(\{\mathbf{Z}_{\bar{1}}^{k}\}\) define a filtration over \(\mathbf{Z}_{\bar{0}}\) and \(\mathbf{Z}_{\bar{1}}\), respectively. If we denote \(\mathfrak{z}_{\bar{0}}^{i}:=\mathbf{Z}_{\bar{0}}^{i-1}/\mathbf{Z}_{\bar{0}}^{i}\) and \(\mathfrak{z}_{\bar{1}}^{i}:=\mathbf{Z}_{\bar{1}}^{i-1}/\mathbf{Z}_{\bar{1}}^{i}\), then it is verified that \(\mathfrak{z}_{\bar{0}}^{i}\mathfrak{z}_{\bar{0}}^{j}\subset\mathfrak{z}_{\bar {0}}^{i+j}\) and \(\mathfrak{z}_{\bar{0}}^{i}\mathfrak{z}_{\bar{1}}^{j}\subset\mathfrak{z}_{\bar {1}}^{i+j}\).
**Definition 13**.: _Given a complex Zinbiel superalgebra \(\mathbf{Z}=\mathbf{Z}_{\bar{0}}\oplus\mathbf{Z}_{\bar{1}}\), consider \(\mathfrak{z}^{i}=\mathfrak{z}_{\bar{0}}^{i}\oplus\mathfrak{z}_{\bar{1}}^{i}\), with \(\mathfrak{z}_{\bar{0}}^{i}=\mathbf{Z}_{\bar{0}}^{i-1}/\mathbf{Z}_{\bar{0}}^{i}\) and \(\mathfrak{z}_{\bar{1}}^{i}=\mathbf{Z}_{\bar{1}}^{i-1}/\mathbf{Z}_{\bar{1}}^{i}\). Thus, \(\mathbf{Z}\) is said to be naturally graded if the following conditions hold:_
1. \(\mathrm{gr}(\mathbf{Z})=\sum_{i\in\mathbb{N}}\mathfrak{z}^{i}\) _is a graded superalgebra (_\(\mathfrak{z}^{i}\mathfrak{z}^{j}\subset\mathfrak{z}^{i+j}\)_),_
2. \(\mathbf{Z}\) _and_ \(\mathrm{gr}(\mathbf{Z})\) _are isomorphic._
## 2. Null-filiform Zinbiel superalgebras
**Theorem 14**.: _Let \(\mathbf{Z}\) be an \(n\)-dimensional null-filiform Zinbiel superalgebra with \(\dim(\mathbf{Z}_{\bar{1}})\neq 0.\) Then \(\mathbf{Z}\) is isomorphic to the following superalgebra which occurs only for the cases \(\dim(\mathbf{Z}_{\bar{0}})=\dim(\mathbf{Z}_{\bar{1}})\) and \(\dim(\mathbf{Z}_{\bar{1}})=\dim(\mathbf{Z}_{\bar{0}})+1:\)_
\[e_{2k+1}e_{2l} = C_{k+l}^{k}e_{2k+2l+1},\quad e_{2k}e_{2l} = C_{k+l-1}^{l}e_{2k+2l},\quad e_{2k+1}e_{2l+1} = C_{k+l}^{l}e_{2k+2l+2},\]
_where \(e_{2k},e_{2l}\in\mathbf{Z}_{\bar{0}}\) and \(e_{2k+1},e_{2l+1}\in\mathbf{Z}_{\bar{1}}\)._
Proof.: It is clear, that each null-filiform Zinbiel superalgebra is one-generated. If it is generated by an even element, then it has zero odd part. Hence, we can suppose that our superalgebra is generated by an odd element \(e_{1}\) and then, in the same way, as for null-filiform Zinbiel algebras, we can consider:
\[e_{2}:=e_{1}e_{1},\quad e_{3}:=e_{1}(e_{1}e_{1}),\quad\ldots,\quad e_{n}:=e_{1}( e_{1}\ldots(e_{1}(e_{1}e_{1}))).\]
Let us remark that the elements above are linearly independent. This latter fact allows us to regard \(\{e_{1},e_{2},\ldots,e_{n}\}\) as a basis of the superalgebra \(\mathbf{Z}\) being, \(e_{2k+1}\) odd basis vectors and \(e_{2k}\) even ones. Moreover, we have only two possibilities: either \(\dim(\mathbf{Z}_{\bar{0}})=\dim(\mathbf{Z}_{\bar{1}})\) or \(\dim(\mathbf{Z}_{\bar{1}})=\dim(\mathbf{Z}_{\bar{0}})+1\).
Let us note that by construction we have
\[e_{1}e_{i}=e_{i+1}. \tag{1}\]
Now, we prove by induction
\[e_{2k}e_{1}=0\text{ and }e_{2k+1}e_{1}=e_{1}e_{2k+1}=e_{2k+2}. \tag{2}\]
For \(k=1\) the equations hold by considering the following Zinbiel superidentity:
\[e_{2}e_{1}=(e_{1}e_{1})e_{1}=e_{1}(e_{1}e_{1})-e_{1}(e_{1}e_{1})=0\]
and then
\[e_{3}e_{1}=(e_{1}e_{2})e_{1}=e_{1}(e_{2}e_{1})+e_{1}(e_{1}e_{2})=e_{1}e_{3}\]
Suppose that the equations hold for \(k\), thus
\[e_{2k+2}e_{1}=(e_{1}e_{2k+1})e_{1}=e_{1}(e_{2k+1}e_{1})-e_{1}(e_{1}e_{2k+1})=0\]
and then
\[e_{2k+3}e_{1}=(e_{1}e_{2k+2})e_{1}=e_{1}(e_{2k+2}e_{1})+e_{1}(e_{1}e_{2k+2})=e_ {1}e_{2k+3}\]
and therefore we have equations (2) for \(k+1\). Following we prove, also by induction, the equations
\[e_{2k}e_{2}=ke_{2k+2},\;e_{2k+1}e_{2}=(k+1)e_{2k+3},\;e_{2}e_{2k}=e_{2k+2},\;e _{2}e_{2k+1}=0. \tag{3}\]
The equations hold for \(k=1\) by considering the following Zinbiel superidentities:
\[\begin{array}{cccccccc}e_{2}e_{2}&=&(e_{1}e_{1})e_{2}&=&e_{1}(e_{1}e_{2})+e _{1}(e_{2}e_{1})&=&e_{1}e_{3}&=&e_{4},\\ e_{3}e_{2}&=&(e_{1}e_{2})e_{2}&=&e_{1}(e_{2}e_{2})+e_{1}(e_{2}e_{2})&=&2e_{5}, \\ 0&=&(e_{2}e_{1})e_{2}&=&e_{2}(e_{1}e_{2})+e_{2}(e_{2}e_{1})&=&e_{2}e_{3},\\ \end{array}\]
supposing that the equations hold for \(k\) we get the equations for \(k+1\):
\[\begin{array}{cccccccc}e_{2(k+1)}e_{2}&=&(e_{1}e_{2k+1})e_{2}&=&e_{1}(e_{2 k+1}e_{2})+e_{1}(e_{2}e_{2k+1})&=&(k+1)e_{2(k+1)+2},\\ e_{2}e_{2(k+1)}&=&(e_{1}e_{1})e_{2k+2}&=&e_{1}(e_{1}e_{2k+2})+e_{1}(e_{2k+2}e_{ 1})&=&e_{2(k+1)+2},\\ e_{2(k+1)+1}e_{2}&=&(e_{1}e_{2k+2})e_{2}&=&e_{1}(e_{2k+2}e_{2})+e_{1}(e_{2}e_{2 k+2})&=&((k+1)+1)e_{2(k+1)+3},\\ 0&=&(e_{2}e_{1})e_{2k+2}&=&e_{2}(e_{1}e_{2k+2})+e_{2}(e_{2k+2}e_{1})&=&e_{2}e_{2 (k+1)+1}.\\ \end{array}\]
Next, on account of equations (2) together with the fact that \(e_{2k}e_{2l}\) is a multiple of \(e_{2k+2l}\) we have
\[0=(e_{2k}e_{2l})e_{1}=e_{2k}(e_{2l}e_{1})+e_{2k}(e_{1}e_{2l})=e_{2k}e_{2l+1},\]
which leads to the equation \(e_{2k}e_{2l+1}=0\). Following we prove by induction the equation
\[e_{2k+1}e_{2l}=C^{k}_{k+l}e_{2k+2l+1}. \tag{4}\]
From the equality (1) we get equation (4) for \(k=0\) and every \(l\). Now after supposing that equation (4) holds for \(k\) and every \(l\), we obtain that it also holds for \(k+1\) and every \(l\) taking into account equality (3):
\[e_{2(k+1)+1}e_{2l}=\tfrac{1}{k+1}(e_{2k+1}e_{2})e_{2l}=\tfrac{1}{k+1}\big{(}e_ {2k+1}(e_{2}e_{2l})+e_{2k+1}(e_{2l}e_{2})\big{)}=\tfrac{1+l}{k+1}e_{2k+1}e_{2l+ 2},\]
but since \(e_{2k+1}e_{2l+2}=C^{k}_{k+l+1}e_{2k+2l+3}\), then
\[\tfrac{1+l}{k+1}C^{k}_{k+l+1}=\tfrac{(1+l)(k+l+1)!}{(k+1)k!(l+1)!}=\tfrac{(k+ l+1)!}{(k+1)!!}=C^{k+1}_{(k+1)+l},\]
which concludes the proof of equality (4). Similarly, we prove the equation
\[e_{2k+1}e_{2l+1}=C^{l}_{k+l}e_{2k+2l+2}. \tag{5}\]
The equality (1) leads to equation (5) for \(k=0\) and every \(l\), and after supposing that equation (5) holds for \(k\) and every \(l\), we obtain that it also holds for \(k+1\) and every \(l\) on account of equality (3):
\[e_{2(k+1)+1}e_{2l+1}=\tfrac{1}{k+1}(e_{2k+1}e_{2})e_{2l+1}=\tfrac{1}{k+1}\big{(} e_{2k+1}(e_{2}e_{2l+1})+e_{2k+1}(e_{2l+1}e_{2})\big{)}=\tfrac{1+l}{k+1}e_{2k+1}e_{2l+ 3},\]
but since \(e_{2k+1}e_{2l+3}=C^{l+1}_{k+l+1}e_{2k+2l+4}\), then
\[\tfrac{1+l}{k+1}C^{l+1}_{k+l+1}=\tfrac{(1+l)(k+l+1)!}{(k+1)k!(l+1)!}=\tfrac{( k+l+1)!}{(k+1)!l!}=C^{l}_{(k+1)+l},\]
which concludes the proof of equality (5). Finally, we prove the last equality
\[e_{2k}e_{2l}=C^{l}_{k+l-1}e_{2k+2l},\quad k\geq 1,\ l\geq 1. \tag{6}\]
The equality (3) leads to equation (6) for \(k=1\) and every \(l\), and after supposing that equation (6) holds for \(k\) and every \(l\), we obtain that it also holds for \(k+1\) and every \(l\) on account of equality (3):
\[e_{2(k+1)}e_{2l}=\tfrac{1}{k}(e_{2k}e_{2})e_{2l}=\tfrac{1}{k}(e_{2k}(e_{2}e_{ 2l})+e_{2k}(e_{2l}e_{2})\big{)}=\tfrac{1+l}{k}e_{2k}e_{2l+2}\]
but as \(e_{2k}e_{2l+2}=C^{l+1}_{k+l}e_{2k+2l+2}\), then
\[\tfrac{1+l}{k}C^{l+1}_{k+l}=\tfrac{(1+l)(k+l)!}{(k-1)!k(l+1)!}=\tfrac{(k+l)!} {k!l!}=C^{l}_{(k+1)+l-1},\]
which concludes the proof of equality (6) and also of the statement of the Theorem.
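As a sanity check on the multiplication table of Theorem 14, the Zinbiel super-identity can be verified numerically; the sketch below assumes the convention \(e_{k}=0\) for \(k>n\) and the product \(e_{2k}e_{2l+1}=0\) established in the proof.

```python
from math import comb
from itertools import product

n = 9  # basis e_1,...,e_9: odd indices span the odd part, even indices the even part

def c(a, b):
    """Coefficient of e_{a+b} in e_a e_b, following Theorem 14 (e_k = 0 for k > n)."""
    if a + b > n:
        return 0
    if a % 2 == 1 and b % 2 == 0:   # e_{2k+1} e_{2l}
        k, l = (a - 1) // 2, b // 2
        return comb(k + l, k)
    if a % 2 == 0 and b % 2 == 0:   # e_{2k} e_{2l}
        k, l = a // 2, b // 2
        return comb(k + l - 1, l)
    if a % 2 == 1 and b % 2 == 1:   # e_{2k+1} e_{2l+1}
        k, l = (a - 1) // 2, (b - 1) // 2
        return comb(k + l, l)
    return 0                        # e_{2k} e_{2l+1} = 0, as in the proof

parity = lambda a: a % 2            # e_a is odd exactly when its index is odd

for a, b, d in product(range(1, n + 1), repeat=3):
    lhs = c(a, b) * c(a + b, d)
    rhs = c(a, b + d) * (c(b, d) + (-1) ** (parity(b) * parity(d)) * c(d, b))
    assert lhs == rhs, (a, b, d)
print("Zinbiel super-identity verified for the null-filiform superalgebra, n =", n)
```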
## 3. Naturally graded filiform Zinbiel superalgebras
**Lemma 15**.: _Let \(\mathbf{Z}\) be a complex naturally graded filiform Zinbiel superalgebra with \(\dim(\mathbf{Z}_{\bar{0}})=n\), \(n\geq 5\) and \(\dim(\mathbf{Z}_{\bar{1}})=m\). Then, there are a basis \(\{e_{1},\ldots,e_{n}\}\) for \(\mathbf{Z}_{\bar{0}}\) and a basis \(\{f_{1},\ldots,f_{m}\}\) for \(\mathbf{Z}_{\bar{1}}\), for which we have the following multiplication table_
\[e_{i}e_{j} =C_{i+j-1}^{j}e_{i+j},\ \ 2\leq i+j\leq n-1,\] \[e_{1}f_{j} =f_{j+1},\
\[\mathfrak{nf}_{4} : \left\{\begin{array}{ll}e_{i}e_{j}=C^{j}_{i+j-1}e_{i+j},&2\leq i+j \leq n-1.\\ e_{i}f_{j}=\frac{\prod\limits_{k=0}^{i-2}(n-3+j+k)}{(i-1)!}f_{j+i},&f_{j}e_{i}= \frac{\prod\limits_{k=0}^{i-1}(n-4+j+k)}{i!}f_{j+i},&1\leq i\leq n-1.\\ f_{1}f_{n-2}=e_{n-1}.\end{array}\right.\] \[\mathfrak{nf}_{5} : \left\{\begin{array}{ll}e_{i}e_{j}=C^{j}_{i+j-1}e_{i+j},&2\leq i +j\leq n-1.\\ e_{i}f_{j}=\frac{\prod\limits_{k=0}^{i-2}(3-n+j+k)}{(i-1)!}f_{j+i},&f_{j}e_{i} =\frac{\prod\limits_{k=0}^{i-1}(2-n+j+k)}{i!}f_{j+i},&1\leq i\leq n-1.\\ e_{n}f_{n-2}=f_{n-1},&f_{1}f_{n-2}=e_{n-1}.\end{array}\right.\]
Proof.: We consider now, \(\mathbf{Z}=\mathbf{Z}_{\bar{0}}\oplus\mathbf{Z}_{\bar{1}}\) a complex naturally graded filiform Zinbiel superalgebra with \(\dim(\mathbf{Z}_{\bar{0}})=n\), \(n\geq 5\) and \(\dim(\mathbf{Z}_{\bar{1}})=m\) with \(m\geq 3\). Then, there exists a basis \(\{e_{1},\ldots,e_{n},f_{1},\ldots,f_{m}\}\) and we have one of the following possibilities depending on if \(m<n-1\), \(m=n-1\) or \(m>n-1\).
\[\underbrace{\langle e_{1},e_{n},f_{1}\rangle}_{\mathfrak{s}^{1}} \oplus \underbrace{\langle e_{2},f_{2}\rangle}_{\mathfrak{s}^{2}} \oplus \ldots \oplus \underbrace{\langle e_{m},f_{m}\rangle}_{\mathfrak{s}^{m}} \oplus \underbrace{\langle e_{m+1}\rangle}_{\mathfrak{s}^{m+1}} \oplus \ldots \oplus \underbrace{\langle e_{n-1}\rangle}_{\mathfrak{s}^{n-1}}\] \[\underbrace{\langle e_{1},e_{n},f_{1}\rangle}_{\mathfrak{s}^{1}} \oplus \underbrace{\langle e_{2},f_{2}\rangle}_{\mathfrak{s}^{2}} \oplus \ldots \oplus \underbrace{\langle e_{n-1},f_{n-1}\rangle}_{\mathfrak{s}^{n-1}} \oplus \underbrace{\langle f_{n}\rangle}_{\mathfrak{s}^{n}} \oplus \ldots \oplus \underbrace{\langle f_{m}\rangle}_{\mathfrak{s}^{m}}\]
We will study these three cases together. We consider \(f_{1}e_{1}=\alpha f_{2}\). By induction using the Zinbiel superidentity we obtain:
\[e_{i}f_{j} = \frac{\prod\limits_{k=0}^{i-2}(\alpha+j+k)}{(i-1)!}f_{j+i}, f_{j}e_{i} = \frac{\prod\limits_{k=0}^{i-1}(\alpha+j+k-1)}{i!}f_{j+i},\quad 1\leq i \leq n-1,\] \[f_{1}e_{n} = c_{1}f_{2}. e_{n}f_{j} = b_{j}f_{j+1},\quad\quad\quad\quad\quad\quad\quad\quad 1\leq j\leq m-1.\]
For every three homogeneous elements \(a,b,c\in\mathbf{Z}\), we define
\[\mathfrak{s}\mathfrak{I}\{a,b,c\}:=(ab)c-a\big{(}bc+(-1)^{|a||b|}cb\big{)}.\]
Now, applying the Zinbiel superidentity for basis elements gives the following relations.
\[\left\{\mathfrak{s}\mathfrak{I}\{e_{n},e_{1},f_{j}\}=0\right\}_{2 \leq j\leq m-1} \Rightarrow (\alpha+j-1)b_{j}=0\] \[\left\{\mathfrak{s}\mathfrak{I}\{e_{n},f_{j},e_{1}\}=0\right\}_{1 \leq j\leq m-2} \Rightarrow (\alpha+j)b_{j}=0\quad\quad\quad\quad\Rightarrow\boxed{b_{j}=0, \;2\leq j\leq m-2}\] \[\left\{\mathfrak{s}\mathfrak{I}\{f_{1},e_{n},e_{1}\}=0\right\} \Rightarrow (\alpha+1)c_{1}=0,\]
\[\left\{\mathfrak{s}\mathfrak{3}\{f_{2},e_{n},e_{1}\}=0\right\} \Rightarrow (\alpha+2)(c_{1}+b_{1})=0,\ \ \Rightarrow\boxed{c_{1}=-b_{1}}\]
thus, \(f_{1}e_{n}=-b_{1}f_{2}\) and \(f_{j}e_{n}=0\) with \(2\leq j\leq m-1\). Then, we have the following products:
\[\begin{array}{rclrcl}e_{i}e_{j}&=&C_{i+j-1}^{j}e_{i+j},&&2\leq i+j\leq n-1,\\ e_{i}f_{j}&=&\frac{\prod\limits_{k=0}^{i-2}(\alpha+j+k)}{(i-1)!}f_{j+i},&f_{j}e_{i}&=&\frac{\prod\limits_{k=0}^{i-1}(\alpha+j+k-1)}{i!}f_{j+i},\quad 1\leq i\leq n-1,\\ e_{n}f_{1}&=&b_{1}f_{2},&&e_{n}f_{m-1}&=&b_{m-1}f_{m},\\ f_{1}e_{n}&=&-b_{1}f_{2},&&f_{j}e_{n}&=&0,&2\leq j\leq m-1,\end{array}\]
with \((\alpha+1)b_{1}=(\alpha+m-2)b_{m-1}=0\).
Finally, only rest to compute the products \(f_{i}f_{j}=h_{ij}e_{i+j}\). We compute them by induction:
* **Step 1:** We have: \[\begin{array}{rclrcl}f_{1}f_{1}&=&h_{11}e_{2},\\ f_{2}f_{1}&=&(e_{1}f_{1})f_{1}=e_{1}(f_{1}f_{1})-e_{1}(f_{1}f_{1})=0,\\ f_{1}f_{2}&=&h_{12}e_{3}.\end{array}\] Moreover, we have \[\begin{array}{rclrcl}\mathfrak{s}\mathfrak{3}\{f_{1},f_{1},e_{1}\}=0& \Rightarrow&2h_{11}=(a+1)h_{12},\\ \mathfrak{s}\mathfrak{3}\{f_{1},e_{1},f_{1}\}=0&\Rightarrow&(\alpha+1)h_{12 }=0\end{array}\ \ \Rightarrow\ \ h_{11}=(a+1)h_{12}=0.\]
* **Step 2:** Now, we can write: \[\begin{array}{rclrcl}f_{1}f_{1}&=&f_{2},&&f_{2}f_{1}&=&0,\\ f_{1}f_{2}&=&h_{12}e_{3},&&f_{1}f_{3}&=&h_{13}e_{4},\\ f_{2}f_{2}&=&(e_{1}f_{1})f_{2}&=&e_{1}(f_{1}f_{2})-e_{1}(f_{2}f_{1})&=&h_{12}( e_{1}e_{3})&=&h_{12}e_{4},\\ f_{3}f_{1}&=&(e_{1}f_{2})f_{1}&=&e_{1}(f_{2}f_{1})-e_{1}(f_{1}f_{2})&=&-h_{12}( e_{1}e_{3})&=&-h_{12}e_{4},\end{array}\] with \((\alpha+1)h_{12}=0\). Applying the Zinbiel superidentity, we get: \[\begin{array}{rclrcl}\mathfrak{s}\mathfrak{3}\{f_{1},f_{2},e_{1}\}=0& \Rightarrow&3h_{12}=(\alpha+2)h_{13},\\ \mathfrak{s}\mathfrak{3}\{f_{1},e_{1},f_{2}\}=0&\Rightarrow&(\alpha+2)h_{13 }=\alpha h_{12}.\end{array}\] Therefore, joining the following equations: \[\left.\begin{array}{rclrcl}(\alpha+1)h_{12}&=&0\\ 3h_{12}&=&(\alpha+2)h_{13}\\ (\alpha+2)h_{13}&=&\alpha h_{12}\end{array}\right\}\Rightarrow h_{12}=0,\ (\alpha+2)h_{13}=0.\]
* **Step 3:** We suppose that \[\begin{array}{rclcl}f_{i}f_{k+1-i}&=&0,&&2\leq i\leq k,\\ f_{1}f_{k}&=&h_{1k}e_{k+1},\\ f_{1}f_{k+1}&=&h_{1\,k+1}e_{k+2},\\ f_{k+1}f_{1}&=&(e_{1}f_{k})f_{1}&=&e_{1}(f_{k}f_{1})-e_{1}(f_{1}f_{k})=-h_{1k}( e_{1}e_{k+1})=-h_{1k}e_{k+2},\\ f_{2}f_{k}&=&(e_{1}f_{1})f_{k}&=&e_{1}(f_{1}f_{k})-e_{1}(f_{k}f_{1})=h_{1k}(e_{1} e_{k+1})=h_{1k}e_{k+2},\\ f_{k}f_{2}&=&(e_{1}f_{k-1})f_{2}&=&e_{1}(f_{k-1}f_{2})-e_{1}(f_{2}f_{k-1})=0,\\ f_{i}f_{k+2-i}&=&(e_{1}f_{i-1})f_{k+2-i}&=&e_{1}(f_{i-1}f_{k+2-i})-e_{1}(f_{k+2 -i}f_{i-1})=0,\end{array}\] with \((\alpha+k-1)h_{1k}=0\). Now, we apply the Zinbiel superidentity obtaining: \[\begin{array}{rclcl}\mathfrak{s}\mathfrak{3}\{f_{1},f_{k},e_{1}\}=0& \Rightarrow&(k+1)h_{1k}&=&(\alpha+k)h_{1\,k+1},\\ \mathfrak{s}\mathfrak{3}\{f_{1},e_{1},f_{k}\}=0&\Rightarrow&(\alpha+k)h_{1\, k+1}&=&ah_{1k}.\end{array}\] Therefore, joining the following equations: \[\left.\begin{array}{rcl}(\alpha+k-1)h_{1k}&=&0\\ (k+1)h_{1k}&=&(\alpha+k)h_{1\,k+1}\\ (\alpha+k)h_{1\,k+1}&=&\alpha h_{1k}\end{array}\right\}\Rightarrow h_{1k}=0, \;(\alpha+k)h_{1\,k+1}=0.\]
This process is finite, then we get \(f_{1}f_{m}=h_{1m}e_{m+1}\) (the rest \(f_{i}f_{j}=0\)) with \((\alpha+m-1)h_{1m}=0\), if \(m<n-2\) and \(f_{1}f_{n-2}=h_{1\,n-2}e_{n-1}\) (the rest \(f_{i}f_{j}=0\)) with \((\alpha+n-3)h_{1\,n-2}=0\) if \(m\geq n-2\).
In the case \(m<n-2\): \(\mathfrak{s}\mathfrak{3}\{f_{1},f_{m},e_{1}\}=0\) gives \(h_{1m}=0\).
Thus, the Zinbiel superalgebra has the following multiplication table:
\[\begin{array}{rclclcl}e_{i}e_{j}&=&C^{j}_{i+j-1}e_{i+j},&&2\leq i+j\leq n-1,\\ e_{i}f_{j}&=&\frac{\prod\limits_{k=0}^{i-2}(\alpha+j+k)}{(i-1)!}f_{j+i},&f_{j}e_{i}&=&\frac{\prod\limits_{k=0}^{i-1}(\alpha+j+k-1)}{i!}f_{j+i},&1\leq i\leq n-1,\\ e_{n}f_{1}&=&b_{1}f_{2},&&e_{n}f_{m-1}&=&b_{m-1}f_{m},&m\geq n-2\\ f_{1}e_{n}&=&-b_{1}f_{2},&&f_{1}f_{n-2}&=&he_{n-1},\end{array}\]
with \((\alpha+1)b_{1}=(\alpha+m-2)b_{m-1}=0\) and \((\alpha+n-3)h=0\) if \(m\geq n-2\). Now, we consider \(m>3\) and \(m<n-2\). We can distinguish the following cases:
**Case 1:**: \(b_{1}\neq 0\). In this case, we have \(\alpha=-1\) and \(b_{m-1}=0\). If \(m\geq n-2,\) we also have that \(h=0\). Then we obtain the Zinbiel superalgebra \(\mathfrak{nf}_{1}\).
**Case 2:**: \(b_{1}=0\) and \(b_{m-1}=0\). In this case, we have a family of Zinbiel superalgebras \(\mathfrak{nf}_{2}^{\alpha}\).
**Case 3:**: \(b_{1}=0\) and \(b_{m-1}\neq 0\). Then, \(\alpha=2-m\). We have the superalgebra \(\mathfrak{nf}_{3}\).
Now, we consider \(m>3\) and \(m\geq n-2.\) Analyzing the above cases, we have a new superalgebra in **Case 2** (when \(\alpha=n-3\) and \(h\neq 0,\) we can consider \(h=1\)), \(\mathfrak{nf}_{4},\) and another in **Case 3** (when \(m=n-1,\)\(\alpha=3-n,\) and \(h\neq 0\), we can consider \(h=1\)), \(\mathfrak{nf}_{5}.\)
Next, we study the particular case \(m=3.\) We consider now, \(\mathbf{Z}\) a complex naturally graded filiform Zinbiel superalgebra with \(\dim(\mathbf{Z}_{\bar{0}})=n\), \(n\geq 5\) and \(\dim(\mathbf{Z}_{\bar{1}})=3\). Then, there exists a basis \(\{e_{1},\ldots,e_{n},f_{1},f_{2},f_{3}\}\) and we have the gradation:
\[\underbrace{\langle e_{1},e_{n},f_{1}\rangle}_{\mathfrak{z}^{1}}\ \ \oplus\ \ \underbrace{\langle e_{2},f_{2}\rangle}_{\mathfrak{z}^{2}}\ \ \oplus\ \ \underbrace{\langle e_{3},f_{3}\rangle}_{\mathfrak{z}^{3}}\ \ \oplus\ \ \underbrace{\langle e_{4}\rangle}_{\mathfrak{z}^{4}}\ \ \oplus\ \ \underbrace{\langle e_{5}\rangle}_{\mathfrak{z}^{5}}\ \ \oplus\ \ \ldots\]
**Theorem 17**.: _Let \(\mathbf{Z}\) be a complex naturally graded filiform Zinbiel superalgebra with \(\dim(\mathbf{Z}_{\bar{0}})=n\), \((n\geq 5),\) and \(\dim(\mathbf{Z}_{\bar{1}})=3.\) Then \(\mathbf{Z}\) is isomorphic to one of the following superalgebras:_
* _If_ \(n>5,\) _then_ \(\mathbf{Z}\) _is isomorphic either to_ \(\mathfrak{nf}_{1},\)__\(\mathfrak{nf}_{2}^{\alpha},\)__\(\mathfrak{nf}_{3},\) _or_ \[\mathfrak{a}_{1}:\left\{\begin{array}{ll}e_{i}e_{j}=C^{j}_{i+j-1}e_{i+j},&2 \leq i+j\leq n-1,\\ e_{1}f_{1}=f_{2},&e_{1}f_{2}=f_{3},&f_{1}e_{1}=-f_{2},\\ e_{n}f_{1}=f_{2},&e_{n}f_{2}=f_{3},&f_{1}e_{n}=-f_{2}.\end{array}\right.\]
* _If_ \(n=5,\) _then_ \(\mathbf{Z}\) _is isomorphic either to_ \(\mathfrak{nf}_{1},\)__\(\mathfrak{nf}_{2}^{\alpha\neq-2},\)__\(\mathfrak{nf}_{3},\)__\(\mathfrak{a}_{1}\) _or_ \[\mathfrak{a}_{2}:\left\{\begin{array}{ll}e_{i}e_{j}=C^{j}_{i+j-1}e_{i+j},&2 \leq i+j\leq n-1,\\ f_{1}e_{1}=-2f_{2},&f_{2}e_{1}=-f_{3},&f_{1}e_{2}=f_{3},\\ e_{1}f_{1}=f_{2},&e_{1}f_{2}=f_{3},&e_{2}f_{1}=-f_{2},&f_{1}f_{n-2}=e_{n-1}. \end{array}\right.\]
Proof.: Similar to general case (see the proof of Theorem 16) we get that \(\mathbf{Z}\) is isomorphic to
## 4. Classification of low-dimensional complex Zinbiel superalgebras
**Lemma 18**.: _Given a \(n\)-dimensional Zinbiel superalgebra \(\mathbf{Z}\) of type \((n-1,1)\), i.e. \(\dim(\mathbf{Z}_{\bar{0}})=n-1\) and \(\dim(\mathbf{Z}_{\bar{1}})=1\), then \(\mathbf{Z}_{\bar{0}}\mathbf{Z}_{\bar{1}}=\mathbf{Z}_{\bar{1}}\mathbf{Z}_{\bar{0 }}=\{0\}\). Moreover, \(\mathbf{Z}_{\bar{1}}\mathbf{Z}_{\bar{1}}\) is a subspace of \(\operatorname{Ann_{L}}\left(\mathbf{Z}_{\bar{0}}\right)\)._
Proof.: Let \(e_{1},\ldots,e_{n-1}\) be a basis of \(\mathbf{Z}_{\bar{0}}\) and \(f_{1}\) a basis of \(\mathbf{Z}_{\bar{1}}\). Denote by \(e_{i}f_{1}=a_{i}f_{1}\) and \(f_{1}e_{i}=b_{i}f_{1}\), for \(a_{i},b_{i}\in\mathbb{C}\) and \(i=1,\ldots,n-1\). On the one hand, for \(i=1,\ldots,n-1\) we have
\[(e_{i}f_{1})e_{i}=e_{i}(f_{1}e_{i})+e_{i}(e_{i}f_{1});\quad a_{i}b_{i}f_{1}=a_ {i}b_{i}f_{1}+a_{i}^{2}f_{1}.\]
Hence, we have \(a_{i}=0\) and \(\mathbf{Z}_{\bar{0}}\mathbf{Z}_{\bar{1}}=\{0\}\).
On the other hand, since \(\mathbf{Z}_{\bar{0}}\) is a Zinbiel algebra, it is nilpotent. Suppose \(\mathbf{Z}_{\bar{0}}^{s}=0\) and proceed by induction. For \(x\in\mathbf{Z}_{\bar{0}}^{s-1}\), we have
\[(f_{1}x)x=f_{1}(xx+xx)=2f_{1}(xx),\]
and since \(xx\in\mathbf{Z}_{\bar{0}}^{s}=\{0\}\), we obtain \((f_{1}x)x=0\), which implies \(f_{1}x=0\), and we may write \(f_{1}\mathbf{Z}_{\bar{0}}^{s-1}=\{0\}\). Now, suppose \(f_{1}\mathbf{Z}_{\bar{0}}^{s-k+1}=\{0\}\), then for \(x\in\mathbf{Z}_{\bar{0}}^{s-k}\), we have \((f_{1}x)x=2f_{1}(xx)=0\), because \(xx\in\mathbf{Z}_{\bar{0}}^{s-k+1}\). Therefore, \(f_{1}x=0\) and we have \(f_{1}\mathbf{Z}_{\bar{0}}^{s-k}=\{0\}\). For \(k=s-1\), we conclude that \(\mathbf{Z}_{\bar{1}}\mathbf{Z}_{\bar{0}}^{1}=\mathbf{Z}_{\bar{1}}\mathbf{Z}_{ \bar{0}}=\{0\}\).
Further, we have \((f_{1}f_{1})x=f_{1}(f_{1}x)+f_{1}(xf_{1})=0\) for \(x\in\mathbf{Z}_{\bar{0}}\). Therefore, \((f_{1}f_{1})\in\operatorname{Ann_{L}}\left(\mathbf{Z}_{\bar{0}}\right)\) and \(\mathbf{Z}_{\bar{1}}\mathbf{Z}_{\bar{1}}\) is a subspace of \(\operatorname{Ann_{L}}\left(\mathbf{Z}_{\bar{0}}\right)\), where \(\operatorname{Ann_{L}}\left(\mathbf{Z}_{\bar{0}}\right)=\{x\in A:x\mathbf{Z}_{\bar{0}}=0\}\).
The converse is a straightforward verification.
**Remark 19**.: _Given a \((n-1)\)-dimensional Zinbiel algebra \(\mathbf{Z}_{\bar{0}}\). If we construct a superalgebra \(\mathbf{Z}\) such that \(\mathbf{Z}_{\bar{0}}\mathbf{Z}_{\bar{1}}=\mathbf{Z}_{\bar{1}}\mathbf{Z}_{\bar{0 }}=\{0\}\) and such that \(\mathbf{Z}_{\bar{1}}\mathbf{Z}_{\bar{1}}\) is a subspace of \(\operatorname{Ann_{L}}\left(\mathbf{Z}_{\bar{0}}\right)\). Then, \(\mathbf{Z}\) is a Zinbiel superalgebra of type \((n-1,1)\)._
**Remark 20**.: _Given a non-zero \(n\)-dimensional Zinbiel superalgebra \(\mathbf{Z}\) of type \((n-1,1)\) such that \(\mathbf{Z}_{\bar{0}}\) is the zero algebra, it is isomorphic to \(\mathbf{Z}_{n,0}:f_{1}f_{1}=e_{1}\), simply by choosing \(\phi:\mathbf{Z}\to\mathbf{Z}_{n,0}\) such that \(\phi(f_{1}f_{1})=e_{1}\). The classification of the \(2\)-dimensional Zinbiel superalgebras follows from this statement._
We recover the classification of the \(n\)-dimensional Zinbiel algebras [2, 4], as it will be required.
**Theorem 21**.: _Given an \(n\)-dimensional, for \(n\leq 3\), non-trivial Zinbiel algebra, then it is isomorphic to only one of the following_
* _If_ \(n=2\)_, then it is_ \(\mathfrak{Z}_{2,1}:e_{1}e_{1}=e_{2}.\)
* _If_ \(n=3\)_, then it is isomorphic to_ \(\mathfrak{Z}_{3,1}=\mathfrak{Z}_{2,1}\oplus\mathbb{C}\) _or to_
* \(\mathfrak{Z}_{3,3}:e_{1}e_{2}=e_{3},e_{2}e_{1}=-e_{3}.\)__
* \(\mathfrak{Z}_{3,4}:e_{1}e_{1}=e_{3},e_{1}e_{2}=e_{3},e_{2}e_{2}=\beta e_{3}.\)__
* \(\mathfrak{Z}_{3,5}:e_{1}e_{1}=e_{3},e_{1}e_{2}=e_{3},e_{2}e_{1}=e_{3}.\)__
In our classification, we will not consider non-proper superalgebras. So type \((n,0)\), which corresponds to Zinbiel algebras, and type \((0,n)\) (zero algebra) are omitted. Also, we omit the superalgebras with \(\mathfrak{Z}_{0}\mathfrak{Z}_{1}=\mathfrak{Z}_{1}\mathfrak{Z}_{0}=\mathfrak{Z}_{ 1}^{2}=0\), as they are split algebras.
### 3-dimensional Zinbiel superalgebras
#### 4.1.1. (1, 2) superalgebras
Let \(\{e_{1},f_{1},f_{2}\}\) be a basis of a superalgebra of \(\mathbf{Z}=\mathbf{Z}_{\bar{0}}\oplus\mathbf{Z}_{\bar{1}}\). Since \(\mathbf{Z}_{\bar{0}}\) is the trivial one dimensional algebra, we have the following multiplication table for \(\mathbf{Z}\):
\[e_{1}f_{i}=a_{i}^{1}f_{1}+a_{i}^{2}f_{2},f_{i}e_{1}=b_{i}^{1}f_{1}+b_{i}^{2}f_{ 2},f_{i}f_{j}=c_{ij}e_{1},\]
where \(1\leq i,j\leq 2\). We derive the equations on the structure constants by checking the superidentity case by case:
\[\mathfrak{s}\mathfrak{Z}\{e_{1},e_{1},f_{1}\} = 0 \Rightarrow (a_{1}^{1})^{2}+a_{1}^{1}b_{1}^{1}+a_{1}^{2}a_{2}^{1}+a_{2}^{1}b _{1}^{2}=0\text{ and }a_{1}^{1}a_{1}^{2}+a_{1}^{2}a_{2}^{2}+a_{1}^{2}b_{1}^{1}+a_{2}^{2}b_{1}^{2}=0\] \[\mathfrak{s}\mathfrak{Z}\{e_{1},e_{1},f_{2}\} = 0 \Rightarrow a_{1}^{1}a_{2}^{1}+a_{1}^{1}b_{2}^{1}+a_{2}^{1}a_{2}^{2}+a_{2}^{ 1}b_{2}^{2}=0\text{ and }a_{1}^{2}a_{2}^{1}+a_{2}^{2}b_{2}^{1}+(a_{2}^{2})^{2}+a_{2}^{2}b_{2}^{2}=0\] \[\mathfrak{s}\mathfrak{Z}\{e_{1},f_{1},e_{1}\} = 0 \Rightarrow (a_{1}^{1})^{2}+a_{1}^{2}a_{2}^{1}-a_{1}^{2}b_{1}^{1}+a_{2}^{1}b _{1}^{2}=0\] \[\text{ and }a_{1}^{1}a_{2}^{2}-a_{1}^{1}b_{1}^{1}+a_{1}^{2}a_{2}^{2}+a_{ 1}^{2}b_{1}^{1}-a_{1}^{2}b_{2}^{2}+a_{2}^{2}b_{1}^{2}=0\] \[\mathfrak{s}\mathfrak{Z}\{e_{1},f_{2},e_{1}\} = 0 \Rightarrow a_{1}^{1}a_{2}^{1}+a_{1}^{1}b_{2}^{1}+a_{2}^{1}a_{2}^{2}-a_{2}^{ 1}b_{1}^{1}+a_{2}^{1}b_{2}^{2}-a_{2}^{2}b_{2}^{2}=0\] \[\text{ and }a_{1}^{2}a_{2}^{1}+a_{1}^{2}b_{2}^{1}-a_{2}^{1}b_{1}^{ 2}+(a_{2}^{2})^{2}=0\] \[\mathfrak{s}\mathfrak{Z}\{e_{1},f_{1},f_{1}\} = 0 \Rightarrow a_{1}^{1}c_{11}+a_{1}^{2}c_{21}=0\] \[\mathfrak{s}\mathfrak{Z}\{e_{1},f_{1},f_{2}\} = 0 \Rightarrow a_{1}^{1}c_{12}+a_{1}^{2}c_{22}=0\] \[\mathfrak{s}\mathfrak{Z}\{e_{1},f_{2},f_{2}\} = 0 \Rightarrow a_{2}^{1}c_{11}+a_{2}^{2}c_{21}=0\] \[\mathfrak{s}\mathfrak{Z}\{e_{1},f_{2},f_{2}\} = 0 \Rightarrow a_{2}^{1}c_{12}+a_{2}^{2}c_{22}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{1},e_{1}\} = 0 \Rightarrow (b_{1}^{1})^{2}+b_{1}^{2}b_{2}^{1}=0\text{ and }b_{1}^{1}b_{1}^{2}+b_{1}^{2}b_{2}^{2}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{2},e_{1},e_{1}\} = 0 \Rightarrow b_{1}^{1}b_{2}^{1}+b_{2}^{1}b_{2}^{2}=0\text{ and }b_{1}^{2}b_{1}^{2}+(b_{2}^{2})^{2}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{1},e_{1},f_{1}\} = 0 \Rightarrow a_{1}^{1}c_{11}+a_{1}^{2}c_{12}+b_{1}^{2}c_{12}-b_{1}^{2}c_{21}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{1},e_{1},f_{2}\} = 0 \Rightarrow a_{2}^{1}c_{11}+a_{2}^{2}c_{12}-b_{1}^{1}c_{12}-b_{1}^{2}c_{22}+b_{ 2}^{1}c_{11}+b_{2}^{2}c_{12}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{2},e_{1},f_{1}\} = 0 \Rightarrow a_{1}^{1}c_{21}+a_{1}^{2}c_{22}+b_{1}^{1}c_{21}+b_{1}^{2}c_{22}-b_{ 2}^{1}c_{11}-b_{2}^{2}c_{21}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{2},e_{1},f_{2}\} = 0 \Rightarrow a_{2}^{1}c_{21}+a_{2}^{2}c_{22}-b_{2}^{2}c_{12}+b_{ 2}^{2}c_{21}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{1},f_{1},e_{1}\} = 0 \Rightarrow a_{1}^{1}c_{11}+a_{1}^{2}c_{12}+b_{1}^{1}c_{11}+b_{1}^{2}c_{12}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{2},f_{1},e_{1}\} = 0 \Rightarrow a_{1}^{1}c_{21}+a_{1}^{2}c_{22}+b_{1}^{1}c_{21}+b_{ 2}^{2}c_{22}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{2},f_{1},e_{1}\} = 0 \Rightarrow a_{1}^{1}c_{21}+a_{1}^{2}c_{22}+b_{1}^{1}c_{21}+b_{ 1}^{2}c_{22}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{2},f_{1}\} = 0 \Rightarrow a_{2}^{1}c_{21}+a_{2}^{2}c_{22}+b_{1}^{1}c_{21}+b_{ 2}^{2}c_{22}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{2},f_{1}\} = 0 \Rightarrow a_{2}^{1}c_{21}+a_{2}^{2}c_{22}+b_{2}^{1}c_{21}+b_{ 2}^{2}c_{22}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{1},f_{1},f_{1}\} = 0 \Rightarrow a_{1}^{1}c_{11}=0\text{ and }a_{1}^{2}c_{11}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{1},f_{1},f_{2}\} = 0 \Rightarrow a_{2}^{1}c_{11}-b_{1}^{1}c_{12}+b_{1}^{1}c_{21}=0\text{ and }a_{2}^{2}c_{11}-b_{1}^{2}c_{12}+b_{ 1}^{2}c_{21}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{1},f_{2},f_{1}\} = 0 \Rightarrow a_{1}^{1}c_{12}+b_{1}^{1}c_{12}-b_{1}^{1}c_{21}=0\text{ and }a_{1}^{2}c_{12}+b_{1}^{2}c_{12}-b_{1}^{2}c_{21}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{1},f_{2},f_{2}\} = 0 \Rightarrow a_{2}^{1}c_{12}=0\text{ and }a_{2}^{2}c_{12}=0\] 
\[\mathfrak{s}\mathfrak{Z}\{f_{2},f_{1},f_{2}\} = 0 \Rightarrow a_{2}^{1}c_{21}-b_{2}^{1}c_{12}+b_{2}^{1}c_{21}=0\text{ and }a_{2}^{2}c_{21}-b_{2}^{2}c_{12}+b_{2}^{2}c_{21}=0\] \[\mathfrak{s}\mathfrak{Z}\{f_{2},f_{1}\} = 0 \Rightarrow a_{1}^{1}c_{22}+b_{2}^{1}c_{12}-\cdots\]
**(a)**: \(f_{1}f_{1}=\lambda_{11}e_{1},\;\;f_{1}f_{2}=\lambda_{12}e_{1},\;\;f_{2}f_{1}= \lambda_{21}e_{1},\;\;f_{2}f_{2}=\lambda_{22}e_{1}.\)
**(b)**: \(f_{1}e_{1}=\mu f_{1}-\frac{\lambda_{11}}{\lambda_{12}}\mu f_{2},\;\;f_{2}e_{1}=\frac{\lambda_{12}}{\lambda_{11}}\mu f_{1}-\mu f_{2},\;\;f_{1}f_{1}=\lambda_{11}e_{1},\;\;f_{1}f_{2}=\lambda_{12}e_{1},\;\;\ldots\)
* If \(\lambda_{11}=0\), \(\lambda_{12}\neq\lambda_{21}\), and (\(\lambda_{12}\neq-\lambda_{21}\) or \(\lambda_{22}\neq 0\)), choose the map \[\phi(e_{1})=\frac{\lambda_{22}}{(\lambda_{12}-\lambda_{21})^{2}}e_{1}\text{, }\phi(e_{2})=\frac{\lambda_{21}}{\lambda_{12}-\lambda_{21}}e_{2}+e_{3}\text{ and }\phi(e_{3})=\frac{\lambda_{22}}{\lambda_{12}-\lambda_{21}}\text{,}\] to obtain \(\mathbf{z}_{3,1}^{\alpha}\).
* If \(\lambda_{11}=0\) and \(\lambda_{12}=\lambda_{21}\neq 0\), then we have \(\mathbf{z}_{3,2}\).
* If \(\lambda_{11}=\lambda_{12}=\lambda_{21}=0\), then we obtain \[\mathbf{z}_{3,3}:f_{1}f_{1}=e_{1},\] choosing \(\phi(e_{1})=\lambda_{22}^{-1}e_{1}\), \(\phi(e_{2})=e_{3}\) and \(\phi(e_{3})=e_{2}\).
* If \(\lambda_{11}=\lambda_{22}=0\) and \(\lambda_{12}=-\lambda_{21}\), choose the map \[\phi(e_{1})=\lambda_{12}^{-1}e_{1}\text{, }\phi(e_{2})=e_{2}\text{ and }\phi(e_{3})=e_{3}\] to obtain \[\mathbf{z}_{3,4}:f_{1}f_{2}=e_{1},\quad f_{2}f_{1}=-e_{1}.\]
**Case (e)** is equivalent to one of the following, depending on the parameters.
* If \(\mu\neq 0\) and \(\mu^{\prime}\neq 0\), then the map given by \[\phi(e_{1})=\mu^{\prime-1}e_{1}\text{, }\phi(e_{2})=e_{2}\text{ and }\phi(e_{3})=\mu^{-1}\mu^{\prime-1}e_{3}\] shows that it is isomorphic to \[\mathbf{z}_{3,5}:f_{1}e_{1}=f_{2},\quad f_{1}f_{1}=e_{1}.\]
* If \(\mu=0\) and \(\mu^{\prime}\neq 0\), then, choosing \(\phi(e_{1})=\mu^{\prime-1}e_{1}\), \(\phi(e_{2})=e_{2}\) and \(\phi(e_{3})=e_{3}\), we have \(\mathbf{z}_{3,3}\).
* If \(\mu\neq 0\) and \(\mu^{\prime}=0\), choose \(\phi(e_{1})=e_{1}\), \(\phi(e_{2})=e_{2}\) and \(\phi(e_{3})=\mu^{-1}e_{3}\) to obtain \[\mathbf{z}_{3,6}:f_{1}e_{1}=f_{2}.\]
**Case (g)** is equivalent to one of the following, depending on the parameters.
* If \(\mu^{\prime}\neq 0\), then, with \(\phi\) such that \(\phi(e_{1})=e_{1}\), \(\phi(e_{2})=e_{2}\) and \(\phi(e_{3})=\mu^{\prime-1}e_{3}\), we have \[\mathbf{z}_{3,7}:e_{1}f_{1}=\alpha f_{2},f_{1}e_{1}=f_{2}.\]
* If \(\mu^{\prime}=0\), choosing \(\phi(e_{1})=e_{1}\), \(\phi(e_{2})=e_{2}\) and \(\phi(e_{3})=\mu^{-1}e_{3}\) then we obtain \[\mathbf{z}_{3,8}:e_{1}f_{1}=f_{2}.\]
#### 4.1.2. (2, 1) superalgebras
* Even part \(\mathfrak{Z}_{2,0}\). Then, by Remark 20, we have \(\mathbf{z}_{3,0}\).
* Even part \(\mathfrak{Z}_{2,1}\). By Lemma 18 and Remark 19, we have that every superalgebra constructed on \(\mathfrak{Z}_{2,1}\) is of the form \[e_{1}e_{1}=e_{2},\quad f_{1}f_{1}=\lambda_{2}e_{2}.\] Choose the linear map \(\phi\) such that \(\phi(e_{1})=\lambda_{2}^{-\frac{1}{2}}e_{1},\phi(e_{2})=\lambda_{2}^{-1}e_{2}, \phi(e_{3})=e_{3},\) to obtain the superalgebra \[\mathbf{z}_{3,9}:e_{1}e_{1}=e_{2},f_{1}f_{1}=e_{2}.\]
Summing up, we have the classification of the \(3\)-dimensional Zinbiel superalgebras.
**Theorem 23**.: _Given a \(3\)-dimensional complex non-split Zinbiel superalgebra \(\mathbf{Z}\), then it is isomorphic to a \(3\)-dimensional Zinbiel algebra or to only one of the following._
\[\begin{array}{cccccccccccc}\mathbf{z}_{3,1}^{\alpha}&:&f_{1}f_{1}&=&e_{1}&f_{2 }f_{1}&=&e_{1}&f_{2}f_{2}&=&\alpha e_{1}\\ \mathbf{z}_{3,2}&:&f_{1}f_{1}&=&e_{1}&f_{2}f_{2}&=&e_{1}\\ \mathbf{z}_{3,3}&:&f_{1}f_{1}&=&e_{1}&\\ \mathbf{z}_{3,4}&:&f_{1}f_{2}&=&e_{1}&f_{2}f_{1}&=&-e_{1}\\ \mathbf{z}_{3,5}&:&f_{1}e_{1}&=&f_{2}&f_{1}f_{1}&=&e_{1}\\ \mathbf{z}_{3,6}&:&f_{1}e_{1}&=&f_{2}&\\ \mathbf{z}_{3,7}&:&e_{1}f_{1}&=&\alpha f_{2}&f_{1}e_{1}&=&f_{2}\\ \mathbf{z}_{3,8}&:&e_{1}f_{1}&=&f_{2}&\\ \mathbf{z}_{3,9}&:&e_{1}e_{1}&=&e_{2}&f_{1}f_{1}&=&e_{2}.\end{array}\]
## 5. Finite-dimensional Zinbiel superalgebras are nilpotent
It is well-known that finite-dimensional Zinbiel algebras are nilpotent over an arbitrary field [30] (also see [17] for context). It is a natural question to wonder if this is also true in the case of Zinbiel superalgebras. Note, for instance, that all the 3-dimensional Zinbiel superalgebras are nilpotent (Theorem 23). It turns out that the answer is positive, as we will see in this section.
**Definition 24**.: _Given an algebra \(\mathbf{Z}\) we define the right annihilator of an element \(a\in\mathbf{Z}\) as the set_
\[RC(a)=\left\{x\in\mathbf{Z}:ax=0\right\}.\]
**Lemma 25**.: _Given a right-commutative superalgebra \(\mathbf{Z}\), then for homogeneous elements \(a_{1},a_{2}\in\mathbf{Z}\), we have \(RC(a_{1})\subseteq RC(a_{1}a_{2})\)._
Proof.: Let \(x\in RC(a_{1})\) and write \(x=x_{0}+x_{1}\) with \(x_{i}\in\mathbf{Z}_{\bar{i}}\). Then, since \(a_{1}\) is homogeneous, \(a_{1}x=a_{1}x_{0}+a_{1}x_{1}=0\) implies \(a_{1}x_{0}=0\) and \(a_{1}x_{1}=0\). Hence, we have
\[(a_{1}a_{2})x=(a_{1}a_{2})x_{0}+(a_{1}a_{2})x_{1}=(a_{1}x_{0})a_{2}+(-1)^{|a_ {2}|}(a_{1}x_{1})a_{2}=0.\]
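For instance, a quick check in the superalgebra \(\mathbf{z}_{3,3}\) of Theorem 23 (whose only non-zero product is \(f_{1}f_{1}=e_{1}\)) gives
\[RC(f_{1})=\langle e_{1},f_{2}\rangle\subsetneq RC(f_{1}f_{1})=RC(e_{1})=\mathbf{Z},\]
so the inclusion in Lemma 25 can be strict.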
The first key lemma of this section is the following.
**Lemma 26**.: _Given a finite-dimensional Zinbiel superalgebra \(\mathbf{Z}\), there exists a homogeneous element \(e\) such that \(e\mathbf{Z}=0\)._
Proof.: Since \(\mathbf{Z}_{\bar{0}}\) is a Zinbiel algebra, it is nilpotent. Assume it has nilpotency index \(N\); then there is some non-zero element \(e_{0}\in\mathbf{Z}_{\bar{0}}^{N-1}\) such that \(e_{0}\mathbf{Z}_{\bar{0}}=\mathbf{Z}_{\bar{0}}e_{0}=0\). Construct \(e\) as follows.
1. Fix \(e=e_{0}\).
2. If there is some \(x\in\mathbf{Z}_{\bar{1}}\) such that \(ex\neq 0\), set \(e_{0}=ex\). Then \(x\in RC(e_{0})\), by the Zinbiel superidentity. Otherwise, set \(e=e_{0}\) and finish the iteration.
3. Repeat from (1).
Note that the element obtained in each iteration is homogeneous, so by Lemma 25, in each iteration, the right annihilator becomes bigger. Also, since the algebra is finite-dimensional, this process is finite, as it is enough to run it for a basis of \(\mathbf{Z}_{\bar{1}}\). So we conclude \(RC(e)=\mathbf{Z}\), that is \(e\mathbf{Z}=0\).
**Lemma 27**.: _Given \(I\) a right ideal of a Zinbiel superalgebra \(\mathbf{Z}\), then \(\mathbf{Z}I\) is an ideal._
Proof.: We have \(\mathbf{Z}(\mathbf{Z}I)\subseteq\mathbf{Z}^{2}I+\mathbf{Z}(I\mathbf{Z}) \subseteq\mathbf{Z}I\) and \((\mathbf{Z}I)\mathbf{Z}\subseteq\mathbf{Z}^{2}I\subseteq\mathbf{Z}I\).
The next result follows by the previous lemmas.
**Lemma 28**.: _Any Zinbiel superalgebra of dimension \(n>1\) has a proper graded ideal._
Proof.: Given a finite-dimensional Zinbiel superalgebra \(\mathbf{Z}\), by Lemma 26, there exists an element \(e\in\mathbf{Z}_{\bar{i}}\), for \(i=0\) or \(i=1\), such that \(e\mathbf{Z}=0\). Now, if \(\mathbf{Z}e=0\), then the vector space generated by \(e\) is a proper graded ideal. Otherwise, if \(\mathbf{Z}e\neq 0\), choose \(I=\mathbf{Z}e\); then, since the linear span of \(e\) is a right ideal, \(I\) is an ideal by Lemma 27.
To show that it is a proper ideal, we have to prove that its dimension is lower than \(n\). Choose a basis \(e_{1},e_{2},\ldots,e_{n}\) of \(\mathbf{Z}\) such that \(e_{1}=e\); then the ideal \(I\) is linearly generated by the elements \(e_{1}e=0,e_{2}e,\ldots,e_{n}e\), and therefore it has dimension at most \(n-1\).
The ideal \(I\) is graded as a consequence of \(e\) being homogeneous.
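For instance, in \(\mathbf{z}_{3,5}\) from Theorem 23, where \(f_{1}e_{1}=f_{2}\) and \(f_{1}f_{1}=e_{1}\) are the only non-zero products, the even part is spanned by \(e_{1}\) and \(e_{1}\mathbf{Z}=0\), so \(e=e_{1}\) is an element as in Lemma 26; since \(\mathbf{Z}e_{1}=\langle f_{2}\rangle\neq 0\), the ideal produced by the proof of Lemma 28 is \(I=\mathbf{Z}e_{1}=\langle f_{2}\rangle\), a one-dimensional proper graded ideal.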
Now, we can prove the first main result of this section.
**Lemma 29**.: _Any finite-dimensional Zinbiel superalgebra is solvable._
Proof.: Let \(\mathbf{Z}\) be a finite-dimensional Zinbiel superalgebra. We proceed by induction on the dimension \(n\). If \(n=1\), we have the trivial one-dimensional algebra, which is solvable. Now, if \(n>1\) and the statement is true up to dimension \(n-1\), then \(\mathbf{Z}\) has a proper graded ideal \(I\) by Lemma 28, so \(I\) and \(\mathbf{Z}/I\) are Zinbiel superalgebras of dimension lower than \(n\), and therefore they are solvable. Hence, \(\mathbf{Z}\) itself is solvable.
**Proposition 30**.: _Let \(I\) be a minimal ideal of a finite-dimensional Zinbiel superalgebra \(\mathbf{Z}\). Let \(J\) be a minimal right ideal of \(\mathbf{Z}\) such that \(J\subseteq I\). Then \(I=J\)._
Proof.: The proof is identical to the proof of [30, Proposition 2.2].
**Corollary 31**.: _Let \(I\) be a minimal ideal of a finite-dimensional Zinbiel superalgebra \(\mathbf{Z}\), then we have \(I\mathbf{Z}=\mathbf{Z}I=0\). Hence, we have \(\dim I=1\)._
Proof.: The proof is identical to the proof of [30, Corollary 2.3].
Observe that the previous result implies that any finite-dimensional Zinbiel superalgebra has a non-trivial annihilator. However, this is not enough to prove that any finite-dimensional Zinbiel superalgebra is nilpotent; we also need the next straightforward remark.
**Remark 32**.: _Let \(I\) be a minimal ideal of a Zinbiel superalgebra \(\mathbf{Z}\). Suppose it is generated by some element \(e=e_{0}+e_{1}\in\mathbf{Z}\), where \(e_{i}\in\mathbf{Z}_{\bar{i}}\). Note that \(e\in\mathrm{Ann}(\mathbf{Z})\). Then \(e\mathbf{Z}_{\bar{i}}=0\) implies \(e_{0}\mathbf{Z}_{\bar{i}}=0\) and \(e_{1}\mathbf{Z}_{\bar{i}}=0\) (resp. \(\mathbf{Z}_{\bar{i}}e=0\) implies \(\mathbf{Z}_{\bar{i}}e_{0}=0\) and \(\mathbf{Z}_{\bar{i}}e_{1}=0\)), and we have \(e_{i}\in\mathrm{Ann}(\mathbf{Z})\). Moreover, \(I_{i}=\langle e_{i}\rangle\) is an ideal. Furthermore, if \(e_{i}\neq 0\), then \(I_{i}\) is a proper graded ideal. Hence, we have the following corollary._
**Corollary 33**.: _Any finite-dimensional Zinbiel superalgebra \(\mathbf{Z}\) has a minimal ideal which is graded. Moreover, there exists a homogeneous element \(e\in\mathbf{Z}\) such that \(e\in\operatorname{Ann}(\mathbf{Z})\)._
Finally, Corollary 33 enables us to prove the main result of this section.
**Theorem 34**.: _Any finite-dimensional Zinbiel superalgebra is nilpotent._
Proof.: Let \(\mathbf{Z}\) be a finite-dimensional Zinbiel superalgebra. We proceed by induction on the dimension \(n\). If \(n=1\), we have the trivial one-dimensional algebra, which is nilpotent. Now, suppose we have that any finite-dimensional Zinbiel superalgebra is nilpotent up to dimension \(n-1\) for \(n>1\). Since \(\mathbf{Z}\) has a graded ideal \(I\) of dimension one (generated by a homogeneous element) such that \(\mathbf{Z}I=I\mathbf{Z}=0\), then \(\mathbf{Z}/I\) is a Zinbiel superalgebra of dimension \(n-1\), and so it is nilpotent. Therefore, \(\mathbf{Z}\) is nilpotent.
|
2301.04813 | On effective log Iitaka fibrations and existence of complements | We study the relationship between Iitaka fibrations and the conjecture on the
existence of complements, assuming the good minimal model conjecture. In one
direction, we show that the conjecture on the existence of complements implies
the effective log Iitaka fibration conjecture. As a consequence, the effective
log Iitaka fibration conjecture holds in dimension $3$. In the other direction,
for any Calabi-Yau type variety $X$ such that $-K_X$ is nef, we show that $X$
has an $n$-complement for some universal constant $n$ depending only on the
dimension of $X$ and two natural invariants of a general fiber of an Iitaka
fibration of $-K_X$. We also formulate the decomposable Iitaka fibration
conjecture, a variation of the effective log Iitaka fibration conjecture which
is closely related to the structure of ample models of pairs with non-rational
coefficients, and study its relationship with the forestated conjectures. | Guodu Chen, Jingjun Han, Jihao Liu | 2023-01-12T04:52:48Z | http://arxiv.org/abs/2301.04813v1 | # On effective log Iitaka fibrations and existence of complements
###### Abstract.
We study the relationship between Iitaka fibrations and the conjecture on the existence of complements, assuming the good minimal model conjecture. In one direction, we show that the conjecture on the existence of complements implies the effective log Iitaka fibration conjecture. As a consequence, the effective log Iitaka fibration conjecture holds in dimension \(3\). In the other direction, for any Calabi-Yau type variety \(X\) such that \(-K_{X}\) is nef, we show that \(X\) has an \(n\)-complement for some universal constant \(n\) depending only on the dimension of \(X\) and two natural invariants of a general fiber of an Iitaka fibration of \(-K_{X}\). We also formulate the decomposable Iitaka fibration conjecture, a variation of the effective log Iitaka fibration conjecture which is closely related to the structure of ample models of pairs with non-rational coefficients, and study its relationship with the forestated conjectures.
2020 Mathematics Subject Classification: 14C20,14E05,14E30,14J30
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Canonical bundle formulas
* 4 Non-vanishing orders, middle Betti numbers, and complements
* 5 Effective Iitaka fibrations
* 6 Existence of decomposable Iitaka fibrations
## 1. Introduction
We work over the field of complex numbers \(\mathbb{C}\).
Let \(X\) be a smooth projective variety with non-negative Kodaira dimension. By a well-known construction of Iitaka, there exists a birational morphism \(X_{\infty}\to X\) from a smooth projective variety \(X_{\infty}\), and a contraction \(f_{\infty}:X_{\infty}\to Z_{\infty}\) onto a projective variety \(Z_{\infty}\), such that a very general fiber of \(f_{\infty}\) is smooth with Kodaira dimension zero, and \(\dim Z_{\infty}=\kappa(X,K_{X})\). The morphism \(f_{\infty}:X_{\infty}\to Z_{\infty}\) is referred to as an _Iitaka fibration_ of \(K_{X}\) (see Definition 2.20). It is conjectured that the pluricanonical system \(|mK_{X}|\) defines a
map which is birational to an Iitaka fibration whenever the positive integer \(m\) is divisible by a positive integer depending only on the dimension of \(X\):
**Conjecture 1.1** (Effective Iitaka fibration, cf. [16, Conjecture 1.7]).: _Let \(d\) be a positive integer. Then there exists a positive integer \(m_{d}\) depending only on \(d\), such that for any smooth projective variety \(X\) of dimension \(d\) with non-negative Kodaira dimension, \(|mK_{X}|\) defines an Iitaka fibration for any positive integer \(m\) divisible by \(m_{d}\)._
Conjecture 1.1 was proved when \(K_{X}\) is big [16, 17] (see also [18]), when \(d=2\) due to Enriques (see also [19]), and when \(d=3\)[19, 20, 21]. An important progress towards Conjecture 1.1 is made in [1], where the authors showed that there exists a positive integer \(m\) depending only on \(d\) and two natural invariants of the very general fibers of an Iitaka fibration of \(K_{X}\) (the non-vanishing order and the middle Betti number), such that \(|mK_{X}|\) defines an Iitaka fibration. Unfortunately, we don't know the boundedness of the middle Betti numbers in dimension \(\geq 3\), which leaves Conjecture 1.1 open in dimension \(\geq 4\).
In practice, it is also natural to consider the following generalized version of Conjecture 1.1, which is known as the _effective log Iitaka fibration conjecture_.
**Conjecture 1.2** (Effective log Iitaka fibration, cf. [20, Conjecture 1.1]).: _Let \(d\) be a positive integer and \(\Gamma\subset[0,1]\) a DCC set. Then there exists a positive integer \(m\) depending only on \(d\) and \(\Gamma\) satisfying the following._
_Assume that \((X,B)\) is an lc pair of dimension \(d\) such that \(B\in\Gamma\) and \(\kappa(X,K_{X}+B)\geq 0\). Then \(|\lfloor m(K_{X}+B)\rfloor|\) defines a map which is birational to an Iitaka fibration of \(K_{X}+B\)._
Conjecture 1.2 was proved when \(K_{X}+B\) is big [16, Theorem 1.3]. When \(\Gamma\subset[0,1]\cap\mathbb{Q}\) and \((X,B)\) is klt, Conjecture 1.2 was proved when \(d\leq 2\)[19], when \(d\leq 3\) and \(\kappa(X,K_{X})>0\)[20, 19], and when \(d=4\) and \(\kappa(X,K_{X})=2\)[20].
Recently, the theory of _complements_, which was introduced by Shokurov in the study of the existence of flips for threefolds [21], has gradually become one of the major topics in birational geometry. This theory has played an important role in the proof of the BAB conjecture [1, 18], the proof of the singular Yau-Tian-Donaldson conjecture and the stable degeneration conjecture [22, 23, 24, 25], and recent studies on Shokurov's ascending chain condition conjecture for minimal log discrepancies [16, 21, 22, 23]. For other related work, we refer the readers to [21, 20, 23, 24, 25, 26, 27, 28, 29]. Although the existence of \(n\)-complements [18, 20] is settled for Fano type varieties, it is natural to consider the existence of \(n\)-complements for pairs admitting an lc Calabi-Yau structure, that is, _\(\mathbb{R}\)-complementary varieties_. We have the following conjecture:
**Conjecture 1.3** (Existence of complements).: _Let \(d\) be a positive integer and \(\Gamma\subset[0,1]\) a DCC set. Then there exists a positive integer \(n\) depending only on \(d\) and \(\Gamma\) satisfying the following._
_Assume that \((X/Z\ni z,B)\) is an \(\mathbb{R}\)-complementary pair of dimension \(d\) such that \(B\in\Gamma\), then \((X/Z\ni z,B)\) has an \(n\)-complement \((X/Z\ni z,B^{+})\). Moreover, if the closure of \(\Gamma\) belongs to \([0,1]\cap\mathbb{Q}\), then we can pick \(B^{+}\geq B\)._
**Relationship between the conjectures**. It is interesting to ask whether we can establish some connections between Conjectures 1.1, 1.2 and Conjecture 1.3. At first glance, it is difficult to observe their relationships, as these conjectures are considering structures of varieties and pairs with completely different positivity properties: Conjectures 1.1, 1.2 are concentrating on pairs with positive canonical bundle (i.e., \(K_{X}+B\) is effective), while Conjecture 1.3 is concentrating on varieties with negative canonical bundle (i.e., \(-(K_{X}+B)\) is effective). Surprisingly, we have the following theorems, which show that these conjectures are actually deeply related with each other in multiple directions.
_From complements to Iitaka fibrations_. First, we prove that Conjecture 1.3 implies Conjecture 1.2 assuming the good minimal model conjecture.
**Theorem 1.4**.: _Let \(d\) be a positive integer. Assume that the good minimal model conjecture (Conjecture 2.11) and the existence of complements (Conjecture 1.3) hold in dimension \(d\). Then the effective log Iitaka fibration conjecture (Conjecture 1.2) holds in dimension \(d\)._
As an immediate corollary, we prove Conjecture 1.2 in dimension \(\leq 3\):
**Corollary 1.5**.: _Conjecture 1.2 holds when \(d\leq 3\)._
_From Iitaka fibrations to complements_. Next, we show that the existence of \(n\)-complements is deeply related to some invariants associated to Iitaka fibrations. We have the following theorem:
**Theorem 1.6**.: _Let \(d,b,\) and \(\beta\) be three positive integers. Assume that the good minimal model conjecture (Conjecture 2.11) holds in dimension \(d\). Then there exists a positive integer \(n\) depending only on \(d,b,\) and \(\beta\) satisfying the following._
_Assume that \(X\) is a \(\mathbb{Q}\)-factorial normal projective variety of dimension \(d\), \(f_{\infty}:X_{\infty}\to Z_{\infty}\) is an Iitaka fibration of \(-K_{X}\), \(h:X_{\infty}\to X\) is the induced birational morphism, and \(F\) is a general fiber of \(f_{\infty}\). Suppose that_
1. \(-K_{X}\) _is nef,_
2. \(X\) _has a klt_ \(\mathbb{R}\)_-complement,_
3. \(\kappa(X,-K_{X})\geq 0\)_, and_ \(b\) _is the non-vanishing order of_ \(-h^{*}K_{X}|_{F}\)_, i.e.,_ \[b=\min\left\{a\in\mathbb{Z}_{>0}\mid|-ah^{*}K_{X}|_{F}|\neq\emptyset\right\},\] _and_
4. \(\beta:=\dim H^{\dim F}(\tilde{F},\mathbb{C})\)_, where_ \(\tilde{F}\) _is a smooth model of the cover of_ \(F\) _associated to the unique divisor of_ \(|-bh^{*}K_{X}|_{F}|\)_._
_Then \(X\) has an \(n\)-complement._
We remark that the assumptions on \(b\) and \(\beta\) in Theorem 1.6 are natural assumptions, and are exactly the additional assumptions in [1, Theorem 1.2] on the effective Iitaka fibration conjecture. We also note that, modulo the good minimal model conjecture, the boundedness of \(b\) follows immediately from the effective log Iitaka conjecture (Conjecture 1.2) for log pairs with Iitaka dimension \(0\).
**Decomposable Iitaka fibrations**. For any semi-ample divisor \(D\), the ample model of \(D\) is clearly birational to an Iitaka fibration of \(D\). However, if \(D\) is only assumed to be an
\(\mathbb{R}\)-divisor, then it is possible that \(D\) does not have any Iitaka fibration although the ample model of \(D\) exists [12, Example 1.2]. An approach to resolve this issue is to define the _invariant Iitaka fibration_ (see Definition 6.1). A question arises naturally: _do we expect any kind of uniform effectivity on invariant Iitaka fibrations for log pairs_, similar to the effective log Iitaka fibration conjecture?
The question is expected to have a positive answer. More precisely, suppose that \((X,B)\) is an lc pair such that \(K_{X}+B\) induces a map \(X\dasharrow Z\) that is birational to an invariant Iitaka fibration of \(K_{X}+B\). Then we expect that \(X\dasharrow Z\) will actually be birational to an Iitaka fibration of \(K_{X}+B^{\prime}\) after we (uniformly) perturb the coefficients of the boundary \(B\) to get a new boundary \(B^{\prime}\). Therefore, the effective log Iitaka fibration conjecture should induce some kind of uniform effectivity on an invariant Iitaka fibration induced by \(K_{X}+B\).
The difficulty is to show that we can make a uniform perturbation to switch the boundary \(B\) to a new boundary \(B^{\prime}\). In [12, Theorem 1.1], the authors prove a weaker version, which shows that, modulo the non-vanishing conjecture, there exists such a uniform perturbation such that the Iitaka dimension of \(K_{X}+B^{\prime}\) is equal to the invariant Iitaka dimension of \(K_{X}+B\). In this paper, we will show that an Iitaka fibration of \(K_{X}+B^{\prime}\) actually coincides with an invariant Iitaka fibration of \(K_{X}+B\). To make our statements more clear, we introduce the concept of _decomposable Iitaka fibrations_.
**Definition 1.7** (Decomposable Iitaka fibrations).: Let \(\Gamma_{0}:=\{a_{1},\ldots,a_{k}\}\subset(0,1]\) be a finite set such that \(\sum_{i=1}^{k}a_{i}=1\), and \(\Gamma^{\prime}\subset[0,1]\) a set. Assume that \((X,B)\) is an lc pair such that \(\kappa_{\iota}(X,K_{X}+B)\geq 0\). We say that \((X,B)\) has a _\((\Gamma_{0},\Gamma^{\prime})\)-decomposable Iitaka fibration_ if there exist \(\mathbb{R}\)-divisors \(B_{1},\ldots,B_{k}\in\Gamma^{\prime}\) such that
1. \(B=\sum_{i=1}^{k}a_{i}B_{i}\),
2. \((X,B_{i})\) is lc for any \(1\leq i\leq k\), and
3. any invariant Iitaka fibration of \(K_{X}+B\) is an Iitaka fibration of \(K_{X}+B_{i}\) for any \(1\leq i\leq k\).
In addition, if
1. the map defined by \(|\lfloor m(K_{X}+B_{i})\rfloor|\) is birational to any invariant Iitaka fibration of \(K_{X}+B\) for any integer \(1\leq i\leq k\),
then we say that \((X,B)\) has an _\((m,\Gamma_{0},\Gamma^{\prime})\)-decomposable Iitaka fibration_.
As an analogue of the effective log Iitaka fibration conjecture, we propose the following conjecture on the existence of decomposable Iitaka fibrations. Roughly speaking, the conjecture indicates that we can uniformly perturb an invariant Iitaka fibration to get an effective log Iitaka fibration.
**Conjecture 1.8** (Decomposable Iitaka fibrations).: _Let \(d\) be a positive integer and \(\Gamma\subset[0,1]\) a DCC set. Then there exist a positive integer \(m\), a finite set \(\Gamma_{0}\subset(0,1]\), and a DCC set \(\Gamma^{\prime}\subset[0,1]\) depending only on \(d\) and \(\Gamma\) satisfying the following. Assume that \((X,B)\) is a \(\mathbb{Q}\)-factorial lc pair of dimension \(d\) such that \(B\in\Gamma\) and \(\kappa_{\iota}(X,K_{X}+B)\geq 0\). Then:_
1. _(Weak version)_ \((X,B)\) _has a_ \((\Gamma_{0},\Gamma^{\prime})\)_-decomposable Iitaka fibration._
2. _(Strong version)_ \((X,B)\) _has an_ \((m,\Gamma_{0},\Gamma^{\prime})\)_-decomposable Iitaka fibration._
Conjecture 1.8 will help us to understand the structure of good minimal models and their ample models for pairs with real coefficients. We show that Conjecture 1.8 follows from the non-vanishing conjecture (Conjecture 2.12) and the effective log Iitaka fibration conjecture (Conjecture 1.2).
**Theorem 1.9**.: _Let \(d\) be a positive integer. Assume that the non-vanishing conjecture (Conjecture 2.12) holds in dimension \(d\). Then:_
1. _Conjecture_ 1.8(1) _holds in dimension_ \(d\)_._
2. _Assume that Conjecture_ 1.2 _holds in dimension_ \(d\)_. Then Conjecture_ 1.8(2) _holds in dimension_ \(d\)_._
As an immediate corollary, we have:
**Corollary 1.10**.: _Conjecture 1.8 holds when \(d\leq 3\)._
Combining Theorems 1.4 and 1.9, we show that Conjecture 1.8 follows from the good minimal model conjecture and the existence of complements.
**Theorem 1.11**.: _Let \(d\) be a positive integer. Assume that the good minimal model conjecture (Conjecture 2.11) and the existence of complements (Conjecture 1.3) hold in dimension \(d\). Then Conjecture 1.8 holds in dimension \(d\)._
Finally, we remark that we expect all theorems in our paper to hold in the relative setting. That is, instead of considering Iitaka fibrations for projective varieties and projective pairs, we may also consider Iitaka fibrations in the relative case (cf. [10, Definition 3.19]).
**Structure of the paper**. In Section 2, we introduce some notation and tools which will be used in this paper. In Section 3, we recall the canonical bundle formulas and prove Proposition 3.1. In Section 5, we prove Theorem 1.4. In Section 4, we prove Theorem 1.6. In Section 6, we introduce invariant Iitaka fibrations and prove Theorems 1.9 and 1.11.
**Acknowledgement**. The second and the third named authors began this work when they worked on [11] in Summer 2020. Part of this work was done while the first named author visited Chuyu Zhou at EPFL in Summer 2022. He would like to thank their hospitality. The authors would like to thank Qianyu Chen, Christopher D. Hacon, Fei Hu, Chen Jiang, Junpeng Jiao, Zhan Li, Haidong Liu, Wenfei Liu, Yuchen Liu, Yujie Luo, Fanjun Meng, Lingyao Xie, Qingyuan Xue, and Chuyu Zhou for valuable discussions and suggestions. The authors would like to thank Enrica Floris for answering questions about Theorem 1.6. The first named author was supported by the China post-doctoral grants BX2021269 and 2021M702925. The second named author was supported by National Key Research and Development Program of China (Grant No. 2020YFA0713200). The second named author is a member of LMNS, Fudan University.
## 2. Preliminaries
We adopt the standard notation and definitions in [13, 1] and will freely use them.
### Divisors
**Definition 2.1**.: Let \(\Gamma\subset\mathbb{R}\) be a set. We say that \(\Gamma\) satisfies the _ascending chain condition_ (ACC) if any increasing sequence in \(\Gamma\) stabilizes. We say that \(\Gamma\) satisfies the _descending chain condition_ (DCC) if any decreasing sequence in \(\Gamma\) stabilizes.
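For example, the set \(\{1-\frac{1}{n}\mid n\in\mathbb{Z}_{\geq 1}\}\) satisfies the DCC (but not the ACC), the set \(\{\frac{1}{n}\mid n\in\mathbb{Z}_{\geq 1}\}\) satisfies the ACC (but not the DCC), and \([0,1]\cap\mathbb{Q}\) satisfies neither.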
**Definition 2.2** ([10, 3.2], [11, 2.2]).: Let \(\mathfrak{R}\subset[0,1]\cap\mathbb{Q}\) be a finite set, we define
\[\Phi(\mathfrak{R}):=\left\{1-\frac{\gamma}{n}\mid\gamma\in\mathfrak{R},n\in \mathbb{Z}_{\geq 1}\right\}.\]
We say that a set \(\Gamma\subset[0,1]\) is a _hyperstandard set_ if there exists a finite set \(\mathfrak{R}\subset[0,1]\cap\mathbb{Q}\) such that \(0,1\in\mathfrak{R}\) and \(\Gamma=\Phi(\mathfrak{R})\).
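For example, taking \(\mathfrak{R}=\{0,1\}\) gives \(\Phi(\mathfrak{R})=\left\{1-\frac{1}{n}\mid n\in\mathbb{Z}_{\geq 1}\right\}\cup\{1\}=\left\{0,\frac{1}{2},\frac{2}{3},\frac{3}{4},\ldots\right\}\cup\{1\}\), the set of standard coefficients.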
**Definition 2.3**.: We say \(f:X\to Z\) is a _contraction_ if \(f\) is a projective morphism, and \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{Z}\). We say that a birational map \(\phi:X\dasharrow Y\) is a _birational contraction_ if \(\phi\) is projective and \(\phi^{-1}\) does not contract any divisors.
**Definition 2.4**.: Let \(\Gamma\subset\mathbb{R}\) be a set, \(X\) a variety, and \(B:=\sum_{i=1}^{s}b_{i}B_{i}\) an \(\mathbb{R}\)-divisor on \(X\), where \(B_{i}\) are the irreducible components of \(B\). We write \(B\in\Gamma\) if \(b_{i}\in\Gamma\) for every \(i\). We define
\[|B|:=\max_{1\leq i\leq s}|b_{i}|,\lfloor B\rfloor:=\sum_{i=1}^{s}\lfloor b_{i} \rfloor B_{i},\text{ and }\{B\}:=\sum_{i=1}^{s}\{b_{i}\}B_{i}.\]
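For example, if \(B=\frac{3}{2}B_{1}+\frac{1}{2}B_{2}\) with \(B_{1},B_{2}\) distinct prime divisors, then \(|B|=\frac{3}{2}\), \(\lfloor B\rfloor=B_{1}\), \(\{B\}=\frac{1}{2}B_{1}+\frac{1}{2}B_{2}\), and \(B\in\Gamma\) if and only if \(\frac{3}{2},\frac{1}{2}\in\Gamma\).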
We may denote by \(\mathcal{K}(X)\) the rational function field of \(X\).
Let \(f:X\to Z\) be a contraction between normal quasi-projective varieties and \(D\) an \(\mathbb{R}\)-divisor on \(X\). We say that \(D\) is _vertical_ over \(Z\) if \(f(\operatorname{Supp}D)\) is a proper subset of \(Z\). We say that \(D\) is _horizontal_ over \(Z\) if \(D\) is not vertical over \(Z\). We may uniquely write \(D=D^{h}+D^{v}\) such that \(D^{v}\) is vertical over \(Z\) and each component of \(D^{h}\) is horizontal over \(Z\). We call \(D^{h}\) the _horizontal\(/Z\) part_ of \(D\) and \(D^{v}\) the _vertical\(/Z\) part_ of \(D\).
**Lemma 2.5**.: _Let \(f:X\to Z\) be a contraction between normal quasi-projective varieties and \(L\) a Cartier divisor on \(X\) such that \(L\sim_{Z}0\). Suppose either \(L\) is vertical over \(Z\) or \(L\geq 0\). Then there is a Cartier divisor \(L_{Z}\) on \(Z\) such that \(L=f^{*}L_{Z}\)._
Proof.: If \(L\geq 0\), then \(L\) is vertical over \(Z\). Thus it suffices to prove the statement under the condition that \(L\) is vertical over \(Z\). By assumption there exist a rational function \(s\in\mathcal{K}(X)\) and a Cartier divisor \(L_{Z}^{\prime}\) on \(Z\) such that \(L-f^{*}L_{Z}^{\prime}=(s).\) Since \(L\) is vertical over \(Z\), we may find an open subset \(V\subset Z\) such that \((s)|_{U}\geq 0\), where \(U:=f^{-1}V\subset X\). In particular, \(s\in\mathcal{O}_{X}(U)\) and thus \(s\in\mathcal{O}_{Z}(V)\) as \(f\) is a contraction and \(\mathcal{O}_{Z}(V)=(f_{*}\mathcal{O}_{X})(V)=\mathcal{O}_{X}(U)\). Hence \(s\in\mathcal{K}(V)=\mathcal{K}(Z)\hookrightarrow\mathcal{K}(X)\) and \(\phi(s)=s\), where \(\phi:\mathcal{K}(Z)\hookrightarrow\mathcal{K}(X)\). Thus \(f^{*}(s)=(\phi(s))=(s)\). Set \(L_{Z}:=L_{Z}^{\prime}+(s)\) and we are done.
**Lemma 2.6**.: _Let \(f:X\to Z\) be a contraction between normal quasi-projective varieties, \(D\) a divisor on \(X\), and \(U_{1},U_{2}\subset Z\) two open subsets such that \(D|_{f^{-1}(U_{i})}\sim_{U_{i}}0\) for \(i=1,2\). Then \(D|_{f^{-1}(U_{1}\cup U_{2})}\sim_{U_{1}\cup U_{2}}0.\)_
Proof.: Replacing \(Z\) by \(U_{1}\cup U_{2}\), we may assume that \(Z=U_{1}\cup U_{2}\). By assumption,
\[D|_{f^{-1}(U_{i})}=f_{i}^{*}D_{i}+(s_{i})\]
for some Cartier divisor \(D_{i}\) on \(U_{i}\) and \(s_{i}\in\mathcal{K}(X)\), where \(f_{i}:=f|_{f^{-1}(U_{i})}\) for \(i=1,2\). Let \(f_{12}:=f|_{f^{-1}(U_{1}\cap U_{2})}\). Then \((f_{12})^{*}(D_{1}-D_{2})=(\frac{s_{2}}{s_{1}})\) on \(f^{-1}(U_{1}\cap U_{2})\). By the projection formula, we have
\[\mathcal{O}_{U_{1}\cap U_{2}} \cong(f_{12})_{*}\left((\frac{s_{2}}{s_{1}})\mathcal{O}_{X}|_{f^ {-1}(U_{1}\cap U_{2})}\right)\] \[=(f_{12})_{*}(f_{12})^{*}\mathcal{O}_{U_{1}\cap U_{2}}(D_{1}-D_{ 2})\] \[=\mathcal{O}_{U_{1}\cap U_{2}}(D_{1}-D_{2}).\]
Thus \((D_{1}-D_{2})|_{U_{1}\cap U_{2}}=(s_{Z})\) and \((f_{12})^{*}(s_{Z})=(\frac{s_{2}}{s_{1}})\) over \(U_{1}\cap U_{2}\) for some \(s_{Z}\in\mathcal{K}(Z)\). In particular, \((\frac{s_{2}}{s_{1}})-f^{*}(s_{Z})\) is vertical over \(Z\). Note that \((\frac{s_{2}}{s_{1}})-f^{*}(s_{Z})\sim_{Z}0\). By Lemma 2.5, there exists a Cartier divisor \(L_{Z}\) on \(Z\) such that
\[(\frac{s_{2}}{s_{1}})-f^{*}(s_{Z})=f^{*}L_{Z}.\]
Moreover, we have \(\operatorname{Supp}L_{Z}\cap(U_{1}\cap U_{2})=\emptyset\), and \(L_{Z}+(s_{Z})=D_{1}-D_{2}\) on \(U_{1}\cap U_{2}\) as \((D_{1}-D_{2})|_{U_{1}\cap U_{2}}=(s_{Z})\). Hence there exists a Cartier divisor \(D^{\prime}\) on \(Z\), such that
\[D^{\prime}=D_{1}\text{ on }U_{1}\text{ \ and \ }D^{\prime}=D_{2}+(s_{Z})+L_{Z} \text{ on }U_{2}.\]
It follows that \(f^{*}D^{\prime}=f^{*}D_{1}=D-(s_{1})\) over \(U_{1}\) and \(f^{*}D^{\prime}=f^{*}(D_{2}+(s_{Z})+L_{Z})=D-(s_{2})+(\frac{s_{2}}{s_{1}})=D-( s_{1})\) over \(U_{2}\). Hence \(D=f^{*}D^{\prime}-(s_{1})\), and thus \(D\sim_{Z}0\).
### Pairs and singularities
**Definition 2.7** (Pairs, cf. [1, Definition 2.2]).: A _pair_\((X/Z\ni z,B)\) consists of a contraction \(f:X\to Z\) between normal quasi-projective varieties, a (not necessarily closed) point \(z\in Z\), and an \(\mathbb{R}\)-divisor \(B\geq 0\) on \(X\), such that \(K_{X}+B\) is \(\mathbb{R}\)-Cartier over a neighborhood of \(z\). We may also call it an _\(\mathbb{R}\)-pair_. If \(B\in\mathbb{Q}\), then we call \((X/Z\ni z,B)\) a _\(\mathbb{Q}\)-pair_. If \(f=id\) and \(z=x\in X\), then we may use \((X\ni x,B)\) instead of \((X/Z\ni z,B)\). If \((X/Z\ni z,B)\) (resp., \((X\ni x,B)\)) is a pair for any point \(z\in Z\) (resp., \(x\in X\)), then we call \((X/Z,B)\) (resp., \((X,B)\)) a pair.
**Definition 2.8** (Singularities of pairs).: Let \((X/Z\ni z,B)\) be a pair associated with the contraction \(f:X\to Z\), and let \(E\) be a prime divisor over \(X\) such that \(z\in f(\operatorname{center}_{X}E)\). Let \(g:Y\to X\) be a log resolution of \((X,B)\) such that \(\operatorname{center}_{Y}E\) is a divisor, and suppose that \(K_{Y}+B_{Y}=g^{*}(K_{X}+B)\) over a neighborhood of \(z\). We define \(a(E,X,B):=1-\operatorname{mult}_{E}B_{Y}\) to be the _log discrepancy_ of \(E\) with respect to \((X,B)\).
We say that a prime divisor \(E\) is _over_\(X/Z\ni z\) if \(E\) is a prime divisor \(E\) over \(X\) and \(f(\operatorname{center}_{X}E)=\bar{z}\). We say that \((X/Z\ni z,B)\) is _lc_ (resp., _klt_) if \(a(E,X,B)\geq 0\) (resp., \(>0\)) for any prime divisor \(E\) over \(X/Z\ni z\). We say that \((X/Z,B)\) is _lc_ (resp., _klt_) if \((X\ni x,B)\) is lc (resp., klt) for any codimension \(\geq 1\) point \(x\in X\). We say that \((X/Z,B)\) is _dlt_ if there exists a log resolution \(g:Y\to X\) of \((X,B)\) such that \(a(E,X,B)>0\) for any \(g\)-exceptional prime divisor \(E\).
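For instance, a standard computation shows that for \(X=\mathbb{A}^{2}\) and \(B=\{xy=0\}\), the exceptional divisor \(E\) of the blow-up of the origin satisfies \(a(E,X,B)=0\), so \((X\ni 0,B)\) is lc but not klt; if instead \(B=\frac{1}{2}\{xy=0\}\), then \((X\ni 0,B)\) is klt.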
**Lemma 2.9**.: _Assume that \((X/Z,B)\) is an lc pair and \(D\) is an \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor on \(X\) such that \(K_{X}+B\sim_{\mathbb{R},Z}D\) and \(D\) is vertical over \(Z\). Suppose that \(\psi:X\dashrightarrow X^{\prime}\) is a
partial \((K_{X}+B)\)-MMP over \(Z.\) Then there exists a proper open subset \(U\subset Z\) such that \(X_{z}\) is isomorphic to \(X_{z}^{\prime}\) for any \(z\in U\), where \(X_{z}\) (resp., \(X_{z}^{\prime}\)) is the fiber of \(X\to Z\) (resp., \(X^{\prime}\to Z\)) over \(z\)._
Proof.: Let \(f:X\to Z\) be the associated morphism. Since \(K_{X}+B\sim_{\mathbb{R}}0\) over \(Z\backslash f(\operatorname{Supp}D)\), \(\psi\) is an isomorphism over \(Z\backslash f(\operatorname{Supp}D)\), so we may choose \(U:=Z\backslash f(\operatorname{Supp}D)\).
**Lemma 2.10**.: _Assume that \((X/Z\ni z,B)\) is an lc pair such that \(Z\) is a curve, \(K_{X}+B\sim_{\mathbb{R}}0\) over a neighborhood of \(z\), \(B^{h}\in\mathbb{Q}\), and \(z\) is a closed point. Then \(B+sf^{*}z\) is a \(\mathbb{Q}\)-divisor over a neighborhood of \(z\), where \(s:=\operatorname{lct}(X/Z\ni z,B;f^{*}z)\) and \(f:X\to Z\)._
Proof.: Possibly shrinking \(Z\) we may assume that \(K_{X}+B\sim_{\mathbb{R},Z}0\) and \(f(\operatorname{Supp}B^{v})=\{z\}\). There exist real numbers \(r_{1},\dots,r_{c}\), \(\mathbb{Q}\)-linear functions \(s_{1},\dots,s_{m}:\mathbb{R}^{c+1}\to\mathbb{R}\), and Weil divisors \(B_{1},\dots,B_{m}\) on \(X\), such that \(1,r_{1},\dots,r_{c}\) are linearly independent over \(\mathbb{Q}\), and \(B=\sum_{i=1}^{m}s_{i}(1,\mathbf{r}_{0})B_{i}\), where \(\mathbf{r}_{0}:=(r_{1},\dots,r_{c})\). Let \(B(\mathbf{v}):=\sum_{i=1}^{m}s_{i}(1,\mathbf{v})B_{i}\) for any \(\mathbf{v}\in\mathbb{R}^{c}\). Since \(B^{h}\in\mathbb{Q}\), \(B^{h}=B(\mathbf{v})^{h}\in\mathbb{Q}\) for any \(\mathbf{v}\in\mathbb{R}^{c}\).
By [11, Lemma 5.3], \(K_{X}+B(\mathbf{v})\sim_{\mathbb{R},Z}0\) for any \(\mathbf{v}\in\mathbb{R}^{c}\) and thus \(B(\mathbf{v})-B\sim_{\mathbb{R},Z}0\). Moreover, as \(f(\operatorname{Supp}(B(\mathbf{v})-B))=f(\operatorname{Supp}B^{v})=\{z\}\), we see that \(B-B(\mathbf{v})=l_{\mathbf{v}}f^{*}z\) for some real number \(l_{\mathbf{v}}\). Pick \(\mathbf{v}_{0}\in\mathbb{Q}^{c}\), then
\[s+l_{\mathbf{v}_{0}}=\operatorname{lct}(X/Z\ni z,B(\mathbf{v}_{0});f^{*}z)\in\mathbb{Q}\]
as \(B(\mathbf{v}_{0})\in\mathbb{Q}\). Since
\[B+sf^{*}z=B(\mathbf{v}_{0})+(s+l_{\mathbf{v}_{0}})f^{*}z,\]
\(B+sf^{*}z\) is a \(\mathbb{Q}\)-divisor. This finishes the proof.
**Conjecture 2.11** (Good minimal model conjecture).: _Let \(d\) be a positive integer. Assume that \((X/Z,B)\) is an lc pair of dimension \(d\) such that \(K_{X}+B\) is pseudo-effective over \(Z\). Then \((X/Z,B)\) has a good minimal model over \(Z\)._
**Conjecture 2.12** (Non-vanishing conjecture).: _Let \(d\) be a positive integer. Assume that \((X/Z,B)\) is an lc pair of dimension \(d\) such that \(K_{X}+B\) is pseudo-effective over \(Z\). Then \(|(K_{X}+B)/Z|_{\mathbb{R}}\neq\emptyset\)._
### Complements
**Definition 2.13**.: Let \(n\) be a positive integer, \(\Gamma\subset(0,1]\) a set, and \((X/Z\ni z,B)\) and \((X/Z\ni z,B^{+})\) two pairs. We say that \((X/Z\ni z,B^{+})\) is an \(\mathbb{R}\)_-complement_ of \((X/Z\ni z,B)\) if \((X,B^{+})\) is lc, \(B^{+}\geq B\), and \(K_{X}+B^{+}\sim_{\mathbb{R}}0\) over a neighborhood of \(z\). We say that \((X/Z\ni z,B)\) is \(\mathbb{R}\)_-complementary_ if \((X/Z\ni z,B)\) has an \(\mathbb{R}\)-complement.
We say that \((X/Z\ni z,B^{+})\) is an \(n\)_-complement_ of \((X/Z\ni z,B)\) if
* \((X/Z\ni z,B^{+})\) is lc,
* \(nB^{+}\geq\lfloor(n+1)\{B\}\rfloor+n\lfloor B\rfloor\), and
* \(n(K_{X}+B^{+})\sim 0\) over a neighborhood of \(z\).
We say that \((X/Z\ni z,B^{+})\) is an \((n,\Gamma)\)_-decomposable \(\mathbb{R}\)-complement_ of \((X/Z\ni z,B)\) if there exist real numbers \(a_{1},\dots,a_{k}\in\Gamma\), and \(\mathbb{Q}\)-divisors \(B_{1}^{+},\dots,B_{k}^{+}\) on \(X\), such that
* \(\sum_{i=1}^{k}a_{i}=1\) and \(\sum_{i=1}^{k}a_{i}B_{i}^{+}=B^{+}\),
* \((X/Z\ni z,B^{+})\) is an \(\mathbb{R}\)-complement of \((X/Z\ni z,B)\), and
* \((X/Z\ni z,B^{+}_{i})\) is an \(n\)-complement of itself for any integer \(1\leq i\leq k\).
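For instance, in the simplest absolute case, the pair \((\mathbb{P}^{1},0)\) has a \(1\)-complement: taking \(B^{+}=[0]+[\infty]\), the pair \((\mathbb{P}^{1},B^{+})\) is lc, \(K_{\mathbb{P}^{1}}+B^{+}\sim 0\), and \(B^{+}\geq\lfloor 2\{B\}\rfloor+\lfloor B\rfloor=0\).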
Conjecture 1.3 holds true when \(d=3\). More precisely, we have the following.
**Theorem 2.14**.: _Let \(l\) be a positive integer and \(\Gamma\subset[0,1]\) a DCC set. Then there exists a positive integer \(n\) which is divisible by \(l\) depending only on \(l\) and \(\Gamma\) satisfying the following._
_Assume that \((X/Z\ni z,B)\) is an \(\mathbb{R}\)-complementary pair of dimension \(3\) with \(B\in\Gamma\). Then \((X/Z\ni z,B)\) has an \(n\)-complement \((X/Z\ni z,B^{+})\). Moreover, if \(\operatorname{Span}_{\mathbb{Q}\geq 0}(\overline{\Gamma}\backslash\mathbb{Q}) \cap(\mathbb{Q}\backslash\{0\})=\emptyset\), then we can pick \(B^{+}\geq B\)._
Proof.: This follows from [12, Theorem 1] and [16, Theorem 8.25].
### Iitaka dimensions and invariant Iitaka dimensions
**Definition 2.15** (Iitaka dimensions, cf. [20, II 3.2 Definition]).: Let \(X\) be a normal projective variety and \(D\) an \(\mathbb{R}\)-divisor on \(X\). For any positive integer \(m\) such that \(|\lfloor mD\rfloor|\neq\emptyset\), we denote \(\Phi_{m}:X\dashrightarrow\mathbb{P}(H^{0}(X,\lfloor mD\rfloor)).\) The _Iitaka dimension_\(\kappa(X,D)\) of \(D\) is defined in the following way. If \(|\lfloor mD\rfloor|\neq\emptyset\) for some positive integer \(m\), then
\[\kappa(X,D):=\max\{\dim\Phi_{m}(X)\mid m\in\mathbb{Z}_{>0},|\lfloor mD\rfloor |\neq\emptyset\}.\]
Otherwise, let \(\kappa(X,D):=-\infty\). Note that if \(|\lfloor mD\rfloor|\neq\emptyset\), then by [20, II 3.8 Corollary],
\[\kappa(X,D)=\max\left\{k\in\mathbb{Z}_{\geq 0}\mid\limsup_{m\to+\infty}\frac{\dim H^{0}(X,\lfloor mD\rfloor)}{m^{k}}>0\right\}.\]
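For example, for \(X=\mathbb{P}^{1}\) and \(D\) a closed point, \(\dim H^{0}(X,mD)=m+1\), so the limit superior above is positive for \(k=1\) and zero for \(k\geq 2\); hence \(\kappa(\mathbb{P}^{1},D)=1\). Similarly, \(\kappa(X,0)=0\), since \(\dim H^{0}(X,\lfloor m\cdot 0\rfloor)=1\) for every \(m\).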
**Definition 2.16** (Invariant Iitaka dimensions, cf. [1, Definition 2.2.1]).: Let \(X\) be a normal projective variety and \(D\) an \(\mathbb{R}\)-divisor on \(X\). The _invariant Iitaka dimension_\(\kappa_{\iota}(X,D)\) of \(D\) is defined as follows. If \(|D|_{\mathbb{R}}\neq\emptyset\), then we define
\[\kappa_{\iota}(X,D):=\kappa(X,D^{\prime})\]
for some \(\mathbb{R}\)-divisor \(D^{\prime}\in|D|_{\mathbb{R}}\). Otherwise, we let \(\kappa_{\iota}(X,D):=-\infty\). Note that \(\kappa_{\iota}(X,D)\) is independent of the choice of \(D^{\prime}\)[1, Corollary 2.1.4].
We gather some basic properties of \(\kappa\) and \(\kappa_{\iota}\) which will be used in the rest of this paper.
**Proposition 2.17** ([1, Propositions 2.1.2, 2.2.2, Corollary 2.1.4]).: _Let \(X\) be a normal projective variety, and \(D\sim_{\mathbb{R}}D^{\prime}\) two \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisors on \(X\). Then_
1. \(\kappa(X,D)\leq\kappa_{\iota}(X,D)=\kappa_{\iota}(X,D^{\prime})\)_, and_ \(\kappa(X,D)<\kappa_{\iota}(X,D)\) _if and only if_ \(\kappa(X,D)=-\infty\) _and_ \(\kappa_{\iota}(X,D)\geq 0\)_._
2. _If_ \(D^{\prime}\geq 0\)_, then_ \(\kappa(X,D)\leq\kappa(X,D^{\prime})\)_._
[1, Example 2.10] shows that we may have strict inequality in Proposition 2.17(1).
**Lemma 2.18** ([1, Proposition 3.20], [20, II Lemma 3.11]).: _Let \(g:Y\to X\) be a surjective morphism between normal projective varieties and \(D\) an \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor on \(X\). Then_
1. \(\kappa(X,D)=\kappa(Y,g^{*}D)\) _and_ \(\kappa_{\iota}(X,D)=\kappa_{\iota}(Y,g^{*}D)\)_, and_
2. _if_ \(g\) _is birational, then_ \(\kappa(X,D)=\kappa(Y,g^{*}D+E)\) _and_ \(\kappa_{\iota}(X,D)=\kappa_{\iota}(Y,g^{*}D+E)\) _for any_ \(g\)_-exceptional_ \(\mathbb{R}\)_-Cartier_ \(\mathbb{R}\)_-divisor_ \(E\geq 0\) _on_ \(Y\)
### Iitaka fibrations for \(\mathbb{R}\)-divisors
**Definition 2.19**.: Assume that \(f:X\dashrightarrow Z\) is a rational map and \(f_{\infty}:X_{\infty}\to Z_{\infty}\) is a projective morphism. We say that \(f\) _is birational to \(f_{\infty}\)_ if there exist a birational morphism \(h:X_{\infty}\to X\) and a birational map \(g:Z_{\infty}\dashrightarrow Z\) such that \(f\circ h=g\circ f_{\infty}\).
**Definition 2.20** (cf. [10, Definition 3.19]).: Let \(X\) be a normal projective variety and \(D\) an \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor on \(X\) such that \(\kappa(X,D)\geq 0\). A projective morphism \(f_{\infty}:X_{\infty}\to Z_{\infty}\) between quasi-projective smooth varieties is called an _Iitaka fibration_ of \(D\) if the following hold:
1. \(\dim Z_{\infty}=\kappa(X,D)\),
2. \(f_{m}:X\dashrightarrow Z_{m}\subset\mathbb{P}H^{0}(X,\lfloor mD\rfloor)\), the map associated with the complete linear system \(|\lfloor mD\rfloor|\), is birational to \(f_{\infty}\) with the morphism \(h:X_{\infty}\to X\), for any sufficiently divisible large integer \(m\), and
3. for any sufficiently large integer \(n\), we have \[\kappa\left(F,\left(h_{*}^{-1}D+nE\right)|_{F}\right)=0,\] where \(F\) is a very general fiber of \(f_{\infty}\) and \(E\) is the sum of all the \(h\)-exceptional prime divisors.
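For instance, if \(X\) is a smooth projective surface with \(\kappa(X,K_{X})=1\), then an Iitaka fibration of \(K_{X}\) is birational to an elliptic fibration over a curve: its very general fiber is an elliptic curve, hence of Kodaira dimension zero, and the base has dimension \(1=\kappa(X,K_{X})\).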
By [10, Proposition 3.20], for any \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor \(D\) such that \(\kappa(X,D)\geq 0\), there always exists an Iitaka fibration of \(D\). The following lemmas are well-known to experts.
**Lemma 2.21**.: _Notation as in Definition 2.20._
1. _Assume that_ \(D_{\infty}\) _is an_ \(\mathbb{R}\)_-divisor on_ \(X_{\infty}\) _such that_ \(D_{\infty}-h^{*}D\geq 0\) _and is_ \(h\)_-exceptional, then_ \(f_{\infty}\) _is an Iitaka fibration of_ \(D_{\infty}\)_._
2. _Assume that_ \(X^{\prime}_{\infty}\to X_{\infty}\) _is a birational morphism from a smooth variety, then_ \(X^{\prime}_{\infty}\to Z_{\infty}\) _is an Iitaka fibration of_ \(D\)_._
Proof.: (1) By assumption, \(H^{0}(X,\lfloor mD\rfloor)=H^{0}(X_{\infty},\lfloor mD_{\infty}\rfloor)\) and \(X_{\infty}\dashrightarrow Z_{m}\) is also the map associated with the complete linear system \(|\lfloor mD_{\infty}\rfloor|\). Then (1) follows from the definition.
(2) We only need to show \(\kappa(F^{\prime},((h^{\prime})_{*}^{-1}D+nE^{\prime})|_{F^{\prime}})=0\) for any sufficiently large integer \(n\), where \(h^{\prime}:X^{\prime}_{\infty}\to X\), \(F^{\prime}\) is a very general fiber of \(X^{\prime}_{\infty}\to Z_{\infty}\) and \(E^{\prime}\) is the sum of all the \(h^{\prime}\)-exceptional prime divisors. Let \(\psi^{\prime}_{\infty}:W\to X^{\prime}_{\infty}\) be a resolution which resolves the map \(X\dashrightarrow Z_{m}\), and we denote \(\psi:W\to X\) the induced morphism. By [10, Lemma 3.10], for any sufficiently large integer \(n\), we have that
\[\kappa\left(F_{m},\left((\psi)_{*}^{-1}D+nE_{W}\right)|_{F_{m}}\right)=0,\]
where \(F_{m}\) is a very general fiber of \(W\to Z_{m}\) and \(E_{W}\) is the sum of all the \(\psi\)-exceptional prime divisors. Let \(E_{1}\) be the sum of all the \(\psi_{\infty}^{\prime}\)-exceptional prime divisors. Since \(E_{W}=(\psi_{\infty}^{\prime})_{*}^{-1}E^{\prime}+E_{1}\) and
\[\psi_{*}^{-1}D+n\left(\psi_{\infty}^{\prime}\right)_{*}^{-1}E^{\prime}+(n+m)E_{ 1}-\left(\psi_{\infty}^{\prime}\right)^{*}\left(\left(h^{\prime}\right)_{*}^{ -1}D+nE^{\prime}\right)\geq 0\]
for any sufficiently large integers \(n\) and \(m\), we see that
\[\kappa\left(F_{m},\left(\left(\psi_{\infty}^{\prime}\right)^{*}\left(h_{*}^{-1 }D+nE_{1}\right)\right)|_{F_{m}}\right)=0.\]
Note that \(Z_{\infty}\) is birational to \(Z_{m}\), thus there exists a birational morphism from \(F_{m}\) to \(F^{\prime}\). Therefore \(\kappa(F^{\prime},((h^{\prime})_{*}^{-1}D+nE^{\prime})|_{F^{\prime}})=0\). This finishes the proof.
**Lemma 2.22**.: _Notation as in Definition 2.20. Suppose that \(\phi:X^{\prime}\to X\) is a birational morphism from a normal projective variety and \(D^{\prime}\) is an \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor on \(X^{\prime}\) such that \(D^{\prime}-\phi^{*}D\geq 0\) and is \(\phi\)-exceptional. Then_
1. _possibly replacing_ \(X_{\infty}\) _with a high model,_ \(f_{\infty}\) _is an Iitaka fibration of_ \(D^{\prime}\)_, and_
2. _any Iitaka fibration of_ \(D^{\prime}\) _is an Iitaka fibration of_ \(D\)_._
Proof.: By our assumption, \(H^{0}(X,\lfloor mD\rfloor)=H^{0}(X^{\prime},\lfloor mD^{\prime}\rfloor)\) and \(X\dasharrow Z_{m}\) is the map associated with the complete linear system \(\left|\lfloor mD\rfloor\right|\) if and only if \(X^{\prime}\dasharrow Z_{m}\) is the map associated with the complete linear system \(\left|\lfloor mD^{\prime}\rfloor\right|\).
(1) By Lemma 2.21(2), possibly replacing \(X_{\infty}\) with a high model, we may assume that the map \(X_{\infty}\dasharrow X^{\prime}\) is a morphism and we denote it by \(h_{0}\). It suffices to prove that for any sufficiently large integer \(n\),
\[\kappa\left(F,\left((h_{0})_{*}^{-1}D^{\prime}+nE_{0}\right)|_{F}\right)=0,\]
where \(E_{0}\) is the sum of all the \(h_{0}\)-exceptional prime divisors. This follows from the fact that
\[(h_{0})_{*}^{-1}D^{\prime}+nE_{0}\leq h_{*}^{-1}D+nE\]
for any sufficiently large integer \(n\).
(2) Suppose that \(f_{\infty}^{\prime}:X_{\infty}^{\prime}\to Z_{\infty}^{\prime}\) is an Iitaka fibration of \(D^{\prime}\). Let \(W\to X_{\infty}^{\prime}\) and \(W\to X_{\infty}\) be a common resolution of \(X_{\infty}\) and \(X_{\infty}^{\prime}\). By Lemma 2.21(2), \(W\to Z_{\infty}\) is also an Iitaka fibration of \(D\). It follows that \(X_{\infty}^{\prime}\to Z_{\infty}\) is also an Iitaka fibration of \(D\). Since \(Z_{\infty}\) is birational to \(Z_{\infty}^{\prime}\), \(X_{\infty}^{\prime}\to Z_{\infty}^{\prime}\) is an Iitaka fibration of \(D\).
**Lemma 2.23**.: _Notation as in Definition 2.20. Assume that \(\psi:X\dashrightarrow X^{\prime}\) is a \(D\)-non-negative birational contraction and let \(D^{\prime}:=\psi_{*}D\). Then possibly replacing \(X_{\infty}\) with a high model, \(f_{\infty}\) is an Iitaka fibration of \(D^{\prime}\)._
Proof.: By Lemma 2.21(2), possibly replacing \(X_{\infty}\) with a high model, we may assume that the map \(h^{\prime}:X_{\infty}\dasharrow X^{\prime}\) is a morphism. By our assumption, \(0\leq h^{*}D-h^{\prime*}D^{\prime}\) is \(h^{\prime}\)-exceptional, and \(X^{\prime}\dasharrow Z_{m}\) is the map associated with the complete linear system \(\left|\lfloor mD^{\prime}\rfloor\right|\). By Lemma 2.21(1), \(f_{\infty}\) is an Iitaka fibration of \(h^{*}D\). Then by Lemma 2.22(2), \(f_{\infty}\) is also an Iitaka fibration of \(D^{\prime}\).
**Remark 2.24**.: In the rest of this paper, we will frequently use Lemmas 2.21, 2.22 and 2.23 to replace \(X\) with another birational model without citing them explicitly.
**Lemma 2.25**.: _Assume that \((X,C)\) is a projective klt pair such that \(K_{X}+C\sim_{\mathbb{R}}0\). Assume that Conjecture 2.12 holds in dimension \(\leq\dim X\). Suppose that \(D\) is a \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor on \(X\) such that \(\kappa(X,D)\geq 0\), and \(f_{\infty}:X_{\infty}\to Z_{\infty}\) is an Iitaka fibration of \(D\). Then \(\kappa(F,h^{*}D|_{F})=0\), where \(F\) is a general fiber of \(f_{\infty}\) and \(h:X_{\infty}\to X\) is the induced birational morphism._
Proof.: By Shokurov type polytopes, we may assume that \(C\in\mathbb{Q}\) and thus \(K_{X}+C\sim_{\mathbb{Q}}0\). Take a positive rational number \(\epsilon\) such that \((X_{\infty},h_{*}^{-1}C+(1-\epsilon)E)\) is klt and
\[E^{\prime}:=K_{X_{\infty}}+h_{*}^{-1}C+(1-\epsilon)E-h^{*}(K_{X}+C)\geq 0,\]
where \(E\) is the sum of all the \(h\)-exceptional prime divisors. Since \(\kappa(X,D)\geq 0\), one can find a \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor \(D^{\prime}\geq 0\) such that \(D^{\prime}\sim_{\mathbb{Q}}D\). Let \(\epsilon^{\prime}\) be a positive rational number such that \((X_{\infty},h_{*}^{-1}C+(1-\epsilon)E+\epsilon^{\prime}h^{*}D^{\prime})\) is still klt. As \(f_{\infty}\) is an Iitaka fibration of \(D\) and \(E^{\prime}\) is \(h\)-exceptional, one can see that
\[\kappa\left(F_{v},\left(K_{X_{\infty}}+h_{*}^{-1}C+(1-\epsilon)E+\epsilon^{ \prime}h^{*}D^{\prime}\right)|_{F_{v}}\right)=\kappa\left(F_{v},\left(E^{ \prime}+\epsilon^{\prime}h^{*}D^{\prime}\right)|_{F_{v}}\right)=0,\]
where \(F_{v}\) is a very general fiber of \(f_{\infty}\). According to [10, Corollary 1.4], we have
\[\kappa\left(F,\left(K_{X_{\infty}}+h_{*}^{-1}C+(1-\epsilon)E+\epsilon^{\prime }h^{*}D^{\prime}\right)|_{F}\right)=0.\]
Therefore \(\kappa(F,h^{*}D|_{F})=0\).
## 3. Canonical bundle formulas
We refer the reader to [1, 1, 1, 2] for the definitions and basic properties of generalized pairs (g-pairs for short), and we denote by \((X/Z,B+\mathbf{M})\) a g-pair throughout this paper. We refer the reader to [1, 2] for the definition and basic properties of the canonical bundle formula. To sum up, given an lc pair \((X/Z,B)\) and a contraction \(\phi:X\to T\) between normal quasi-projective varieties over \(Z\) such that \(K_{X}+B\sim_{\mathbb{R},T}0\), we can find an \(\mathbb{R}\)-divisor \(B_{T}\geq 0\) and a nef over \(Z\) b-\(\mathbb{R}\)-divisor \(\mathbf{M}_{\phi}\) on \(T\), such that \((T/Z,B_{T}+\mathbf{M}_{\phi})\) is a glc g-pair, and
\[K_{X}+B\sim_{\mathbb{R}}\phi^{*}\left(K_{T}+B_{T}+\mathbf{M}_{\phi,T}\right).\]
Here \(B_{T}\) (resp., \(\mathbf{M}_{\phi}\)) is called the _discriminant part_ (resp., a _moduli part_) of the canonical bundle formula for \((X/Z,B)\) over \(T\); the former is uniquely determined, while the latter is determined only up to \(\mathbb{R}\)-linear equivalence. We may also call \(\mathbf{M}_{\phi,T}\) the moduli part of the canonical bundle formula for \((X/Z,B)\) over \(T\). Moreover, if \((X/Z,B)\) is klt, then \((T/Z,B_{T}+\mathbf{M}_{\phi})\) is gklt.
Here we emphasize that there are many choices of \(\mathbf{M}_{\phi}\), some of which could behave badly, but we can always choose one with the required properties in the following results.
For convenience, we say that two g-pairs \((X/Z,B+\mathbf{M})\) and \((X^{\prime}/Z,B^{\prime}+\mathbf{M}^{\prime})\) are _crepant_ if \(X\) is birational to \(X^{\prime}\), \(\mathbf{M}=\mathbf{M}^{\prime}\), and \(p^{*}(K_{X}+B+\mathbf{M}_{X})=q^{*}(K_{X^{\prime}}+B^{\prime}+\mathbf{M}^{ \prime}_{X^{\prime}})\) for some common resolution \(p:W\to X\) and \(q:W\to X^{\prime}\). We also call \((X^{\prime}/Z,B^{\prime}+\mathbf{M})\) a _crepant model_ of \((X/Z,B+\mathbf{M})\).
**Proposition 3.1**.: _Let \(d\) be a positive integer and \(\Phi\subset[0,1]\) a DCC set. Assume that Conjectures 1.3 and 2.11 hold in dimension \(d\). Then there exist a positive integer \(p\) and a DCC set \(\Phi^{\prime}\) depending only on \(d\) and \(\Phi\) satisfying the following._
_Assume that \((X/Z,B)\) is an lc pair of dimension \(d\) and \(\phi:X\to T\) is a contraction over \(Z\) such that \(\dim T>0\), \(B\in\Phi\), \(K_{X}+B\sim_{\mathbb{R},T}0\), and \(K_{X}+B\sim_{\mathbb{Q},T}0\) over the generic point of \(T\). Then we can choose a moduli part \(\mathbf{M}_{\phi}\) of the canonical bundle formula for \((X/Z,B)\) over \(T\) such that \(B_{T}\in\Phi^{\prime}\), \(p\mathbf{M}_{\phi}\) is \(\boldsymbol{b}\)-Cartier, and_
\[p(K_{X}+B)\sim p\phi^{*}(K_{T}+B_{T}+\mathbf{M}_{\phi,T}),\]
_where \(B_{T}\) is the discriminant part of the canonical bundle formula for \((X/Z,B)\) over \(T\)._
_Moreover, if \(\Phi\) is a hyperstandard set, then \(\Phi^{\prime}\) is a hyperstandard set._
The proof is similar to the proof of [1, Proposition 6.3]. For the convenience of the reader, we give a proof here. We also remark that the moreover part of the proposition will not be used in this paper, but it is useful in some other situations (cf. [1]).
Proof.: **Step 1**. In this step, we construct \(p\) and make a choice of \(\mathbf{M}_{\phi,T}\). Note that here we only make a choice of \(\mathbf{M}_{\phi,T}\) rather than \(\mathbf{M}_{\phi}\).
By [11, Theorems 1.8 and 8.25], there exist a positive integer \(p\) and a finite set \(\Gamma_{0}\subset(0,1]\) depending only on \(d\) and \(\Phi\), such that for any \(t\in T\), \((X/T\ni t,B)\) has a \((p,\Gamma_{0})\)-decomposable \(\mathbb{R}\)-complement \((X/T\ni t,B+G)\) for some \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor \(G\geq 0\), and moreover if \(B\in\Gamma\cap\mathbb{Q}\), then \((X/T\ni t,B)\) has a monotonic \(p\)-complement. In particular, \(G\sim_{\mathbb{R}}0\) over a neighborhood of \(t\), and hence \(G\) is vertical over \(T\). Since \(K_{X}+B\sim_{\mathbb{Q},T}0\) over the generic point \(\eta_{T}\) of \(T\), \(p(K_{X}+B)\sim 0\) over a neighborhood of \(\eta_{T}\), and there exists \(\alpha\in\mathcal{K}(X)\) such that \(pL:=p(K_{X}+B)+(\alpha)\) is zero near \(\eta_{T}\). In particular, \(pL\sim p(K_{X}+B)\sim_{\mathbb{R},T}0\) and \(L\) is vertical over \(T\). By [10, Lemma 2.11], we may find an \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor \(L_{T}\) on \(T\) such that \(L=\phi^{*}L_{T}\). Let \(B_{T}\) be the discriminant part of the canonical bundle formula of \((X,B)\) over \(T\), and \(\mathbf{M}_{\phi,T}:=L_{T}-K_{T}-B_{T}.\) Then
\[p(K_{X}+B)\sim pL=p\phi^{*}L_{T}=p\phi^{*}(K_{T}+B_{T}+\mathbf{M}_{\phi,T}).\]
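For later use in **Step 2** and **Step 3**, we recall how the discriminant part is computed: for every prime divisor \(D\) on \(T\),

\[\operatorname{mult}_{D}B_{T}=1-s_{D},\]

where \(s_{D}\) is the lc threshold of \(\phi^{*}D\) with respect to \((X,B)\) over the generic point of \(D\).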
**Step 2**. In this step, we show that we can reduce to the case \(\dim T=1\) to show the existence of \(\Phi^{\prime}\) and prove that \(p\mathbf{M}_{\phi,T}\) is integral.
Assume \(\dim T>1\). Let \(H\) be a general hyperplane section of \(T\), \(G:=\phi^{*}H\), and \(g:G\to H\) the induced morphism. We may write \(K_{G}+B_{G}=(K_{X}+G+B)|_{G}.\) It is clear that \((G,B_{G})\) is an lc pair with \(K_{G}+B_{G}\sim_{\mathbb{Q},H}0\). Suppose that \(B_{H}\) is the discriminant part of the canonical bundle formula for \((G,B_{G})\) over \(H\). Note that as \(G\) is a general member of a free linear system, every lc center \(S_{G}\) of \((G,B_{G})\) is a component of \(S_{0}\cap G\) for some lc center \(S_{0}\) of \((X,B)\).
We claim that \(\operatorname{mult}_{D}B_{T}=\operatorname{mult}_{C}B_{H}\) for any prime divisor \(D\) on \(T\) and any component \(C\) of \(D\cap H\). Indeed, let \(s_{D}\) be the lc threshold of \(\phi^{*}D\) with respect to \((X,B)\) over the generic point of \(D\). Then there is an lc center \(F\) of \((X,B+s_{D}\phi^{*}D)\) such that \(\phi(F)=D\). Note that \(F\) is also an lc center of \((X,B+G+s_{D}\phi^{*}D)\) as \(G\) is general. Hence \(F\cap G\) is an lc center of \((G,B_{G}+s_{D}g^{*}C)\), by inversion of adjunction [10]. Moreover, as \(\phi(F\cap G)=C\), we see that \(s_{D}\) is the lc threshold of \(g^{*}C\) with respect to \((G,B_{G})\) over the generic point of \(C\), and the claim holds. Since \(\Phi\) is a DCC set, there is a DCC set \(\Phi_{1}\) depending only on \(\Phi\) such that \(B_{G}\in\Phi_{1}\) (cf. [1, Corollary 16.7]). If \(\Phi\) is a hyperstandard set, then we
can take \(\Phi_{1}\) to be a hyperstandard set by [1, Lemma 3.3]. In both cases, by induction, there is a DCC set (resp. hyperstandard set) \(\Phi_{1}^{\prime}\) such that \(B_{H}\in\Phi_{1}^{\prime}\).
Pick a general \(H^{\prime}\sim H\) and let \(K_{H}:=(K_{T}+H^{\prime})|_{H}.\) Note that the restriction is well defined as \(H\) is a general hyperplane section and \(K_{H}\) is determined as a Weil divisor, although \(K_{T}\) may not be \(\mathbb{Q}\)-Cartier. Let
\[\mathbf{M}_{g,H}:=(L_{T}+H^{\prime})|_{H}-(K_{H}+B_{H}).\]
Then \(B_{H}+\mathbf{M}_{g,H}=(B_{T}+\mathbf{M}_{\phi,T})|_{H}\), and
\[p(K_{G}+B_{G})\sim p(L+G)|_{G}\sim pg^{*}(L_{T}+H^{\prime})|_{H}\sim pg^{*}(K_ {H}+B_{H}+\mathbf{M}_{g,H}).\]
Hence \(\mathbf{M}_{g,H}\) is the moduli part of the canonical bundle formula for \((G,B_{G})\) over \(H\), and
\[\operatorname{mult}_{C}(B_{H}+\mathbf{M}_{g,H})=\operatorname{mult}_{D}(B_{T }+\mathbf{M}_{\phi,T})\]
which implies that \(\operatorname{mult}_{C}\mathbf{M}_{g,H}=\operatorname{mult}_{D}\mathbf{M}_{ \phi,T}\) as \(\operatorname{mult}_{C}B_{H}=\operatorname{mult}_{D}B_{T}\). Therefore \(p\operatorname{mult}_{D}\mathbf{M}_{\phi,T}\) is integral if and only if \(p\operatorname{mult}_{C}\mathbf{M}_{g,H}\) is integral. Repeating the process we may finish this step.
**Step 3**. In this step we show the existence of \(\Phi^{\prime}\) and that \(p\mathbf{M}_{\phi,T}\) is integral. Note that by **Step 2**, we may assume that \(\dim T=1\).
**Step 3.1**. We construct the set \(\Phi^{\prime}\). If \(\Phi\) is not a hyperstandard set, then by [1, Theorem 1.1], \(B_{T}\in\Phi^{\prime}\) for some DCC set \(\Phi^{\prime}\) which only depends on \(d\) and \(\Phi\). If \(\Phi=\Phi(\mathfrak{R})\) is a hyperstandard set, then we show \(B_{T}\in\Phi^{\prime}:=\Phi(\mathfrak{R}^{\prime})\), where
\[\mathfrak{R}^{\prime}:=\left\{r,\frac{r}{l_{1}}-\frac{l_{2}}{p}\mid r\in \mathfrak{R},\,l_{1}\,,l_{2}\in\mathbb{Z}_{>0}\right\}\cap[0,1]\]
is a finite set of rational numbers.
**Claim 3.2**.: _Suppose that \(t\in T\) is a closed point. Then \((X/T\ni t,B+s\phi^{*}t)\) is a monotonic \(p\)-complement of \((X/T\ni t,B)\), where \(s:=\operatorname{lct}(X/T\ni t,B;\phi^{*}t)\). In particular, \(B+s\phi^{*}t\) is a \(\mathbb{Q}\)-divisor over a neighborhood of \(t\)._
Proof of Claim 3.2.: By Lemma 2.10, \(B_{t}:=B+s\phi^{*}t\) is a \(\mathbb{Q}\)-divisor over a neighborhood of \(t\). Possibly shrinking \(T\) around \(t\), we may assume that \(B_{t}\in\mathbb{Q}\). Let \((X^{\prime},B_{t}^{\prime})\) be a \(\mathbb{Q}\)-factorial dlt modification of \((X,B_{t})\). Then \(\lfloor B_{t}^{\prime}\rfloor\) has a component mapping to \(t\) and \(K_{X^{\prime}}+B_{t}^{\prime}\sim_{\mathbb{Q},T}0\). There exists a \(\mathbb{Q}\)-divisor \(B^{\prime}\) on \(X^{\prime}\) such that \(B^{\prime}\in\Phi\cap\mathbb{Q}\), \(\lfloor B^{\prime}\rfloor=\lfloor B_{t}^{\prime}\rfloor\), and \(B_{X^{\prime}}\leq B^{\prime}\leq B_{t}^{\prime}\), where \(B_{X^{\prime}}\) is the strict transform of \(B\) on \(X^{\prime}\). By assumption, \((X^{\prime}/T\ni t,B^{\prime})\) has a monotonic \(p\)-complement \((X^{\prime}/T\ni t,B^{\prime+})\). Let \(B^{+}\) be the strict transform of \(B^{\prime+}\) on \(X\). Then \((X/T\ni t,B^{+})\) is a monotonic \(p\)-complement of \((X/T\ni t,B)\). Since \(B^{+}-B\geq 0\) and \(B^{+}-B\sim_{\mathbb{Q}}0\) over a neighborhood of \(t\), \(B^{+}-B\) is vertical over \(T\). Moreover, as \(B^{\prime+}\geq B^{\prime}\), \(\lfloor B^{\prime+}\rfloor\) has a component mapping to \(t\), and thus \((X,B^{+})\) has an lc center mapping to \(t\). Therefore \(B^{+}-B=s\phi^{*}t\) over \(t\). The claim holds.
Pick a closed point \(t\in T\). By Claim 3.2, \((X/T\ni t,B^{+}:=B+s\phi^{*}t)\) is a \(p\)-complement of \((X/T\ni t,B)\), where \(s:=\operatorname{lct}(X/T\ni t,B;\phi^{*}t)\). For any component \(S\) of \(\phi^{*}t\), let
\(b:=\operatorname{mult}_{S}B\), \(b^{+}:=\operatorname{mult}_{S}B^{+}\) and \(m:=\operatorname{mult}_{S}\phi^{*}t\). Then \(b^{+}=b+sm\) and thus \(s=\frac{b^{+}-b}{m}\). Since \(b\in\Phi\), we may write \(b=1-r/l\) for some \(r\in\mathfrak{R}\) and \(l\in\mathbb{Z}_{>0}\). In particular,
\[s=\frac{b^{+}-1+r/l}{m}=\frac{1}{m}\left(\frac{r}{l}-\left(1-b^{+}\right) \right).\]
Then \(\operatorname{mult}_{t}B_{T}=1-s\in\Phi^{\prime}\), as \(b^{+}\in\frac{1}{p}\mathbb{Z}\cap[0,1]\).
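To make this membership explicit, write \(1-b^{+}=\frac{l_{2}}{p}\) with \(l_{2}\in\mathbb{Z}_{\geq 0}\), and note that \(ms=b^{+}-b\in[0,1]\), so \(\frac{r}{l}-\frac{l_{2}}{p}\in[0,1]\). Then

\[1-s=\begin{cases}1-\dfrac{r}{lm}&\text{if }l_{2}=0,\\ 1-\dfrac{1}{m}\left(\dfrac{r}{l}-\dfrac{l_{2}}{p}\right)&\text{if }l_{2}>0,\end{cases}\]

which lies in \(\Phi(\mathfrak{R}^{\prime})\), using \(r\in\mathfrak{R}^{\prime}\) with denominator \(lm\) in the first case, and \(\frac{r}{l}-\frac{l_{2}}{p}\in\mathfrak{R}^{\prime}\) with denominator \(m\) in the second.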
**Step 3.2**. We show that \(p\mathbf{M}_{\phi,T}\) is integral.
We may assume that \(p(K_{X}+B)\sim 0\) over some non-empty open subset \(U_{0}\subseteq T\) such that \(\operatorname{Supp}B_{T}\subseteq T\setminus U_{0}\). Let
\[\Theta:=B+\sum_{t\in T\setminus U_{0}}s_{t}\phi^{*}t,\]
where \(s_{t}:=\operatorname{lct}(X/T\ni t,B;\phi^{*}t)\). Let \(\Theta_{T}\) be the discriminant part of the canonical bundle formula for \((X,\Theta)\) over \(T\). Then
\[\Theta_{T}=B_{T}+\sum_{t\in T\setminus U_{0}}s_{t}t\]
which is a reduced divisor. Moreover, \((X/T\ni t,\Theta)\) is a \(p\)-complement of \((X/T\ni t,B)\) for every \(t\in T\setminus U_{0}\) by Claim 3.2. Hence \(p(K_{X}+\Theta)\sim_{T}0\) by Lemma 2.6. Since
\[p(K_{X}+\Theta) =p(K_{X}+B)+p(\Theta-B)\] \[\sim p\phi^{*}(K_{T}+B_{T}+\mathbf{M}_{\phi,T})+p\phi^{*}(\Theta_ {T}-B_{T})\] \[=p\phi^{*}(K_{T}+\Theta_{T}+\mathbf{M}_{\phi,T}),\]
\(p(K_{T}+\Theta_{T}+\mathbf{M}_{\phi,T})\) is Cartier. It follows that \(p\mathbf{M}_{\phi,T}\) is integral, as \(K_{T}+\Theta_{T}\) is an integral divisor.
**Step 4**. In this step, we finish the proof by showing that \(p\mathbf{M}_{\phi}\) is b-Cartier and nef. Note that in this step we do not assume \(\dim T=1\).
According to [1, Theorem 3.6], we only need to show that \(p\mathbf{M}_{\phi}\) is b-Cartier. Let \(T^{\prime}\to T\) be a high resolution and \(Y\to X\) a log resolution of \((X,B)\) such that \(Y\to T^{\prime}\) is a morphism. Let \(U\subseteq T\) be a non-empty open subset such that \(U^{\prime}\to U\) is an isomorphism where \(U^{\prime}\subseteq T^{\prime}\) is the inverse image of \(U\). Let \(B_{Y}\) be the sum of the strict transform of \(B\) and the reduced exceptional divisor of \(Y\to X\) but with all the components mapping outside \(U\) removed. In particular, the generic point of any lc center of \((Y,B_{Y})\) maps into \(U\). We may run an MMP on \(K_{Y}+B_{Y}\) over \(X\times_{T}T^{\prime}\) with scaling of some ample divisor. By [1, Theorem 1.9], the MMP terminates over \(U^{\prime}\). In fact, we reach a model \(X^{\prime}\) such that over \(U^{\prime}\) the pair \((X^{\prime},B^{\prime})\) is a \(\mathbb{Q}\)-factorial dlt modification of \((X,B)\), where \(B^{\prime}\) is the strict transform of \(B_{Y}\) on \(X^{\prime}\). Hence \(K_{X^{\prime}}+B^{\prime}\sim_{\mathbb{Q}}0\) over \(U^{\prime}\). Now by [1, Theorem 1.1] (see also [1, Theorem 1.1]), we can run an MMP on \(K_{X^{\prime}}+B^{\prime}\) over \(T^{\prime}\) which terminates with a good minimal model \(X^{\prime\prime}\) over \(T^{\prime}\) as the generic point of every lc center of \((X^{\prime},B^{\prime})\) is mapped into \(U^{\prime}\). Let \(B^{\prime\prime}\) be the strict transform of \(B^{\prime}\) on \(X^{\prime\prime}\). Then \(K_{X^{\prime\prime}}+B^{\prime\prime}\) is semi-ample over \(T^{\prime}\).
Let \(\phi^{\prime\prime}:X^{\prime\prime}\to T^{\prime\prime}\) be the contraction defined by \(K_{X^{\prime\prime}}+B^{\prime\prime}\) over \(T^{\prime}\). Note that \(T^{\prime\prime}\to T^{\prime}\) is birational as \(K_{X^{\prime}}+B^{\prime}\sim_{\mathbb{Q}}0\) over \(U^{\prime}\). Assume that \((T^{\prime\prime},B^{\prime\prime}_{T^{\prime\prime}}+\mathbf{M}_{\phi})\) is the crepant model of \((T,B_{T}+\mathbf{M}_{\phi})\). Then \(L_{T^{\prime\prime}}=K_{T^{\prime\prime}}+B^{\prime\prime}_{T^{\prime\prime}} +\mathbf{M}_{\phi,T^{\prime\prime}}\), where \(L_{T^{\prime\prime}}\) is the pullback of \(L_{T}\) on \(T^{\prime\prime}\). Let \(f:W\to X\) and \(f^{\prime\prime}:W\to X^{\prime\prime}\) be a common resolution, \(K_{X^{\prime\prime}}+\Delta^{\prime\prime}:=f^{\prime\prime}_{*}f^{*}(K_{X}+B)\). Since \(K_{X}+B\) and \(K_{X^{\prime\prime}}+B^{\prime\prime}\) are crepant over \(U\), we see that \(B^{\prime\prime}-\Delta^{\prime\prime}\) is vertical over \(T^{\prime\prime}\). Note that \(B^{\prime\prime}-\Delta^{\prime\prime}\sim_{\mathbb{R},T^{\prime\prime}}0\), \(B^{\prime\prime}-\Delta^{\prime\prime}=(\phi^{\prime\prime})^{*}P_{T^{\prime \prime}}\) for some \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor \(P_{T^{\prime\prime}}\) on \(T^{\prime\prime}\) by Lemma 2.5. Denote by \(B_{T^{\prime\prime}}\) the discriminant part of the canonical bundle formula for \((X^{\prime\prime},B^{\prime\prime})\) over \(T^{\prime\prime}\). Then \(B_{T^{\prime\prime}}=B^{\prime\prime}_{T^{\prime\prime}}+P_{T^{\prime\prime}}\) and
\[p\left(K_{X^{\prime\prime}}+B^{\prime\prime}\right) =p\left(K_{X^{\prime\prime}}+\Delta^{\prime\prime}+B^{\prime\prime }-\Delta^{\prime\prime}\right)\sim p\left(f^{\prime\prime}_{*}f^{*}L+B^{ \prime\prime}-\Delta^{\prime\prime}\right)\] \[=p\left(\phi^{\prime\prime}\right)^{*}\left(L_{T^{\prime\prime}}+ P_{T^{\prime\prime}}\right)=p\left(\phi^{\prime\prime}\right)^{*}\left(K_{T^{ \prime\prime}}+B_{T^{\prime\prime}}+\mathbf{M}_{\phi,T^{\prime\prime}}\right).\]
Now by **Step 3.2**, \(p\mathbf{M}_{\phi,T^{\prime\prime}}\) is an integral divisor, hence \(p\mathbf{M}_{\phi,T^{\prime}}\) is integral. As \(T^{\prime}\) is smooth, we conclude that \(p\mathbf{M}_{\phi}\) is b-Cartier.
**Lemma 3.3**.: _Assume that \(X\to Z\) is a proper morphism of varieties. Let \(X^{\nu}\) be the normalization of \(X\). Then \(X^{\nu}_{z}\) is the normalization of \(X_{z}\) for any general point \(z\in Z\), where \(X_{z}\) (resp., \(X^{\nu}_{z}\)) is the fiber over \(z\). In particular, if \(X_{z}\) is normal, then \(X^{\nu}_{z}\) is isomorphic to \(X_{z}\)._
Proof.: By assumption, \(X^{\nu}\to X\) is birational and finite. Thus \(X^{\nu}_{z}\to X_{z}\) is birational and finite. Let \(\tilde{X}_{z}\) be the normalization of \(X_{z}\). The universal property of the normalization implies that there is a morphism \(\tilde{X}_{z}\to X^{\nu}_{z}\). Moreover, the morphism \(\tilde{X}_{z}\to X^{\nu}_{z}\) is birational and finite. Since both \(\tilde{X}_{z}\) and \(X^{\nu}_{z}\) are normal, we see that \(\tilde{X}_{z}\) is isomorphic to \(X^{\nu}_{z}\).
**Proposition 3.4**.: _Let \(d,b,\beta\) be three positive integers and \(\Gamma\subset[0,1]\cap\mathbb{Q}\) a finite set. Then there exist a positive integer \(p\) and a DCC set \(\Gamma^{\prime}\subseteq[0,1]\) depending only on \(d,b,\beta\) and \(\Gamma\) such that \(\bar{\Gamma}^{\prime}\subseteq[0,1]\cap\mathbb{Q}\) and satisfying the following._
_Assume that \((X,B)\) is an lc pair of dimension \(d\), \(\phi:X\to T\) is a contraction between quasi-projective normal varieties, and \(F\) is a general fiber of \(\phi\). Suppose that_
1. \(B\in\Gamma\)_, and_ \(K_{X}+B\sim_{\mathbb{Q},T}0\)_,_
2. \(\dim T>0\) _and_ \((X,B)\) _is klt over the generic point of_ \(T\)_,_
3. \(b\) _is the non-vanishing order of_ \((K_{X}+B)|_{F}\)_, and_
4. \(\beta:=\dim H^{\dim F}(\tilde{F},\mathbb{C})\)_, where_ \(\tilde{F}\) _is a smooth model of the cover of_ \(F\) _associated to the unique divisor of_ \(|b(K_{X}+B)|_{F}|\)_._
_Then we can choose a moduli part \(\mathbf{M}_{\phi}\) of the canonical bundle formula for \((X,B)\) over \(T\) such that \(B_{T}\in\Gamma^{\prime}\), \(p\mathbf{M}_{\phi}\) is b-Cartier, and_
\[p\left(K_{X}+B\right)\sim p\phi^{*}\left(K_{T}+B_{T}+\mathbf{M}_{\phi,T}\right),\]
_where \(B_{T}\) is the discriminant part of the canonical bundle formula for \((X,B)\) over \(T\)._
Proof.: Let \(B_{T}\) be the discriminant part of the canonical bundle formula for \((X,B)\) over \(T\). Then, by [12, Theorem 1.11], \(B_{T}\in\Gamma^{\prime}\) for some DCC set \(\Gamma^{\prime}\) which only depends on \(d\) and \(\Gamma\) such that \(\bar{\Gamma}^{\prime}\subset[0,1]\cap\mathbb{Q}\). It suffices to prove the existence of \(p\) with the required properties.
Let \(T^{\prime}\to T\) be a resolution, \(X^{\prime}\) the normalization of the main component of \(X\times_{T}T^{\prime}\), and \(\phi^{\prime}:X^{\prime}\to T^{\prime}\) the induced morphism. Let \((X^{\prime},B^{\prime})\) be the crepant model of \((X,B)\). Then \(K_{X^{\prime}}+B^{\prime}\sim_{\mathbb{Q},T^{\prime}}0\) and \((X^{\prime},B^{\prime})\) is sub-lc and is klt over the generic point of \(T^{\prime}\). By Lemma 3.3, the general fiber \(F^{\prime}\) of \(\phi^{\prime}\) is isomorphic to \(F\). Therefore it holds that
* \(b\) is the non-vanishing order of \((K_{X^{\prime}}+B^{\prime})|_{F^{\prime}}\), and
* \(\tilde{F}\) is a smooth model of the cover of \(F^{\prime}\) associated to the unique divisor of \(|b(K_{X^{\prime}}+B^{\prime})|_{F^{\prime}}|\).
According to [13, Theorem 5.1], one can find a positive integer \(p_{0}\) depending only on \(\beta\) and a choice of a moduli part \(\mathbf{M}_{\phi}\) of the canonical bundle formula for \((X^{\prime},B^{\prime})\) over \(T^{\prime}\) such that \(p_{0}\mathbf{M}_{\phi,T^{\prime}}\) is integral. In particular, \(p_{0}\mathbf{M}_{\phi,T^{\prime}}\) is Cartier and thus \(p_{0}\mathbf{M}_{\phi}\) is b-Cartier. Since \(b\) is the non-vanishing order of \((K_{X}+B)|_{F}\), by the choice of \(\mathbf{M}_{\phi}\) again, we see that
\[b\left(K_{X}+B\right)\sim b\phi^{*}\left(K_{T}+B_{T}+\mathbf{M}_{\phi,T} \right).\]
Therefore we may conclude that \(p:=bp_{0}\) and \(\Gamma^{\prime}\) have the required properties.
## 4. Non-vanishing orders, middle Betti numbers, and complements
In this section, we prove Theorem 1.6. More generally, we prove Theorem 4.1.
**Theorem 4.1**.: _Let \(d,b,\beta\) be three positive integers and \(\Gamma\subset[0,1]\cap\mathbb{Q}\) a DCC set. Assume that the good minimal model conjecture (Conjecture 2.11) holds in dimension \(d\). Then there exists a positive integer \(n\) depending only on \(d,b,\beta\) and \(\Gamma\) satisfying the following._
_Assume that \((X,B)\) is a \(\mathbb{Q}\)-factorial projective pair of dimension \(d\), \(f_{\infty}:X_{\infty}\to Z_{\infty}\) is an Iitaka fibration of \(-(K_{X}+B)\), \(h:X_{\infty}\to X\) is the induced birational morphism, and \(F\) is a general fiber of \(f_{\infty}\). Suppose that_
1. \(B\in\Gamma\)_,_
2. \(-(K_{X}+B)\) _is nef,_
3. \((X,B)\) _has a klt_ \(\mathbb{R}\)_-complement,_
4. \(\kappa(X,-(K_{X}+B))\geq 0\)_, and_ \(b\) _is the non-vanishing order of_ \(-h^{*}(K_{X}+B)|_{F}\)_, i.e.,_ \[b=\min\left\{a\in\mathbb{Z}_{>0}\mid|-ah^{*}(K_{X}+B)|_{F}|\neq\emptyset \right\},\] _and_
5. \(\beta:=\dim H^{\dim F}(\tilde{F},\mathbb{C})\)_, where_ \(\tilde{F}\) _is a smooth model of the cover of_ \(F\) _associated to the unique divisor of_ \(|-bh^{*}(K_{X}+B)|_{F}|\)_._
_Then \((X,B)\) has an \(n\)-complement._
We remark here that \(\kappa(F,-h^{*}(K_{X}+B)|_{F})=0\) by Lemma 2.25. We first show the existence of \((n_{0},\Gamma_{0})\)-decomposable \(\mathbb{R}\)-complements under the assumption of Theorem 4.1.
**Theorem 4.2**.: _Notation and assumptions as in Theorem 4.1. Then \((X,B)\) has an \((n_{0},\Gamma_{0})\)-decomposable \(\mathbb{R}\)-complement, where \(n_{0}\) is a positive integer and \(\Gamma_{0}\subset[0,1]\) is a finite set depending only on \(d,b,\beta\) and \(\Gamma\)._
Proof.: **Step 1**. In this step, we show that \(-(K_{X}+B)\) is semi-ample.
Let \((X,C)\) be a klt \(\mathbb{R}\)-complement of \((X,B)\) for some effective \(\mathbb{R}\)-divisor \(C\geq B\). Pick some positive real number \(\epsilon_{0}\) such that \((X,C+\epsilon_{0}(C-B))\) is klt. Since \(K_{X}+C+\epsilon_{0}(C-B)\sim_{\mathbb{R}}-\epsilon_{0}(K_{X}+B)\) is nef, we see that \(K_{X}+C+\epsilon_{0}(C-B)\) is semi-ample by [1, Lemma 2.9.1] and hence \(-(K_{X}+B)\) is also semi-ample. Denote by \(f:X\to Z\) the ample model of \(-(K_{X}+B)\) and \(F_{0}\) a general fiber of \(f\). Moreover, by [1, Proposition 3.21], \(f_{\infty}\) is birational to \(f\), so there exists a naturally induced morphism \(h_{0}:F\to F_{0}\) such that \(b(K_{X}+B)|_{F_{0}}\sim 0\).
If \(\dim Z=0\), then \(b(K_{X}+B)\sim 0\) and thus \((X,B)\) has a \((b,\{1\})\)-decomposable \(\mathbb{R}\)-complement. Therefore, we may assume that \(\dim Z>0\).
**Step 2**. In this step, we construct finite sets \(\Gamma_{0}\subset(0,1]\) and \(\Gamma_{2}^{\prime}\subset[0,1]\cap\mathbb{Q}\) depending only on \(d,b\) and \(\Gamma\).
Let \(\alpha\) be a positive real number such that
\[\alpha<\min\left\{|\gamma-\frac{i}{b}|>0\mid\gamma\in\bar{\Gamma},i\in \mathbb{Z}_{\geq 0}\right\}.\]
By [1, Theorem 5.20], there exist a finite set \(\Gamma_{1}\) depending only on \(d,\alpha\) and \(\Gamma\), and an \(\mathbb{R}\)-divisor \(\bar{B}\) on \(X\), such that
* \(\bar{B}\in\Gamma_{1}\),
* \(\alpha\operatorname{Supp}B\geq\bar{B}-B\geq 0\), and
* \((X,\bar{B})\) is \(\mathbb{R}\)-complementary.
In particular, for any component \(S\) of \(B\) such that \(b\operatorname{mult}_{S}B\in\mathbb{Z}\), \(\operatorname{mult}_{S}\bar{B}=\operatorname{mult}_{S}B\). Hence \((\bar{B})^{h}=B^{h}\). By [1, Theorem 5.16], there exist a point \(\mathbf{v}_{0}:=(v_{1}^{0},\dots,v_{m}^{0})\) and an open subset \(V\) of the rational envelope of \(\mathbf{v}_{0}\) (that is the smallest affine subspace containing \(\mathbf{v}_{0}\) which is defined over \(\mathbb{Q}\)) in \(\mathbb{R}^{m}\) depending only on \(d\) and \(\Gamma_{1}\), and Weil divisors \(B_{1},\dots,B_{m}\geq 0\) on \(X\), such that \(B(\mathbf{v}_{0})=\bar{B}\), and \((X,B(\mathbf{v}))\) is \(\mathbb{R}\)-complementary for any \(\mathbf{v}\in V\), where \(B(\mathbf{v}):=\sum_{i=1}^{m}v_{i}B_{i}\) for any \(\mathbf{v}:=(v_{1},\dots,v_{m})\in\mathbb{R}^{m}\). Moreover, possibly replacing \(V\), we may assume that
\[B(\mathbf{v})\geq D:=\bar{B}-B\text{, and }|B(\mathbf{v})-\bar{B}|<\alpha\]
for any \(\mathbf{v}\in V\). We remark that \((B(\mathbf{v}))^{h}=(\bar{B})^{h}=B^{h}\) for any \(\mathbf{v}\in V\).
Pick points \(\mathbf{v}_{1},\dots,\mathbf{v}_{k}\in V\cap\mathbb{Q}^{m}\), such that \(\mathbf{v}_{0}\) is in the interior of the convex hull spanned by \(\mathbf{v}_{1},\dots,\mathbf{v}_{k}\). For any integer \(1\leq i\leq k\), set
\[B^{(i)}:=B(\mathbf{v}_{i})-D\text{ and }\bar{B}^{(i)}:=B(\mathbf{v}_{i}).\]
There exist finite sets \(\Gamma_{0}:=\{a_{1},\ldots,a_{k}\}\cup\{1\}\subset(0,1]\) and \(\Gamma_{2}^{\prime}\subset[0,1]\cap\mathbb{Q}\) such that \(\bar{B}^{(i)}\in\Gamma_{2}^{\prime}\) for any integer \(1\leq i\leq k\), \(\sum_{i=1}^{k}a_{i}=1\), and \(\sum_{i=1}^{k}a_{i}\mathbf{v}_{i}=\mathbf{v}_{0}\). In particular, \(\sum_{i=1}^{k}a_{i}B^{(i)}=B\).
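The last equality can be checked directly from the linearity of \(\mathbf{v}\mapsto B(\mathbf{v})\):

\[\sum_{i=1}^{k}a_{i}B^{(i)}=\sum_{i=1}^{k}a_{i}\left(B(\mathbf{v}_{i})-D\right)=B\left(\sum_{i=1}^{k}a_{i}\mathbf{v}_{i}\right)-D=B(\mathbf{v}_{0})-D=\bar{B}-(\bar{B}-B)=B,\]

where we used \(\sum_{i=1}^{k}a_{i}=1\) and \(D=\bar{B}-B\).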
**Step 3**. In this step, we run an MMP.
Let \((X,\bar{B}^{(i)}+\bar{G}_{i})\) be an \(\mathbb{R}\)-complement of \((X,\bar{B}^{(i)})\) for some effective \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor \(\bar{G}_{i}\) for any integer \(1\leq i\leq k\). Pick a positive real number \(\epsilon\) such that \((X,C+\epsilon\bar{G}_{i})\) is klt. Then we may run an MMP on \(K_{X}+C+\epsilon\bar{G}_{i}\) over \(Z\), which terminates with a model \(W_{i}\) over \(Z\) such that \(K_{W_{i}}+C_{W_{i}}+\epsilon\bar{G}_{W_{i}}\) is semi-ample over \(Z\), where \(C_{W_{i}}\) and \(\bar{G}_{W_{i}}\) are the strict transforms of \(C\) and \(\bar{G}_{i}\) on \(W_{i}\) respectively. This MMP is also an MMP on \(-(K_{X}+\bar{B}^{(i)})\) over \(Z\), hence \(-(K_{W_{i}}+\bar{B}_{W_{i}}^{(i)})\) is semi-ample over \(Z\). Let \(g_{i}:W_{i}\to T_{i}\) be the ample model of \(-(K_{W_{i}}+\bar{B}_{W_{i}}^{(i)})\) over \(Z\). Recall that by construction and by Lemma 5.1, \(B-\bar{B}^{(i)}\) is vertical over \(Z\). As \(-(K_{X}+\bar{B}^{(i)})\sim_{\mathbb{R},Z}B-\bar{B}^{(i)}\), one can see that \(F_{0}\) is isomorphic to a general fiber of \(W_{i}\to Z\) by Lemma 2.9, and \(T_{i}\) is birational to \(Z\).
**Step 4**. In this step, we construct a positive integer \(p\), and a DCC set \(\Gamma_{2}\) depending only on \(d,b,\beta\) and \(\Gamma_{2}^{\prime}\), and make a choice of a moduli part \(\mathbf{M}_{i}\) of the canonical bundle formula for \((W_{i},\bar{B}_{W_{i}}^{(i)})\) over \(T_{i}\) such that
\[p\left(K_{W_{i}}+\bar{B}_{W_{i}}^{(i)}\right)\sim pg_{i}^{*}\left(K_{T_{i}}+B _{T_{i}}+\mathbf{M}_{i,T_{i}}\right),\]
\(B_{T_{i}}\in\Gamma_{2}\), and \(p\mathbf{M}_{i}\) is b-Cartier, where \(B_{T_{i}}\) is the discriminant part of the canonical bundle formula for \((W_{i},\bar{B}_{W_{i}}^{(i)})\) over \(T_{i}\).
To see this, we first claim that
1. \(b\) is the non-vanishing order of \(-(K_{W_{i}}+\bar{B}_{W_{i}}^{(i)})|_{F_{i}^{\prime}}\), and \(b(K_{W_{i}}+\bar{B}_{W_{i}}^{(i)})|_{F_{i}^{\prime}}\sim 0\), and
2. \(\tilde{F}\) is a smooth model of the cover of \(F_{i}^{\prime}\) associated to the unique divisor of \(|b(K_{W_{i}}+\bar{B}_{W_{i}}^{(i)})|_{F_{i}^{\prime}}|\), where \(F_{i}^{\prime}\) is a general fiber of \(g_{i}\).
Assume that \(b_{0}\) is the non-vanishing order of \((K_{X}+B)|_{F_{0}}\sim_{\mathbb{Q}}0\). Then \(b_{0}(K_{X}+B)|_{F_{0}}\sim 0\) and \(b_{0}\leq b\). Since \(h_{0}^{*}((K_{X}+B)|_{F_{0}})=h^{*}(K_{X}+B)|_{F}\), we have \(b_{0}=b\) and \(bh^{*}(K_{X}+B)|_{F}\sim 0\). Therefore \(b\) is also the non-vanishing order of \((K_{X}+B)|_{F_{0}}\). Note that
\[\operatorname{Spec}\left(\bigoplus_{i=0}^{b-1}\mathcal{O}_{F}\left(\lfloor ih^ {*}(K_{X}+B)|_{F}\rfloor\right)\right)\to F\;\;(\text{resp.,}\;\operatorname{ Spec}\left(\bigoplus_{i=0}^{b-1}\mathcal{O}_{F_{0}}\left(\lfloor i(K_{X}+B)|_{F_{0}} \rfloor\right)\right)\to F_{0})\]
is the cover associated to the unique divisor of \(|bh^{*}(K_{X}+B)|_{F}|\) (resp., of \(|b(K_{X}+B)|_{F_{0}}|\)), see [13, §2.3]. By [13, II Lemma 2.11], there is a natural isomorphism
\[(h_{0})_{*}\mathcal{O}_{F}\left(\lfloor i(h^{*}(K_{X}+B)|_{F})\rfloor\right) \to\mathcal{O}_{F_{0}}\left(\lfloor i((K_{X}+B)|_{F_{0}})\rfloor\right).\]
Thus \(\tilde{F}\) is a smooth model of the cover of \(F_{0}\) associated to the unique divisor of \(|b(K_{X}+B)|_{F_{0}}|\). Furthermore, by our construction, \(X\) is isomorphic to \(W_{i}\) and \(B=\bar{B}^{(i)}\) over the generic point of \(Z\), and \(F_{i}^{\prime}\) is isomorphic to \(F_{0}\) (see Lemma 2.9). Thus the claim holds.
Since \((X,B)\) has a klt \(\mathbb{R}\)-complement, \((W_{i},B_{W_{i}})\) has a klt \(\mathbb{R}\)-complement, where \(B_{W_{i}}\) is the strict transform of \(B\) on \(W_{i}\). Moreover, as \(B_{W_{i}}=\bar{B}_{W_{i}}^{(i)}\) over the generic point of \(T_{i}\), we
can see that \((W_{i},\bar{B}^{(i)}_{W_{i}})\) is klt over the generic point of \(T_{i}\). Recall that \(\Gamma_{2}^{\prime}\subset[0,1]\cap\mathbb{Q}\) is a finite set and \(\bar{B}^{(i)}_{W_{i}}\in\Gamma_{2}^{\prime}\). By Proposition 3.4, we may find a positive integer \(p\), and a DCC set \(\Gamma_{2}\) depending only on \(d,b,\beta\) and \(\Gamma_{2}^{\prime}\) such that \(\bar{\Gamma}_{2}\subset[0,1]\cap\mathbb{Q}\), and make a choice of a moduli part \(\mathbf{M}_{i}\) of the canonical bundle formula for \((W_{i},\bar{B}^{(i)}_{W_{i}})\) over \(T_{i}\), such that
\[p\left(K_{W_{i}}+\bar{B}^{(i)}_{W_{i}}\right)\sim pg_{i}^{*}\left(K_{T_{i}}+B_ {T_{i}}+\mathbf{M}_{i,T_{i}}\right),\]
\(B_{T_{i}}\in\Gamma_{2}\), and \(p\mathbf{M}_{i}\) is b-Cartier, where \(B_{T_{i}}\) is the discriminant part of the canonical bundle formula for \((W_{i},\bar{B}^{(i)}_{W_{i}})\) over \(T_{i}\). Note that \((T_{i},B_{T_{i}}+\mathbf{M}_{i})\) is glc.
**Step 5**. In this step, we find an integer \(n_{0}\) which only depends on \(d,bp\) and \(\Gamma_{2}\) satisfying our requirements and thus finish the proof.
We first show that \(T_{i}\) is of Fano type. Indeed, according to [1, Theorem 0.2], there exist klt pairs \((Z,\Delta)\) and \((T_{i},\Delta_{T_{i}})\) such that
\[K_{X}+B\sim_{\mathbb{R}}f^{*}(K_{Z}+\Delta)\text{ and }K_{W_{i}}+B_{W_{i}} \sim_{\mathbb{R}}g_{i}^{*}(K_{T_{i}}+\Delta_{T_{i}}).\]
In particular, \(K_{T_{i}}+\Delta_{T_{i}}\sim_{\mathbb{R}}\tau_{i}^{*}(K_{Z}+\Delta)\) where \(\tau_{i}\) denotes the induced morphism \(T_{i}\to Z\). Since \(Z\) is the ample model of \(-(K_{X}+B)\), \(-(K_{Z}+\Delta)\) is ample. It follows that \(-(K_{T_{i}}+\Delta_{T_{i}})\) is big and nef as \(\tau_{i}\) is a birational morphism. Thus \(T_{i}\) is of Fano type.
Since \((W_{i},\bar{B}^{(i)}_{W_{i}})\) is \(\mathbb{R}\)-complementary, \((T_{i},B_{T_{i}}+\mathbf{M}_{i})\) is \(\mathbb{R}\)-complementary, that is there is an \(\mathbb{R}\)-divisor \(P_{i}\geq 0\) such that \((T_{i},B_{T_{i}}+P_{i}+\mathbf{M}_{i})\) is glc and \(K_{T_{i}}+B_{T_{i}}+P_{i}+\mathbf{M}_{i,T_{i}}\sim_{\mathbb{R}}0\). As \(T_{i}\) is of Fano type, by [1, Theorem 1.10] (see also [1, Theorem 1.3]), there exists a positive integer \(n_{0}\) divisible by \(bp\) depending only on \(d,bp\) and \(\Gamma_{2}\), and a \(\mathbb{Q}\)-divisor \(B_{T_{i}}^{+}\geq B_{T_{i}}\) on \(T_{i}\), such that \((T_{i},B_{T_{i}}^{+}+\mathbf{M}_{i})\) is glc and \(n_{0}(K_{T_{i}}+B_{T_{i}}^{+}+\mathbf{M}_{i,T_{i}})\sim 0\).
It is enough to show that \((W_{i},\bar{B}^{(i),+}_{W_{i}}:=\bar{B}^{(i)}_{W_{i}}+g_{i}^{*}(B_{T_{i}}^{+}- B_{T_{i}}))\) is a monotonic \(n_{0}\)-complement of \((W_{i},\bar{B}^{(i)}_{W_{i}})\). Indeed, by [1, Corollary 7.18(1)], \((W_{i},\bar{B}^{(i),+}_{W_{i}})\) is lc. Since
\[n_{0}\left(K_{W_{i}}+\bar{B}^{(i),+}_{W_{i}}\right)=n_{0}\left(K_ {W_{i}}+\bar{B}^{(i)}_{W_{i}}\right)+n_{0}g_{i}^{*}\left(B_{T_{i}}^{+}-B_{T_{ i}}\right)\] \[\sim n_{0}g_{i}^{*}\left(K_{T_{i}}+B_{T_{i}}+\mathbf{M}_{i,T_{i}} \right)+n_{0}g_{i}^{*}\left(B_{T_{i}}^{+}-B_{T_{i}}\right)\] \[=n_{0}g_{i}^{*}\left(K_{T_{i}}+B_{T_{i}}^{+}+\mathbf{M}_{i,T_{i}} \right)\sim 0,\]
one can see that \((W_{i},\bar{B}^{(i),+}_{W_{i}})\) is a monotonic \(n_{0}\)-complement of \((W_{i},\bar{B}^{(i)}_{W_{i}})\). Remark that by **Step 3**, \(X\dashrightarrow W_{i}\) is \(-(K_{X}+\bar{B}^{(i)})\)-negative. Therefore \((X,\bar{B}^{(i)})\) also has a monotonic \(n_{0}\)-complement \((X,\bar{B}^{(i),+})\). This immediately implies that \((X,\sum_{i=1}^{k}a_{i}\bar{B}^{(i),+})\) is an \((n_{0},\Gamma_{0})\)-decomposable \(\mathbb{R}\)-complement of \((X,B)\). We may finish the proof.
Proof of Theorem 4.1.: By Theorem 4.2, there exist a positive integer \(n_{0}\) and a finite set \(\Gamma_{0}\subset(0,1]\) depending only on \(d,b,\beta,\Gamma\), such that \((X,B)\) has an \((n_{0},\Gamma_{0})\)-decomposable \(\mathbb{R}\)-complement. Theorem 4.1 follows from Diophantine approximation as in the proof of [1, Theorem 1.8] (see [1, Section 6] for details).
Proof of Theorem 1.6.: It follows from Theorem 4.1.
## 5. Effective Iitaka fibrations
In this section, we prove Theorem 1.4.
**Lemma 5.1**.: _Assume that \(f:X\to Z\) is a contraction between normal quasi-projective varieties, and \(D\) is an \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor on \(X\). Suppose that either_
* \(\dim Z=0\) _and_ \(D\sim_{\mathbb{R}}0\)_, or_
* \(\dim Z>0\) _and_ \(D\sim_{\mathbb{R}}f^{*}D_{Z}\) _for some big_ \(\mathbb{R}\)_-Cartier_ \(\mathbb{R}\)_-divisor_ \(D_{Z}\) _on_ \(Z\)_._
_Then \(\kappa(X,D)\geq 0\) if and only if \(D^{h}\) is a \(\mathbb{Q}\)-divisor._
Proof.: We first assume that \(\kappa(X,D)\geq 0\). Suppose on the contrary that \(D^{h}\) is not a \(\mathbb{Q}\)-divisor. Let \(F\) be a very general fiber of \(f\). Then \(\{mD|_{F}\}=\{mD^{h}|_{F}\}\neq 0\) for any positive integer \(m\). By our assumption, \(D|_{F}\sim_{\mathbb{R}}0\) and
\[\lfloor mD|_{F}\rfloor=mD|_{F}-\{mD|_{F}\}\sim_{\mathbb{R}}-\{mD|_{F}\}\]
is not pseudo-effective for any positive integer \(m\). Thus \(\lfloor mD\rfloor\) is not pseudo-effective for any positive integer \(m\), which implies that \(\kappa(X,D)=-\infty\), a contradiction. Therefore, \(D^{h}\) is a \(\mathbb{Q}\)-divisor.
Now suppose that \(D^{h}\) is a \(\mathbb{Q}\)-divisor. If \(\dim Z=0\), then \(D\sim_{\mathbb{Q}}0\) and thus \(\kappa(X,D)=0\). Assume that \(\dim Z>0\). Let \(b\) be a positive integer, such that \(bD\sim 0\) on the generic fiber of \(f\). Then there exists \(\alpha\in\mathcal{K}(X)\) such that \(bD+(\alpha)\) is vertical over \(Z\). Since \(bD+(\alpha)\sim_{\mathbb{R},Z}0\), \(bD+(\alpha)=f^{*}D^{\prime}_{Z}\) for some \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor \(D^{\prime}_{Z}\) on \(Z\) by [13, Lemma 2.11]. Thus \(D^{\prime}_{Z}\sim_{\mathbb{R}}bD_{Z}\), so \(D^{\prime}_{Z}\) is big. By Lemma 2.18,
\[\kappa(X,D)=\kappa(X,bD)=\kappa(X,bD+(\alpha))=\kappa(Z,D^{\prime}_{Z})=\dim Z >0,\]
and we are done.
Proof of Theorem 1.4.: Let \(\Gamma\subset[0,1]\) be a DCC set. Without loss of generality, we may assume that \(1\in\Gamma\). Assume that \((X,B)\) is a projective lc pair of dimension \(d\) such that \(\kappa(X,K_{X}+B)\geq 0\) and \(B\in\Gamma\). Since we assume Conjecture 2.11 in dimension \(d\), \((X,B)\) has a good minimal model \((X^{\prime},B^{\prime})\). Possibly replacing \((X,B)\) with \((X^{\prime},B^{\prime})\), we may assume that \(K_{X}+B\) is semi-ample. Let \(f:X\to Z\) be the ample model of \(K_{X}+B\).
Suppose that \(\dim Z=0\). Then \(B\in\Gamma_{0}\) for some finite set \(\Gamma_{0}\subset\Gamma\cap\mathbb{Q}\) which only depends on \(d\) and \(\Gamma\) by the global ACC [12, Theorem D] and Lemma 5.1, and \(K_{X}+B\sim_{\mathbb{R}}0\). As Conjecture 1.3 holds in dimension \(d\), we may find a positive integer \(m_{0}\) depending only on \(d\) and \(\Gamma_{0}\) such that \(m_{0}(K_{X}+B)\sim 0\). In what follows, we may assume that \(\dim Z>0\).
According to Proposition 3.1, there exist a positive integer \(p\) and a DCC set \(\Gamma^{\prime}\subset[0,1]\) depending only on \(d\) and \(\Gamma\), and a choice of the moduli part \(\mathbf{M}_{f}\) of the canonical bundle formula for \((X,B)\) over \(Z\), such that \(B_{Z}\in\Gamma^{\prime}\), \(p\mathbf{M}_{f}\) is b-Cartier, and
\[p(K_{X}+B)\sim pf^{*}\left(K_{Z}+B_{Z}+\mathbf{M}_{f,Z}\right),\]
where \(B_{Z}\) is the discriminant part of the canonical bundle formula for \((X,B)\) over \(Z\). Recall that \(Z\) is the ample model of \(K_{X}+B\), hence \(K_{Z}+B_{Z}+\mathbf{M}_{f,Z}\) is big. By [1, Theorem 1.3], there is a positive integer \(m_{1}\) depending only on \(d,p\), and \(\Gamma^{\prime}\), such that
\(|\lfloor m_{1}(K_{Z}+B_{Z}+\mathbf{M}_{f,Z})\rfloor|\) defines a birational map. By [14, II 2.11 Lemma], we have that
\[H^{0}\left(X,\lfloor pm_{1}(K_{X}+B)\rfloor\right) =H^{0}\left(X,\lfloor pm_{1}f^{*}\left(K_{Z}+B_{Z}+\mathbf{M}_{f,Z} \right)\rfloor\right)\] \[\cong H^{0}(Z,\lfloor pm_{1}(K_{Z}+B_{Z}+\mathbf{M}_{f,Z})\rfloor).\]
Therefore \(|\lfloor pm_{1}(K_{X}+B)\rfloor|\) defines a map which is birational to \(f_{\infty}\).
Let \(m:=pm_{0}m_{1}\); then \(m\) satisfies the required property.
Proof of Corollary 1.5.: It follows from Theorem 1.4, the existence of good minimal models in dimension \(\leq 3\)[13, 13], and Theorem 2.14.
## 6. Existence of decomposable Iitaka fibrations
In this section, we recall the definition of _invariant Iitaka fibrations_, which generalizes Iitaka fibrations to the category of all pairs with non-negative invariant Iitaka dimensions. We then show the existence of decomposable Iitaka fibrations.
**Definition 6.1** (Invariant Iitaka fibrations).: Let \(X\) be a normal projective variety and \(D\) an \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor on \(X\) such that \(\kappa_{\iota}(X,D)\geq 0\). A morphism \(f_{\infty}:X_{\infty}\to Z_{\infty}\) between smooth varieties is called an _invariant Iitaka fibration of \(D\)_ if there exists an \(\mathbb{R}\)-Cartier \(\mathbb{R}\)-divisor \(D^{\prime}\) on \(X\), such that \(D\sim_{\mathbb{R}}D^{\prime}\), \(\kappa(X,D^{\prime})\geq 0\), and \(f_{\infty}\) is an Iitaka fibration of \(D^{\prime}\).
By [15, Lemma 2.3], an invariant Iitaka fibration of \(D\) always exists, and is independent of the choice of \(D^{\prime}\).
We propose the following conjecture, which is a little stronger than Conjecture 1.8.
**Conjecture 6.2**.: _Let \(d\) be a positive integer and \(\Gamma\subset[0,1]\) a DCC set. Then there exist a positive integer \(m\), a finite set \(\Gamma_{0}\subset(0,1]\), and a DCC set \(\Gamma^{\prime}\subset[0,1]\) depending only on \(d\) and \(\Gamma\) satisfying the following. Assume that \((X,B)\) is an lc pair of dimension \(d\) such that \(B\in\Gamma\), \(\kappa_{\iota}(X,K_{X}+B)\geq 0\), and either \(\Gamma\) is a finite set or all components of \(B\) are \(\mathbb{Q}\)-Cartier. Then_
1. _(Weak version)_ \((X,B)\) _has a_ \((\Gamma_{0},\Gamma^{\prime})\)_-decomposable Iitaka fibration._
2. _(Strong version)_ \((X,B)\) _has an_ \((m,\Gamma_{0},\Gamma^{\prime})\)_-decomposable Iitaka fibration._
**Theorem 6.3**.: _Let \(d\) be a positive integer. Assume that the non-vanishing conjecture (Conjecture 2.12) holds in dimension \(d\). Then:_
1. _Conjecture_ 6.2_(1) holds in dimension_ \(d\)_._
2. _Assume the effective log Iitaka conjecture (Conjecture_ 1.2_) holds in dimension_ \(d\)_. Then Conjecture_ 6.2_(2) holds in dimension_ \(d\)_._
Proof of Theorem 6.3.: Let \(\Gamma\subset[0,1]\) be a DCC set and possibly replacing \(\Gamma\) with \(\Gamma\cup\{1\}\), we may assume that \(1\in\Gamma\).
Assume that \((X,B)\) is an lc pair of dimension \(d\) such that \(B\in\Gamma\), \(\kappa_{\iota}(X,K_{X}+B)\geq 0\), and either \(\Gamma\) is a finite set or every component of \(B\) is \(\mathbb{Q}\)-Cartier. Let \(f_{\infty}:X_{\infty}\to Z_{\infty}\) be an invariant Iitaka fibration of \(K_{X}+B\), \(h:X_{\infty}\to X\) the induced morphism, and \(F\) a very
general fiber of \(f_{\infty}\). Let \(B_{\infty}:=h_{*}^{-1}B+E\), where \(E\) is the sum of all the \(h\)-exceptional prime divisors. Then \((X_{\infty},B_{\infty})\) is log smooth, \(\kappa_{\iota}(X_{\infty},K_{X_{\infty}}+B_{\infty})=\kappa_{\iota}(X,K_{X}+B)\), and \(\kappa_{\iota}(F,(K_{X_{\infty}}+B_{\infty})|_{F})=0\).
If \(\Gamma\) is a finite set, then we let \(\tilde{\Gamma}:=\Gamma\), otherwise we let \(\tilde{\Gamma}:=\emptyset\). We let \(\mathbf{v}_{0}:=(v_{1}^{0},\ldots,v_{m}^{0}),g\), and \(V\) be as in [14, Proposition 5.1] which depend only on \(d,\Gamma,\tilde{\Gamma}\). We may write \(B_{\infty}=\sum b_{j}B_{\infty}^{(j)}\), where \(B_{\infty}^{(j)}\) are the irreducible components of \(B_{\infty}\). Then there exist distinct Weil divisors \(B_{\infty,1},\ldots,B_{\infty,m}\geq 0\) on \(X_{\infty}\), such that
* \(g(\gamma)\geq\gamma\) for any \(\gamma\in\bar{\Gamma}\), and \(g(\gamma^{\prime})=\gamma^{\prime}\) for any \(\gamma^{\prime}\in\tilde{\Gamma}\),
* \(B_{\infty}(\mathbf{v}_{0})=\sum g(b_{j})B_{\infty}^{(j)}\), where \(B_{\infty}(\mathbf{v}):=\sum_{i=1}^{m}v_{i}B_{\infty,i}\) for any \(\mathbf{v}:=(v_{1},\ldots,v_{m})\in\mathbb{R}^{m}\),
* both \((X_{\infty},B_{\infty}(\mathbf{v}))\) and \((X_{\infty},B_{\infty}(\mathbf{v})-D_{\infty})\) are lc for any \(\mathbf{v}\in V\), where \(D_{\infty}:=B_{\infty}(\mathbf{v}_{0})-B_{\infty}\geq 0\), and
* \(\kappa(X_{\infty},K_{X_{\infty}}+B_{\infty}(\mathbf{v})-D_{\infty})=\kappa_{\iota}(X_{\infty},K_{X_{\infty}}+B_{\infty})=\dim Z_{\infty}\) for any \(\mathbf{v}\in V\cap\mathbb{Q}^{m}\).
Let \(D_{F}:=D_{\infty}|_{F}\), \(D:=h_{*}D_{\infty}\), \(B_{F}(\mathbf{v}):=B_{\infty}(\mathbf{v})|_{F}\) and \(B(\mathbf{v}):=h_{*}B_{\infty}(\mathbf{v})\) for any \(\mathbf{v}\in\mathbb{R}^{m}\). By the construction of \(\mathbf{v}_{0},g\), and \(V\), we have
* \((X,B(\mathbf{v})-D)\) is lc for any \(\mathbf{v}\in V\), and
* \(\kappa(F,K_{F}+B_{F}(\mathbf{v})-D_{F})=\kappa_{\iota}(F,(K_{X_{\infty}}+B_{ \infty})|_{F})=0\) for any \(\mathbf{v}\in V\cap\mathbb{Q}^{m}\).
Note that if \(\Gamma\) is a finite set, then \(D_{\infty}=0\) and \(D_{F}=0\). By the definition of Iitaka fibrations, \(f_{\infty}\) is an Iitaka fibration of \(K_{X_{\infty}}+B_{\infty}(\mathbf{v})-D_{\infty}\) for any \(\mathbf{v}\in V\cap\mathbb{Q}^{m}\). As \((X,B(\mathbf{v})-D)\) is lc for any \(\mathbf{v}\in V\),
\[K_{X_{\infty}}+B_{\infty}(\mathbf{v})-D_{\infty}-h^{*}(K_{X}+B(\mathbf{v})-D)\geq 0\]
and is \(h\)-exceptional for any \(\mathbf{v}\in V\). Therefore \(f_{\infty}\) is an Iitaka fibration of \(K_{X}+B(\mathbf{v})-D\) for any \(\mathbf{v}\in V\cap\mathbb{Q}^{m}\).
Let \(\mathbf{v}_{1},\ldots,\mathbf{v}_{k}\in V\cap\mathbb{Q}^{m}\) be rational points, such that \(\mathbf{v}_{0}\) is contained in the interior of the convex hull of \(\mathbf{v}_{1},\ldots,\mathbf{v}_{k}\). There exist a DCC set \(\Gamma^{\prime}\ni 1\) and a finite set \(\Gamma_{0}:=\{a_{1},\ldots,a_{k}\}\subset(0,1]\) depending only on \(d\) and \(\Gamma\) such that \(B_{i}:=B(\mathbf{v}_{i})-D\in\Gamma^{\prime}\) for any integer \(1\leq i\leq k\), and \(\sum_{i=1}^{k}a_{i}\mathbf{v}_{i}=\mathbf{v}_{0}\). In particular,
\[K_{X}+B=\sum_{i=1}^{k}a_{i}\left(K_{X}+B_{i}\right).\]
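Indeed, by the linearity of \(\mathbf{v}\mapsto B_{\infty}(\mathbf{v})\) and since \(E\) is \(h\)-exceptional (so \(h_{*}E=0\)),

\[\sum_{i=1}^{k}a_{i}B_{i}=h_{*}\left(\sum_{i=1}^{k}a_{i}B_{\infty}(\mathbf{v}_{i})-D_{\infty}\right)=h_{*}\left(B_{\infty}(\mathbf{v}_{0})-D_{\infty}\right)=h_{*}B_{\infty}=B,\]

which gives the displayed decomposition as \(\sum_{i=1}^{k}a_{i}=1\).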
Recall that \(f_{\infty}\) is an Iitaka fibration of \(K_{X}+B(\mathbf{v})-D\) for any \(\mathbf{v}\in V\cap\mathbb{Q}^{m}\). Thus \(f_{\infty}\) is an Iitaka fibration of \(K_{X}+B_{i}\) for any integer \(1\leq i\leq k\). We conclude that \((X,B)\) has a \((\Gamma_{0},\Gamma^{\prime})\)-decomposable Iitaka fibration, which is (1).
Now suppose that Conjecture 1.2 holds in dimension \(d\). Then there exists a positive integer \(m\) depending only on \(d\) and \(\Gamma^{\prime}\), such that the map defined by \(|\lfloor m(K_{X}+B_{i})\rfloor|\) is birational to \(f_{\infty}\) for any integer \(1\leq i\leq k\). Therefore (2) holds.
**Corollary 6.4**.: _Conjecture 6.2 holds when \(d\leq 3\)._
Proof.: It follows from Theorem 6.3 and the existence of good minimal models in dimension \(\leq 3\)[13, 14].
**Theorem 6.5**.: _Let \(d\) be a positive integer. Assume that the good minimal model conjecture (Conjecture 2.11) and the existence of complements (Conjecture 1.3) hold in dimension \(d\). Then Conjecture 6.2 holds in dimension \(d\)._
Proof.: It follows from Theorems 1.4 and 6.3.
Proof of Theorem 1.9.: It is a special case of Theorem 6.3.
Proof of Corollary 1.10.: It is a special case of Corollary 6.4.
Proof of Theorem 1.11.: It is a special case of Theorem 6.5.
|
2303.12808 | PACO: Provocation Involving Action, Culture, and Oppression | In India, people identify with a particular group based on certain attributes
such as religion. The same religious groups are often provoked against each
other. Previous studies show the role of provocation in increasing tensions
between India's two prominent religious groups: Hindus and Muslims. With the
advent of the Internet, such provocation also surfaced on social media
platforms such as WhatsApp.
By leveraging an existing dataset of Indian WhatsApp posts, we identified
three categories of provoking sentences against Indian Muslims. Further, we
labeled 7,000 sentences for three provocation categories and called this
dataset PACO. We leveraged PACO to train a model that can identify provoking
sentences from a WhatsApp post. Our best model is fine-tuned RoBERTa and
achieved a 0.851 average AUC score over five-fold cross-validation.
Automatically identifying provoking sentences could stop provoking text from
reaching out to the masses, and can prevent possible discrimination or violence
against the target religious group.
Further, we studied the provocative speech through a pragmatic lens, by
identifying the dialog acts and impoliteness super-strategies used against the
religious group. | Vaibhav Garg, Ganning Xu, Munindar P. Singh | 2023-03-19T04:39:36Z | http://arxiv.org/abs/2303.12808v1 | # PACO: Provocation Involving Action, Culture, and Oppression
###### Abstract
_Warning: This paper may include examples that can be triggering to some readers, especially to the people of a religious group._
In India, people identify with a particular group based on certain attributes such as religion. The same religious groups are often provoked against each other. Previous studies show the role of provocation in increasing tensions between India's two prominent religious groups: Hindus and Muslims. With the advent of the Internet, such provocation also surfaced on social media platforms such as WhatsApp.
By leveraging an existing dataset of Indian WhatsApp posts, we identified three categories of provoking sentences against Indian Muslims. Further, we labeled 7,000 sentences for three provocation categories and called this dataset Paco. We leveraged Paco to train a model that can identify provoking sentences from a WhatsApp post. Our best model is fine-tuned RoBERTa and achieved 0.851 average AUC score over five-fold cross-validation. Automatically identifying provoking sentences could stop provoking text from reaching out to the masses, and can prevent possible discrimination or violence against the target religious group.
Further, we studied the provocative speech through a pragmatic lens, by identifying the dialog acts and impoliteness super-strategies used against the religious group.
## 1 Introduction
In India, most people identify themselves with a religion, leading to two main groups: Hindus and Muslims, who form 79% and 13% of Indians, respectively [34]. However, the same identifying attribute, religion, is often misused to provoke these groups against each other [33].
In psychology, provocation is considered antecedent to aggression and violence [2, 4]. Hence, in our setting, we define _provoking sentences_ as those that
either make the readers angry (at a religious group) or urge them to take any action (against that religious group). Saha et al. [34] curate a dataset of WhatsApp posts that instill fear of Indian Muslims in the mind of readers. Our investigation found that such fearful posts also contain provoking sentences against the same religious group. Example 1 shows one such WhatsApp post. The sentence colored blue may instill fear among the readers, especially among citizens of the mentioned states due to the possible Islamic acquisition. Whereas, its subsequent sentences (colored red) typecast Muslims and can make the readers angry at them. The readers of such posts can be Muslims themselves but the readers who are provoked belong to other religious groups.
Example 1: Provocation accompanied with fear
"Leave chatting and read this post or else all your life will be left in chatting. In 1378, a part was separated from India, became an Islamic nation - named Iran...and now Uttar Pradesh, Assam and Kerala are on the verge of becoming an Islamic state...People who do love jihad is a Muslim. Those who think of ruining the country - Every single one of them is a Muslim!!! Everyone who does not share this message forward should be a Muslim...."
Our manual investigation of WhatsApp posts reveals the following three types of provoking sentences:
**Provocation involving religious culture:**: Sentences that can make readers annoyed at a religion's scriptures, religious practices, or religious leaders fall in this category. Moreover, the sentences stereotyping all people of that religion are also considered relevant.
**Provocation involving religious oppression:**: Sentences highlighting past misdeeds (either real or fake incidents) of a religious group, such as its violence, domination, or superiority over others.
**Provocation involving action:**: Sentences that urge its readers to act against a religious group. Such actions include violence, discriminating against people of that religion, or boycotting that religious group or their monuments.
Example 2 shows a sentence for each type of provocation. The first sentence can make readers angry at the religious book, Quran (Islamic scripture). We consider all the sentences that target scriptures, leaders, or people of religion as provoking involving religious culture. The second sentence provokes readers by mentioning the past misdeeds of Muslims destroying Hindu temples, hence it falls under the religious oppression category. Whereas, the third sentence asks Hindu readers to not worship tombstones of other religions (Islam according to the context of the post), making it provocation involving action.
**Difference from hate speech:**: Hate speech generally leverages derogatory keywords to take direct digs at the target [19], which is not necessary for
provocative speech. The three sentences in Example 2 show how readers can be provoked without using abusive and derogatory words. Due to this indirect nature of provocative speech, the identification of provoking sentences is even more difficult.
Provoking sentences can be disturbing for the target religious group (Muslims in our case) and can emotionally harm them, especially on platforms such as WhatsApp where there is no content moderation due to end-to-end encryption [34]. On the other hand, non-target groups can end up sharing posts containing provoking sentences. This can even lead to violence against the target religious group [2]. Therefore, we propose the following research questions.
### Research Questions
**RQ\({}_{\text{identify}}\):**: How can we automatically identify three types of provoking sentences from a WhatsApp post?
Since a post can contain multiple types of provoking sentences, we target sentence-level classification in RQ\({}_{\text{identify}}\). Moreover, sentence-level classification can help in pinpointing the sentences (from the whole post) that are responsible for provoking readers.
**RQ\({}_{\text{pragmatics}}\):**: What dialogue acts and impoliteness super-strategies are used in provocative speech?
To understand provoking sentences from an illocutionary perspective, identifying dialogue acts is important [5, 36, 40]. A dialog act is defined as the function of the speaker's utterance and hence reflects their intention behind the dialog [5, 36, 40]. Recent social media research stress on identifying dialog acts specific to the scenario [21, 45]. Therefore, through our qualitative analysis, we find the dialog acts used in the provocation (against Muslims) scenario.
Provoking sentences involving religious culture and oppression can attack Muslims, their scriptures, their leaders, and so on. As a result, such sentences are impolite to Muslim readers. We identify what impoliteness super-strategies
are commonly used in these sentences. Identifying dialogue acts and impoliteness super-strategies provides a pragmatic understanding of the provocative text against Indian Muslims.
### Contributions and Novelty
We make the following contributions:
* Paco, a dataset of 7,000 sentences annotated for three provocation categories: religious culture, religious oppression, and action.
* On Paco, We train Natural Language Processing (NLP) model to identify provoking sentences from a post.
* We identify the characteristics of provocative language used in the religious context. To do so, we uncover dialog acts and impoliteness super-strategies used by the writer of such text.
Prior studies focus on either identifying hate speech or fear speech [3, 7, 14, 34] but not provocative speech. To the best of our knowledge, we are the first ones to computationally identify provocative speech in the religious context. We not only identify provoking sentences (against Muslims) but also study the pragmatics of such provocative language through dialogue acts and impoliteness super-strategies.
### Key Findings
Prior studies [3, 7, 14, 34] focused on identifying hate or fear speech but not provocative speech against a religious group. On the other hand, we trained a transformer-based model that achieved an average of 0.851 AUC-ROC score (Area Under the Receiver Operating Characteristic curve) for five-fold cross validation of Paco.
Our qualitative analysis revealed six types of dialog acts that are used in provocative speech against Muslims. They are as follows: _1) Accusation, 2) Defaming Muslims, 3) Criticizing Islam, 4) Comparison, 5) Commanding, and 6) Motivating_. Moreover, we leveraged Culpeper et al.'s [13] impoliteness model and found that negative impoliteness and bald-on-record are the most prominent super-strategies used across culture and oppression categories.
### Paper Organization
Section 2 lists the related work on hate and fear speech, and discusses how provocative speech is different from both of them. Section 3 describes the steps that we take to address RQidentify, including curation of Paco and training multiple models. Section 4 describes the qualitative analysis to address RQpragmatics. In the end, Section 5 concludes the paper, highlights the limitations of our study, and suggests future directions.
Related Work
In this section, we discuss existing studies related to hate and fear speech and show how our work is different from theirs.
### Hate Speech
There is no all-encompassing definition of hate speech [41]. But typically, hate speech is considered abusive speech or a direct serious attack on an individual or a group based on the attributes such as race, ethnicity, religion, and sexual orientation [19, 37]. Due to direct and abusive attacks, hate speech often contains toxic words such as n*gger and a**hole that are used against the target. Some provoking sentences (especially involving religious culture) may directly attack a religious community and can also fall into hate speech. However, many provoking sentences lie outside the scope of hate speech. This is because provoking sentences don't necessarily have toxic words. For example, _"Question- What is non-violence...?? Answer: Bakra Eid"_ uses sarcastic language to provoke readers against the Muslim tradition of killing goats ('Bakra' in Hindi) during the Eid celebration. As a result, this sentence falls under provocation involving religious culture but not hate speech.
Dangerous speech, a subcategory of hate speech, refers to text that can invoke violence against a group [6]. Such violence-invoking cases also overlap with the provocation involving action sentences to some extent. For example, _"Someday Hindus should be ready to fight against Muslims."_ falls into both dangerous speech and provocation involving action. However, provocation involving action is not limited to violence, but also includes cases of supporting anti-Muslim groups and going against Muslim businesses, organizations, and monuments. For example, _"You buy only from Hindus like all your festivals, etc."_, asks readers to buy goods only from Hindus and not other religious groups. In India, since Muslims are the next largest religious group after Hindus [34], such sentences indirectly urge readers to go against Muslim businesses but don't fall under the umbrella of dangerous speech.
Many studies focus on the automatic identification of hate speech from text [3, 7, 14, 16, 20, 28, 30, 34, 46]. To show that hate speech is different from provocative speech, we test the state-of-the-art hate speech models on Paco and compare their performance with our transformer-based model (shown in Section 3.3).
### Fear Speech and Islamophobia
The dictionary meaning of Islamophobia is fear of, dislike and hate toward, and discrimination against Islam and Muslims [24]. However, most of the studies on Islamophobia only consider the hate aspect [39, 42]. Only a few studies focus on the fear aspect [34]. Saha et al. [34] conduct a large-scale study on fearful messages posted on WhatsApp. They curate 27k posts from Indian public WhatsApp groups and find \(\sim\)8,000 of them to be fearful through manual
annotation and similarity hashing [22]. Further, they train models to identify such fearful messages. We leverage Saha et al.'s [34] dataset to find provoking sentences that either induce anger or call for some action. We focus on a different emotion (anger instead of fear) evoked against Muslims. We acknowledge that sometimes the same sentence may induce anger and fear in different individuals. In Example 2, provoking sentences involving religious culture and oppression may also induce fear among some individuals. However, many provoking sentences such as "You call Jinnah great...Be ashamed of something" predominantly instill anger and don't overlap with fear speech. In addition, cases of provocation involving action are not covered under fear speech.
To show that provocation is different than fear speech, we test Saha et al.'s [34] fear speech model on Paco and compare its performance with our transformer-based model (shown in Section 3.3).
To the best of our knowledge, we are the first ones to study the pragmatics of provocative speech against Muslims, curate a labeled dataset of provoking sentences, and build a computational method for their identification.
## 3 Answering RQidentify
Figure 1 shows the overview of our method. Our method includes two phases. First, we leveraged fearful posts collected by Saha et al. [34] and curated Paco, a dataset of provoking sentences (Section 3.1). We also conducted an exploratory data analysis of Paco (Section 3.2). Second, we trained and evaluated multiple embeddings-based and transformer-based models over five-fold cross-validation of Paco (Section 3.3). Further, we chose the best-performing model for the identification of provoking sentences and compared its performance with the hate speech and fear speech approaches.
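To make the second phase concrete, the following is a minimal sketch of fine-tuning RoBERTa under five-fold cross-validation and scoring each fold with a macro-averaged one-vs-rest AUC, using the Hugging Face `transformers` library. The file name, label encoding, and hyperparameters here are illustrative assumptions, not the configuration evaluated in Section 3.3.

```python
import numpy as np
import pandas as pd
from datasets import Dataset
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["none", "action", "culture", "oppression"]  # assumed label names/order
df = pd.read_csv("paco.csv")                          # assumed columns: "text", "label"
df["labels"] = df["label"].map({name: i for i, name in enumerate(LABELS)})

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    # Dynamic padding is handled later by the Trainer's default data collator.
    return tokenizer(batch["text"], truncation=True, max_length=128)

aucs = []
folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(folds.split(df["text"], df["labels"])):
    train_ds = Dataset.from_pandas(df.iloc[train_idx][["text", "labels"]],
                                   preserve_index=False).map(tokenize, batched=True)
    test_ds = Dataset.from_pandas(df.iloc[test_idx][["text", "labels"]],
                                  preserve_index=False).map(tokenize, batched=True)

    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=len(LABELS))
    args = TrainingArguments(output_dir=f"fold_{fold}", num_train_epochs=3,
                             per_device_train_batch_size=16, report_to="none")
    trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                      tokenizer=tokenizer)
    trainer.train()

    logits = trainer.predict(test_ds).predictions
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
    aucs.append(roc_auc_score(df["labels"].iloc[test_idx], probs,
                              multi_class="ovr", average="macro"))

print("Mean AUC over five folds:", np.mean(aucs))
```

Stratified folds are used in the sketch so that each fold preserves the skewed class distribution of Paco.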
### Curation of Paco
Saha et al. [34] identify posts that can instill fear of Muslims, scraped from Indian public WhatsApp groups discussing politics. Out of 27k curated posts, they find \(\sim\)8,000 fearful posts. However, they only shared 4,782 posts publicly, out of which 1,142 posts were fearful. As indicated in Example 1, fearful posts also contain provoking sentences. Hence, we split 1,142 fearful posts into 25,468 sentences (using the sentence tokenizer of Natural Language Toolkit (NLTK) library [27]) and randomly sampled 7,000 sentences for annotation purposes.
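A minimal sketch of this splitting-and-sampling step is shown below; the input file name, its one-post-per-line layout, and the random seed are assumptions rather than details from the original pipeline.

```python
import random

import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt", quiet=True)  # sentence-tokenizer models
random.seed(0)  # assumed seed; the paper does not report one

# `fearful_posts.txt` is an assumed file holding the 1,142 fearful posts, one per line.
with open("fearful_posts.txt", encoding="utf-8") as f:
    fearful_posts = [line.strip() for line in f if line.strip()]

# Split every post into sentences, keeping track of each sentence's position
# so that annotators can later see the preceding and succeeding sentences.
sentences = []
for post_id, post in enumerate(fearful_posts):
    for sent_id, sentence in enumerate(sent_tokenize(post)):
        sentences.append({"post_id": post_id, "sent_id": sent_id, "text": sentence})

# Randomly sample 7,000 sentences for annotation.
sample = random.sample(sentences, k=7000)
print(len(sentences), "sentences in total;", len(sample), "sampled for annotation")
```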
The first two authors of this paper were the annotators. They were given a sentence and asked to label one of the following types: (i) provocation involving culture, (ii) provocation involving oppression, (iii) provocation involving action, and (iv) none. Along with the sentence to annotate, they were provided its preceding and succeeding sentences to get enough context while labeling.
The annotation was conducted in three phases. In the first phase, the annotators were provided with initial labeling instructions, which they used to label a total of 400 sentences, in four rounds of 100 sentences each. In this phase,
Cohen's kappa score came out to be 0.48 (moderate agreement) [11]. After each round, the annotators discussed their disagreements, which helped in finalizing the labeling instructions. In the second phase, the two annotators labeled 600 sentences using the final instructions, leading to Cohen's kappa score of 0.73 (substantial agreement). In the third phase, the remaining 6,000 sampled sentences were split among the two annotators, such that only one annotator labeled a sentence.
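Inter-annotator agreement of this kind can be computed with scikit-learn's implementation of Cohen's kappa; the two label lists below are illustrative placeholders, not the actual annotations.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from the two annotators for the same batch of sentences;
# the four categories are "action", "oppression", "culture", and "none".
annotator_1 = ["none", "culture", "oppression", "none", "action"]
annotator_2 = ["none", "culture", "none", "none", "action"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")
```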
The final labeling instructions, including definitions and examples of each provocation category, are shown below:
* **Action:** Sentences that urge readers to act against a religious group were labeled as provocation involving action. For example, the following sentence asks readers to support Hindus in the fight against Muslims. This support-seeking is open to interpretation and can mean anything from boycotts to violence against Muslims. Thus, we considered such cases in this category. _"Always support Hindus in the fight of Hindu Muslim, right wrong no matter, all of them later!"_ We also considered subtle cases that indicate some action. For example, an indirect call to shut down a madrasa (Islamic school) is depicted in the following: _"If the madrasa is not closed, after 15 years, more than half of the Muslims of the country will be supportive of the ISIS ideology"_
Figure 1: Overview of our method. First, we curated a dataset of provoking sentences called Paco. Second, we leveraged Paco to train multiple models and choose the best one for RQidentify.
* **Religious oppression:** Sentences describing past oppressive incidents against a religious group were labeled as provocation involving oppression. For example: _"...set fire - heavy damage at Sartala; 3 out of four rooms were destroyed"_
* **Religious culture:** Sentences that can make readers annoyed at a religion's sacred books, leaders, or people of that religion were labeled relevant for this category. Following are a few examples: _"In Kashmir, every person who speaks 'Murdabad' is a Muslim"_ (targeting Muslims) _"...Muslim children are well taught Jihadi Quran in madrasa"_ (targeting Quran, the main Islamic scripture) The first example above alleges that every Muslim in Kashmir says 'Murdabad' (meaning India's dismissal). Moreover, the second example mentions the Quran to be Jihadi, meaning it spreads terrorism. Indirectly, the latter example typecasts all Muslim children to become terrorists. Such sentences invoke anger against a religious group and hence are considered provoking.
The sentences not indicating any of the above types were labeled 'none'. Combining the sentences annotated in the three phases, we curated a dataset of 7,000 sentences. We call this dataset Paco.
**Ethics note:** We annotated the sentences present in fearful posts. Such fearful posts were shared by Saha et al. [34] and did not include any private information of the WhatsApp users who wrote them. Moreover, since these posts were part of public WhatsApp groups, neither Saha et al. [34] nor we were required to
take the consent of WhatsApp users before using the text. We acknowledge that the nature of the text can be disturbing, especially for Muslims. That's why we did not hire crowd workers to annotate sentences. Instead, the authors of this paper who were aware of the nature of the text completed the annotation.
### Exploratory Data Analysis
Figure 2 shows the distribution of the annotated sentences across the four classes: (i) action, (ii) religious oppression, (iii) religious culture, and (iv) none. Out of 7,000 sentences, 433 (6.19%) were provoking sentences involving an action, 1,065 (15.21%) used religious culture, and 1,579 (22.55%) used religious oppression. Moreover, there were 3,923 (56.05%) sentences that did not belong to any of these provocation types (none category).
We also visualized word clouds for each of the provocation categories to know the most frequent words in them. In all three categories, words such as Hindus and Muslims were among the most frequent words. To see other frequent words, we removed the words, Hindus and Muslims, from the text and visualized the word clouds.
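The text does not name the visualization tool; one common choice is the `wordcloud` package, whose stop-word list can be used to drop the group names before rendering (a sketch under that assumption).

```python
from wordcloud import WordCloud, STOPWORDS

def category_wordcloud(sentences, extra_stopwords=("hindus", "muslims", "hindu", "muslim")):
    # Join all sentences of one provocation category into a single text blob.
    text = " ".join(sentences).lower()
    stopwords = STOPWORDS.union(extra_stopwords)   # remove the most frequent group names
    return WordCloud(width=800, height=400, stopwords=stopwords,
                     background_color="white").generate(text)

# category_wordcloud(oppression_sentences).to_file("oppression_cloud.png")
```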
Figure 3 shows the word cloud for the religious oppression category. As this category describes past oppressive incidents, the word cloud shows words related to the victims of such oppression such as girl, women, daughter, kafir, and children. Moreover, it is showing words that describe the act of oppression such as killed, raped, cut, riot, terrorist, jihadi, and bomb.
Figure 4 shows the word cloud for the action category. Words such as country and India are prominent. This is because such sentences ask readers to eradicate Muslims (or their possessions) because of their negative impact on the 'country' or 'India'. In addition, we observed many action-describing words such as make, raise, wake, voice, stand, and fight.
Figure 2: Donut chart showing the number of sentences in each of the four classes.
Figure 5 shows the word cloud for the religious culture category. It has many words describing the target religious group such as islam, islamic, quran, and allah. In addition, it shows some words that are also present in the oppression word cloud, such as jihad, jihadi, and terrorism. These words have a different context for the culture category, basically used to stereotype the people of a religious group.
Moreover, there are words such as people, will, and said that are common in all word clouds.
### Model Training
We considered our problem a multiclass classification task. For multiclass classification, we explored multiple training approaches on Paco. Each sentence was input to a model and the output was one of the four classes: (i) religious culture, (ii) religious oppression, (iii) action, and (iv) none. We discuss multiple approaches and compare their performances below.
Figure 3: Word cloud for sentences of the religious oppression category.
* **TF-IDF**: weighs each word in the corpus according to its Term Frequency (TF) and Inverse Document Frequency (IDF) [8]. Based on the number of unique tokens in Paco, TF-IDF yielded an 11,441-dimensional embedding for each sentence of this dataset. We provided these embeddings as input to multiple classifiers such as Logistic Regression (LR) [18], Random Forest (RF) [44], and Support Vector Machine (SVM) [10], and compared their performance (a minimal sketch of this pipeline is shown after this list).
* **Word2Vec**: converted each word in Paco into a 300-dimensional embedding [26]. In our case, we obtained such word embeddings using the Word2Vec model pretrained on the Google News dataset. To form a sentence embedding, we averaged Word2Vec embeddings for all words present in that sentence. Further, sentence embeddings were provided as inputs to LR, RF, and SVM.
* **GloVe**: yields an embedding for each word in the corpus [31]. We used Stanford's GloVe model, which is trained on the Wikipedia dataset and returns a 100-dimensional word embedding. We averaged these word embeddings in the same way as we did in Word2Vec. Finally, we trained the above three classifiers on GloVe sentence embeddings.
* **Universal Sentence Encoder (USE)**: leverages Deep Averaging Network (DAN) to extract 512-dimensional embeddings for each sentence [9]. We leveraged these embeddings as features of the above classifiers.
* **Transformer-based models**: We leveraged modern transformer-based approaches such as BERT [17], RoBERTa [25], and XLNet [43]. We fine-tuned all these models on our dataset by adding a layer (with softmax activation) in the forward direction, containing four output units (one for each class). Further, these models were trained using a batch size of 32, the maximum sequence length of 256, for five epochs to minimize the cross-entropy loss.

Figure 4: Word cloud for sentences of the action category.
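A minimal scikit-learn sketch of the strongest embedding-based baseline above (TF-IDF features with logistic regression); any hyperparameters beyond the components named in the text are left at library defaults and are our assumptions.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# texts: list of Paco sentences; labels: one of {0: none, 1: action, 2: culture, 3: oppression}.
tfidf_lr = Pipeline([
    ("tfidf", TfidfVectorizer()),                 # ~11,441-dimensional sparse features on Paco
    ("clf", LogisticRegression(max_iter=1000)),   # multiclass logistic regression
])
# tfidf_lr.fit(train_texts, train_labels)
# probas = tfidf_lr.predict_proba(test_texts)     # needed for the AUC-ROC evaluation below
```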
Figure 5: Word cloud for sentences of the religious culture category.

All the above approaches were evaluated on Paco with five-fold cross-validation. Moreover, Paco was divided into five folds in a stratified manner, leading to the same class distribution in each fold. For evaluation, prior works on hate and fear speech [32, 34, 35] leveraged the AUC-ROC metric because it measures the goodness of fit, especially appropriate for imbalanced datasets. Paco also suffers from imbalanced class distribution (Figure 2). Hence, we also leveraged the AUC-ROC score for evaluation.
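A sketch of this evaluation protocol (stratified five-fold splits with one-versus-one AUC-ROC); the estimator passed in is whichever model from the list above is being evaluated.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.base import clone

def cross_validate_auc(model, texts, labels, n_splits=5, seed=0):
    texts, labels = np.asarray(texts), np.asarray(labels)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(texts, labels):   # same class ratio in every fold
        fold_model = clone(model)
        fold_model.fit(texts[train_idx], labels[train_idx])
        probas = fold_model.predict_proba(texts[test_idx])
        # one-versus-one multiclass AUC-ROC, as reported in Table 1
        scores.append(roc_auc_score(labels[test_idx], probas, multi_class="ovo"))
    return np.mean(scores), scores
```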
Table 1 shows the AUC-ROC score (obtained by the one-versus-one method [29]) achieved by each of the above approaches in five folds. Among embedding-based methods, TF-IDF with LR achieved 0.818 as the average AUC-ROC score, followed by Word2Vec with SVM (0.815), USE with SVM (0.812), and GloVe with SVM (0.800). However, all transformer-based approaches such as BERT (0.832 average AUC-ROC), RoBERTa (0.851 average AUC-ROC), and XLNet (0.845 average AUC-ROC) outperformed the embedding-based approaches. Overall, RoBERTa achieved the highest average AUC-ROC score and was chosen as the best model.
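The text reports the fine-tuning hyperparameters but not the toolkit used for the transformer models; assuming the Simple Transformers library that is mentioned below for the fear-speech baseline, the RoBERTa setup (batch size 32, maximum sequence length 256, five epochs) could look roughly as follows.

```python
from simpletransformers.classification import ClassificationModel

# train_df / eval_df are assumed to be pandas DataFrames with columns
# ["text", "labels"], where labels take values in {0, 1, 2, 3}.
model = ClassificationModel(
    "roberta", "roberta-base",
    num_labels=4,
    args={
        "train_batch_size": 32,
        "max_seq_length": 256,
        "num_train_epochs": 5,
        "overwrite_output_dir": True,
    },
    use_cuda=False,  # set True if a GPU is available
)
# model.train_model(train_df)
# result, model_outputs, _ = model.eval_model(eval_df)
```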
On Paco, we checked the performance of existing approaches [3, 7, 14, 34] for fear and hate speech. To do so, we followed three steps. First, in Paco, we considered sentences of three provoking categories (action, religious culture, and religious oppression) as relevant (positive class) and others as irrelevant (none category as the negative class). Second, we leveraged the hate lexicon-based method [7], state-of-the-art hate speech models [3, 14], and fear speech model [34] to identify relevant sentences from Paco. We implemented Saha et al.'s [34] fear speech model using the Simple Transformers library [38] with the same hyperparameters that they used. Moreover, hate speech models [3, 14] were available on the Hugging Face platform [1, 15] and Bohra et al.'s [7] hate lexicons were available on their Github repository [23]. Third, we computed the AUC-ROC score for each of these approaches.
Table 2 shows the AUC-ROC scores of hate and fear speech models. Among them, the highest AUC-ROC score was achieved by Das et al.'s [14] hate speech model (0.726), followed by Aluru et al.'s [3] model (0.668), Saha et al.'s [34] fear identifying approach (0.643), and Bohra et al.'s [7] hate speech lexicons (0.602). In identifying provoking sentences, all these approaches perform worse than our RoBERTa model (average AUC-ROC score of 0.851 on five-fold cross-validation). Moreover, even while using these hate or fear speech models, there is no means of knowing the category of provocation, as opposed to our RoBERTa model.
## 4 Answering RQ\({}_{\text{pragmatics}}\): Qualitative Analysis
We randomly selected 300 provoking sentences (100 from each category) from Paco and read them to identify dialogue acts specific to religious provocation. For 200 of them (sentences of religious culture and oppression), we even identified impoliteness super-strategies used in the text.
### Dialog Acts
Austin [5] defined dialogue acts as the function of the speaker's utterance. Following Austin's [5] work, Searle and Searle [36] identified five types of dialog acts that are generally used in natural language. However, recent social-media research emphasizes identifying dialog acts that are specific to the scenario [21, 45]. For the provocation (against Muslims) scenario, we identified the following dialog acts.
**Accusation:** Sentences that blame Muslims or their leaders for a specific negative event in the past. Some examples are listed below.
_"Entered, broke the Shivling into pieces, and acquired as much property as he could in the solution"_
| Approach | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average |
| --- | --- | --- | --- | --- | --- | --- |
| TF-IDF + LR | 0.810 | 0.825 | 0.832 | 0.805 | 0.820 | 0.818 |
| TF-IDF + RF | 0.790 | 0.793 | 0.812 | 0.786 | 0.803 | 0.780 |
| TF-IDF + SVM | 0.810 | 0.818 | 0.832 | 0.801 | 0.812 | 0.814 |
| Word2Vec + LR | 0.810 | 0.800 | 0.788 | 0.792 | 0.784 | 0.794 |
| Word2Vec + RF | 0.756 | 0.752 | 0.747 | 0.745 | 0.758 | 0.751 |
| Word2Vec + SVM | 0.823 | 0.820 | 0.805 | 0.816 | 0.814 | 0.815 |
| GloVe + LR | 0.784 | 0.785 | 0.775 | 0.755 | 0.766 | 0.773 |
| GloVe + RF | 0.760 | 0.758 | 0.752 | 0.760 | 0.758 | 0.757 |
| GloVe + SVM | 0.809 | 0.812 | 0.793 | 0.794 | 0.795 | 0.800 |
| USE + LR | 0.812 | 0.825 | 0.801 | 0.786 | 0.807 | 0.806 |
| USE + RF | 0.768 | 0.773 | 0.760 | 0.754 | 0.757 | 0.762 |
| USE + SVM | 0.821 | 0.829 | 0.811 | 0.798 | 0.803 | 0.812 |
| BERT | 0.838 | 0.826 | 0.839 | 0.819 | 0.838 | 0.832 |
| RoBERTa | 0.851 | 0.849 | 0.854 | 0.845 | 0.856 | **0.851** |
| XLNet | 0.843 | 0.850 | 0.856 | 0.833 | 0.847 | 0.845 |

Table 1: AUC-ROC score for each of the five folds. Bold indicates the highest average AUC-ROC score among all approaches.

| Approach | AUC-ROC |
| --- | --- |
| Hate speech lexicons (Bohra et al. [7]) | 0.602 |
| Fear speech model (Saha et al. [34]) | 0.643 |
| Hate speech model 1 (Aluru et al. [3]) | 0.668 |
| Hate speech model 2 (Das et al. [14]) | 0.726 |

Table 2: Performance of hate and fear speech approaches on Paco.
**Defaming Muslims:** Sentences that target Muslims for their present behavior or generalize them with a negative trait. For example,
_"So Pakistan and Kashmiri Muslims take a blind eye to the atrocities on Uyghurs in Xinjiang..."_
The above sentence targets Muslims' present behavior for not showing support to Uyghurs.
_"Now you can think that whom will these Muslims serve as IAS...Islam or country?"_
The above rhetorical question generalizes that all Muslims support Islam more than India. Defaming generalizes Muslims, as opposed to accusation, which is made over a specific event.
**Criticizing Islam:** Sentences that either find loopholes in Islam or completely dismiss it. For example, the following sentence criticizes Islam by pointing out that a non-Muslim's (also called kafir) statement does not hold in Islamic court (Sharia).
_"The testimony of Kafir in Sharia court, ie Qazi court, is not valid"_
Moreover, sentences such as following completely dismiss Islam by calling it 'anti-human religion'.
_"...Which superstitious religions are in India * * Yes Islam is the anti-human religion of superstition * * 1) If Allah, God is equally powerful * * There is God and he hates Kafiro * * And Kafiro does not have the right to live"_
**Comparison:** Sentences that compare Muslims, their emperors, or traditions with those of other religious groups.
_"You keep teaching your children that Akbar was great but never think of yourself then what to say to that Rana Pratap, who wandered in the forests all his life, fighting with Akbar even after eating grass bread"_
In the above example, Akbar who was a Muslim emperor is compared with Rana Pratap, a Hindu emperor.
_"Today, innocent Hindus and saints are in jail and ungodly people are out today"_
**Commanding:** Sentences that directly call for an action. Below are some examples. _"Be violent for religion"_
_"To save the respect of all Hindu countrymen and protect your security, keep weapons in your house"_
**Motivating:** Sentences that indirectly urge readers to take an action. _"If the madrasa is not closed, after 15 years, more than half of the Muslims of the country will be supportive of the ISIS ideology"_
The above sentence claims that Muslims will join ISIS (a terrorist organization) if madrasas (Islamic schools) are not closed. Indirectly, this sentence motivates readers to shut down these Islamic schools.
For religious culture and oppression categories, three dialog acts were widely used: accusation, defaming Muslims, and criticizing Islam. In the action category, commanding and motivating were the most frequent. For comparison, we found only a few cases that were distributed among religious culture and oppression categories.
### Impoliteness Super-Strategies
Culpeper [12] defined impoliteness as a negative attitude toward specific behaviors in specific situations. Such a negative attitude can lead to emotional consequences within the target [12]. Since provoking sentences involving religious culture and oppression attack or accuse Muslims or Islam, they are impolite to Muslim readers. However, provoking sentences in the action category don't verbally attack Muslims or Islam. Hence, for impoliteness analysis, we only considered randomly selected sentences from the religious culture and oppression categories.
Culpeper et al.'s [13] model describes the following five super-strategies for impoliteness.
**Bald on record**: is used when there is a direct attack on the face of the target.
**Positive impoliteness**: is used to destroy the positive face of the target. It includes the cases of ignoring or excluding them, being unsympathetic and unconcerned toward them, making them uncomfortable, and using obscure and taboo language.
**Negative impoliteness**: destroys the negative face of the target. It includes cases of associating the target with a negative aspect, condescending to, frightening, or ridiculing them, and invading their space.
**Sarcasm**: is used to indirectly say the opposite of the literal meaning of the text.
**Withhold politeness**: refers to the scenario where politeness is expected of the speaker but they become silent or fail to act. For example, forgetting to say thanks after the other person helps you.
We analyzed the randomly chosen sentences (from religious culture and oppression categories) to find which of these super-strategies are commonly used. Since we deal with individual sentences and not the dynamics between posts, withhold politeness is ruled out. Moreover, we did not find obscure, taboo, or ignoring language showing positive impoliteness.
We found only one sentence using sarcasm, which targets Muslims for claiming that they are a minority: _"India has the largest Muslim population in the world after Indonesia???? Oddly enough, it is still a minority??????"_ According to our analysis, sarcasm is not widely used across provoking sentences.
Predominantly, two super-strategies, bald on record and negative impoliteness, were found in the provocative speech. For example, _"Muslims are not friends of anyone"_ directly attacks Muslims (bald on record). For negative impoliteness, we found cases that associate Muslims or Islam with a negative aspect. But such cases are indirect, unlike bald on record. For example, _"Alauddin Khilji summoned Rana Ratan Singh of Chittor on the pretext of friendship and then killed"_ associates Alauddin Khilji (a Muslim emperor) with a negative aspect for killing Rana Ratan Singh (a Hindu emperor). With the whole context, this sentence indirectly implies that Muslims always oppress Hindus.
## 5 Discussion
We now discuss our conclusion, limitations of our study, and propose future directions of research.
### Conclusion
Prior studies [2, 19, 34] focus on identifying hate and fear speech. To the best of our knowledge, there is no existing study on identifying provocative speech in the religious context. We aimed to identify provoking sentences against Indian Muslims. We labeled a dataset of 7,000 sentences for the three provocation categories: action, religious culture, and religious oppression. We call this dataset Paco. To solve the identification problem, we leverage Paco to train and evaluate multiple NLP models (embedding-based and transformer-based) over five-fold cross-validation. Our best-performing model, RoBERTa, achieves an average AUC-ROC score of 0.851. The automatic identification of provoking sentences can
prevent the spread of provocative speech and in turn prevent possible violence against the target religious group.
Moreover, we studied the provocative text against Muslims through the pragmatic lens and identified the dialog acts and the impoliteness super-strategies used.
### Limitations and Future Work
First, our work is specific to identifying sentences that provoke readers against Indian Muslims. In the future, we can expand our study to identify provoking sentences against other religious groups such as Hindus, Sikhs, and so on. Second, we leveraged sentences from only one social media platform, WhatsApp. We can expand Paco to include sentences from multiple platforms (such as Reddit and Twitter) and train cross-platform-based models for identification. Further, NLP models can be built with a broad vision of identifying all three: provocative speech, fear speech, and hate speech. Such models will serve as a one-stop solution to eliminate all the disturbing and targeted text from social media platforms.
# Optimality of Robust Online Learning

Zheng-Chu Guo, Andreas Christmann, Lei Shi

arXiv:2304.10060v1 (20 April 2023), http://arxiv.org/abs/2304.10060v1
###### Abstract
In this paper, we study an online learning algorithm with a robust loss function \(\mathcal{L}_{\sigma}\) for regression over a reproducing kernel Hilbert space (RKHS). The loss function \(\mathcal{L}_{\sigma}\) involving a scaling parameter \(\sigma>0\) can cover a wide range of commonly used robust losses. The proposed algorithm is then a robust alternative for online least squares regression aiming to estimate the conditional mean function. For properly chosen \(\sigma\) and step size, we show that the last iterate of this online algorithm can achieve optimal capacity independent convergence in the mean square distance. Moreover, if additional information on the underlying function space is known, we also establish optimal capacity dependent rates for strong convergence in RKHS. To the best of our knowledge, both of the two results are new to the existing literature of online learning.
**Keywords and Phrases:** Online learning, Robust regression, Convergence analysis, Reproducing kernel Hilbert space
**Mathematics Subject Classification:** 68T05, 62J02, 68Q32, 62L20
## 1 Introduction
Online learning is one of the most popular approaches to handle large-scale datasets due to its low computational complexity and low storage requirements. Although great success has been achieved in batch learning, it becomes numerically intractable when the dataset is extremely large, as solving the optimization problem usually scales between quadratic and cubic complexity in the sample size. Instead of processing the entire training data in a batch, online learning can lead to prominent computational speed-up by tackling the data one by one. Recently, the scenario of online learning has attracted tremendous interest and attention due to its successful applications in various fields [6, 15, 51, 53, 55].
In this paper, we consider an online learning algorithm for robust kernel regression. As a non-parametric method developed during the last three decades, kernel regression in a
reproducing kernel Hilbert space (RKHS) has a wide range of applications from machine learning to statistical inverse problems [2, 5, 7, 14, 23, 32, 40]. Let \(\rho\) be a Borel probability distribution on \(\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) is an arbitrary, non-empty set equipped with \(\sigma-\)algebra and \(\mathcal{Y}\subseteq\mathbb{R}\). The goal of non-parametric regression is to infer a functional relation between the explanatory variable \(X\) that takes values in \(\mathcal{X}\) and the response variable \(Y\in\mathcal{Y}\), under the assumption that \(\rho\) is the joint distribution of \((X,Y)\) but completely unknown. In most applications of regression analysis, the underlying functional relation of great importance is the conditional mean of \(Y\) given \(X=x\), namely the regression function. Denote by \(\rho(y|x)\) the conditional distribution of \(\rho\) for given \(x\in\mathcal{X}\). The regression function is defined by
\[f_{\rho}(x)=\int_{\mathcal{Y}}yd\rho(y|x),\quad\forall x\in\mathcal{X}.\]
Though \(\rho\) is unknown, we have a sequence of samples \(\{(x_{t},y_{t})\}_{t\in\mathbb{N}}\) independently distributed from \(\rho\) instead. One typical way to estimate \(f_{\rho}\) is empirical risk minimization in which an empirical error associated with the least squares loss is minimized at the given samples. However, from a robustness point of view, the least squares loss is not a good choice for regression, as it is not Lipschitz continuous, and thus the generated estimator can be dramatically affected by the smallest amount of outliers [12, 13]. One of the main strategies to improve robustness is to replace the least squares loss by some robust alternatives, i.e., loss functions with bounded first derivatives. In this paper, we consider utilizing the loss function
\[\mathcal{L}_{\sigma}(u)=W\left(\frac{u^{2}}{\sigma^{2}}\right) \tag{1}\]
to estimate \(f_{\rho}\). Here \(W:\mathbb{R}_{+}\mapsto\mathbb{R}\) is a windowing function and \(\sigma>0\) is a scaling parameter. Moreover, the windowing function \(W\) is required to satisfy the following two conditions:
\[W^{\prime}_{+}(0)>0,\ W^{\prime}(s)>0\ \text{for}\ s>0,\ C_{W}:=\sup_{s\in(0, \infty)}\{|W^{\prime}(s)|\}<\infty, \tag{2}\]
and there exist constants \(p>0\) and \(c_{p}>0\) such that
\[|W^{\prime}(s)-W^{\prime}_{+}(0)|\leq c_{p}|s|^{p},\quad\forall s>0, \tag{3}\]
where \(W^{\prime}_{+}(0)\) denotes the right derivative of \(W(x)\) at \(x=0\).
By choosing different windowing functions, the loss function of form (1) can cover a large variety of commonly used robust losses. We give some examples as follows, where \(\mathbb{I}_{A}\) denotes the indicator function of the set \(A\); a small numerical sketch of some of these losses is given after the list.
* Fair loss [16]: \(\mathcal{L}_{\sigma}(u)=\frac{|u|}{\sigma}-\log\left(1+\frac{|u|}{\sigma} \right),W(s)=\sqrt{s}-\log(1+\sqrt{s}),p=\frac{1}{2},c_{p}=\frac{1}{2}\).
* Cauchy (aka. Lorentzian) loss [4]: \(\mathcal{L}_{\sigma}(u)=\log\left(1+\frac{u^{2}}{2\sigma^{2}}\right),W(s)= \log(1+\frac{s}{2}),p=1,c_{p}=\frac{1}{4}\).
* Welsch loss [27]: \(\mathcal{L}_{\sigma}(u)=1-\exp(-\frac{u^{2}}{2\sigma^{2}}),W(s)=1-\exp\left( -\frac{s}{2}\right),p=1,c_{p}=\frac{1}{4}\).
* Geman-McClure loss [20]: \(\mathcal{L}_{\sigma}(u)=\frac{u^{2}}{\sigma^{2}+u^{2}},W(s)=\frac{s}{1+s},p=1,c_{p}=2\).
* Tukey's biweight loss [25]: \(\mathcal{L}_{\sigma}(u)=\frac{c^{2}}{6}\left(1-\left(1-\frac{u^{2}}{\sigma^{2 }}\right)^{3}\mathbb{I}_{\{|u|\leq\sigma\}}\right),W(s)=\frac{c^{2}}{6}\big{(} 1-(1-s)^{3}\mathbb{I}_{\{s\leq 1\}}\big{)},p=1,c_{p}=c^{2}\).
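A small numerical sketch of three of the windowing functions above and the induced losses \(\mathcal{L}_{\sigma}(u)=W(u^{2}/\sigma^{2})\); the use of NumPy and the test values are our choices.

```python
import numpy as np

# Windowing functions W(s) from the examples above.
windows = {
    "fair":   lambda s: np.sqrt(s) - np.log1p(np.sqrt(s)),
    "cauchy": lambda s: np.log1p(s / 2.0),
    "welsch": lambda s: 1.0 - np.exp(-s / 2.0),
}

def robust_loss(u, sigma, W):
    """L_sigma(u) = W(u^2 / sigma^2), cf. equation (1)."""
    return W((u / sigma) ** 2)

u = np.linspace(-10, 10, 5)
for name, W in windows.items():
    print(name, np.round(robust_loss(u, sigma=1.0, W=W), 3))
```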
Robust losses have been extensively studied in parametric regression, which leads to the development of robust statistics [29]. All the concrete examples of the \(\mathcal{L}_{\sigma}\) loss listed above were initially proposed in robust statistics to build robust estimators for linear regression. It should be pointed out that a robust \(\mathcal{L}_{\sigma}\) loss satisfying conditions (2) and (3) could be non-convex, e.g., Cauchy and Welsh loss. Empirical and theoretical studies show that non-convex \(\mathcal{L}_{\sigma}\) losses can lead to more robust estimators, compared with their convex counterparts, e.g., Huber's loss and Fair loss [34, 36]. This is mainly due to the redescending property, which can be illustrated by taking Welsh loss as an example. The roots of the second derivative of Welsh loss are \(\pm\sigma\), which tell us at what value of \(u\) the loss begins to redescend. When \(|u|\leq\sigma\), Welsh loss is convex and behaves as the least squares loss; when \(|u|>\sigma\) the loss function becomes concave and rapidly tends to be flat as \(|u|\to\infty\). Therefore, with a suitably chosen scale parameter \(\sigma\), Welsh loss can completely reject gross outliers while keeping a similar prediction accuracy as that of the least squares loss, which makes it a more efficient robust loss. Recently, non-convex \(\mathcal{L}_{\sigma}\) losses such as Welsh loss have drawn much attention in the signal processing community and shown their efficiency in non-parametric regression [18, 19, 28, 31, 33, 42].
To construct robust non-parametric estimators of \(f_{\rho}\), we propose an online learning algorithm to minimize the empirical error \(\frac{1}{T}\sum_{t=1}^{T}\mathcal{L}_{\sigma}(f(x_{t})-y_{t})\) for \(T\in\mathbb{N}\) in an RKHS \(\mathcal{H}_{K}\). The function space \(\mathcal{H}_{K}\) is uniquely determined by a symmetric and positive semi-definite kernel \(K:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\)[1]. Let \(K_{x}:\mathcal{X}\to\mathbb{R}\) be the function defined by \(K_{x}(s)=K(x,s)\) for \(x,s\in\mathcal{X}\) and denote by \(\langle\cdot,\cdot\rangle_{K}\) the inner product of \(\mathcal{H}_{K}\). Then \(K_{x}\in\mathcal{H}_{K}\) and the reproducing property
\[f(x)=\langle f,K_{x}\rangle_{K} \tag{4}\]
holds for all \(x\in\mathcal{X}\) and \(f\in\mathcal{H}_{K}\). The online algorithm considered in this work adopts a single-pass, fixed step-size stochastic gradient descent scenario in \(\mathcal{H}_{K}\). Instead of computing the gradient of the empirical error with respect to the entire training set, the online learning algorithm only computes the gradient term of one sample randomly at each step. Given \(z_{t}=(x_{t},y_{t})\in\mathcal{X}\times\mathcal{Y}\), the local error \(\mathcal{L}_{\sigma}(f(x_{t})-y_{t})\) of \(f\in\mathcal{H}_{K}\) at the sample \(z_{t}\) can be regarded as a functional on \(\mathcal{H}_{K}\). Due to the reproducing property of (4), the gradient of \(\mathcal{L}_{\sigma}(f(x_{t})-y_{t})\) at \(f\in\mathcal{H}_{K}\) is \(\mathcal{L}^{\prime}_{\sigma}(f(x_{t})-y_{t})K_{x_{t}}\). Then the online learning algorithm is explicitly given by the following definition.
**Definition 1**.: _Let \(\{z_{t}=(x_{t},y_{t})\}_{t\in\mathbb{N}}\) be a sequence of random samples independently distributed according to \(\rho.\) The online learning algorithm with the loss function \(\mathcal{L}_{\sigma}\) in (1) is defined by \(f_{1}=0\), and_
\[f_{t+1}=f_{t}-\eta\mathcal{L}^{\prime}_{\sigma}(f(x_{t})-y_{t})K_{x_{t}}=f_{t} -\eta W^{\prime}\left(\xi_{t,\sigma}\right)(f_{t}(x_{t})-y_{t})K_{x_{t}},\quad t \in\mathbb{N}, \tag{5}\]
_where \(\eta>0\) is the step size and \(\xi_{t,\sigma}=\frac{(y_{t}-f_{t}(x_{t}))^{2}}{\sigma^{2}}\)._
In kernel regression, the previous studies are mainly based on convex risk minimization to build robust estimators, where the empirical error with a convex robust loss adding a regularization term is minimized in some infinite-dimensional RKHS. The typical examples include support vector machines with Huber's loss, logistic loss, absolute value loss and its asymmetric variant known as the pinball loss [9, 10]. In general, these surrogate robust losses for least squares loss can not be used to estimate the regression function unless the conditional distributions of \(Y|X=x\) are known to be symmetric [47]. However, as a basic estimator in data analysis, regression function is used in many situations for forecasting, modeling and
analysis of trends, which is of most interest to us. On the other hand, computation of robust estimates is much more computationally intensive than least squares estimations, especially in large-scale data analysis. The online learning algorithm in Definition 1 can provide a robust estimator of regression function for large-scale data analysis. The computational cost of this algorithm is \(\mathcal{O}(T^{2})\) when the sample size is \(T\), that is, the algorithm terminates after \(T\) iterations with final output \(f_{T+1}\). At each iteration, the main computational cost is due to the evaluation of \(f_{t}(x_{t})\) which needs to calculate \(K(x_{i},x_{t})\) for \(i\) from \(1\) to \(t\). If one can compute and store all \(\{K(x_{i},x_{j})\}_{i,j=1}^{T}\) in advance, the computational cost can be reduced to linear \(\mathcal{O}(T)\) at the requirement of large memory and fast memory access.
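A sketch of algorithm (5) in coefficient form, writing \(f_{t}=\sum_{i<t}a_{i}K_{x_{i}}\); the Gaussian kernel and the Welsch window are illustrative assumptions, not prescribed by Definition 1.

```python
import numpy as np

def gaussian_kernel(x, xp, gamma=1.0):
    return np.exp(-gamma * np.sum((x - xp) ** 2))

def online_robust_regression(X, y, eta, sigma, Wprime=lambda s: 0.5 * np.exp(-s / 2.0)):
    """Run algorithm (5): f_{t+1} = f_t - eta * W'(xi) * (f_t(x_t) - y_t) * K_{x_t}.

    Here W' is the derivative of the Welsch window W(s) = 1 - exp(-s/2) (an assumption;
    any window satisfying (2) and (3) could be plugged in).  The iterate f_t is stored
    through its expansion coefficients a_i over the visited points.
    """
    coeffs, centers = [], []
    for x_t, y_t in zip(X, y):
        # Evaluate f_t(x_t) = sum_i a_i K(x_i, x_t); this is the O(t) step per iteration.
        f_xt = sum(a * gaussian_kernel(c, x_t) for a, c in zip(coeffs, centers))
        xi = (y_t - f_xt) ** 2 / sigma ** 2
        coeffs.append(-eta * Wprime(xi) * (f_xt - y_t))
        centers.append(x_t)
    return coeffs, centers
```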
In this paper, we aim to evaluate the optimality of robust online learning algorithm in Definition 1. Under the framework of non-parametric regression, the stochastic optimality of online learning and its variants have been extensively studied in a vast literature, see e.g., [15, 30, 52, 53, 54]. However, the developed theoretical analysis only focuses on least squares loss or other convex losses. When \(\mathcal{L}_{\sigma}\) is a non-convex loss function, the iteration sequences produced by online learning algorithm are often trapped at stationary points of the objective function, which brings essential difficulties to the mathematical analysis. As far as we know, no optimality analysis so far have been given to support the efficiency of online learning with robust \(\mathcal{L}_{\sigma}\) loss of form (1). In this work, we show that with an appropriately chosen scale parameter \(\sigma\), the iteration sequences of constant step-size robust online learning can approximate the regression function \(f_{\rho}\). The approximation accuracy is measured by the rates of convergence in the standard mean square distance and the strong RKHS norm. We shall establish a novel convergence analysis by fully exploiting the properties of the loss function \(\mathcal{L}_{\sigma}\) and the structure of the underlying RKHS. Our analysis of the convergence is tight as the derived upper bounds on the performance of robust online learning almost match the minimax lower bounds in batch least squares regression learning. Especially, we obtain the capacity dependent optimal rates of strong convergence in the sense of RKHS norm. This kind of convergence is very important but seldom considered in previous studies of online learning.
The rest of this paper will be organized as follows. We present main results in Section 2. Discussions and comparisons with related work are given in Section 3. Section 4 establishes an error decomposition of the algorithm (5) as well as some basic estimates which is useful for our convergence analysis. The proofs of main results are given in Section 5.
## 2 Main Results
Before presenting main results, we first introduce some notations and assumptions. In our setting, the input space \(\mathcal{X}\) is a general measurable space with the \(\sigma-\)algebra \(\mathcal{A}\) and \(K:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\) is a symmetric and positive semi-definite kernel. We suppose that the kernel \(K(\cdot,\cdot)\) is measurable on \(\mathcal{X}\times\mathcal{X}\) for the product \(\sigma-\)algebra \(\mathcal{A}\otimes\mathcal{A}\), and \(\kappa:=\sup_{x\in\mathcal{X}}\sqrt{K(x,x)}<\infty\). Therefore, the underlying RKHS \(\mathcal{H}_{K}\) consists of bounded measurable real-valued functions on \(\mathcal{X}\) and the function \(x\to K(x,x)\) is measurable on \((\mathcal{X},\mathcal{A})\) (see, for instance, [48]). Let \(\rho_{\mathcal{X}}\) be the marginal distribution of \(\rho\) on \(\mathcal{X}\) and \(L^{2}_{\rho_{\mathcal{X}}}\) be the Hilbert space of square-integrable functions with respect to \(\rho_{\mathcal{X}}\). Denote by \(\|\cdot\|_{\rho}\) the norm in the space \(L^{2}_{\rho_{\mathcal{X}}}\) induced by the inner product \(\langle f,g\rangle_{\rho}=\int_{\mathcal{X}}f(x)g(x)d\rho_{\mathcal{X}}(x)\). Since \(\int_{\mathcal{X}}K(x,x)d\rho_{\mathcal{X}}(x)\leq\kappa^{2}\), i.e., \(K\) is integrable on the diagonal, the RKHS \(\mathcal{H}_{K}\) is compactly embedded into \(L^{2}_{\rho_{\mathcal{X}}}\). Then the integral operator
\(L_{K}:L^{2}_{\rho_{\mathcal{X}}}\to L^{2}_{\rho_{\mathcal{X}}}\), given by, for \(f\in L^{2}_{\rho_{\mathcal{X}}}\) and \(u\in\mathcal{X}\),
\[L_{K}(f)(u)=\int_{\mathcal{X}}f(x)K(x,u)d\rho_{\mathcal{X}}(x),\]
is a compact, self-adjoint, and positive operator on \(L^{2}_{\rho_{\mathcal{X}}}\). Due to the Spectral theorem, there exists in \(L^{2}_{\rho_{\mathcal{X}}}\) an orthonormal basis \(\{\phi_{k}\}_{k\geq 1}\) consisting of eigenfunctions of \(L_{K}\), and the corresponding eigenvalues \(\{\lambda_{k}\}_{k\geq 1}\) (repeated according to their algebraic multiplicity) are nonnegative. Then we can define the \(r-\)th power of \(L_{K}\) on \(L^{2}_{\rho_{\mathcal{X}}}\) by \(L^{r}_{K}(\sum_{k\geq 1}c_{k}\phi_{k})=\sum_{k\geq 1}c_{k}\lambda^{r}_{k}\phi_{k}\) with \(r>0\) and \(\{c_{k}\}_{k\geq 1}\in\ell^{2}(\mathbb{R})\) (i.e., \(\sum_{k\geq 1}c_{k}^{2}<\infty\)). In particular, \(L^{1/2}_{K}\) is an isomorphism from \(\overline{\mathcal{H}_{K}}\), the closure of \(\mathcal{H}_{K}\) in \(L^{2}_{\rho_{\mathcal{X}}}\), to \(\mathcal{H}_{K}\), i.e., for each \(f\in\overline{\mathcal{H}_{K}}\), \(L^{1/2}_{K}f\in\mathcal{H}_{K}\) and
\[\left\|f\right\|_{\rho}=\|L^{1/2}_{K}f\|_{K}. \tag{6}\]
Moreover, for all \(f\in L^{2}_{\rho_{\mathcal{X}}}\), we have \(L_{K}f\in\mathcal{H}_{K}\). In view of the above discussions, the operator \(L_{K}\) can also be interpreted as an operator on \(\mathcal{H}_{K}\). For simplicity, whether it is viewed as an operator on \(L^{2}_{\rho_{\mathcal{X}}}\) or \(\mathcal{H}_{K}\), we will keep the same notation. In both cases, \(L_{K}\) is a nuclear operator under our assumptions.
Our error analysis is based on the following regularity condition on the target function \(f_{\rho}\), which is classical in the literature of kernel regression [7].
**Assumption 1**.: \[f_{\rho}=L^{r}_{K}g_{\rho}\quad\text{with $r>0$ and $g_{\rho}\in L^{2}_{\rho_{ \mathcal{X}}}$}.\] (7)
This assumption implies that \(f_{\rho}\) belongs to the range space of \(L^{r}_{K}\) expressed as
\[L^{r}_{K}(L^{2}_{\rho_{\mathcal{X}}})=\left\{f\in L^{2}_{\rho_{\mathcal{X}}}: \sum_{k\geq 1}\frac{\langle f,\phi_{k}\rangle^{2}_{\rho}}{\lambda^{2r}_{k}}< \infty\right\}.\]
Then \(L^{r_{1}}_{K}(L^{2}_{\rho_{\mathcal{X}}})\subseteq L^{r_{2}}_{K}(L^{2}_{\rho_{\mathcal{X}}})\) whenever \(r_{1}\geq r_{2}\). The regularity of \(f_{\rho}\) is measured by the decay rate of its expansion coefficients in terms of \(\{\phi_{k}\}_{k\geq 1}\). Condition (7) means that \(\langle f_{\rho},\phi_{k}\rangle^{2}_{\rho}\) decays faster than the \(2r-\)th power of the eigenvalues of \(L_{K}\). Apparently, larger parameters \(r\) will result in faster decay rates, and thus indicate higher regularities of \(f_{\rho}\). This assumption is standard in the literature of learning theory, and can be further interpreted by the theory of interpolation spaces [44].
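The powers \(L_{K}^{r}\) have a simple finite-sample analogue via the eigendecomposition of the normalized kernel Gram matrix; the sketch below is only an empirical illustration of this spectral calculus and is not part of the analysis.

```python
import numpy as np

def empirical_operator_power(X, kernel, r):
    """Empirical analogue of L_K^r built from the normalized Gram matrix (1/n) K(x_i, x_j)."""
    n = len(X)
    G = np.array([[kernel(xi, xj) for xj in X] for xi in X]) / n
    # G is symmetric positive semi-definite, so its eigendecomposition mirrors the
    # spectral expansion of L_K used above; fractional powers act on the eigenvalues.
    evals, evecs = np.linalg.eigh(G)
    evals = np.clip(evals, 0.0, None)   # guard against tiny negative round-off
    return evecs @ np.diag(evals ** r) @ evecs.T

# Example: r = 1/2 gives the empirical counterpart of L_K^{1/2}, which links the
# L^2 and RKHS norms in equation (6).
# L_half = empirical_operator_power(X, kernel, r=0.5)
```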
Throughout the paper, we assume that the output \(y\) is uniformly bounded, i.e., for some constant \(M>0\), \(|y|\leq M\) almost surely. Recall that the sample \(\{z_{t}=(x_{t},y_{t})\}_{t\in\mathbb{N}}\) is drawn independently from the probability distribution \(\rho\). For \(k\in\mathbb{N}\), let \(\mathbb{E}_{z_{1},\cdots,z_{k}}\) denote taking expectation with respect to \(z_{1},\cdots,z_{k}\), which is written as \(\mathbb{E}_{Z^{k}}\) for short. Our first main result is related to the convergence in the mean square distance, which establishes upper bounds of \(L^{2}_{\rho_{\mathcal{X}}}\) norm in expectation.
**Theorem 1**.: _Let \(\{f_{t}\}_{t=1}^{T+1}\) be defined by algorithm (5) with a windowing function \(W\). Assume that \(W\) satisfies (2) and (3) with some \(p>0\) and the regularity condition (7) holds with \(r>0\). Choose the step size \(\eta=\frac{1}{\eta_{0}}T^{\frac{-2r}{2r+1}}\) with_
\[\eta_{0}\geq\max\left\{C_{W}\kappa^{2},\left(\frac{1}{e}+2\kappa^{2}W^{\prime} _{+}(0)\right)^{2}\right\}. \tag{8}\]
_Then_
\[\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{\rho}^{2}\right]\leq C\max\left\{T^ {-\frac{2r}{2r+1}}\log T,T^{\frac{2p+2}{2r+1}}\sigma^{-4p}\right\},\]
_where \(C_{W}:=\sup_{s\in(0,\infty)}\{|W^{\prime}(s)|\}\) and the constant \(C\) is independent of \(T\) and will be given explicitly in the proof._
In particular, if \(\sigma\geq T^{\frac{r+p+1}{2p(2r+1)}},\) the rate of convergence is \(\mathcal{O}\left(T^{-\frac{2r}{2r+1}}\log T\right)\), which is almost minimax optimal (up to a logarithmic term) due to the discussion in [53]. The bound presented in Theorem 1 is called capacity independent as we can establish it without requiring further assumptions except for the boundedness of kernel functions.
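The threshold \(\sigma\geq T^{\frac{r+p+1}{2p(2r+1)}}\) is exactly the point at which the two terms of the bound in Theorem 1 balance (up to the logarithmic factor); this exponent arithmetic can be checked symbolically, e.g. with SymPy.

```python
import sympy as sp

r, p = sp.symbols("r p", positive=True)

# Exponent of T in the second term of Theorem 1's bound when sigma = T**((r+p+1)/(2p(2r+1))):
exponent = (2*p + 2) / (2*r + 1) - 4*p * (r + p + 1) / (2*p * (2*r + 1))
print(sp.simplify(exponent))   # -> -2*r/(2*r + 1), i.e. the same order as the first term
```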
Moreover, if we know some additional information about the kernel \(K\), or the capacity of the hypothesis space \(\mathcal{H}_{K}\), we can establish optimal capacity dependent convergence rates in \(\mathcal{H}_{K}\) in the minimax sense. As pointed out in [45], the convergence in \(\mathcal{H}_{K}\) implies the convergence in \(C^{n}(\mathcal{X})\) if \(K\in C^{2n}(\mathcal{X}\times\mathcal{X})\). Here \(n\in\mathbb{N}\) and \(C^{n}(\mathcal{X})\) is the space of all functions on \(\mathcal{X}\subset\mathbb{R}^{d}\) whose partial derivatives up to order \(n\) are continuous with \(\|f\|_{C^{n}(\mathcal{X})}=\sum_{|s|\leq n}\|D^{s}f\|_{\infty}.\) So the convergence in \(\mathcal{H}_{K}\) is much stronger, which ensures that the estimators can not only approximate the regression function itself, but also approximate its derivatives. In this paper, we use the following condition to measure the capacity of the hypothesis space \(\mathcal{H}_{K}.\)
**Assumption 2**.: _For \(0<\beta<1,\) we assume_
\[\mathrm{Tr}(L_{K}^{\beta})<\infty. \tag{9}\]
_Here \(\mathrm{Tr}(A)\) denotes the trace of the operator \(A.\)_
The definition of \(L_{K}^{\beta}\) gives that \(\mathrm{Tr}(L_{K}^{\beta})=\sum_{k\geq 1}\lambda_{k}^{\beta}\), here \(\{\lambda_{k}\}_{k\in\mathbb{N}}\) are the eigenvalues of \(L_{K}\). Since \(L_{K}\) is a trace class operator satisfying \(\mathrm{Tr}(L_{K})=\sum_{k\geq 1}\lambda_{k}=\int_{\mathcal{X}}K(x,x)d\rho_{ \mathcal{X}}(x)\leq\kappa^{2},\) the capacity assumption (9) holds trivially with \(\beta=1\). Thus the case of \(\beta=1\) corresponds to the capacity independent case as we don't require any complexity measures of the underlying function space. Capacity assumption (9) incorporates the information of marginal distribution \(\rho_{\mathcal{X}},\) which is a tighter measurement for the complexity of the RKHS than more classical covering, or entropy number assumptions [11, 48]. This assumption is essentially an eigenvalue decaying condition imposed on the operator \(L_{K}\). In fact, if \(\mathrm{Tr}(L_{K}^{\beta})<\infty\), since the eigenvalues \(\{\lambda_{k}\}_{k\in\mathbb{N}}\) are sorted in a decreasing order, then for any \(k\geq 1\), we have
\[k\lambda_{k}^{\beta}\leq\sum_{j=1}^{k}\lambda_{j}^{\beta}\leq\sum_{j\geq 1} \lambda_{j}^{\beta}=\mathrm{Tr}(L_{K}^{\beta})<\infty.\]
It follows that \(\lambda_{k}\leq k^{-1/\beta}(\mathrm{Tr}(L_{K}^{\beta}))^{1/\beta},\forall k\geq 1\). A small value of \(\beta\) implies a fast polynomially decaying rate at least achieved by the eigenvalues \(\{\lambda_{k}\}_{k\in\mathbb{N}}\). One can refer to Theorem 5 in our recent work [21], which provides a characterization of the relationship between the capacity assumption (9) in this paper and the decaying rate of the integral operator eigenvalues. If the eigenvalues decay exponentially, the index \(\beta\) can be arbitrarily close to zero. Polynomial decay of the eigenvalues is typical for Sobolev smooth kernels on domains in Euclidean spaces, and the parameter \(\beta\) depends on the smoothness of the kernel \(K\), while exponential decay of the eigenvalues is typical for analytic kernels on domains in Euclidean spaces. Recently, under the same capacity assumption, the work [37] studies the mean square convergence of the averaging online estimator with least squares loss and multiple passes.
Under the regularity condition (7) on the target function \(f_{\rho}\) and the capacity assumption (9) on the hypothesis space \(\mathcal{H}_{K}\), we obtain the following sharp capacity dependent results for strong convergence in \(\mathcal{H}_{K}\).
**Theorem 2**.: _Let \(\{f_{t}\}_{t=1}^{T+1}\) be defined by algorithm (5) with a windowing function \(W\). Assume that \(W\) satisfies (2) and (3) with some \(p>0\), the regularity condition (7) holds with \(r>\frac{1}{2}\) and the capacity assumption (9) holds with \(0<\beta<1\). Choose step size \(\eta=\frac{1}{\eta_{0}}T^{\frac{1-2r-\beta}{2r+\beta}}\) with_
\[\eta_{0}\geq\max\left\{C_{W}\kappa^{2},\left(\frac{1}{e}+2\kappa^{2}W_{+}^{ \prime}(0)\right)^{2}\right\}.\]
_Then_
\[\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{K}^{2}\right]\leq\tilde{C}\max \left\{T^{-\frac{2r-1}{2r+\beta}},T^{\frac{2p+3}{2r+\beta}}\sigma^{-4p}\right\},\]
_where \(C_{W}:=\sup_{s\in(0,\infty)}\{|W^{\prime}(s)|\}\) and the constant \(\tilde{C}\) is independent of \(T\) and will be given explicitly in the proof._
If we choose \(\sigma\geq T^{\frac{p+r+1}{2p(2r+\beta)}}\), the convergence rate is of the form \(\mathcal{O}(T^{-\frac{2r-1}{2r+\beta}})\), which is tight as it matches the minimax lower bounds of non-parametric regression in batch learning [5]. As far as we know, these are the first capacity dependent optimal rates obtained for strong convergence of online learning. We will prove all these results in Section 5.
## 3 Discussion on Related Work
There has been a recent surge of research that applies and explores specific \(\mathcal{L}_{\sigma}\) losses (e.g., Cauchy and Welsh loss) in the context of non-parametric regression [3, 26, 31, 42, 50]. Convergence properties of these methods are also the subject of intense study but only limited to the classical batch learning setting, in which we collect all the samples \(\mathbf{z}:=\{(x_{i},y_{i})\}_{i=1}^{T}\) initially and perform estimation only once. In [18], an empirical risk minimizer \(f_{\mathbf{z}}\) with Welsh loss is considered, aiming to estimate the regression function \(f_{\rho}\) over a compact hypothesis space \(\mathcal{H}\). The theoretical analysis in [18] shows that, if \(f_{\rho}\in\mathcal{H}\) and the logarithm of covering number of \(\mathcal{H}\) satisfies a polynomial increasing condition with a power index \(0<\alpha\leq 2\), by choosing \(\sigma=T^{\frac{1}{2+\alpha}}\), probabilistic bounds established for \(\|f_{\mathbf{z}}-f_{\rho}\|_{\rho}^{2}\) can converge to zero at a rate of \(T^{-\frac{2}{2+\alpha}}\). To make a comparison, given a Mercer kernel \(K\) on a compact metric space \(\mathcal{X}\), i.e., \(K\) is continuous, symmetric and positive semi-definite on \(\mathcal{X}\times\mathcal{X}\), let \(\mathcal{H}\) be a bounded ball in an RKHS \(\mathcal{H}_{K}\). Then \(\mathcal{H}\) is a compact set consisting of continuous functions on \(\mathcal{X}\), and the covering number condition of \(\mathcal{H}\) is satisfied with \(\alpha=2\), which leads to the convergence rate of the form \(\mathcal{O}(T^{-\frac{1}{2}})\). The covering number condition with \(\alpha=2\) corresponds to capacity independent case in our analysis and \(f_{\rho}\in\mathcal{H}\) implies that the regularity condition (1) is satisfied with some \(r\geq\frac{1}{2}\). Then Theorem 1 in our paper asserts that the estimator of online algorithm
(5) will approximate \(f_{\rho}\) in \(L^{2}_{\rho_{\mathcal{X}}}-\)norm at a convergence rate \(\mathcal{O}\left(T^{-\frac{2r}{2r+1}}\log T\right)\), which is faster than \(\mathcal{O}(T^{-\frac{1}{2}})\) provided that \(r>\frac{1}{2}\). It should be pointed out that the convergence analysis in [18] needs \(f_{\mathbf{z}}\) to be a global minimizer of the empirical risk, but existing approaches applied to solve the corresponding minimization problem can not guarantee the global optimality due to the non-convexity of Welsh loss. To fill this gap, a gradient descent algorithm with robust loss function \(\mathcal{L}_{\sigma}\) is proposed in [22], which is defined as \(g_{1}=0\), and
\[g_{t+1}=g_{t}-\frac{\eta_{t}}{T}\sum_{i=1}^{T}W^{\prime}\left(\xi_{t,\sigma} \right)(g_{t}(x_{i})-y_{i})K_{x_{i}},\quad t\in\mathbb{N}. \tag{10}\]
where \(\xi_{t,\sigma}=\frac{(y_{i}-g_{t}(x_{i}))^{2}}{\sigma^{2}}\) and \(\eta_{t}>0\) is the step size. It is shown in [22] that, with an appropriately chosen scale parameter \(\sigma\) and early stopping rule \(\ell\in\mathbb{N}\), the output of (10) after \(\ell\) iterates can approximate \(f_{\rho}\) under regularity condition (7) and an eigenvalue decaying condition of \(L_{K}\). As mentioned in Section 2, Assumption 2 with \(0<\beta<1\) adopted in this paper is essentially an eigenvalue decaying condition, which implies that the eigenvalues of \(L_{K}\) decay polynomially as \(\lambda_{k}\leq c_{\beta}k^{-1/\beta}\), for all \(k\geq 1\) and some \(c_{\beta}>0\). Then under Assumption 1 and Assumption 2 in this paper, capacity dependent rate-optimal convergence analysis in both \(L^{2}_{\rho_{\mathcal{X}}}-\)norm and \(\mathcal{H}_{K}-\)norm is established in [22]. We take strong convergence in \(\mathcal{H}_{K}-\)norm for instance to illustrate the above results. Let \(\eta_{t}=\eta_{1}t^{-\theta}\) with \(0\leq\theta<1\) and some positive constant \(\eta_{1}\). Choose the early stopping rule as \(\ell=\lceil T^{\frac{1}{(1+\beta)(1-\theta)}}+1\rceil\), and \(\sigma\geq T^{\frac{r+(p+1)\beta}{2p(2r+\beta)}}\) where \(\lceil x\rceil\) denotes the smallest integer not less than \(x\in\mathbb{R}\). Then \(\|g_{\ell+1}-f_{\rho}\|_{K}^{2}\) will converge to zero at a rate of \(T^{-\frac{2r-1}{2r+\beta}}\) which is exactly the same as that obtained in Theorem 2. Therefore, gradient descent algorithm (10) and our online algorithm (5) are both provably statistical optimal. Furthermore, both of these two algorithms are plug-and-play: only a loss and its gradient are necessary to integrate an optimization process to approximate \(f_{\rho}\). However, since the gradient decent algorithm (10) is designed for batch learning, it still suffers from scalability issues. To see this, if algorithm stops after \(\ell\) iterations and all \(\{K(x_{i},x_{j})\}_{i,j=1}^{T}\) are computed and stored in advance, the aggregate time complexity is \(\mathcal{O}(\ell T^{2})\) scaling between \(\mathcal{O}(T^{2})\) and \(\mathcal{O}(T^{3})\), as calculation of gradient in each iteration involves all the training sample. Under the same situation, online learning algorithm (5) requires only one training sample to update, which enjoys linear \(\mathcal{O}(T)\) complexity but comparable theoretical performance. Last but not the least, convergence analysis in this paper is developed under more general setting: we only need the input space \(\mathcal{X}\) to be an arbitrary measurable space and the positive semi-definite kernel function \(K\) to be bounded. Capacity dependent analysis of algorithm (10) established in [22] essentially relies on the classical Mercer theorem to provides a link between the spectral properties of \(L_{K}\) and the capacity as well as approximation ability of \(\mathcal{H}_{K}\). Thereby, estimates in [22] requires \(\mathcal{X}\) to be a compact metric space and \(K\) to be a Mercer kernel. These settings are too restrictive if one has to perform regression analysis on more general \(\mathcal{X}\) such as the set of graphs and strings (see e.g.,[43] and the references therein). While our capacity dependent analysis can provide a rigorous theoretical demonstration to support the efficiency of online kernel regression (5) in a wider range of applications.
There is extensive research on the minimax optimality of online learning in non-parametric regression; however, almost all of it focuses on the least squares loss. Next we review the works of particular relevance and make some comparisons. Online learning with the least squares loss is defined as \(g_{1}=0\), and
\[g_{t+1}=g_{t}-\eta_{t}(g_{t}(x_{t})-y_{t})K_{x_{t}},\qquad t\in\mathbb{N}. \tag{11}\]
The first category of convergence analysis for algorithm (11) only gives capacity independent error bounds. Aside from the regularity conditions for \(f_{\rho}\) stated in Assumption 1, there are no assumptions imposed on the capacity of the underlying RKHS \(\mathcal{H}_{K}\) to derive those error bounds. In [53], algorithm (11) was thoroughly investigated via a capacity independent approach, and the performance of the last iterate with polynomially decaying step sizes and constant step sizes was studied. It is shown in [53] that if Assumption 1 is satisfied with \(r>0\), and one takes a constant step size \(\eta_{t}=\eta:=[r/64(1+\kappa)^{4}(2r+1)]T^{-\frac{2r}{2r+1}}\), it holds that
\[\mathbb{E}_{Z^{T}}[\|g_{T+1}-f_{\rho}\|_{\rho}^{2}]=\mathcal{O}(T^{-\frac{2r} {2r+1}}\log T). \tag{12}\]
And if \(r>\frac{1}{2}\), there holds
\[\mathbb{E}_{Z^{T}}[\|g_{T+1}-f_{\rho}\|_{K}^{2}]=\mathcal{O}(T^{-\frac{2r-1}{ 2r+1}}). \tag{13}\]
The convergence rate (13) in \(\mathcal{H}_{K}\) is capacity independently optimal, and the convergence rate (12) in \(L^{2}_{\rho_{\mathcal{X}}}\) is capacity independently optimal, up to a logarithmic term, in the minimax sense. For the decreasing step size \(\eta_{t}=(1/\nu(2r/(2r+1)))\,t^{-\frac{2r}{2r+1}}\) with some \(\nu>0\), it is shown in [53] that if Assumption 1 is satisfied with \(0<r\leq\frac{1}{2}\), there holds
\[\mathbb{E}_{Z^{T}}[\|g_{T+1}-f_{\rho}\|_{\rho}^{2}]=\mathcal{O}(T^{-\frac{2r} {2r+1}}\log T), \tag{14}\]
one can easily see that the best convergence rate of (14) is \(\mathcal{O}(T^{-\frac{1}{2}}\log T)\), achieved at \(r=\frac{1}{2}\). However, the analysis established in [53] can not lead to faster rates than \(\mathcal{O}(T^{-\frac{1}{2}}\log T)\) when \(f_{\rho}\) has higher regularities, i.e., \(r>\frac{1}{2}\) in Assumption 1; this is the so-called saturation phenomenon. Whether better rates can be derived with some additional capacity information is an open problem stated in [53]. If Assumption 1 holds with \(r>0\), our convergence rate in Theorem 1 is the same as (12) in [53] by selecting a proper scaling parameter \(\sigma\). Moreover, if capacity information is available, we obtain more elegant capacity dependent rates of convergence. Concretely, under Assumption 1 with \(r>\frac{1}{2}\) and Assumption 2 with \(0<\beta<1\), Theorem 2 gives the convergence rate \(\mathcal{O}\left(T^{-\frac{2r-1}{2r+\beta}}\right)\) in \(\mathcal{H}_{K}\), which is always faster than \(\mathcal{O}\left(T^{-\frac{2r-1}{2r+1}}\right)\) in [53].
The other category of convergence analysis for algorithm (11) provides error bounds involving the capacity of the hypothesis space \(\mathcal{H}_{K}\), which are tighter than the capacity independent bounds. Capacity dependent convergence rates for the last iterate of algorithm (11) with decreasing step size were recently derived in [24]. It is shown there that if Assumption 1 is satisfied with \(r>\frac{1}{2}\) and Assumption 2 is satisfied with \(0<\beta<1\),
1. if \(\frac{1}{2}<r\leq 1-\frac{\beta}{2}\), and \(\eta_{t}=\eta_{1}t^{-\frac{2r}{2r+1}}\) with \(0<\eta_{1}<\kappa^{2}\), there holds \[\mathbb{E}_{Z^{T}}[\|g_{T+1}-f_{\rho}\|_{\rho}^{2}]=\mathcal{O}\left(T^{-\frac{ 2r}{2r+1}}\right),\]
2. if \(r>1-\frac{\beta}{2}\), and \(\eta_{t}=\eta_{1}t^{-\frac{2-\beta}{3-\beta}}\) with \(0<\eta_{1}<\kappa^{2}\), there holds \[\mathbb{E}_{Z^{T}}[\|g_{T+1}-f_{\rho}\|_{\rho}^{2}]=\mathcal{O}\left(T^{-\frac {2-\beta}{3-\beta}}\right),\]
3. if \(r>\frac{1}{2},\) and \(\eta_{t}=\eta_{1}t^{-\frac{1}{2}}\) with \(0<\eta_{1}<\kappa^{2},\) there holds \[\mathbb{E}_{Z^{T}}[\|g_{T+1}-f_{\rho}\|_{K}^{2}]=\mathcal{O}\left(T^{-\frac{\min \{2r-1,1-\beta\}}{2}}(\log T)^{2}\right).\]
The above results improved the convergence rates (14) in \(L^{2}_{\rho_{\mathcal{X}}}\) and established the first capacity dependent convergence rates in \(\mathcal{H}_{K}\) for the case of decreasing step size. Though these results give a positive answer to the open problem of [53], the convergence rates are suboptimal and they are saturated at \(r=1-\frac{\beta}{2}\). Our results stated in Theorem 1 and Theorem 2 are optimal in the minimax sense and can overcome the saturation phenomenon. Another recent work [15] provides capacity dependent convergence rates for the averaging estimator \(\bar{g}_{T}=\frac{1}{T}\sum_{t=1}^{T}g_{t}\) of algorithm (11) depending on a polynomial eigendecay condition of \(L_{K}\). Averaging schemes can reduce variance and thus usually lead to better convergence rates (see e.g., [38, 39, 52]).
1. if \(\frac{1}{2}-\frac{\beta}{2}<r\leq 1-\frac{\beta}{2},\) and \(\eta_{t}=\eta_{1}t^{\frac{-2r-\beta+1}{2r+\beta}},\) \[\mathbb{E}_{Z^{T-1}}\|\bar{g}_{T}-f_{\mathcal{H}}\|_{\rho}^{2}=\mathcal{O}(T^{ -\frac{2r}{2r+\beta}}),\] (15)
2. if \(r>1-\frac{\beta}{2},\) and \(\eta_{t}=\eta_{1}t^{-\frac{1}{2}},\) \[\mathbb{E}_{Z^{T-1}}\|\bar{g}_{T}-f_{\mathcal{H}}\|_{\rho}^{2}=\mathcal{O}(T^{ -(1-\frac{\beta}{2})}).\] (16)
For the constant step size \(\eta_{t}=\eta(T)\),
1. if \(0<r<\frac{1}{2}-\frac{\beta}{2},\) and \(\eta_{t}=\eta_{1}\) is a constant, \[\mathbb{E}_{Z^{T-1}}\|\bar{g}_{T}-f_{\mathcal{H}}\|_{\rho}^{2}=\mathcal{O}(T^{ -2r}),\] (17)
2. if \(r>\frac{1}{2}-\frac{\beta}{2},\) and \(\eta_{t}=\eta(T)=\eta_{1}T^{\frac{-2\min\{r,1\}-\beta+1}{2\min\{r,1\}+\beta}}\) with some constant \(\eta_{1}>0\), \[\mathbb{E}_{Z^{T-1}}\|\bar{g}_{T}-f_{\mathcal{H}}\|_{\rho}^{2}=\mathcal{O}\left( T^{-\frac{2\min\{r,1\}}{2\min\{r,1\}+\beta}}\right).\] (18)
One can see that the obtained asymptotic convergence rate (15) is optimal when \(\frac{1}{2}-\frac{\beta}{2}<r<1-\frac{\beta}{2},\) and (18) is optimal when \(\frac{1}{2}-\frac{\beta}{2}<r\leq 1,\) which achieve the minimax lower bound proved in [7, 49]. However, our results differ from those of [15] in two respects. Firstly, the capacity condition in Assumption 2 is more general than the polynomial eigendecay condition adopted in [15], and we do not require the lower bound on the decay rates. Secondly, the established convergence rates for \(\bar{g}_{T}\) of algorithm (11), whether the step size is chosen to be decreasing or fixed, all suffer from the saturation phenomenon, i.e., the convergence rate no longer improves once the regularity of the regression function is beyond a certain level. Concretely, if the regularity condition in Assumption 1 is satisfied with \(r>0\) and Assumption 2 holds with \(0<\beta<1,\) the convergence rate (15) with decreasing step size is saturated at \(r=1-\frac{\beta}{2}\) and ceases to improve as \(r>1-\frac{\beta}{2},\) while the convergence rates (18) with fixed step size are saturated at \(r=1\) and stop getting better when \(r>1\). In contrast, our convergence analysis established in Theorem 1 and Theorem 2 can eliminate the saturation phenomenon and adapt to favorable regularity of \(f_{\rho}\) to attain even faster convergence rates.
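To make the saturation phenomenon concrete, the following small computation (illustrative arithmetic only; the numbers are examples, not results from this paper) evaluates the rate exponents implied by (15)–(18) for a few values of \(r\) at \(\beta=0.5\): once \(r\) passes the saturation point, the printed exponents stop improving.

```python
# Rate exponents of the prior results (15)-(18), for illustration only.
def exponent_decreasing_step(r, beta):
    # (15)/(16): exponent 2r/(2r+beta), saturated at r = 1 - beta/2
    r_eff = min(r, 1.0 - beta / 2.0)
    return 2.0 * r_eff / (2.0 * r_eff + beta)

def exponent_fixed_step(r, beta):
    # (18): exponent 2*min(r,1)/(2*min(r,1)+beta), valid for r > 1/2 - beta/2
    r_eff = min(r, 1.0)
    return 2.0 * r_eff / (2.0 * r_eff + beta)

beta = 0.5
for r in (0.5, 0.75, 1.0, 1.5, 2.0):
    print(r, round(exponent_decreasing_step(r, beta), 3), round(exponent_fixed_step(r, beta), 3))
# The first column of exponents stops growing beyond r = 1 - beta/2 = 0.75 and
# the second beyond r = 1, which is exactly the saturation discussed above.
```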
To obtain these nice results, e.g., the capacity independent optimal rates in \(L^{2}_{\rho_{X}}\) and the capacity dependent optimal rates in \(\mathcal{H}_{K}\) taking into account the spectral structure of \(L_{K}\), the crucial tools developed in our paper are the finer error decomposition techniques. The error decomposition in \(L^{2}_{\rho_{X}}\) presented in Proposition 4.1 enables us to utilize the properties of robust losses (see conditions (2) and (3)) and to establish a novel framework for the error analysis of robust online learning. To achieve strong optimal rates in \(\mathcal{H}_{K}\), we present another error decomposition in Proposition 5.1, which incorporates the trace of the composite operators related to \(L_{K}\). Both of these error decompositions are first established in this paper and play fundamental roles in our analysis. Through our theoretical analysis, we also demonstrate that the index \(\beta\) in the trace condition of Assumption 2 is exactly the right parameter to introduce the capacity information of the underlying function space into the setting of online learning. Actually, the error analysis based on the capacity condition in Assumption 2 established in our paper also gives a positive answer to an open question proposed by [41, 53], that is, whether one can obtain unsaturated fast and strong convergence under some additional capacity information. In contrast to the effective dimension widely adopted in the error analysis of batch learning (see, e.g., [7, 23, 32]), the trace condition in Assumption 2 serves the same role in online learning, establishing an important connection between the spectral structure of the operators and the capacity information encoding the crucial properties of the marginal distribution. The capacity dependent analysis of online learning is more involved than that of batch learning. To derive tight convergence rates, we need to choose the step size carefully to settle a bias-variance trade-off based on the regularity of \(f_{\rho}\) and the capacity of the RKHS, which requires us to provide sharp bounds (in the \(\mathcal{H}_{K}\) norm as well as the operator norm) on the terms appearing in the error decomposition. With the help of the error decompositions and sharp estimates established in this paper, we obtain minimax optimal rates of (both strong and weak) convergence for robust online learning. Our approach can be extended to study more complex models of online learning such as those in [8, 21, 46], which we leave as future work. Another promising line of research is to apply the shifted loss function proposed in [9] to design robust online non-parametric learning algorithms. We also consider introducing the \(\ell_{\sigma}\) loss into recent empirical studies [17, 56] of deep learning models where outliers or heavy-tailed noise are allowed.
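Before turning to the analysis, it may help to see the robust online update (5) in action on toy data. The sketch below implements the recursion \(f_{t+1}=f_{t}-\eta W^{\prime}(\xi_{t,\sigma})(f_{t}(x_{t})-y_{t})K_{x_{t}}\) in representer form. The Gaussian kernel, the Welsch-type window \(W(u)=1-e^{-u}\) (so that \(W^{\prime}(u)=e^{-u}\), \(W^{\prime}_{+}(0)=1\), and condition (3) holds with \(p=c_{p}=1\)), the toy regression function, and the outlier model are all illustrative assumptions made here; they are not choices taken from the paper.

```python
# Minimal simulation sketch of the robust online kernel update analysed below.
# Assumptions (for illustration only): Gaussian kernel, Welsch-type window
# W(u) = 1 - exp(-u) so W'(u) = exp(-u), f_rho(x) = sin(2*pi*x), and 5% of the
# labels corrupted by large outliers.
import numpy as np

rng = np.random.default_rng(0)

def kern(x, xp):                                   # Gaussian kernel, unit bandwidth
    return np.exp(-0.5 * (x - xp) ** 2)

def f_rho(x):                                      # toy regression function
    return np.sin(2.0 * np.pi * x)

def run(T, eta1=0.5, sigma=1.0, r=0.5):
    eta = eta1 * T ** (-2.0 * r / (2.0 * r + 1.0)) # Theorem-1 style constant step size
    xs = rng.uniform(0.0, 1.0, size=T)
    ys = f_rho(xs) + 0.1 * rng.standard_normal(T)
    mask = rng.random(T) < 0.05                    # 5% gross outliers
    ys = ys + mask * rng.choice([-5.0, 5.0], size=T)

    centers = np.zeros(T)                          # f_t(x) = sum_i coefs[i] * kern(centers[i], x)
    coefs = np.zeros(T)
    for t in range(T):
        pred = coefs[:t] @ kern(centers[:t], xs[t]) if t > 0 else 0.0
        resid = pred - ys[t]
        xi = resid ** 2 / sigma ** 2               # xi_{t, sigma}
        coefs[t] = -eta * np.exp(-xi) * resid      # W'(xi_{t,sigma}) = exp(-xi)
        centers[t] = xs[t]

    grid = np.linspace(0.0, 1.0, 200)
    f_hat = np.array([coefs @ kern(centers, x) for x in grid])
    return float(np.mean((f_hat - f_rho(grid)) ** 2))   # proxy for ||f_{T+1} - f_rho||_rho^2

for T in (200, 800, 3200):
    print(T, run(T))                               # the error should shrink as T grows
```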
## 4 Preliminaries: Error Decomposition and Basic Estimates
In this section, we first introduce an error decomposition for our convergence analysis. In what follows, \(\kappa:=\sup_{x\in\mathcal{X}}\sqrt{K(x,x)}\) and \(C_{W}:=\sup_{s\in(0,\infty)}\{|W^{\prime}(s)|\}\) where \(W(\cdot)\) is a windowing function satisfying (2) and (3). Recall that the sequence \(\{f_{t}\}_{t\in\mathbf{N}}\) is generated by online algorithm (5) with step size \(\eta\). Then we have
\[f_{t+1}-f_{\rho} =f_{t}-f_{\rho}-\eta W^{\prime}\left(\xi_{t,\sigma}\right)(f_{t} (x_{t})-y_{t})K_{x_{t}}\] \[=f_{t}-f_{\rho}-\eta W^{\prime}_{+}(0)(f_{t}(x_{t})-y_{t})K_{x_{t }}+\eta\big{(}W^{\prime}_{+}(0)-W^{\prime}(\xi_{t,\sigma})\big{)}(f_{t}(x_{t} )-y_{t})K_{x_{t}}\] \[=(I-\eta W^{\prime}_{+}(0)L_{K})(f_{t}-f_{\rho})+\eta W^{\prime}_ {+}(0)(L_{K}f_{t}-f_{t}(x_{t})K_{x_{t}})+\eta W^{\prime}_{+}(0)(y_{t}K_{x_{t} }-L_{K}f_{\rho})\] \[\quad+\eta\big{(}W^{\prime}_{+}(0)-W^{\prime}(\xi_{t,\sigma}) \big{)}(f_{t}(x_{t})-y_{t})K_{x_{t}}\] \[:=(I-\eta W^{\prime}_{+}(0)L_{K})(f_{t}-f_{\rho})+\eta\mathcal{B} _{t}+\eta E_{t,\sigma},\]
where
\[\mathcal{B}_{t} =W^{\prime}_{+}(0)\left[(L_{K}f_{t}-f_{t}(x_{t})K_{x_{t}})+(y_{t}K_{x_{t}}-L_{K}f_{\rho})\right]\] \[=W^{\prime}_{+}(0)\left[L_{K}(f_{t}-f_{\rho})+(y_{t}K_{x_{t}}-f_{t}(x_{t})K_{x_{t}})\right]\]
and
\[E_{t,\sigma}=\big{(}W^{\prime}_{+}(0)-W^{\prime}(\xi_{t,\sigma})\big{)}(f_{t}(x _{t})-y_{t})K_{x_{t}}. \tag{19}\]
By induction, we can decompose \(f_{T+1}-f_{\rho}\) as
\[\begin{split} f_{T+1}-f_{\rho}&=-(I-\eta W^{\prime }_{+}(0)L_{K})^{T}f_{\rho}+\eta\sum_{t=1}^{T}(I-\eta W^{\prime}_{+}(0)L_{K})^{T -t}\mathcal{B}_{t}\\ &\quad+\eta\sum_{t=1}^{T}(I-\eta W^{\prime}_{+}(0)L_{K})^{T-t}E_ {\sigma,t}.\end{split} \tag{20}\]
Then we obtain the following error decomposition which is pivotal for convergence analysis in \(L^{2}_{\rho_{X}}\). Hereinafter, we use \(\|\cdot\|\) to denote the operator norm for operators on \(L^{2}_{\rho_{X}}\) or \(\mathcal{H}_{K}\), which is specified due to the context. We simply keep the same notion for the two operator norms as \(L_{K}\) is well-defined on both \(L^{2}_{\rho_{X}}\) and \(\mathcal{H}_{K}\).
**Proposition 4.1**.: _Let \(\{f_{t}\}_{t=1}^{T+1}\) be defined by (5). Then_
\[\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{\rho}^{2}\right] \leq 2\left\|(I-\eta W^{\prime}_{+}(0)L_{K})^{T}f_{\rho}\right\|_ {\rho}^{2}+2\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W^{ \prime}_{+}(0)L_{K})^{T-t}E_{\sigma,t}\right\|_{\rho}^{2}\right]\] \[\quad+2\eta^{2}(\kappa W^{\prime}_{+}(0))^{2}\sum_{t=1}^{T}\left\| L^{\frac{1}{2}}_{K}(I-\eta W^{\prime}_{+}(0)L_{K})^{T-t}\right\|^{2}\mathbb{E}_{Z ^{t-1}}[\mathcal{E}(f_{t})],\]
_where \(E_{\sigma,t}\) is defined by (19) and_
\[\mathcal{E}(f):=\int_{\mathcal{X}\times\mathcal{Y}}(f(x)-y)^{2}d\rho,\quad\forall f:\mathcal{X}\rightarrow\mathcal{Y}\text{ measurable}. \tag{21}\]
Proof.: By the decomposition (20), we have
\[\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{\rho}^{2}\right]\] \[=\mathbb{E}_{Z^{T}}\left[\left\|-(I-\eta W^{\prime}_{+}(0)L_{K})^ {T}f_{\rho}+\eta\sum_{t=1}^{T}(I-\eta W^{\prime}_{+}(0)L_{K})^{T-t}\mathcal{B} _{t}+\eta\sum_{t=1}^{T}(I-\eta W^{\prime}_{+}(0)L_{K})^{T-t}E_{\sigma,t}\right\| _{\rho}^{2}\right]\] \[\leq 2\mathbb{E}_{Z^{T}}\left[\left\|-(I-\eta W^{\prime}_{+}(0)L_{K} )^{T}f_{\rho}+\eta\sum_{t=1}^{T}(I-\eta W^{\prime}_{+}(0)L_{K})^{T-t}\mathcal{ B}_{t}\right\|_{\rho}^{2}\right]\] \[\quad+2\mathbb{E}_{Z^{T}}\left[\left\|\eta\sum_{t=1}^{T}(I-\eta W ^{\prime}_{+}(0)L_{K})^{T-t}E_{\sigma,t}\right\|_{\rho}^{2}\right]\] \[=2\left\|(I-\eta W^{\prime}_{+}(0)L_{K})^{T}f_{\rho}\right\|_{\rho }^{2}+2\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W^{\prime }_{+}(0)L_{K})^{T-t}\mathcal{B}_{t}\right\|_{\rho}^{2}\right]\]
\[+2\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W_{+}^{ \prime}(0)L_{K})^{T-t}E_{\sigma,t}\right\|_{\rho}^{2}\right].\]
The last equality holds because \(f_{t}\) depends only on \(z_{1},\cdots,z_{t-1}\) and \(\mathbb{E}_{z_{t}}[\mathcal{B}_{t}]=0\), which implies that
\[\mathbb{E}_{Z^{T}}\langle-(I-\eta W_{+}^{\prime}(0)L_{K})^{T}f_{ \rho},\eta\sum_{t=1}^{T}(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}\mathcal{B}_{t} \rangle_{\rho}\] \[=\langle-(I-\eta W_{+}^{\prime}(0)L_{K})^{T}f_{\rho},\eta\sum_{t =1}^{T}(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}\mathbb{E}_{z_{1},\cdots,z_{t-1}} \mathbb{E}_{z_{t}}[\mathcal{B}_{t}]\rangle_{\rho}=0\]
Furthermore, for the term \(\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W_{+}^{\prime}(0 )L_{K})^{T-t}\mathcal{B}_{t}\right\|_{\rho}^{2}\right]\), we have
\[\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W_{+ }^{\prime}(0)L_{K})^{T-t}\mathcal{B}_{t}\right\|_{\rho}^{2}\right]\] \[=\eta^{2}\sum_{t=1}^{T}\mathbb{E}_{Z^{t}}\left[\left\|(I-\eta W_{ +}^{\prime}(0)L_{K})^{T-t}\mathcal{B}_{t}\right\|_{\rho}^{2}\right]\] \[\quad+\eta^{2}\sum_{t=1}^{T}\sum_{k\neq t}\mathbb{E}_{Z^{t}} \langle(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}\mathcal{B}_{t},(I-\eta W_{+}^{ \prime}(0)L_{K})^{T-k}\mathcal{B}_{k}\rangle_{\rho}\] \[=\eta^{2}\sum_{t=1}^{T}\mathbb{E}_{Z^{t}}\left[\left\|(I-\eta W_{ +}^{\prime}(0)L_{K})^{T-t}\mathcal{B}_{t}\right\|_{\rho}^{2}\right],\]
the last equality holds since for \(t>k\)
\[\mathbb{E}_{z_{1},\cdots,z_{t-1}}\mathbb{E}_{z_{t}}\langle(I-\eta W_{+}^{ \prime}(0)L_{K})^{T-t}\mathcal{B}_{t},(I-\eta W_{+}^{\prime}(0)L_{K})^{T-k} \mathcal{B}_{k}\rangle_{\rho}=0,\]
and also for \(t<k\)
\[\mathbb{E}_{z_{1},\cdots,z_{k-1}}\mathbb{E}_{z_{k}}\langle(I-\eta W_{+}^{ \prime}(0)L_{K})^{T-t}\mathcal{B}_{t},(I-\eta W_{+}^{\prime}(0)L_{K})^{T-k} \mathcal{B}_{k}\rangle_{\rho}=0.\]
Moreover, recall that \(\mathcal{B}_{t}=W_{+}^{\prime}(0)\left[(y_{t}-f_{t}(x_{t}))K_{x_{t}}-L_{K}(f_ {\rho}-f_{t})\right]\) for \(1\leq t\leq T\). We have \(\mathbb{E}_{z_{t}}\left[\mathcal{B}_{t}\right]=0\) and
\[\mathbb{E}_{z_{t}}\left[\left\|\mathcal{B}_{t}\right\|_{K}^{2}\right] \leq(W_{+}^{\prime}(0))^{2}\mathbb{E}_{z_{t}}\left[\left\|(y_{t}-f_{t}(x_{t}))K_{x_{t}}\right\|_{K}^{2}\right]\] \[=(W_{+}^{\prime}(0))^{2}\mathbb{E}_{z_{t}}\left[(y_{t}-f_{t}(x_{t}))^{2}K(x_{t},x_{t})\right]\] \[\leq(\kappa W_{+}^{\prime}(0))^{2}\int_{\mathcal{X}\times\mathcal{Y}}(f_{t}(x)-y)^{2}d\rho:=(\kappa W_{+}^{\prime}(0))^{2}\mathcal{E}(f_{t}).\]
It then follows that
\[\eta^{2}\sum_{t=1}^{T}\mathbb{E}_{Z^{T}}\left[\left\|(I-\eta W_{+ }^{\prime}(0)L_{K})^{T-t}\mathcal{B}_{t}\right\|_{\rho}^{2}\right]\] \[=\eta^{2}\sum_{t=1}^{T}\mathbb{E}_{Z^{t}}\left[\left\|(I-\eta W_{+ }^{\prime}(0)L_{K})^{T-t}L_{K}^{\frac{1}{2}}L_{K}^{-\frac{1}{2}}\mathcal{B}_{t} \right\|_{\rho}^{2}\right]\]
\[\|f_{t+1}\|_{K}^{2} =\|f_{t}\|_{K}^{2}-2\eta\langle f_{t},H_{t}\rangle_{K}+\eta^{2}\|H_ {t}\|_{K}^{2} \tag{24}\] \[=\|f_{t}\|_{K}^{2}-2\eta W^{\prime}\left(\xi_{t,\sigma}\right)(f_{ t}(x_{t})-y_{t})f_{t}(x_{t})+\eta^{2}\|H_{t}\|_{K}^{2},\]
and one can easily see that
\[\|H_{t}\|_{K}^{2}\leq\kappa^{2}\left(W^{\prime}\left(\xi_{t,\sigma}\right) \right)^{2}(f_{t}(x_{t})-y_{t})^{2}.\]
Thus by (24), \(\|f_{t+1}\|_{K}^{2}\) can be bounded by
\[\|f_{t}\|_{K}^{2}+\eta\left[\eta\kappa^{2}W^{\prime}\left(\xi_{t,\sigma}\right)(f _{t}(x_{t})-y_{t})^{2}-2(f_{t}(x_{t})-y_{t})f_{t}(x_{t})\right]W^{\prime}\left( \xi_{t,\sigma}\right). \tag{25}\]
For each \(t\), we further have
\[\eta\kappa^{2}W^{\prime}\left(\xi_{t,\sigma}\right)(f_{t}(x_{t})-y _{t})^{2}-2(f_{t}(x_{t})-y_{t})f_{t}(x_{t})\] \[=\left(\eta\kappa^{2}W^{\prime}\left(\xi_{t,\sigma}\right)-2 \right)\left((f_{t}(x_{t})-y_{t})-\frac{y_{t}}{\eta\kappa^{2}W^{\prime}\left( \xi_{t,\sigma}\right)-2}\right)^{2}+\frac{y_{t}^{2}}{2-\eta\kappa^{2}W^{\prime }\left(\xi_{t,\sigma}\right)}.\]
Since \(W^{\prime}\left(\xi_{t,\sigma}\right)\leq C_{W}\) and \(\eta\kappa^{2}C_{W}\leq 1\), it follows that \(\eta\kappa^{2}W^{\prime}\left(\xi_{t,\sigma}\right)-2<0\) and \(2-\eta\kappa^{2}W^{\prime}\left(\xi_{t,\sigma}\right)>1.\) Moreover, since \(|y|\leq M\), there holds
\[\eta\kappa^{2}W^{\prime}\left(\xi_{t,\sigma}\right)(f_{t}(x_{t})-y_{t})^{2}-2 (f_{t}(x_{t})-y_{t})f_{t}(x_{t})\leq\frac{y_{t}^{2}}{2-\eta\kappa^{2}W^{\prime }\left(\xi_{t,\sigma}\right)}\leq M^{2}.\]
Putting the above bound and the induction assumption \(\|f_{t}\|_{K}^{2}\leq M^{2}C_{W}(t-1)\eta\) into (25) yields
\[\|f_{t+1}\|_{K}^{2}\leq\|f_{t}\|_{K}^{2}+\eta M^{2}C_{W}\leq M^{2}C_{W}(t-1) \eta+\eta M^{2}C_{W}\leq M^{2}C_{W}t\eta.\]
Therefore, the proof is completed.
We also establish a uniform bound of \(\|E_{t,\sigma}\|_{K}\) for \(1\leq t\leq T\), which will play a crucial role in our convergence analysis. Recall that the windowing function \(W(\cdot)\) satisfies condition (3) with some constants \(c_{p}>0\) and \(p>0\).
**Proposition 4.3**.: _Let \(E_{\sigma,t}\) be defined by (19) with \(1\leq t\leq T.\) Then_
\[\|E_{t,\sigma}\|_{K}\leq\kappa c_{p}\left(M+\kappa M\sqrt{C_{W}}\right)^{2p+ 1}\frac{(\eta T)^{p+\frac{1}{2}}}{\sigma^{2p}}. \tag{26}\]
Proof.: Recall that \(E_{t,\sigma}=\left(W_{+}^{\prime}(0)-W^{\prime}\left(\frac{(y_{t}-f_{t}(x_{t}) )^{2}}{\sigma^{2}}\right)\right)(f_{t}(x_{t})-y_{t})K_{x_{t}}.\) Since the function \(W(\cdot)\) satisfies condition (3) with constants \(c_{p}>0\) and \(p>0\), we have
\[\left|W_{+}^{\prime}(0)-W^{\prime}\left(\frac{(y_{t}-f_{t}(x_{t}))^{2}}{\sigma^{2}}\right)\right|\leq c_{p}\left(\frac{(y_{t}-f_{t}(x_{t}))^{2}}{\sigma^{2}}\right)^{p}\leq c_{p}\left(\frac{M+\kappa\|f_{t}\|_{K}}{\sigma}\right)^{2p}.\]
Combining the above estimate with the bound (23) for \(\|f_{t}\|_{K}\) in Proposition 4.2, we get
\[\|E_{t,\sigma}\|_{K}\leq\frac{\kappa c_{p}(M+\kappa\|f_{t}\|_{K})^{2p+1}}{ \sigma^{2p}}\leq\kappa c_{p}\left(M+\kappa M\sqrt{C_{W}\eta(t-1)}\right)^{2p+ 1}\frac{1}{\sigma^{2p}}.\]
Hence, the uniform bound (26) holds true for all \(1\leq t\leq T\). Thus we complete the proof.
The following bound for the second term in Proposition 4.1 is an immediate consequence of Proposition 4.3.
**Proposition 4.4**.: _Let \(E_{\sigma,t}\) be defined by (19) with \(1\leq t\leq T.\) Then_
\[2\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W_{+}^{\prime}(0) L_{K})^{T-t}E_{\sigma,t}\right\|_{\rho}^{2}\right]\leq C_{1}\frac{(\eta T)^{2p+2}}{ \sigma^{4p}}, \tag{27}\]
_where \(C_{1}=2\kappa^{2}c_{p}^{2}\left(M+\kappa M\sqrt{C_{W}}\right)^{4p+2}\left( \kappa+\left(\frac{2}{eW_{+}^{\prime}(0)}\right)^{\frac{1}{2}}\right)^{2}\)._
Proof.: By the relationship (6) between \(\|\cdot\|_{\rho}\) and \(\|\cdot\|_{K}\), Lemma 4.1 with \(\alpha=\frac{1}{2}\) and \(s=t\), and the uniform bound (26) for \(\left\|E_{\sigma,t}\right\|_{K}^{2},\) we have
\[2\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W_{ +}^{\prime}(0)L_{K})^{T-t}E_{\sigma,t}\right\|_{\rho}^{2}\right]\] \[=2\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|L_{K}^{\frac{1}{2}}\sum _{t=1}^{T}(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}E_{\sigma,t}\right\|_{K}^{2}\right]\] \[\leq 2\eta^{2}\max_{1\leq t\leq T}\left\|E_{\sigma,t}\right\|_{K} ^{2}\times\left(\sum_{t=0}^{T-1}\left\|L_{K}^{\frac{1}{2}}(I-\eta W_{+}^{ \prime}(0)L_{K})^{t}\right\|\right)^{2}\] \[\leq 2\eta^{2}\kappa^{2}c_{p}^{2}\left(M+\kappa M\sqrt{C_{W}} \right)^{4p+2}\frac{(\eta T)^{2p+1}}{\sigma^{4p}}\times\left(\kappa+\sum_{t=1} ^{T-1}\left(\frac{1}{2eW_{+}^{\prime}(0)}\right)^{\frac{1}{2}}(\eta t)^{-\frac {1}{2}}\right)^{2}\] \[\leq 2\eta^{2}\kappa^{2}c_{p}^{2}\left(M+\kappa M\sqrt{C_{W}} \right)^{4p+2}\frac{(\eta T)^{2p+1}}{\sigma^{4p}}\times\left(\kappa+2\left( \frac{1}{2eW_{+}^{\prime}(0)}\right)^{\frac{1}{2}}\sqrt{\frac{T}{\eta}}\right) ^{2}\] \[\leq 2\kappa^{2}c_{p}^{2}\left(M+\kappa M\sqrt{C_{W}}\right)^{4p+ 2}\left(\kappa+\left(\frac{2}{eW_{+}^{\prime}(0)}\right)^{\frac{1}{2}}\right) ^{2}\frac{(\eta T)^{2p+2}}{\sigma^{4p}}.\]
This finishes the proof.
The following uniform bound for \(\mathbb{E}_{Z^{t}}[\mathcal{E}(f_{t+1})]\) is also important for our analysis.
**Proposition 4.5**.: _If \(\eta\) satisfies_
\[\eta\leq\frac{1}{\left(\frac{1}{e}+2\kappa^{2}W_{+}^{\prime}(0) \right)^{2}\log T}, \tag{28}\]
_then for \(1\leq t\leq T\), there holds_
\[\mathbb{E}_{Z^{t}}[\mathcal{E}(f_{t+1})]\leq 2\mathcal{E}(f_{\rho})+4\|f_{ \rho}\|_{\rho}^{2}+2C_{1}\frac{(\eta T)^{2p+2}}{\sigma^{4p}} \tag{29}\]
_where \(C_{1}\) is given in Proposition 4.4._
Proof.: We bound \(\mathbb{E}_{Z^{t}}\left[\mathcal{E}(f_{t+1})\right]\) by induction. Due to Proposition 4.1, Lemma 4.1 with \(\alpha=\frac{1}{2}\) and \(s=t-i\), and Proposition 4.4, \(\mathbb{E}_{Z^{t}}\left[\|f_{t+1}-f_{\rho}\|_{\rho}^{2}\right]\) can be bounded as
\[\mathbb{E}_{Z^{t}}\left[\|f_{t+1}-f_{\rho}\|_{\rho}^{2}\right]\] \[\leq 2\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{t}f_{\rho}\right\|_{\rho}^{2}+2\eta^{2}\mathbb{E}_{Z^{t}}\left[\left\|\sum_{i=1}^{t}(I-\eta W_{+}^{\prime}(0)L_{K})^{t-i}E_{\sigma,i}\right\|_{\rho}^{2}\right]\] \[\quad+2\eta^{2}(\kappa W_{+}^{\prime}(0))^{2}\sum_{i=1}^{t}\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{t-i}L_{K}^{\frac{1}{2}}\right\|^{2}\mathbb{E}_{Z^{i-1}}\left[\mathcal{E}(f_{i})\right]\] \[\leq 2\|f_{\rho}\|_{\rho}^{2}+C_{1}\frac{(\eta T)^{2p+2}}{\sigma^{4p}}+2\eta^{2}(\kappa W_{+}^{\prime}(0))^{2}\left(\kappa^{2}+\frac{1}{2eW_{+}^{\prime}(0)}\sum_{i=1}^{t-1}(\eta(t-i))^{-1}\right)\sup_{1\leq i\leq t}\mathbb{E}_{Z^{i-1}}\left[\mathcal{E}(f_{i})\right]\]
\[\leq 2\|f_{\rho}\|_{\rho}^{2}+C_{1}\frac{(\eta T)^{2p+2}}{\sigma^{4p}}+2 \eta\left(\kappa^{2}W_{+}^{\prime}(0)+\frac{1}{2e}\right)^{2}\sup_{1\leq i\leq t }\log t\mathbb{E}_{Z^{i-1}}\left[\mathcal{E}(f_{i})\right].\]
Then if bound (29) is true for \(\mathbb{E}_{Z^{i-1}}\left[\mathcal{E}(f_{i})\right]\), combining with the condition (28) on \(\eta\) and the relation \(\mathcal{E}(f_{t+1})=\mathcal{E}(f_{\rho})+\mathcal{E}(f_{t+1})-\mathcal{E}(f _{\rho})=\mathcal{E}(f_{\rho})+\|f_{t+1}-f_{\rho}\|_{\rho}^{2}\), we obtain
\[\mathbb{E}_{Z^{t}}(\mathcal{E}(f_{t+1})) \leq\mathcal{E}(f_{\rho})+2\|f_{\rho}\|_{\rho}^{2}+\frac{1}{2} \left(2\mathcal{E}(f_{\rho})+4\|f_{\rho}\|_{\rho}^{2}+2C_{1}\frac{(\eta T)^{2 p+2}}{\sigma^{4p}}\right)+C_{1}\frac{(\eta T)^{2p+2}}{\sigma^{4p}}\] \[=2\mathcal{E}(f_{\rho})+4\|f_{\rho}\|_{\rho}^{2}+2C_{1}\frac{( \eta T)^{2p+2}}{\sigma^{4p}}.\]
This finishes our proof.
## 5 Convergence Analysis
In this section, we give the proofs of Theorem 1 and Theorem 2.
### Convergence in \(L^{2}_{\rho_{X}}\)
This subsection is devoted to the proof of Theorem 1, which provides the convergence rates in \(L^{2}_{\rho_{X}}\).
**Proof of Theorem 1**. Due to the error decomposition in Proposition 4.1, we only need to estimate the three terms appearing in the upper bound of \(\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{\rho}^{2}\right]\) respectively. As an estimate for the second term is given by (27) of Proposition 4.4, we turn to bound the remaining two terms.
For the first term, since the regularity condition (7) holds with \(r>0\), i.e., \(f_{\rho}=L^{r}_{K}g_{\rho}\) with \(g_{\rho}\in L^{2}_{\rho_{X}}\) and \(r>0\), then by Lemma 4.1 with \(\alpha=r\) and \(s=T\), we have
\[2\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T}f_{\rho}\right\|_{\rho }^{2} \tag{30}\] \[\leq 2\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T}L^{r}_{K}\right\| ^{2}\left\|g_{\rho}\right\|_{\rho}^{2}\] \[\leq 2\left(\frac{r}{eW_{+}^{\prime}(0)}\right)^{2r}\|g_{\rho} \|_{\rho}^{2}\,(\eta T)^{-2r}.\]
For the third term, by Lemma 4.1 with \(\alpha=\frac{1}{2}\) and \(s=T-t\), and the uniform bound (29) for \(\mathbb{E}_{Z^{t-1}}\left[\mathcal{E}(f_{t})\right]\) in Proposition 4.5, we have
\[2\eta^{2}(\kappa W_{+}^{\prime}(0))^{2}\sum_{t=1}^{T}\left\|(I- \eta W_{+}^{\prime}(0)L_{K})^{T-t}L^{\frac{1}{2}}_{K}\right\|^{2}\mathbb{E}_{Z ^{t-1}}\left[\mathcal{E}(f_{t})\right]\] \[\leq 2\eta^{2}(\kappa W_{+}^{\prime}(0))^{2}\max_{1\leq t\leq T} \mathbb{E}_{Z^{t-1}}\left[\mathcal{E}(f_{t})\right]\left(\kappa^{2}+\sum_{t=1 }^{T-1}\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}L^{\frac{1}{2}}_{K}\right\|^ {2}\right)\] \[\leq 2\eta^{2}(\kappa W_{+}^{\prime}(0))^{2}\left(2\mathcal{E}(f _{\rho})+4\|f_{\rho}\|_{\rho}^{2}+2C_{1}\frac{(\eta T)^{2p+2}}{\sigma^{4p}} \right)\left(\kappa^{2}+\frac{1}{2eW_{+}^{\prime}(0)}\sum_{t=1}^{T-1}\frac{1}{ \eta(T-t)}\right)\]
\[\leq 2(\kappa W_{+}^{\prime}(0))^{2}\left(2\mathcal{E}(f_{\rho})+4\|f_{ \rho}\|_{\rho}^{2}+2C_{1}\frac{(\eta T)^{2p+2}}{\sigma^{4p}}\right)\left(\kappa^ {2}+\frac{1}{2eW_{+}^{\prime}(0)}\right)\eta\log T.\]
Then the third term can be bounded as
\[\begin{split}& 2\eta^{2}(\kappa W_{+}^{\prime}(0))^{2}\sum_{t=1}^{T} \left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}L_{K}^{\frac{1}{2}}\right\|^{2} \mathbb{E}_{Z^{t-1}}\left[\mathcal{E}(f_{t})\right]\\ &\leq 4(\kappa W_{+}^{\prime}(0))^{2}\left(\mathcal{E}(f_{\rho})+2\| f_{\rho}\|_{\rho}^{2}+C_{1}\right)\left(\kappa^{2}+\frac{1}{2eW_{+}^{\prime}(0)} \right)\left(1+(\eta T)^{2p+2}\sigma^{-4p}\right)\eta\log T.\end{split} \tag{31}\]
Putting the estimates (30), (31) and (27) back into Proposition 4.1 and taking \(\eta=\frac{1}{\eta_{0}}T^{-\frac{2r}{2r+1}}\) yields
\[\begin{split}&\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{\rho}^{2}\right]\\ &\leq 2\left(\frac{r}{eW_{+}^{\prime}(0)}\right)^{2r}\left\|g_{\rho}\right\|_{\rho}^{2}(\eta T)^{-2r}+C_{1}\frac{(\eta T)^{2p+2}}{\sigma^{4p}}\\ &\quad+4(\kappa W_{+}^{\prime}(0))^{2}\left(\mathcal{E}(f_{\rho})+2\|f_{\rho}\|_{\rho}^{2}+C_{1}\right)\left(\kappa^{2}+\frac{1}{2eW_{+}^{\prime}(0)}\right)^{2}\left(1+(\eta T)^{2p+2}\sigma^{-4p}\right)\eta\log T\\ &\leq C\max\left\{T^{-\frac{2r}{2r+1}}\log T,\;T^{\frac{2p+2}{2r+1}}\sigma^{-4p}\right\},\end{split}\]
where
\[\begin{split} C=2\left(\frac{r}{eW_{+}^{\prime}(0)}\right)^{2r} \left\|g_{\rho}\right\|_{\rho}^{2}\eta_{0}^{2r}+4\eta_{0}(\kappa W_{+}^{\prime }(0))^{2}\left(\mathcal{E}(f_{\rho})+2\|f_{\rho}\|_{\rho}^{2}+C_{1}\right)\\ \times\left(\kappa^{2}+\frac{1}{2eW_{+}^{\prime}(0)}\right)^{2} \left(1+\eta_{0}^{-(2p+2)}\right)+C_{1}\eta_{0}^{-(2p+2)}.\end{split}\]
The proof is completed.
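As a quick reading of the bound just obtained (an informal remark, not part of the theorem statement): ignoring the logarithmic factor, the second entry of the maximum is dominated by the first as soon as the scale parameter satisfies \(\sigma\geq T^{\frac{p+r+1}{2p(2r+1)}}\), since \(T^{\frac{2p+2}{2r+1}}\sigma^{-4p}\leq T^{-\frac{2r}{2r+1}}\) is equivalent to \(\sigma^{4p}\geq T^{\frac{2p+2r+2}{2r+1}}\). For such choices of \(\sigma\), the robustness correction does not affect the rate \(T^{-\frac{2r}{2r+1}}\log T\).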
### Capacity Dependent Analysis in \(\mathcal{H}_{K}\)
In this section, we consider the convergence of algorithm (5) in \(\mathcal{H}_{K}\) and develop a capacity dependent analysis. We show that algorithm (5) can achieve the optimal learning rates in the minimax sense in \(\mathcal{H}_{K}\). Before proving the main result, we establish the following error decomposition, which is different from the one in \(L_{\rho_{X}}^{2}\).
**Proposition 5.1**.: _Let \(\{f_{t}\}_{t=1}^{T}\) be defined by (5). Then_
\[\begin{split}\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{K}^{2} \right]&\leq 2\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T}f_{\rho} \right\|_{K}^{2}+2\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W _{+}^{\prime}(0)L_{K})^{T-t}E_{\sigma,t}\right\|_{K}^{2}\right]\\ &\quad+4\eta^{2}(W_{+}^{\prime}(0))^{2}\sum_{t=1}^{T}\left(\kappa ^{2}\mathbb{E}_{Z^{t-1}}[\|f_{t}\|_{K}^{2}]+M^{2}\right)\mathrm{Tr}\left(L_{K} (I-\eta W_{+}^{\prime}(0)L_{K})^{2(T-t)}\right),\end{split}\]
_where \(E_{\sigma,t}\) is defined by (19)._
Proof.: We first prove the following claim.

**Lemma 5.2**.: _Let \(\{f_{t}\}_{t=1}^{T}\) be defined by (5). Then_

\[\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{K}^{2}\right]\leq 2\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T}f_{\rho}\right\|_{K}^{2}+2\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}E_{\sigma,t}\right\|_{K}^{2}\right]+2\eta^{2}\sum_{t=1}^{T}\mathbb{E}_{Z^{T}}\left[\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}\mathcal{B}_{t}\right\|_{K}^{2}\right].\]
Proof.: First, by the error decomposition (20) and the proof of Proposition 4.1, we get
\[\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{K}^{2}\right] \leq 2\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T}f_{\rho}\right\|_{K} ^{2}+2\eta^{2}\mathbb{E}_{Z^{T}}\left[\left\|\sum_{t=1}^{T}(I-\eta W_{+}^{ \prime}(0)L_{K})^{T-t}E_{\sigma,t}\right\|_{K}^{2}\right]\] \[\quad+2\eta^{2}\sum_{t=1}^{T}\mathbb{E}_{Z^{T}}\left[\left\|(I- \eta W_{+}^{\prime}(0)L_{K})^{T-t}\mathcal{B}_{t}\right\|_{K}^{2}\right].\]
For the third term \(2\eta^{2}\sum_{t=1}^{T}\mathbb{E}\left[\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}\mathcal{B}_{t}\right\|_{K}^{2}\right],\) recalling that \(\mathcal{B}_{t}=W_{+}^{\prime}(0)(L_{K}(f_{t}-f_{\rho})+(y_{t}K_{x_{t}}-f_{t}(x_{t})K_{x_{t}})),\) we have
\[\mathbb{E}_{z_{t}}\left[\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T- t}\mathcal{B}_{t}\right\|_{K}^{2}\right]\leq(W_{+}^{\prime}(0))^{2}\mathbb{E}_{z _{t}}\left[\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}(y_{t}-f_{t}(x_{t}))K_{ x_{t}}\right\|_{K}^{2}\right]\] \[\leq(W_{+}^{\prime}(0))^{2}\left(2\kappa^{2}\|f_{t}\|_{K}^{2}+2M^ {2}\right)\mathbb{E}_{z_{t}}\left[\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t }K_{x_{t}}\right\|_{K}^{2}\right]\] \[=(W_{+}^{\prime}(0))^{2}\left(2\kappa^{2}\|f_{t}\|_{K}^{2}+2M^{2} \right)\operatorname{Tr}\left(L_{K}(I-\eta W_{+}^{\prime}(0)L_{K})^{2(T-t)} \right).\]
Therefore, \(2\eta^{2}\sum_{t=1}^{T}\mathbb{E}\left[\left\|(I-\eta W_{+}^{\prime}(0)L_{K}) ^{T-t}\mathcal{B}_{t}\right\|_{K}^{2}\right]\) can be bounded by
\[2\eta^{2}(W_{+}^{\prime}(0))^{2}\sum_{t=1}^{T}\left(2\kappa^{2}\mathbb{E}_{Z^ {t-1}}[\|f_{t}\|_{K}^{2}]+2M^{2}\right)\operatorname{Tr}\left(L_{K}(I-\eta W_ {+}^{\prime}(0)L_{K})^{2(T-t)}\right).\]
This completes the proof of Proposition 5.1.
To prove our main results, we need the following bound for \(\mathbb{E}_{Z^{t-1}}\|f_{t}\|_{K}^{2}\).
**Lemma 5.1**.: _Let \(\{f_{t}\}_{t=1}^{T}\) be defined by (5). Then_
\[\mathbb{E}_{Z^{t-1}}[\|f_{t}\|_{K}^{2}]\leq C_{2}(1+(\eta T)^{2p+3}\sigma^{-4p }),\quad\forall 1\leq t\leq T, \tag{32}\]
_where_
\[C_{2}=6\|f_{\rho}\|_{K}^{2}+8(\kappa W_{+}^{\prime}(0))^{2}\left(\mathcal{E}( f_{\rho})+2\|f_{\rho}\|_{\rho}^{2}+C_{1}\right)+2\kappa^{2}c_{p}^{2}\left(M+ \kappa M\sqrt{C_{W}}\right)^{4p+2}\]
_and \(C_{1}\) is given in Proposition 4.4._
Proof.: Recall that \(\mathcal{B}_{i}=W_{+}^{\prime}(0)\left((y_{i}-f_{i}(x_{i}))K_{x_{i}}-L_{K}(f_{ \rho}-f_{i})\right)\) for \(1\leq i\leq t\). Then \(\mathbb{E}_{z_{i}|Z^{i-1}}\left[\mathcal{B}_{i}\right]=0\) and
\[\mathbb{E}_{z_{i}|Z^{i-1}}\left[\|\mathcal{B}_{i}\|_{K}^{2}\right] \leq(W_{+}^{\prime}(0))^{2}\mathbb{E}_{z_{i}|Z^{i-1}}\left[\|(y_ {i}-f_{i}(x_{i}))K_{x_{i}}\|_{K}^{2}\right]\] \[=(W_{+}^{\prime}(0))^{2}\mathbb{E}_{z_{i}|Z^{i-1}}\left[(y_{i}-f_ {i}(x_{i}))^{2}K(x_{i},x_{i})\right]\leq(\kappa W_{+}^{\prime}(0))^{2}\mathcal{E }(f_{i}),\]
where we use the fact that \(f_{i}\) is a random variable independent of \(z_{i}\). Combining with the definition of \(f_{t}\) given by (5), we have
\[\mathbb{E}_{Z^{t}}\left[\|f_{t+1}-f_{\rho}\|_{K}^{2}\right]\leq 2\left\|(I- \eta W_{+}^{\prime}(0)L_{K})^{t}f_{\rho}\right\|_{K}^{2}+2\eta^{2}\sum_{i=1}^{t} \mathbb{E}_{Z^{i}}\left[\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{t-i}\mathcal{B} _{i}\right\|_{K}^{2}\right]\]
\[+2\eta^{2}\mathbb{E}_{Z^{t}}\left[\left\|\sum_{i=1}^{t}(I-\eta W_{+}^ {\prime}(0)L_{K})^{t-i}E_{\sigma,i}\right\|_{K}^{2}\right]\] \[\leq 2\|f_{\rho}\|_{K}^{2}+2\eta^{2}\sum_{i=1}^{t}\left\|(I-\eta W _{+}^{\prime}(0)L_{K})^{t-i}\right\|\mathbb{E}_{Z^{t}}\left[\left\|\mathcal{B} _{i}\right\|_{K}^{2}\right]\] \[\quad+2\eta^{2}\mathbb{E}_{Z^{t}}\left[\left\|\sum_{i=1}^{t}(I- \eta W_{+}^{\prime}(0)L_{K})^{t-i}E_{\sigma,i}\right\|_{K}^{2}\right]\] \[\leq 2\|f_{\rho}\|_{K}^{2}+2\eta^{2}(\kappa W_{+}^{\prime}(0))^{2 }\sum_{i=1}^{t}\mathbb{E}_{Z^{i-1}}\left[\mathcal{E}(f_{i})\right]+2\eta^{2} \mathbb{E}_{Z^{t}}\left[\sum_{i=1}^{t}\left\|E_{\sigma,i}\right\|_{K}\right] ^{2}.\]
Then putting the bounds (29) and (26) for \(\mathbb{E}_{Z^{i-1}}\left[\mathcal{E}(f_{i})\right]\) and \(\left\|E_{\sigma,i}\right\|_{K}\) back into the above inequality yields
\[\mathbb{E}_{Z^{t}}\left[\|f_{t+1}-f_{\rho}\|_{K}^{2}\right] \leq 2\|f_{\rho}\|_{K}^{2}+2\eta^{2}T(\kappa W_{+}^{\prime}(0))^{2 }\left(2\mathcal{E}(f_{\rho})+4\|f_{\rho}\|_{\rho}^{2}+2C_{1}(\eta T)^{2p+2} \sigma^{-4p}\right)\] \[+2\kappa^{2}c_{p}^{2}\left(M+\kappa M\sqrt{C_{W}}\right)^{4p+2}( \eta T)^{2p+3}\sigma^{-4p}.\]
Finally we obtain the desired bound (32) by the relation
\[\mathbb{E}_{Z^{t}}\left[\|f_{t+1}\|_{K}^{2}\right]\leq 2\mathbb{E}_{Z^{t}}\left[\| f_{t+1}-f_{\rho}\|_{K}^{2}\right]+2\|f_{\rho}\|_{K}^{2}.\]
This completes the proof.
Now we are in a position to prove the convergence rates in \(\mathcal{H}_{K}\).
**Proof of Theorem 2**: Similarly to the proof of Theorem 1, due to Proposition 5.1, we need to estimate the three terms appearing in the upper bound of \(\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{K}^{2}\right]\) respectively.
For the first term, since the target function \(f_{\rho}\) satisfies the regularity condition (7) with \(r>\frac{1}{2}\), i.e., \(f_{\rho}=L_{K}^{r}g_{\rho}\) with \(g_{\rho}\in L_{\rho_{X}}^{2}\) and \(r>\frac{1}{2}\), then by Lemma 4.1 with \(\alpha=r-\frac{1}{2}\) and \(s=T\), we have
\[2\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T}f_{\rho}\right\|_{K}^{2}\] \[=2\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T}L_{K}^{r-\frac{1}{2}}L_{K}^{\frac{1}{2}}g_{\rho}\right\|_{K}^{2}\] \[\leq 2\left\|(I-\eta W_{+}^{\prime}(0)L_{K})^{T}L_{K}^{r-\frac{1}{2}}\right\|^{2}\left\|g_{\rho}\right\|_{\rho}^{2} \tag{33}\] \[\leq 2\left(\frac{r-\frac{1}{2}}{eW_{+}^{\prime}(0)}\right)^{2(r-\frac{1}{2})}\left\|g_{\rho}\right\|_{\rho}^{2}(\eta T)^{-2(r-\frac{1}{2})}.\]
The property of trace shows that if \(A\) is an operator of trace class and \(B\) is a bounded linear operator, there holds \(\mathrm{Tr}(AB)\leq\mathrm{Tr}(A)\|B\|\). If the capacity condition (9) holds with
\(0<\beta<1\), then the third term in Proposition 5.1 can be bounded as
\[\begin{split}& 4\eta^{2}(W_{+}^{\prime}(0))^{2}\sum_{t=1}^{T}\left( \kappa^{2}\mathbb{E}_{Z^{t-1}}[\|f_{t}\|_{K}^{2}]+M^{2}\right)\mathrm{Tr}\left( L_{K}(I-\eta W_{+}^{\prime}(0)L_{K})^{2(T-t)}\right)\\ &\leq 2\eta^{2}(W_{+}^{\prime}(0))^{2}\sum_{t=1}^{T}\left(2\kappa^{2 }\mathbb{E}_{Z^{t-1}}[\|f_{t}\|_{K}^{2}]+2M^{2}\right)\mathrm{Tr}\left(L_{K}( I-\eta W_{+}^{\prime}(0)L_{K})^{2(T-t)}\right)\\ &\leq\eta^{2}\sum_{t=1}^{T}\left\|L_{K}^{1-\beta}(I-\eta W_{+}^{ \prime}(0)L_{K})^{2(T-t)}\right\|\\ &\qquad\qquad\times 2(W_{+}^{\prime}(0))^{2}\mathrm{Tr}(L_{K}^{ \beta})\left(2\kappa^{2}\max_{1\leq t\leq T}\mathbb{E}_{Z^{t-1}}[\|f_{t}\|_{K }^{2}]+2M^{2}\right).\end{split} \tag{34}\]
Now we turn to estimate \(\eta^{2}\sum_{t=1}^{T}\left\|L_{K}^{1-\beta}(I-\eta W_{+}^{\prime}(0)L_{K})^{2 (T-t)}\right\|\) appeared in the bound above. One can get from Lemma 4.1 with \(\alpha=1-\beta\) and \(s=2t\) that
\[\begin{split}&\eta^{2}\sum_{t=1}^{T}\left\|L_{K}^{1-\beta}(I- \eta W_{+}^{\prime}(0)L_{K})^{2(T-t)}\right\|\\ &=\eta^{2}\|L_{K}^{1-\beta}\|+\eta^{2}\sum_{t=1}^{T-1}\left\|L_{K }^{1-\beta}(I-\eta W_{+}^{\prime}(0)L_{K})^{2t}\right\|\\ &\leq\eta^{2}\kappa^{2-2\beta}+\eta^{2}\left(\frac{1-\beta}{eW_{ +}^{\prime}(0)}\right)^{1-\beta}\sum_{t=1}^{T-1}\frac{1}{(2\eta t)^{1-\beta}} \\ &\leq\eta^{2}\kappa^{2-2\beta}+\left(\frac{1-\beta}{2eW_{+}^{ \prime}(0)}\right)^{1-\beta}\frac{1}{\beta}\eta^{1+\beta}T^{\beta}\\ &\leq\left(\kappa^{2-2\beta}+\left(\frac{1-\beta}{eW_{+}^{\prime }(0)}\right)^{1-\beta}\frac{1}{\beta}\right)\eta^{1+\beta}T^{\beta},\end{split}\]
Combining the bound (34) with the bound (32) for \(\mathbb{E}_{Z^{t-1}}\left[\|f_{t}\|_{K}^{2}\right]\), the third term can be bounded as
\[\begin{split}& 4\eta^{2}(W_{+}^{\prime}(0))^{2}\sum_{t=1}^{T}\left(\kappa^{2}\mathbb{E}_{Z^{t-1}}[\|f_{t}\|_{K}^{2}]+M^{2}\right)\mathrm{Tr}\left(L_{K}(I-\eta W_{+}^{\prime}(0)L_{K})^{2(T-t)}\right)\\ &\leq\left(\kappa^{2-2\beta}+\left(\frac{1-\beta}{eW_{+}^{\prime}(0)}\right)^{1-\beta}\frac{1}{\beta}\right)\eta^{1+\beta}T^{\beta}\\ &\qquad\qquad\times 2(W_{+}^{\prime}(0))^{2}\mathrm{Tr}(L_{K}^{\beta})\left(\kappa^{2}C_{2}(1+(\eta T)^{2p+3}\sigma^{-4p})+M^{2}\right).\end{split} \tag{35}\]
For the second term \(2\eta^{2}\mathbb{E}\left[\left\|\sum_{t=1}^{T}(I-\eta W_{+}^{\prime}(0)L_{K})^ {T-t}E_{\sigma,t}\right\|_{K}^{2}\right]\), by the uniform bound (26) for \(\left\|E_{\sigma,t}\right\|_{K}\), we have
\[2\eta^{2}\left\|\sum_{t=1}^{T}(I-\eta W_{+}^{\prime}(0)L_{K})^{T-t}E_{ \sigma,t}\right\|_{K}^{2} \tag{36}\] \[\leq 2\eta^{2}\left(\sum_{t=1}^{T}\left\|(I-\eta W_{+}^{\prime}(0)L _{K})^{T-t}E_{\sigma,t}\right\|_{K}\right)^{2}\] \[\leq 2\eta^{2}\left(\sum_{t=1}^{T}\left\|E_{\sigma,t}\right\|_{K} \right)^{2}\leq 2\kappa^{2}c_{p}^{2}\left(M+\kappa M\sqrt{C_{W}}\right)^{4p+2} \frac{(\eta T)^{2p+3}}{\sigma^{4p}}.\]
Now putting the above estimates (33), (35) and (36) back into Proposition 5.1 yields the bound for \(\mathbb{E}_{Z^{T}}\left[\|f_{T+1}-f_{\rho}\|_{K}^{2}\right]\), which is given by
\[2\left(\frac{r-\frac{1}{2}}{eW_{+}^{\prime}(0)}\right)^{2(r-\frac{1}{2})}\left\|g_{\rho}\right\|_{\rho}^{2}(\eta T)^{-2(r-\frac{1}{2})}+2\kappa^{2}c_{p}^{2}\left(M+\kappa M\sqrt{C_{W}}\right)^{4p+2}\frac{(\eta T)^{2p+3}}{\sigma^{4p}}\] \[+2(W_{+}^{\prime}(0))^{2}\mathrm{Tr}(L_{K}^{\beta})\left(\kappa^{2}C_{2}(1+(\eta T)^{2p+3}\sigma^{-4p})+M^{2}\right)\left(\kappa^{2-2\beta}+\left(\frac{1-\beta}{eW_{+}^{\prime}(0)}\right)^{1-\beta}\frac{1}{\beta}\right)\eta^{1+\beta}T^{\beta}.\]
Finally we choose \(\eta=\frac{1}{\eta_{0}}T^{\frac{1-2r-\beta}{2r+\beta}}\) in the bound above to obtain the desired result with
\[\tilde{C} =2\left(\frac{r-\frac{1}{2}}{eW_{+}^{\prime}(0)}\right)^{2r-1}\left\|g_{\rho}\right\|_{\rho}^{2}\eta_{0}^{2r-1}+2\kappa^{2}c_{p}^{2}\left(M+\kappa M\sqrt{C_{W}}\right)^{4p+2}\eta_{0}^{-(2p+3)}\] \[+2(W_{+}^{\prime}(0))^{2}\mathrm{Tr}(L_{K}^{\beta})\left(\kappa^{2}C_{2}(1+\eta_{0}^{-(2p+3)})+M^{2}\right)\left(\kappa^{2-2\beta}+\left(\frac{1-\beta}{eW_{+}^{\prime}(0)}\right)^{1-\beta}\frac{1}{\beta}\right)\eta_{0}^{-(1+\beta)}.\]
The proof is finished.
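As an informal reading of the three estimates above (ignoring constants depending on \(\eta_{0}\), and not part of the theorem statement): with \(\eta=\frac{1}{\eta_{0}}T^{\frac{1-2r-\beta}{2r+\beta}}\) one has \(\eta T\asymp T^{\frac{1}{2r+\beta}}\) and \(\eta^{1+\beta}T^{\beta}\asymp T^{-\frac{2r-1}{2r+\beta}}\), so all three terms are of order \(T^{-\frac{2r-1}{2r+\beta}}\) provided the scale parameter satisfies \(\sigma\geq T^{\frac{p+r+1}{2p(2r+\beta)}}\), which guarantees \((\eta T)^{2p+3}\sigma^{-4p}\lesssim T^{-\frac{2r-1}{2r+\beta}}\).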
## Acknowledgments
The work of Zheng-Chu Guo is supported by Zhejiang Provincial Natural Science Foundation of China [Project No. LR20A010001], National Natural Science Foundation of China [Project Nos. U21A20426 and 12271473], and Fundamental Research Funds for the Central Universities [Project No. 2021XZZX001]. The work of Andreas Christmann is partially supported by German Science Foundation (DFG) under Grant CH 291/3-1. The work of Lei Shi is supported by the National Natural Science Foundation of China [Project Nos.12171039 and 12061160462] and Shanghai Science and Technology Program [Project Nos. 21JC1400600 and 20JC1412700].
|
2307.09061 | A Hybrid Optimization and Deep RL Approach for Resource Allocation in
Semi-GF NOMA Networks | Semi-grant-free non-orthogonal multiple access (semi-GF NOMA) has emerged as
a promising technology for the fifth-generation new radio (5G-NR) networks
supporting the coexistence of a large number of random connections with various
quality of service requirements. However, implementing a semi-GF NOMA mechanism
in 5G-NR networks with heterogeneous services has raised several resource
management problems relating to unpredictable interference caused by the GF
access strategy. To cope with this challenge, the paper develops a novel hybrid
optimization and multi-agent deep (HOMAD) reinforcement learning-based resource
allocation design to maximize the energy efficiency (EE) of semi-GF NOMA 5G-NR
systems. In this design, a multi-agent deep Q network (MADQN) approach is
employed to conduct the subchannel assignment (SA) among users. While
optimization-based methods are utilized to optimize the transmission power for
every SA setting. In addition, a full MADQN scheme conducting both SA and power
allocation is also considered for comparison purposes. Simulation results show
that the HOMAD approach outperforms other benchmarks significantly in terms of
the convergence time and average EE. | Duc-Dung Tran, Vu Nguyen Ha, Symeon Chatzinotas, Ti Ti Nguyen | 2023-07-18T08:25:28Z | http://arxiv.org/abs/2307.09061v1 | # A Hybrid Optimization and Deep RL Approach for Resource Allocation in Semi-GF NOMA Networks
###### Abstract
Semi-grant-free non-orthogonal multiple access (semi-GF NOMA) has emerged as a promising technology for the fifth-generation new radio (5G-NR) networks supporting the coexistence of a large number of random connections with various quality of service requirements. However, implementing a semi-GF NOMA mechanism in 5G-NR networks with heterogeneous services has raised several resource management problems relating to unpredictable interference caused by the GF access strategy. To cope with this challenge, the paper develops a novel hybrid optimization and multi-agent deep (HOMAD) reinforcement learning-based resource allocation design to maximize the energy efficiency (EE) of semi-GF NOMA 5G-NR systems. In this design, a multi-agent deep Q network (MADQN) approach is employed to conduct the subchannel assignment (SA) among users. While optimization-based methods are utilized to optimize the transmission power for every SA setting. In addition, a full MADQN scheme conducting both SA and power allocation is also considered for comparison purposes. Simulation results show that the HOMAD approach outperforms other benchmarks significantly in terms of the convergence time and average EE.
## I Introduction
The future wireless networks are expected to be capable of serving a tremendous number of devices requiring heterogeneous services, e.g., enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine type communications (mMTC), together with different quality-of-service (QoS) demands [1, 2]. In this context, semi-GF NOMA has been considered as a promising solution for relieving the heavy accessing-process overhead in the dense systems [3]. Following this strategy, the subchannels (SCs) are opened for mMTC users to access freely without waiting for receiving the admission granted, i.e., grant-free (GF) access, while the association process of other users having stringent QoS requirements (e.g., eMBB/URLLC users) are scheduled by the system controllers (such as base stations or access points, etc.), which is also called as grant-based (GB) access. In addition, the NOMA transmission can be exploited when there is more than one user accessing the same SC [4].
However, the without-admission-control property of the GF strategy may result in a serious congestion problem in semi-GF NOMA systems when a tremendously large number of devices tries to access a limited number of SCs. Therefore, GF access needs to be carefully designed to mitigate this problem as well as guarantee the QoS requirements of both GB and GF users in semi-GF NOMA systems. Furthermore, in real-time systems, developing a dynamic resource allocation (RA) mechanism addressing the congestion problem and fulfilling the various QoS requirements from different services in semi-GF NOMA systems becomes more challenging. In recent years, reinforcement learning (RL) method has been applied to intelligently resolve the RA problem in communications [3]. Its application to GF NOMA and semi-GF NOMA systems has been investigated in [5, 6, 7, 8, 9, 10, 11, 12, 13]. However, these works have not considered the 5G-NR systems with the coexistence of multiple services. Furthermore, most of them aimed to discretize the continuous power variable to ease the learning process which may result in performance loss.
To address these drawbacks of the existing works, this paper develops two novel learning-based resource allocation designs maximizing EE while guaranteeing the heterogeneous requirements of various communication services in semi-GF NOMA 5G-NR systems. Both proposed algorithms exploit the multi-agent deep RL method, where the mMTC users are considered as agents that learn and optimize their SC and transmission power selection. The first algorithm, namely full multi-agent deep Q-network (Full-MAD), sets both SC assignment and power allocation (PA) as the action for the learning process, where the transmission power is quantized into a number of discrete levels. In contrast, the second algorithm, namely HOMAD, only considers the SC selection as the action. In this learning-based solution, the transmission power corresponding to each SC setting is determined by efficient optimization-based analytical results. By doing so, the action space size is significantly reduced, and the hybrid method can take advantage of both deep Q-network (DQN) and optimization-based approaches to gain better learning performance. Simulation results are then presented to evaluate the performance of our proposed mechanisms in terms of convergence time and system EE.
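To make the action-space argument concrete, the following small sketch (hypothetical names and sizes, not the paper's implementation) contrasts the per-agent action spaces of Full-MAD and HOMAD and shows a standard epsilon-greedy selection step a DQN agent could use.

```python
# Illustrative only: per-agent action spaces of the two designs described above.
# All names and numbers (n_subchannels, n_power_levels, etc.) are hypothetical.
import numpy as np

n_subchannels = 8           # SCs an mMTC agent can choose from
n_power_levels = 10         # discretized power levels used by the Full-MAD scheme

full_mad_actions = n_subchannels * n_power_levels   # action = (SC, power level) pair
homad_actions = n_subchannels                       # action = SC only; power comes from optimization
print(full_mad_actions, homad_actions)              # 80 vs. 8 actions per agent

def select_action(q_values, epsilon=0.1, rng=np.random.default_rng()):
    """Standard epsilon-greedy choice over a 1-D array of Q-values (one entry per action)."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))     # explore
    return int(np.argmax(q_values))                 # exploit

# In the HOMAD design, the selected SC index would then be handed to an
# optimization routine that returns the transmit power for that assignment.
```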
## II System Model
We investigate an uplink semi-GF NOMA 5G-NR network as shown in Fig. 1. The network consists of one BS located at the center of the cell with a radius of \(r\) (m) and a number of users randomly distributed in this cell requiring different services including eMBB/mMTC/URLLC. Let \(\mathcal{M}_{\text{U}}\), \(\mathcal{M}_{\text{E}}\) and \(\mathcal{M}_{\text{M}}\) be the sets of URLLC, eMBB, and mMTC devices, whose cardinalities are \(M_{\text{U}}\), \(M_{\text{E}}\) and \(M_{\text{M}}\), respectively. For convenience, we also denote the set of all users as \(\mathcal{M}=\mathcal{M}_{\text{U}}\cup\mathcal{M}_{\text{E}}\cup\mathcal{M}_{ \text{M}}\) and \(M=M_{\text{U}}+M_{\text{E}}+M_{\text{M}}\). To serve |
2305.10349 | Interactive Learning of Hierarchical Tasks from Dialog with GPT | We present a system for interpretable, symbolic, interactive task learning
from dialog using a GPT model as a conversational front-end. The learned tasks
are represented as hierarchical decompositions of predicate-argument structures
with scoped variable arguments. By using a GPT model to convert interactive
dialog into a semantic representation, and then recursively asking for
definitions of unknown steps, we show that hierarchical task knowledge can be
acquired and re-used in a natural and unrestrained conversational environment.
We compare our system to a similar architecture using a more conventional
parser and show that our system tolerates a much wider variety of linguistic
variance. | Lane Lawley, Christopher J. MacLellan | 2023-05-17T16:32:40Z | http://arxiv.org/abs/2305.10349v1 | # Interactive Learning of Hierarchical Tasks From Dialog With GPT
###### Abstract
We present a system for interpretable, symbolic, interactive task learning from dialog using a GPT model as a conversational front-end. The learned tasks are represented as hierarchical decompositions of predicate-argument structures with scoped variable arguments. By using a GPT model to convert interactive dialog into a semantic representation, and then recursively asking for definitions of unknown steps, we show that hierarchical task knowledge can be acquired and re-used in a natural and unrestrained conversational environment. We compare our system to a similar architecture using a more conventional parser and show that our system tolerates a much wider variety of linguistic variance.
## 1 Introduction
Exciting times lie ahead in the world of task knowledge representation. The Interactive Task Learning (ITL) approach introduced by Laird et al. (2017) has articulated a research vision dedicated to the goal of enabling machines to interactively learn general tasks in natural, human-like ways. It posits that acquired task knowledge should be general enough to be applicable in novel situations, and that it should be interpretable and modifiable, so that human teachers can actively shape the machine's understanding. ITL highlights many instructional modalities for the acquisition of task knowledge, including gestures, diagrams, and language--these modalities are also formally accounted for in the Natural Training Interaction (NTI) framework due to MacLellan et al. (2018). In this work, we focus on the language modality, endeavoring toward acquisition of human-interpretable, modular, hierarchical task knowledge from natural dialogs between humans and machines.
There has been considerable research into language-driven task acquisition. Early work with Instructo-Soar (Huffman and Laird, 1993) investigated how to acquire hierarchical task descriptions for use in the Soar cognitive architecture (Laird et al., 1987). A decade later, this approach evolved into Rosie (Kirk and Laird, 2014), a system that can learn game and robotics task knowledge from language instruction. Hinrichs and Forbus (2014) ground task learning in a digital sketching environment, supporting multimodal task learning from both dialog and sketches. The PLOW system (Allen et al., 2007) integrates a demonstrative modality and a rich internal knowledge base with instructional dialogs to facilitate one-session task instruction. And, most closely related to this work, Suddrey et al. (2017) introduce a system for learning hierarchical task representations through the recursive clarification of unknown predicates in instructional dialogs.

Figure 1: One iteration of a hierarchical task learning dialog in our system.
Each of these systems, however, suffers the characteristic brittleness of the classical syntactic and semantic parsers they invariably use to convert natural instructional dialogs into task knowledge. Syntactic forms vastly outnumber their semantic counterparts, and problems like anaphora resolution, paraphrasing, and grammatical and transcription errors frequently take the "natural" out of "natural language" as the first casualty of their mitigation. Each of the systems described above not only falls subject to myriad parser errors on malformed input, but also lacks the ability to perform adequate paraphrase resolution when parsed verb structures do not match known ones exactly, generally relying on manually constructed knowledge bases of synonyms to solve even a portion of this problem. These issues make it unlikely that these systems will successfully achieve the ITL vision with real users-currently they only work with users in the lab that know their idiosyncratic syntax.
Here, we present an alternate strategy for mitigating the sheer variety of natural language: by exploiting the virtual mastery of the _form_ of natural English apparently acquired by large language models Tenney et al. (2019), we demonstrate the acquisition of task knowledge in a far less restrictive, far more natural dialog setting than possible in prior work. We integrate a GPT-family language model Brown et al. (2020) in a careful and principled way, using it to map natural dialog into a symbolic domain. The GPT model is used for two subtasks: the semantic parsing of text into predicate-argument structures, and the semantic unification of those structures to already-known actions with the same meaning. The former task allows the system to deal with error-laden and grammatically unrestricted input dialog, and the latter allows it to perform paraphrasal mappings that take into account meaning; together, these applications of GPT vastly widen the set of possible inputs to the task learning system, with minimal upfront engineering cost.
In Section 2, we describe the design of our system in detail and justify our particular application of GPT for the dialog understanding problem. In Section 3, we demonstrate that our system can reproduce the task knowledge acquired by Suddrey et al. (2017) using more complex and ambiguous input language. Finally, in Section 4, we discuss future expansions of this work, including additional evaluative work and the incorporation of additional instructional modalities.
## 2 System Design
In our system, tasks are conveyed from the _instructor_ to the _agent_ starting with an initial command provided by the instructor. The command can be an action or a sequence of multiple actions. All actions represented in the command are parsed into predicate-argument structures and matched to corresponding actions already known by the agent. If any actions in a command cannot be matched to semantically similar known actions, they are recursively clarified with new subcommands solicited from the instructor; the recursive clarification of actions into sequences of new actions forms the hierarchical structure of the learned task. One step of this process is depicted in Figure 1, and the full procedure is described in Algorithm 1.1
Footnote 1: In Algorithm 1, the \(\leftarrow_{+}\) operator, used on lines 6 and 11, represents concatenation-in-place.
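The following is a compact sketch of the recursive structure just described. The helper callables (`parse_gpt`, `match_gpt`, `ask_instructor`) stand in for the two GPT-backed subroutines and the dialog interface; their names, signatures, and the dictionary-based action store are illustrative assumptions, not the paper's implementation of Algorithm 1.

```python
# Sketch of the recursive clarification loop (illustrative, not Algorithm 1 verbatim).
from typing import Callable, Optional

def learn(utterance: str,
          known_actions: dict,                          # name -> list of sub-action names
          parse_gpt: Callable[[str], list],             # text -> predicate-argument structures
          match_gpt: Callable[[dict, list], Optional[str]],  # structure -> known action name or None
          ask_instructor: Callable[[str], str]) -> list:
    """Return a plan (list of known-action names) for `utterance`, learning new actions as needed."""
    plan = []
    for step in parse_gpt(utterance):                   # e.g. {"pred": "putAway", "args": ["pepper"]}
        name = match_gpt(step, list(known_actions))     # semantically matched known action, or None
        if name is None:                                # unknown action: recursively clarify it
            reply = ask_instructor(f"How do I {step['pred']}({', '.join(step['args'])})?")
            sub_plan = learn(reply, known_actions, parse_gpt, match_gpt, ask_instructor)
            name = step["pred"]
            known_actions[name] = sub_plan              # store the new hierarchical definition
        plan.append(name)
    return plan
```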
To enable translation of instructions from the natural dialog space to the semantic action space, our system utilizes a pre-trained GPT model2 to perform two separate steps: the extraction of predicate-argument structures from the instructor's utterance (parseGPT), and the mapping of those structures to semantically equivalent actions that have already been acquired (matchGPT).
Footnote 2: We use the text–davinci-003 weights for the GPT-3 model, trained by OpenAI.
Figure 2: An example of an instructional dialog between a human instructor and our system. This example showcases GPT’s paraphrase and anaphora resolution capabilities, as discussed in Section 2.1.
### Integration of GPT Models
We use GPT models to solve atomic, linguistic subroutines within a well-defined algorithm. This allows us to obtain more focused, reliable output than if we had used language models to solve the entire task "end-to-end". It also allows us to perform better error correction on a more predictable set of error classes. However, the tasks performed by the language model here are by no means trivial; extraction of predicate-argument structures from text, and the mapping of those structures to known actions, both require resolution of a considerable number of linguistic tasks, some examples of which we will enumerate here, making reference to the example dialog presented in Figure 2:
1. **Syntactic parsing**, a necessary step of semantic parsing in other dialog-based instructional systems, is a challenging task that is highly prone to grammatical and spelling errors, often requiring significant engineering effort to overcome them. parseGPT's virtual mastery of the _form_ of natural language, even when that language may be ungrammatical, allows us to avoid most of this engineering effort.
2. **Anaphora resolution**, performed by parseGPT, allows constructions like _Pick up the pepper and put it on the counter_ to decompose into unambiguous structures like pickUp(pepper) and put(pepper, counter).
3. **Predicate naming**, performed by parseGPT, is important in distinguishing predicates with similar verbs, as verb senses may often include mandatory prepositional arguments, e.g., putAway vs. put.
4. **Paraphrase resolution**, performed by matchGPT, is one of the most challenging aspects of unifying natural speech with formal knowledge representations. While other task learning systems struggle with the sheer phrasal variance of natural language, matchGPT autonomously makes these determinations, such as identifying that both _pull your hand back_ and _bring it back to the original position_ invoke the known action RESETHANDPOSITION.
In all, our integration of GPT into an otherwise simple algorithm has allowed us to capitalize on the _fluency_ of large language models without falling victim to the myriad _failures_ that can arise from using them to complete non-linguistic tasks end-to-end--especially recursive tasks and tasks requiring well-defined output.
## 3 Evaluation
We evaluate our system on the same learned task as Suddrey et al. (2017), and in the same way, with an author providing dialog input. To highlight the contributions of GPT enumerated in Section 2.1, we provided dialog with ambiguous anaphora, multi-word predicates, and action paraphrasing. The full dialog can be seen in Figure 2. The tasks to be learned are shown in Table 1, both with the name of each sub-action in the original evaluation experiment by Suddrey et al. (2017), and with the name of that action in our system.3
Footnote 3: We changed the names of some of these actions to better convey the semantics of each action to GPT; these semantics are important for paraphrase resolution.
Our system learned each action from the original work, including their subtask structures, from the provided dialog. The final task decomposition was identical to the structure induced by Suddrey et al. (2017). The learned structure for the action put_away is partially presented in Figure 3. While further evaluation, especially with grounding in an environment, is called for, we aim here only to show that the task structures themselves can be induced from natural dialog with minimal engineering effort. Future work has the fortunate property of being modular with respect to this induction: environmental symbol grounding, for example, as we
discuss in Section 4, can be implemented as additional subroutines in an extension of Algorithm 1.
## 4 Discussion and Future Directions
This work is intended to show that the hierarchical task structures learned in the system due to Suddrey et al. (2017) can also be learned using GPT as a means of more fluently handling natural language input. However, the work presented in this short paper is not a complete task learning system: future expansion of this work must incorporate the grounding of objects and plans in an operational environment, as past work has. Additionally, the relatively limited evaluation scheme, chosen here to achieve parity with the reference paper, should be expanded to incorporate a wider range of tasks and dialogs in a future study.
The NTI framework (MacLellan et al., 2018) also highlights many additional instructional modalities, including gestures, images, feedback, and demonstrations. While GPT can be used to amplify the efficacy of linguistic instruction, the integration of these other modalities into a cohesive whole is still an open problem. We believe that the widely predicted upcoming advent of multi-modal transformers, and its combination with symbolic knowledge representations, could allow for interesting forms of joint dialog and image understanding that could be utilized to produce a multi-modal semantic representation for task learning. Exciting times lie ahead in the world of task knowledge representation.
## 5 Acknowledgement
This research was funded in part by Award 2112532 from NSF's AI-ALOE institute and Awards W911NF2120101 and W911NF2120126 from ARL's STRONG program. The views, opinions, and findings expressed are the authors' and should not be taken as representing official views or policies of these funding agencies.
|
2306.09348 | Seeing the World through Your Eyes | The reflective nature of the human eye is an underappreciated source of
information about what the world around us looks like. By imaging the eyes of a
moving person, we can collect multiple views of a scene outside the camera's
direct line of sight through the reflections in the eyes. In this paper, we
reconstruct a 3D scene beyond the camera's line of sight using portrait images
containing eye reflections. This task is challenging due to 1) the difficulty
of accurately estimating eye poses and 2) the entangled appearance of the eye
iris and the scene reflections. Our method jointly refines the cornea poses,
the radiance field depicting the scene, and the observer's eye iris texture. We
further propose a simple regularization prior on the iris texture pattern to
improve reconstruction quality. Through various experiments on synthetic and
real-world captures featuring people with varied eye colors, we demonstrate the
feasibility of our approach to recover 3D scenes using eye reflections. | Hadi Alzayer, Kevin Zhang, Brandon Feng, Christopher Metzler, Jia-Bin Huang | 2023-06-15T17:59:59Z | http://arxiv.org/abs/2306.09348v2 | # Seeing the World through Your Eyes
###### Abstract
The reflective nature of the human eye is an underappreciated source of information about what the world around us looks like. By imaging the eyes of a moving person, we can collect multiple views of a scene outside the camera's direct line of sight through the reflections in the eyes. In this paper, we reconstruct a 3D scene beyond the camera's line of sight using portrait images containing eye reflections. This task is challenging due to 1) the difficulty of accurately estimating eye poses and 2) the entangled appearance of the eye iris and the scene reflections. Our method jointly refines the cornea poses, the radiance field depicting the scene, and the observer's eye iris texture. We further propose a simple regularization prior on the iris texture pattern to improve reconstruction quality. Through various experiments on synthetic and real-world captures featuring people with varied eye colors, we demonstrate the feasibility of our approach to recover 3D scenes using eye reflections.
## 1 Introduction
_The only true voyage of discovery... would be not to visit strange lands but to possess other eyes, to behold the universe through the eyes of another... - Marcel Proust, 1927_
The human eye is a remarkable organ that enables vision and holds valuable information about the surrounding world. While we typically use our own eyes as two _lenses_ to focus light onto the photosensitive cells composing our retina, we would also capture the light reflected from the cornea if we look at someone else's eyes. When we use a camera to image the eyes of another, we effectively turn their eyes as a set of _mirrors_ in the overall imaging system. Since the light that reflects off the observer's eyes share the same source as the light that reaches their retina, our camera should form images containing information about the world the observer sees.
Prior studies have explored recovering a panoramic image of the world the observer sees from an image of two eyes [30, 31]. Follow-up works have further explored applications such as personal identification [12, 28], detecting
grasp posture [53], focused object estimation [42], and relighting [29]. Given the recent advancements in 3D vision and graphics, we wonder: Can we do more than reconstruct a single panoramic environment map or recognize patterns? Is it possible to recover the world seen by the observer in full 3D?
In this paper, we answer these questions by reconstructing a 3D scene from a sequence of eye images. We start from the insight that our eyes capture/reflect multi-view information as we naturally move our heads. We draw inspiration from the classical imaging formulation proposed by [30] and integrate it with the recent advances in 3D reconstruction spearheaded by Neural Radiance Fields (NeRF) [26]. Unlike the standard NeRF capture setup, which requires a _moving camera_ to capture multi-view information (often followed by camera pose estimation), our approach employs a _stationary camera_ and extracts the multi-view cues from eye images under head movement.
While conceptually straightforward, reconstructing a 3D NeRF from eye images is extremely challenging in practice. The first challenge is source separation. We need to separate the reflections from the intricate iris textures of human eyes. These complex patterns add a level of ambiguity to the 3D reconstruction process. Unlike the clear images of the scene typically assumed in standard captures, the eye images we obtain are inherently blended with iris textures. This composition disrupts the pixel correspondence and complicates the reconstruction process. The second challenge is cornea pose estimation. Eyes are small and hard to localize accurately from image observations. The multi-view reconstruction, however, depends on the accuracy of their locations and 3D orientations.
To address these challenges, in this work, we repurpose NeRF for training on eye images by introducing two crucial components: a) texture decomposition, which leverages a simple radial prior to facilitate separating the iris texture from the overall radiance field, and b) eye pose refinement, which enhances the accuracy of pose estimation despite the challenges presented by the small size of eyes.
To evaluate the performance and effectiveness of our approach, we generate a synthetic dataset of a complex indoor environment with images that capture the reflection from a synthetic cornea with realistic texture. We further implement a real-world setup with multiple objects to capture eye images. We conduct extensive experiments on synthetic and real-world captured eye images to validate several design choices in our approach.
Our primary contributions are as follows:
* **New 3D reconstruction problem**. We present a novel method for reconstructing 3D scenes of the observer's world from eye images, integrating earlier foundational work with the latest advancements in neural rendering.
* **Radial prior for irises**. We introduce a radial prior for iris texture decomposition in eye images, significantly improving the quality of the reconstructed radiance field.
* **Cornea pose refinement**. We develop a cornea pose refinement procedure to alleviate the noisy pose estimates of eyes, which overcomes the unique challenge of extracting features from human eyes.
These advancements extend the current capabilities of 3D scene reconstruction through neural rendering to handle partially corrupted image observations obtained from eye reflections, opening up new possibilities for research and development in the broader area of accidental imaging [45, 6, 20, 38] to reveal and capture 3D scenes beyond the visible line-of-sight.
## 2 Related Work
**Catadioptric imaging.** Catadioptric imaging uses a combination of lenses and mirrors for image capturing. The word catadioptric is derived from _catoptrics_ (related to the Greek words for specular and mirrors) and _dioptrics_ (related to an Ancient Greek lens-like instrument). In essence, catadioptric imaging seeks to leverage an additional (often curved) mirror to expand a lens-based imaging system's effective field of view. Early studies in catadioptric imaging focused primarily on the design of the mirror profiles and their impact on the final image quality. [2] studied three design criteria of a catadioptric imaging system: the shape of the mirrors, the resolution of the cameras, and the focus settings of the cameras. [41] provided a metric to quantify distortions and a method to minimize distortions in images acquired with a single viewpoint catadioptric camera. Moreover, a creative way to realize an accidental catadioptric imaging system is by treating human eyes as external curved mirrors [31]. [30] uses a single image of the eyes as a stereo system to identify pixel correspondences with epipolar geometry, even successfully identifying what the person is looking at. Another application of using human eyes as part of the imaging system is estimating light direction from the eyes to perform relighting [29, 46]. Our work draws inspiration from previous works on eye-based catadioptric imaging systems and further extends this concept to achieve 3D scene recovery through NeRF-based modeling. In particular, this paper introduces several new techniques to process catadioptrically captured eye images, such as learnable texture decomposition and refined iris estimation.

Figure 2: **NeRF for non-line-of-sight scene.** The typical NeRF capture setup requires multiple posed images (e.g., captured from a moving camera) for reconstruction. In our setup, we gather multi-view information of the scene through light reflected from the eyes of a moving person.
**Neural radiance field.** Neural radiance fields (NeRF) [26] represent a significant milestone in novel view synthesis. NeRF adopts differentiable volume rendering to represent a 3D scene and uses neural networks to learn the density and color of each scene point. Following the success of NeRF, a plethora of follow-up works have been introduced to improve its rendering quality [3, 4], ability to handle scene dynamics [10, 17, 34, 35, 36, 37], inaccurate camera poses [5, 18, 22, 25, 50], and rendering speed [1, 27, 52]. Our work uses NeRF to parametrize the unknown scene we wish to recover from eye reflections. In particular, we modify the training framework from nerfstudio [43] to implement the NeRF-based scene reconstruction. We note that our input images are captured at a fixed viewpoint, which differs from the typical NeRF setup that requires multi-view input and additional camera pose optimization.
**Reflection removal.** Removing reflections from captured images is a longstanding computational photography problem. The related literature on this topic can be summarized into two main categories: _multi-frame_ and _single-image_. Multi-frame reflection removal methods [9, 23, 24, 40, 51] often exploit the differences of motion patterns between the background and reflection layers and impose various image priors as regularization. Single-image reflection removal methods tend to exploit visual cues available in a single image, such as depth-of-field [16, 49], defocus-disparity [37], or learned image features [55]. More recently, NeRF has emerged as a new tool for reflection removal, specifically under the multi-frame setting. Various NeRF-based methods have studied how to accurately model and extract specular reflections from shiny or metallic objects [44, 48, 54, 7]. NeRFReN [11] demonstrates that by fitting two NeRFs to model the reflection and diffuse components of the scene separately, reflections from planar surfaces like mirrors can be removed and re-rendered as a separate 3D scene. Due to the simplicity of planar reflections, NeRFReN achieves the joint learning of reflection and diffuse components by simply aggregating predictions from the two NeRF models (reflection and diffuse) weighted by alpha-compositing. In this work, unlike prior works that focus on planar surface geometry, our object of interest (the human eye) has an inherently more complicated curved geometry, which necessitates several modifications to the standard NeRF rendering workflow; we detail these in the following sections.
**Non-line-of-sight imaging.** Non-line-of-sight (NLOS) imaging attempts to recover images of objects that are not directly visible from the camera's position or are obstructed by an object in the line of sight. The principle behind NLOS imaging is that one can use light reflected off a visible relay surface to record information about an object outside of the line of sight. The NLOS literature largely falls under two categories: _active_ and _passive_. Active NLOS imaging techniques involve using controlled light sources, such as lasers, and often rely on time-of-flight measurements to reconstruct the hidden scene. [47] introduced an ultra-fast imaging system that records light in flight, allowing the reconstruction of non-line-of-sight objects. [13, 19, 32] later presented various methods to improve the resolution of active NLOS imaging systems. NeRF has also been recently introduced to active NLOS imaging, enabling more accurate reconstructions and better handling of noise [8, 39]. Passive NLOS imaging, on the other hand, exploits natural or ambient light and does not require a controlled light source. [45] introduced the concept of accidental pinhole and pinspeck cameras, which involves using incidental or unintentional imaging elements in the environment to capture unique perspectives or resolve hidden scenes. [38, 6] analyzed shadow patterns and showed that these patterns contain sufficient information to reconstruct the shape of the hidden scene. [20] use reflections captured by a thermal camera to reconstruct the 3D body pose of non-line-of-sight humans. [44] recently presented Orca, which uses reflections from a glossy object observed in multi-view images to train a 3D NeRF for the surrounding environment. In this context, our paper can be regarded as a special case of passive NLOS scene reconstruction. We focus on a specific relay surface (the human eye) and introduce techniques tailored for better information extraction from eye reflections. Unlike Orca, which relies on images captured with a moving camera while the "mirror" object is fixed, our method works with a stationary camera and uses the natural movement of the human eye "mirrors", as visualized in Figure 2.

Figure 3: **Cornea geometry.** The cornea can be modeled as an ellipsoid. The key fact that we exploit is that the cornea shape and size are largely consistent among adults, with similar eccentricity and curvature.
## 3 Background: Eye Model
The geometry of the human eye has been extensively studied [33]. The major components visible in the eye are the sclera, which is the white region of the eye, and the cornea, which includes the iris and the pupil. The cornea is covered by a thin film of tear fluid, making it highly reflective. As noted by [30], since the cornea can act as a mirror, the combination of a camera and the cornea resembles a _catadioptric system_. In our work, we follow the eye model adopted by [30] for the assumed eye geometry.
The eye is modeled as a section of an ellipsoid, as illustrated in Figure 3, which can be described using the equation
\[\left(1-e\right)z^{2}-2Rz+r^{2}=0 \tag{1}\]
where \(e\) is the eccentricity, \(R\) is the radius of the curvature at the apex, and \(r^{2}=x^{2}+y^{2}\). For an adult with healthy eyes, on average \(e\) is about 0.5 and \(R\) is about 7.8 mm, with very little variation across different people. The bounds of the ellipsoidal section are determined by the distance from the apex to the base, labeled \(t_{b}\) in Figure 3. From \(r_{L}\), the radius of the base of the ellipsoidal section, known to be approximately 5.5 mm in people, we can calculate \(t_{b}\) as about 2.18 mm. To compute the normal at each point on the surface of the ellipsoid, we can take the gradient of Eq. 1 and get
\[\overrightarrow{n}\left(x,y,z\right)=\left\langle 2x,2y,2\left(1-e\right)z-2R\right\rangle \tag{2}\]
To compute the depth of the cornea, we first assume a weak perspective projection model, which is valid because the diameter of the base is at most 11 mm and thus small compared to the depth. Next, notice that the projection of the cornea onto the image will be an ellipse. Let the major radius of the ellipse be \(r_{img}\). Then under the projection model, the average depth of the cornea can be computed as
\[\mathrm{depth}_{\mathrm{avg}}=r_{L}\frac{f}{r_{img}}. \tag{3}\]
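To make the geometry concrete, here is a minimal numerical sketch of Eqs. 2–3, assuming millimeter units for the eye constants and pixel units for image measurements; the function names and the example numbers are illustrative, not taken from the authors' code.

```python
import numpy as np

# Average adult cornea parameters quoted in Section 3 (millimeters).
E = 0.5      # eccentricity
R = 7.8      # radius of curvature at the apex
R_L = 5.5    # radius of the base of the ellipsoidal section

def cornea_normal(x, y, z):
    """Unit surface normal of the cornea ellipsoid, from the gradient in Eq. 2."""
    n = np.array([2.0 * x, 2.0 * y, 2.0 * (1.0 - E) * z - 2.0 * R])
    return n / np.linalg.norm(n)

def cornea_depth(r_img, focal_length):
    """Average cornea depth (Eq. 3) from the observed cornea radius r_img and
    the focal length, both in pixels, under a weak-perspective model."""
    return R_L * focal_length / r_img

# Illustrative example: a cornea whose projection spans 20 pixels under a
# 5000-pixel focal length sits roughly 1.4 m from the camera.
print(cornea_depth(r_img=20.0, focal_length=5000.0))  # ~1375 mm
```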
## 4 Method
Figure 4: **Joint optimization of radiance field and iris texture.** Standard NeRF rendering uses rays starting from the camera origin \(O\) along a viewing direction \(d\). In contrast, in our setup, we need to use rays that _bounce off_ the cornea. The reflected ray origin \(O^{\prime}\) is where the initial camera ray intersects with the cornea, and the new ray direction \(d^{\prime}\) is the reflection of \(d\) across the cornea's normal \(\overrightarrow{n}\). Consequently, the eye image we observe is a composition of the iris texture and the reflected scene. The composition hinders standard NeRF training due to the highly-detailed iris texture. To address this issue, alongside the radiance field \(\theta\), we train an _eye texture field_ \(\Phi\) whose input is the projection of \(O^{\prime}\) on the eye coordinate system in the given image (Eq. 5). The eye texture field is computed relative to the eye in the current image, while the radiance field takes 3D points in the world coordinates. The outputs from volumetric rendering with \(\theta\) and texture estimation with \(\Phi\) are composited together to reconstruct the cornea image. We apply a reconstruction loss \(L_{recon}\). We further regularize the texture field \(\Phi\) with a radial loss \(L_{radial}\) that encourages the estimated texture to be radially constant, reducing the absorption of scene regions into the eye texture.

**Radiance field from reflection.** NeRF trains a parameterized radiance field through volumetric rendering. Each pixel color is computed by sampling the color and density along a ray using a parameterized MLP \(\theta\). In NeRF, the ray associated with a pixel starts from the origin of that image's camera, denoted by \(O\), and the direction, denoted by \(\overrightarrow{d}\), is towards the projection of that pixel on the camera plane. By training the radiance field this way, we can recover a 3D reconstruction of the scene. However, in our setup, we are interested in reconstructing the scene reflected from the person's eyes. In Figure 4, we illustrate how we use the rays reflected from the eye. The reflected ray starts from the origin \(O^{\prime}\), where the camera ray intersects the cornea, and travels in the reflected direction \(\overrightarrow{d}^{\prime}\), instead of using \(O\) and \(\overrightarrow{d}\). We compute the reflected ray explicitly using the standard reflection equation:
\[\overrightarrow{d}^{\prime}=\overrightarrow{d}-2\left(\overrightarrow{n}\cdot \overrightarrow{d}\right)\overrightarrow{n}, \tag{4}\]
where \(\overrightarrow{n}\) is the normal at the hit point \(O^{\prime}\). Note we only need to compute the hit points and normals once before training for pixels associated with the cornea. Since we model the cornea geometry as an ellipsoid, we directly compute the hit points and normals using closed-form ellipsoid ray intersection formulas during the data processing step.
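As a sketch of this precomputation, the reflection step of Eq. 4 can be written as follows; the cornea hit points and normals are assumed to come from the closed-form ray–ellipsoid intersection mentioned above, and the function names are ours, not the authors'.

```python
import numpy as np

def reflect(d, n):
    """Mirror a unit ray direction d about a unit surface normal n (Eq. 4)."""
    return d - 2.0 * np.dot(n, d) * n

def reflected_rays(cam_dirs, hit_points, normals):
    """For every cornea pixel, replace the camera ray (O, d) by the bounced
    ray (O', d'): the origin becomes the cornea hit point and the direction
    becomes the reflection of d about the normal at that point."""
    dirs = np.array([reflect(d, n) for d, n in zip(cam_dirs, normals)])
    return hit_points, dirs
```

The resulting (O', d') pairs are cached and reused throughout training, matching the one-time precomputation noted above.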
**Texture decomposition.** Since the target images are the scene reflections off the cornea, naively training NeRF causes the output radiance field to mix scene geometry and iris texture. To recover only the scene geometry in the radiance field, we jointly optimize a 2D field \(\Phi\) to learn the eye texture. We assume that the iris texture remains the same across the different views as the person moves, while the scene reflections vary. For each pixel, the input to the 2D texture field is the pixel coordinate projected onto the eye in the input image
\[\text{proj}_{\text{eye}}\left(y,x\right)=\left(\frac{y-c_{y}}{r_{img}},\frac{ x-c_{x}}{r_{img}}\right) \tag{5}\]
where \(\left(c_{x},c_{y}\right)\) is the coordinate of the center of the cornea, and \(r_{img}\) is the observed cornea radius. This parameterization encourages the texture field to naturally learn the view-invariant regions of the cornea, while the radiance field learns the 3D geometry of the scene.
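A minimal sketch of the per-pixel parameterization in Eq. 5; the variable names are chosen here for illustration and the texture field itself is left abstract.

```python
def proj_eye(y, x, c_y, c_x, r_img):
    """Map an image pixel (y, x) into the eye-centered coordinates of the
    current frame (Eq. 5): offsets from the cornea center (c_x, c_y),
    normalized by the observed cornea radius r_img."""
    return (y - c_y) / r_img, (x - c_x) / r_img

# The 2D texture field Phi takes proj_eye(y, x, ...) as input, so the same
# physical iris point lands on the same texture coordinate in every frame,
# even as the eye moves around the image.
```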
However, when a part of the scene does not display considerable motion across the training views, it can be "absorbed" as part of the texture instead of the 3D scene. To resolve this issue, we propose a radial regularization that encourages radial symmetry of the recovered texture. We implement the loss by randomly sampling a rotation matrix \(\tilde{R}\) and penalizing the color deviation between coordinate \(p\) and coordinate \(\tilde{R}p\) as follows:
\[L_{radial}\left(p\right)=\lambda_{radial}\|\Phi\left(p\right)-\Phi\left( \tilde{R}p\right)\|_{2}^{2} \tag{6}\]
where \(\lambda_{radial}\) is the weight of the radial loss. While the iris is not perfectly radially constant, the simple radial loss effectively removes the scene reflection while maintaining an accurate estimated texture.
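A sketch of the radial regularizer in Eq. 6, written in PyTorch; `phi` stands for the 2D texture field \(\Phi\), and the per-batch random rotation and the default weight are illustrative assumptions.

```python
import math
import torch

def radial_loss(phi, p, weight=0.1):
    """Penalize the squared color deviation between the texture sampled at 2D
    eye coordinates p (N, 2) and at the same coordinates rotated by a random
    in-plane rotation about the iris center (Eq. 6)."""
    angle = torch.rand(()) * 2.0 * math.pi       # random rotation R~
    c, s = torch.cos(angle), torch.sin(angle)
    rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
    p_rot = p @ rot.T                            # rotated coordinates
    return weight * ((phi(p) - phi(p_rot)) ** 2).sum(dim=-1).mean()
```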
**Cornea pose optimization.** Due to the small size of the cornea in the captured images, the estimated cornea poses and normals inevitably contain some errors. Training with the erroneous poses significantly degrades the quality of the radiance field reconstruction. To alleviate the pose errors, we optimize the pose of each cornea independently. For each cornea, we optimize for a transformation matrix \(T=[R,t]\in\text{SE}(3)\), where \(R\in\text{SO}(3)\) and \(t\in\mathbb{R}^{3}\) denote the rotation and translation, respectively. We optimize the cornea poses during training, similar to [18, 50, 25].
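One way to realize this per-cornea refinement is sketched below in PyTorch: a learnable axis-angle rotation and translation per observation, initialized at the identity and applied to the precomputed reflected rays. The exact parameterization and where the correction is applied are not specified in the text beyond SE(3), so this is an assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class CorneaPoseRefiner(nn.Module):
    """Learnable SE(3) correction (axis-angle rotation + translation) for each
    cornea observation, initialized at the identity."""
    def __init__(self, num_corneas):
        super().__init__()
        self.rot = nn.Parameter(torch.zeros(num_corneas, 3))    # axis-angle
        self.trans = nn.Parameter(torch.zeros(num_corneas, 3))  # translation

    def forward(self, idx, origins, dirs):
        """Apply the correction of cornea `idx` to reflected ray origins and
        directions, both of shape (N, 3)."""
        w = self.rot[idx]
        theta = w.norm().clamp(min=1e-8)
        k = w / theta
        zero = torch.zeros((), device=w.device)
        K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                         torch.stack([k[2], zero, -k[0]]),
                         torch.stack([-k[1], k[0], zero])])
        # Rodrigues' formula for the rotation matrix.
        Rmat = torch.eye(3, device=w.device) + torch.sin(theta) * K \
               + (1.0 - torch.cos(theta)) * (K @ K)
        return origins @ Rmat.T + self.trans[idx], dirs @ Rmat.T
```

The refined rays then replace the precomputed ones during volume rendering, and the pose parameters receive gradients through the reconstruction loss, in the spirit of joint pose–NeRF optimization in [18, 50, 25].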
## 5 Experiments
### 5.1 Synthetic data evaluation
We generate synthetic data in Blender with eye models placed in the scene. In Figure 5 we show the scene we reconstruct using only the reflections from the eyes. Since we cannot estimate the cornea perfectly in real life, we evaluate the robustness of our cornea pose optimization to noise in the estimated cornea radius. To simulate the depth estimation errors we may encounter in real data, we corrupt the observed cornea radius \(r_{img}\) for each image by scaling the estimated radius with varying noise levels. In Figure 7 we show how our method's performance varies for different noise levels. Note that as the amount of noise increases, our reconstruction with pose optimization remains robust in terms of the reconstructed geometry and colors when compared to the reconstruction without pose optimization. This demonstrates that pose optimization is essential for our method to work in realistic scenarios where the initial fitting of the image ellipse to the projected cornea is imperfect. Furthermore, we show quantitative comparisons of our method with and without texture decomposition in Table 1. Our method performs better in terms of SSIM and LPIPS with texture decomposition than without. Notably, we do not compute PSNR because in our setting there is a drastic difference in lighting between the reflection and the scene itself.
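A sketch of the radius corruption used in this robustness study, assuming the noise is a zero-mean multiplicative perturbation (the text only says the estimated radius is scaled at varying noise levels, so the Gaussian choice is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_radius(r_img, noise_level):
    """Scale the observed cornea radius by a random factor around 1; e.g.
    noise_level=0.05 perturbs it by roughly +/-5%. Through Eq. 3, a relative
    radius error becomes a comparable relative error in the estimated depth."""
    return r_img * (1.0 + noise_level * rng.standard_normal())
```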
| **Scene** | **Method** | **SSIM** \(\uparrow\) | **LPIPS** \(\downarrow\) |
| --- | --- | --- | --- |
| Classroom | w/o texture decomposition | 0.40 | 0.72 |
| Classroom | w/ texture decomposition | **0.42** | **0.62** |
| Kitchen | w/o texture decomposition | 0.44 | 0.90 |
| Kitchen | w/ texture decomposition | **0.48** | **0.82** |

Table 1: **Texture Decomposition Ablation.** We show that using a neural field to decompose the iris texture from the reflection improves reconstruction performance.
### 5.2 Real-world experiments
We describe capturing and processing real-world images and demonstrate the effectiveness of our method on real captures.
Figure 5: **Qualitative synthetic results.** We show that our method can achieve reasonable reconstructions from challenging measurements in simulation. We demonstrate that our method can reconstruct the 3D geometry of the scene by visualizing the accumulation of the learned radiance fields with respect to the camera poses. The accumulation is defined as the integral of the density along the camera rays.

**Image capture.** To maintain a realistic field of view, we capture images with a field of view that matches a standard portrait capture where the entire head is visible within the frame. We place area lights on the person's sides to illuminate the object of interest. Figure 9 illustrates the capture setup. We ask the person to move within the camera's field of view and capture 5-15 frames per scene. We capture the images using a Sony RX IV camera and post-process the images using Adobe Lightroom to reduce the noise in the cornea's reflection. Since the captured images have a high dynamic range due to the scene illumination, we use 16-bit images in all our experiments to avoid losing information from the observed reflections. We vary the illumination brightness and the reflected object size for a comprehensive evaluation. On average, the cornea only covers around 0.1% of each image, and the object of interest is reflected in a region of about 20x20 pixels and composited with the iris texture.
#### 5.2.1 Data processing
We estimate the cornea's center and radius in each image to get an initial estimate of the cornea's 3D location. Once we have the radius, we can directly approximate the cornea's 3D location using the average depth from Eq. 3 and the camera's focal length, and also compute its surface normals using Eq. 2. To automate the process, we locate the eye bounding boxes using Grounding Dino [21] and then use ELLSeg [15] to perform ellipse fitting for the iris. While the corneas are typically partially occluded, we only need the unoccluded regions, so we obtain a segmentation mask for the iris using Segment Anything [14].
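A sketch of how these per-image steps might chain together; `detect_eye_boxes`, `fit_iris_ellipse`, and `segment_iris` are hypothetical wrappers standing in for Grounding DINO, ELLSeg, and Segment Anything respectively — placeholders, not those libraries' actual APIs.

```python
def process_frame(image, focal_length, detect_eye_boxes, fit_iris_ellipse,
                  segment_iris, r_l_mm=5.5):
    """Per-image cornea initialization: detect the eyes, fit the iris ellipse,
    mask occluded pixels, and lift the cornea to 3D via Eq. 3. The three
    callables are hypothetical wrappers around the off-the-shelf models named
    in the text; focal_length and the fitted radius are in pixels."""
    corneas = []
    for box in detect_eye_boxes(image):               # one box per visible eye
        cx, cy, r_img = fit_iris_ellipse(image, box)  # iris center and radius
        mask = segment_iris(image, box)               # unoccluded iris pixels
        depth_mm = r_l_mm * focal_length / r_img      # Eq. 3
        corneas.append({"center": (cx, cy), "radius": r_img,
                        "depth_mm": depth_mm, "mask": mask})
    return corneas
```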
Figure 6: **Additional real results.** We show that our method works in a variety of capture conditions, such as smaller objects (the small plant in the top row) and varying eye colors. We also show that we can reconstruct the observed object from significantly smaller eye observations, as in the bottom example.
#### 5.2.2 Results from real captures
Using our captured images, we show that our method enables the reconstruction of 3D scenes from real-world portrait captures, as shown in Figures 1 and 6, despite inaccuracies in the cornea location and geometry estimates. In Figure 10, by ablating the cornea pose optimization and texture decomposition from our method, we demonstrate that both components are necessary for successful 3D scene reconstructions. The initial pose estimate of the corneas is noisy because the blurriness of the cornea boundary makes it challenging to localize precisely in the image, as shown in Figure 11. In Figure 10 we show the rendered radiance field with and without the learned texture decomposition. We notice significantly more floaters when not explicitly modeling the texture. Furthermore, Figure 11 demonstrates that the radial regularization improves the quality of our reconstruction because, without it, the texture decomposition will absorb parts of the scene with low disparity among the observed views. We note that for some eye colors, like green and blue, the 3D reconstruction is more difficult because the iris texture is brighter. One such example of a green iris texture is given in Figure 11, and to handle these cases, we can increase the amount of radial regularization.
Figure 8: **Data processing pipeline.** To compute the iris ellipse parameters, we first obtain eye bounding boxes using GroundingDINO [21] and then conduct ellipse fitting using ELLSeg [15]. Since we only want to use the visible regions of the cornea in our radiance field optimization, to handle occlusion, we generate a segmentation mask of the iris from the approximated cornea ellipse using Segment Anything [14].
Figure 7: **Synthetic pose optimization ablation.** In simulation, the cornea pose optimization refines the noisy initial poses and results in clearer reconstruction.
### 5.3 Limitations
Our work demonstrates the feasibility of reconstructing the 3D world only from eye reflections. Two major limitations remain. First, our current real-world results come from a "laboratory setup": a zoomed-in capture of a person's face, area lights to illuminate the scene, and deliberate movement by the person. We believe more unconstrained settings (e.g., video conferencing with natural head movement) remain challenging due to lower sensor resolution, dynamic range, and motion blur. Second, our assumptions on the iris texture (e.g., constant texture, radially constant colors) may be too simplistic, so our approach may break down with large eye rotations.
## 6 Conclusions
By leveraging the subtle reflections of light off human eyes, we develop a method that can reconstruct the (non-line-of-sight) scene observed by a person using monocular image sequences captured at a fixed camera position. We demonstrate that naively training a radiance field on the observed reflections is insufficient due to several factors: 1) the inherent noise in cornea localization, 2) the complexity of iris textures, and 3) the low-resolution reflections captured in each image. To address these challenges, we introduce cornea pose optimization and iris texture decomposition during training, aided by a radial texture regularization loss based on the nature of the human iris. We showcase the effectiveness of our approach on real-world data. Unlike conventional methods of training a neural field that require a moving camera, our method places the camera at a fixed viewpoint and relies solely on the user's motion. With this work, we hope to inspire future explorations that leverage unexpected, accidental visual signals to reveal information about the world around us, broadening the horizons of 3D scene reconstruction.